Digital Logic Design Course Material

APSC 262 Digital Logic Design
Dr. Ayman Elnaggar, Ph.D., P.Eng.
School of Engineering, Faculty of Applied Science
The University of British Columbia
Office: EME 4261
Tel: 250-807-8198
Email: ayman.elnaggar@ubc.ca
TEXTBOOK
Chapter 1 Digital Systems and Binary Numbers
Chapter 2 Boolean Algebra and Logic Circuits
Chapter 3 Gate-Level Minimization
Chapter 4 Combinational Logic
Chapter 5 Synchronous Sequential Logic
Chapter 6 Registers, Counters, and FSM
Chapter 7 Memory & Programmable Logic
Digital Design with an Introduction
to Verilog HDL, 5th edition.
MODULE 1.0
1.1 Number Representation
1.1.1 Binary numbers
1.1.2 Octal numbers
1.1.3 Hexadecimal numbers
1.1.4 Number conversion
1.1.5 Addition of unsigned integers
1.2 Signed Integers
1.2.1 Number representation of signed integers
1.2.2 Addition and subtraction of signed numbers (2’s Complement)
1.3 Binary Codes
1.3.1 Binary Coded Decimal (BCD) numbers
1.3.2 ASCII code
1.3.3 Parity bit
1.0 Number Systems.
In this module, we will study arithmetic operations and develop digital logic circuits to implement the operations.
1.1 Unsigned Numbers.
We often take numbers for granted. We use decimal numbers and routinely carry out arithmetic operations using
the 10 digits (0, 1, 2, …, 9). With digital logic design, it becomes necessary to generalize the steps used for
arithmetic operations and apply them in other number systems.

Weighted Number Systems
A decimal number D consists of n digits, and each digit has a position. Every digit position is associated with a fixed
weight. If the weight associated with the ith position is wi, then the value of D is given by:

D = dn-1·wn-1 + dn-2·wn-2 + …… + d1·w1 + d0·w0

Such a system is also called a positional number system. For example, 9375 = 9×1000 + 3×100 + 7×10 + 5×1.

Decimal (Radix) Point
A number D has n integral digits and m fractional digits. Digits to the left of the radix point (integral digits) have
positive position indices, while digits to the right of the radix point (fractional digits) have negative position
indices
The weight for a digit position i is given by wi = r^i, where r is the radix (base) of the number system.
The Radix (Base) of a number
• A digit di has a weight which is a power of some constant value called the radix (r) or base, such that wi = r^i.
• A number D with base r can be denoted as (D)r,
o Decimal number 128 can be written as (128)10
• A number system of radix r, has r allowed digits {0,1,… (r-1)}
• The leftmost digit has the highest weight and is called the Most Significant Digit (MSD)
• The rightmost digit has the lowest weight and is called the Least Significant Digit (LSD)
• The largest value that can be expressed in n digits is (r^n − 1)
o For example, the largest decimal number (r = 10) that can be written in 3 digits (n = 3) is 10^3 − 1 = 999.
Are these valid numbers?
• (9478)10 → Yes, digits between 0 and 9
• (1289)2 → No, for base 2 only 0’s and 1’s are allowed
• (111000)2 → Yes
• (55)5 → No, for base 5 only digits 0 to 4 are allowed
We are going to consider the following number systems:
Base-2 (binary), Base-8 (octal), Base-10 (decimal), and Base-16 (hexadecimal).
To avoid ambiguity in representing numbers, we can explicitly express the base of an integer with a subscript on
the number (written in parentheses). For example, (123)10 , (1111011)2 , (753)8.
We will begin with a look at number representation for the simplest system of unsigned numbers, with no
positive or negative sign.
Binary Numbers (Converting Binary to Decimal)
Consider the base-2 (binary) system. An unsigned binary integer with n binary digits (or bits) will have each bit
be 0 or 1. We can also talk about groups of bits: a group of four bits is a nibble; a group of eight bits is a byte.
We can express all the bits in a positional number representation:

B = bn-1 bn-2 … b1 b0 . b-1 b-2 …

V(B) = bn-1×2^(n-1) + bn-2×2^(n-2) + bn-3×2^(n-3) + … + b1×2^1 + b0×2^0 + b-1×2^(-1) + b-2×2^(-2) + …
For example, to convert the binary number (1111011)2 to decimal, we can write it as
1×2^6 + 1×2^5 + 1×2^4 + 1×2^3 + 0×2^2 + 1×2^1 + 1×2^0 = (123)10
For an unsigned integer with n bits, we can express integers in the range from 0 to 2^n − 1.
Convert the binary number (101.01)2 to decimal:
1×2^2 + 0×2^1 + 1×2^0 + 0×2^(-1) + 1×2^(-2) = (5.25)10
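The weighted-sum conversion above can be sketched in Python. The helper name `binary_to_decimal` is ours, not from the course; it simply evaluates each bit against its power-of-2 weight.

```python
# Evaluate a binary string, optionally with a fractional part, as a
# weighted sum of powers of 2 (positional number representation).
def binary_to_decimal(s):
    int_part, _, frac_part = s.partition(".")
    value = 0.0
    for i, bit in enumerate(reversed(int_part)):
        value += int(bit) * 2**i        # weights 2^0, 2^1, ... from the right
    for i, bit in enumerate(frac_part, start=1):
        value += int(bit) * 2**(-i)     # weights 2^-1, 2^-2, ... after the point
    return value

print(binary_to_decimal("1111011"))   # 123.0
print(binary_to_decimal("101.01"))    # 5.25
```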
The Powers of 2
Kilo (K) = 2^10
Mega (M) = 2^20
Giga (G) = 2^30
Tera (T) = 2^40
Conversion from decimal to binary number. Integer conversion to the binary number system from other number
systems involves division, while monitoring remainders. Fortunately, this process of division and monitoring
remainders can be carried out in a well-structured algorithm.
We take the original decimal integer and divide by 2 repeatedly, while listing our successive integer answers
below the current integer, in a column. The remainders are recorded in an adjacent column. Digits (bits) in the
remainder column form the final answer, expressed from top (LSB) to bottom (MSB). Consider these examples:
(184)10

          Quotient | Remainder
184 / 2 =       92 |     0      (LSB)
 92 / 2 =       46 |     0
 46 / 2 =       23 |     0
 23 / 2 =       11 |     1
 11 / 2 =        5 |     1
  5 / 2 =        2 |     1
  2 / 2 =        1 |     0
  1 / 2 =        0 |     1      (MSB)

Reading the remainders from bottom (MSB) to top (LSB): (184)10 = (10111000)2
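The repeated-division algorithm above translates directly to code. A minimal Python sketch (the function name `decimal_to_binary` is our own):

```python
# Convert an unsigned decimal integer to binary by repeated division by 2,
# collecting remainders (LSB first) as in the table above.
def decimal_to_binary(n):
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        n, r = divmod(n, 2)   # quotient continues, remainder is the next bit
        bits.append(str(r))   # remainders come out LSB first
    return "".join(reversed(bits))

print(decimal_to_binary(184))   # 10111000
```

Dividing by 8 or 16 instead of 2 gives the octal or hexadecimal digits the same way.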
Convert (789.625)10 to binary.
1. Integer part:
(789)10

          Quotient | Remainder
789 / 2 =      394 |     1      (LSB)
394 / 2 =      197 |     0
197 / 2 =       98 |     1
 98 / 2 =       49 |     0
 49 / 2 =       24 |     1
 24 / 2 =       12 |     0
 12 / 2 =        6 |     0
  6 / 2 =        3 |     0
  3 / 2 =        1 |     1
  1 / 2 =        0 |     1      (MSB)

(789)10 = (1100010101)2
2. Fraction part:
o Multiply the fraction part (0.625) by the base (= 2)
o Take the integer part of the result (either 0 or 1) as a coefficient
o Take the resultant fraction and repeat the multiplication

            Integer | Fraction | Coefficient
0.625 ×2 =        1 |   .25    | a-1 = 1   (MSB)
0.25  ×2 =        0 |   .5     | a-2 = 0
0.5   ×2 =        1 |   .0     | a-3 = 1   (LSB)

(0.625)10 = (0.a-1 a-2 a-3)2 = (0.101)2
Grouping both parts → (789.625)10 = (1100010101.101)2
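The multiply-by-2 procedure for the fraction part can also be sketched in Python (the helper name `fraction_to_binary` and the `max_bits` cutoff are our additions; the cutoff matters because some decimal fractions never terminate in binary):

```python
# Convert a decimal fraction to binary: multiply by 2 repeatedly;
# each integer part produced is the next fractional bit.
def fraction_to_binary(f, max_bits=12):
    bits = []
    while f > 0 and len(bits) < max_bits:   # cutoff for non-terminating fractions
        f *= 2
        bit, f = int(f), f - int(f)
        bits.append(str(bit))
    return "0." + "".join(bits)

print(fraction_to_binary(0.625))   # 0.101
```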
The method can be generalized to convert any decimal number (base 10) to any other base system such as
base 8 (divide by 8 instead) or hexadecimal (divide by 16 instead).
Binary Coding
Digital systems use signals that have two distinct values and circuit elements that have two stable states. A binary
number of n digits, for example, may be represented by n binary circuit elements, each having an output signal
equivalent to 0 or 1.
Digital systems represent and manipulate not only binary numbers, but also many other discrete elements of
information (image, video, audio, music, etc.). Any discrete element of information that is distinct among a group
of quantities can be represented with a binary code (i.e., a pattern of 0’s and 1’s). For example, let us say we want
to code 256 colors for the pixels of an image. In this case we need 8 bits of coding. We can say
00000000 will code white, 00000001 blue, 00000010 yellow, and so on, up to 11111111 for black.
The codes must be in binary because, in today’s technology, only circuits that represent and manipulate patterns
of 0’s and 1’s can be manufactured economically for use in computers. However, it must be realized that binary
codes merely change the symbols, not the meaning of the elements of information that they represent. If we
inspect the bits of a computer at random, we will find that most of the time they represent some type of coded
information rather than binary numbers.
An n‐bit binary code is a group of n bits that assumes up to 2n distinct combinations of 1’s and 0’s, with each
combination representing one element of the set that is being coded (color, amplitude, brightness, intensity,
etc.). A set of four elements can be coded with two bits, with each element assigned one of the following bit
combinations: 00, 01, 10, 11. A set of eight elements requires a three‐bit code and a set of 16 elements requires a
four‐bit code (in general, a set of n elements requires ⌈log2 n⌉ bits).
1.3.1 Binary Coded Decimal (BCD)
BCD is used when decimal input is applied to a digital circuit. In this case we have 10 (n = 10) different digits that we
would like to code in binary, and ⌈log2 10⌉ = 4. Therefore, we use 4 bits to represent each of the
10 decimal digits from 0 to 9. However, since 4 bits can also represent the values 10, 11, …, 15, these
combinations are not used, and we normally mark them as X (don’t care) on our Truth Table or K-Map. More on
this later on.
(0 – 9) → Valid combinations
(10 – 15) → Invalid combinations
Decimal | BCD
   0    | 0000
   1    | 0001
   2    | 0010
   3    | 0011
   4    | 0100
   5    | 0101
   6    | 0110
   7    | 0111
   8    | 1000
   9    | 1001
It is important to realize that BCD numbers are decimal numbers and not binary numbers, although they use bits
in their representation. The only difference between a decimal number and BCD is that decimals are written with
the symbols 0, 1, 2, …, 9 and BCD numbers use the binary codes 0000, 0001, 0010, …, 1001. The decimal value is
exactly the same. Decimal 10 is represented in BCD with eight bits as 0001 0000 and decimal 15 as 0001 0101.
The corresponding binary values are 1010 and 1111 and have only four bits.
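The digit-by-digit nature of BCD is easy to see in code. A short Python sketch (the helper name `to_bcd` is ours):

```python
# BCD encodes each decimal digit separately as its own 4-bit group,
# unlike plain binary which encodes the whole value at once.
def to_bcd(n):
    return " ".join(format(int(d), "04b") for d in str(n))

print(to_bcd(10))   # 0001 0000  (two digits -> eight bits)
print(to_bcd(15))   # 0001 0101
print(bin(10), bin(15))   # plain binary uses only four bits: 1010, 1111
```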
ASCII Code
Many applications of digital computers require the handling not only of numbers, but also of other characters or
symbols, such as the letters of the alphabet. To text a message on your cell phone, or to type in a Word
file, it is necessary to formulate a binary code for the letters of the alphabet. In addition, the same binary code
must represent numerals and special characters (such as $). An alphanumeric character set is a set of elements
that includes the 10 decimal digits, the 26 letters of the alphabet, and a number of special characters. Such a set
contains between 36 and 64 elements if only capital letters are included, or between 64 and 128 elements if both
uppercase and lowercase letters are included. In the first case, we need a binary code of seven bits.
The standard binary code for the alphanumeric characters is the American Standard Code for Information
Interchange (ASCII), which uses seven bits to code 128 characters, as shown in the table below. Each character
is assigned its own unique seven-bit combination. The letter A, for example, is represented
in ASCII as 100 0001. The ASCII code also contains 34 nonprinting characters used for various control functions
such as line feed and back space.
Normally, your keypad or touchpad will do this conversion. So when you text a message or type in your file, every
character is replaced and saved by its ASCII code. Remember, you may need additional binary coding for
formatting such as color, font, size, etc. associated with each character.
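Python exposes the same character-to-code mapping through `ord` and `chr`, which makes the 7-bit patterns easy to inspect:

```python
# Print the 7-bit ASCII pattern for a few characters; ord() gives the
# integer code, format(..., "07b") shows it as seven bits.
for ch in "A$7":
    print(ch, format(ord(ch), "07b"))
# A 1000001
# $ 0100100
# 7 0110111
```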
Parity Bit – Error Detection
To detect errors in data communication and processing, an eighth bit is sometimes added to the ASCII character
to indicate its parity. A parity bit is an extra bit included with a message to make the total number of 1’s either
even or odd. Consider the following two characters and their even and odd parity:
                      With even parity    With odd parity
ASCII A = 1000001        01000001            11000001
ASCII T = 1010100        11010100            01010100
In each case, we insert an extra bit in the leftmost position of the code to produce an even number of 1’s in the
character for even parity or an odd number of 1’s in the character for odd parity. In general, one or the other
parity is adopted, with even parity being more common.
The parity bit is helpful in detecting errors during the transmission of information from one location to another.
This function is handled by generating an even parity bit at the sending end for each character. The eight‐bit
characters that include parity bits are transmitted to their destination. The parity of each character is then
checked at the receiving end. If the parity of the received character is not even, then at least one bit has changed
value during the transmission. This method detects one, three, or any odd combination of errors in each
character that is transmitted. An even combination of errors, however, goes undetected, and additional error
detection codes may be needed to take care of that possibility.
What is done after an error is detected depends on the particular application. One possibility is to request
retransmission of the message on the assumption that the error was random and will not occur again.
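The parity-generation rule described above can be sketched in a few lines of Python (the helper name `add_parity` is ours, not from the course):

```python
# Prepend a parity bit so the total number of 1's in the 8-bit result
# is even (for even parity) or odd (for odd parity).
def add_parity(code7, kind="even"):
    ones = code7.count("1")
    p = ones % 2 if kind == "even" else 1 - ones % 2
    return str(p) + code7

print(add_parity("1000001"))          # ASCII A with even parity: 01000001
print(add_parity("1010100", "odd"))   # ASCII T with odd parity:  01010100
```

The receiver performs the mirror-image check: it counts the 1's in all eight bits and flags an error if the count has the wrong parity.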
Example: Decode the following ASCII string (with MSB = parity). Is it an even or odd parity?
11010101 11000010 01000011 01001111
Representation of Binary Numbers by Electrical Signals
So how are 1’s and 0’s really stored, processed, or transmitted?
• Binary ‘0’ is represented by a “low” voltage (a range of voltages)
• Binary ‘1’ is represented by a “high” voltage (a range of voltages)
• The “voltage ranges” guard against noise
MODULE 2.0
2.0 Introduction to logic circuits
2.1 Logic variables and functions
2.2 Logic gates
2.3 Logic network analysis
2.4 Boolean algebra
2.5 Logic design
2.6 Standard logic chips
2.7 Implementation technologies (transistor-level)
2.0 Introduction to logic circuits.
In this module, we will look at logic circuits. We will first explore the input variables and output functions that
facilitate the operation of a logic circuit. We will then identify logic gates—being the fundamental elements that
carry out logic processes.
2.1 Logic variables and functions.
In this section we will identify fundamental elements of a digital system. The first is an input variable, being a user-controlled input to the system, and the second is an output function, being the overall output from the system.
Consider the input variable and output function. Simple representations for an input variable and output
function are a switch and an LED light, respectively. Imagine that the switch, defined by input variable x, controls
the state of the LED light, defined by output function L(x). The switch input variable x can be 0 (open) or 1
(closed), and the LED light output function L(x), that depends on x, can be 0 (off) or 1 (on). The dependency
between the input variable and output function can be tabulated in a truth table.
Logic function: L(x) = x

Truth table:
x | L(x) = x
0 |    0
1 |    1
Consider the AND logic function. The AND logic function has two (or more) input variables, x1 and x2. The
output logic function is L(x1,x2) = 1 if and only if x1 and x2 are both 1. All other input variable combinations yield
L(x1,x2) = 0. The AND logic function can be visualized as a series connection of two switches. The AND operator is
"·", or it is implied when two variables are written adjoining one another.
Logic function: L(x1, x2) = x1 · x2 = x1x2

Truth table:
x1 x2 | L = x1 · x2
 0  0 |     0
 0  1 |     0
 1  0 |     0
 1  1 |     1
Consider the OR logic function. The OR logic function has two (or more) input variables, x1 and x2. The OR
logic function is L(x1,x2) = 1 if one or both of x1 and x2 are 1. Thus, the OR logic function is L(x1,x2) = 0 if and only if
x1 = 0 and x2 = 0. The OR logic function can be visualized as a parallel connection of switches. The OR operator is "+".
Logic function: L(x1, x2) = x1 + x2

Truth table:
x1 x2 | L = x1 + x2
 0  0 |     0
 0  1 |     1
 1  0 |     1
 1  1 |     1
Consider the NOT logic function. The NOT logic function has one input variable, x. The output logic function is
L(x) = 0 if the input variable is x = 1, and L(x) = 1 if the input variable is x = 0. Thus, the NOT function is an
inversion function. The NOT logic function is realized by a circuit with a closed (x = 1) switch shorting the light to
ground. The NOT operator is " ' " or "!", or it is denoted with an overhead bar on the input variable. The output of
a NOT function on an input variable is called the "complement" of that variable.
Logic function: L(x) = x'

Truth table:
x | L = x'
0 |   1
1 |   0
Example. Create a circuit with switches x1, x2, x3, and x4 that can light up an LED according to the logic function
L(x1,x2,x3,x4) = x1 · (x2 + x3) · x4. The operation of the circuit is demonstrated by way of a truth table.
Truth table:
x1 x2 x3 x4 | (x2 + x3) | x1 · (x2 + x3) · x4
 0  0  0  0 |     0     |          0
 0  0  0  1 |     0     |          0
 0  0  1  0 |     1     |          0
 0  0  1  1 |     1     |          0
 0  1  0  0 |     1     |          0
 0  1  0  1 |     1     |          0
 0  1  1  0 |     1     |          0
 0  1  1  1 |     1     |          0
 1  0  0  0 |     0     |          0
 1  0  0  1 |     0     |          0
 1  0  1  0 |     1     |          0
 1  0  1  1 |     1     |          1
 1  1  0  0 |     1     |          0
 1  1  0  1 |     1     |          1
 1  1  1  0 |     1     |          0
 1  1  1  1 |     1     |          1

For n inputs, how do we list all combinations without missing any?
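One systematic way to list all 2^n input combinations without missing any is to count in binary from 0 to 2^n − 1. A short Python sketch of this idea (our own code, not from the course):

```python
from itertools import product

# Counting in binary from 00...0 to 11...1 enumerates all 2**n input
# combinations exactly once; itertools.product walks them in that order.
n = 4
table = []
for x1, x2, x3, x4 in product([0, 1], repeat=n):
    L = x1 & (x2 | x3) & x4          # L(x1,x2,x3,x4) = x1 · (x2 + x3) · x4
    table.append(((x1, x2, x3, x4), L))

assert len(table) == 2**n            # 16 rows, none missing
print(sum(L for _, L in table))      # 3 rows give L = 1
```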
2.2 Logic gates.
The AND, OR, and NOT logic functions in the prior section (and a few other logic functions) are implemented in
circuits with digital logic gates. In this section, we define a distinct logic gate for each of our logic functions.
Consider the AND gate. The AND gate typically has two input variables, x1 and x2. The output is 1 if both x1 and
x2 are 1, and the output is 0 for all other cases. The AND gate operation is denoted as x1 · x2.
AND gate

Truth table:
x1 x2 | f = x1 · x2
 0  0 |     0
 0  1 |     0
 1  0 |     0
 1  1 |     1
Consider the OR gate. The OR gate typically has two input variables, x1 and x2. The output is 1 if either or both
of x1 and x2 are 1, and the output is 0 only if both x1 and x2 are 0. The OR gate operation is denoted as x1 + x2.
OR gate

Truth table:
x1 x2 | f = x1 + x2
 0  0 |     0
 0  1 |     1
 1  0 |     1
 1  1 |     1
Consider the NOT gate. The NOT gate has one input variable, x. The output is 1 if x is 0 and is 0 if x is 1. The
output of the NOT gate is the complement of x. The NOT gate operation is denoted as x', !x, or an overhead bar
on x. The NOT gate symbol uses a bubble to signify the complement operation (and may include a triangle to
signify gain from an amplifier/buffer).
NOT gate

Truth table:
x | f = x'
0 |   1
1 |   0
Consider the NAND gate. The NAND gate is a NOT-AND gate that typically has two input variables, x1 and x2. The
output is the complement of the ANDed input variables. The NAND gate operation is denoted as (x1·x2)'. The NAND
gate symbol uses a bubble to signify the complement operation after the AND gate.
NAND gate

Truth table:
x1 x2 | AND: x1 · x2 | NAND: f = (x1 · x2)'
 0  0 |      0       |          1
 0  1 |      0       |          1
 1  0 |      0       |          1
 1  1 |      1       |          0
Consider the NOR gate. The NOR gate is a NOT-OR gate that typically has two input variables, x1 and x2. The
output is the complement of the ORed input variables. The NOR gate operation is denoted as (x1+x2)'. The NOR
gate symbol uses a bubble to signify the complement operation after the OR gate.
NOR gate

Truth table:
x1 x2 | OR: x1 + x2 | NOR: f = (x1 + x2)'
 0  0 |      0      |          1
 0  1 |      1      |          0
 1  0 |      1      |          0
 1  1 |      1      |          0
Consider the XOR gate. The XOR gate is an exclusive-OR gate that typically has two input variables, x1 and x2. The
output is 1 only when the input variables differ in value. For any number of input variables, in general, the output
is 1 if and only if an odd number of input variables equal 1. The XOR gate is denoted as x1⊕x2. The XOR
gate symbol looks similar to the OR gate, but there is a double band on the input side.
XOR gate

Truth table:
x1 x2 | f = x1 ⊕ x2
 0  0 |     0
 0  1 |     1
 1  0 |     1
 1  1 |     0
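All six gates above can be modeled as tiny Python functions on 0/1 integers, which is a convenient way to reproduce their truth tables (an informal model of ours, not course code):

```python
# Basic gates as functions over 0/1 integers.
AND  = lambda a, b: a & b
OR   = lambda a, b: a | b
NOT  = lambda a: 1 - a
NAND = lambda a, b: NOT(AND(a, b))
NOR  = lambda a, b: NOT(OR(a, b))
XOR  = lambda a, b: a ^ b

# Print all four rows of every two-input truth table at once.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b), NAND(a, b), NOR(a, b), XOR(a, b))
```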
Example. Draw the logic gate circuit for a digital logic system with input variables x1 and x2 and an output
function L(x1,x2) = x1' + (x1 · x2).
Example. Draw the logic gate circuit for a digital logic system with input variables x1, x2, x3, and x4 and an output
function L(x1,x2,x3,x4) = x1 · (x2 + x3) · x4.
2.3 Logic network analysis.
This section looks at the analysis of logic networks. Logic networks are given to us and their operation is analyzed.
2.3.1 Logic network analysis with truth tables.
When logic networks become increasingly complicated it becomes necessary to track input variables, logic states
of intermediate nodes, and the corresponding output functions. This can be accomplished with truth tables.
A truth table organizes all possible combinations of input variables and lists the corresponding logic values for the
intermediate nodes (when necessary) and output functions.
Example. Derive the Boolean function and create the truth table for the logic gate circuit below
x1 x2 | x1' | x1 · x2 | y = x1' + (x1 · x2)
 0  0 |  1  |    0    |          1
 0  1 |  1  |    0    |          1
 1  0 |  0  |    0    |          0
 1  1 |  0  |    1    |          1
Example. Derive the Boolean function and create the truth table for the logic gate circuit below
x1 x2 x3 x4 | (x2 + x3) | x1 · (x2 + x3) | y = x1 · (x2 + x3) · x4
 0  0  0  0 |     0     |       0        |           0
 0  0  0  1 |     0     |       0        |           0
 0  0  1  0 |     1     |       0        |           0
 0  0  1  1 |     1     |       0        |           0
 0  1  0  0 |     1     |       0        |           0
 0  1  0  1 |     1     |       0        |           0
 0  1  1  0 |     1     |       0        |           0
 0  1  1  1 |     1     |       0        |           0
 1  0  0  0 |     0     |       0        |           0
 1  0  0  1 |     0     |       0        |           0
 1  0  1  0 |     1     |       1        |           0
 1  0  1  1 |     1     |       1        |           1
 1  1  0  0 |     1     |       1        |           0
 1  1  0  1 |     1     |       1        |           1
 1  1  1  0 |     1     |       1        |           0
 1  1  1  1 |     1     |       1        |           1
2.3.2 Logic network analysis with timing diagrams.
The tracking of logic values in a circuit becomes more complex when one considers that digital logic networks are
typically used with data streams—having input variables and nodes taking on 0 and 1 values as a function of time.
We can keep track of the input variables and their effects on nodes in our system with timing diagrams.
With timing diagrams, we track the sequential series of 0 and 1 bits for the input variables. We also track the 0
and 1 values of the intermediate nodes and output function for the overall system in the same sequential order.
Propagation delay (tpd) is the time for a change in the input of a gate to propagate to the output.
• High-to-low (tphl) and low-to-high (tplh) output signal changes may have different propagation delays
• tpd = max {tphl, tplh}
• A circuit is considered fast if its propagation delay is small (ideally as close to 0 as possible)
Delay is usually measured between the 50% levels of the signal.
[Timing diagram for an AND gate: inputs X and Y, output Z; each output transition lags the input change by the propagation delay of the circuit, τ.]
For the sake of our course, we are ignoring the propagation delay in timing analysis of logic gates. However, it
is a very important parameter in evaluating the performance of digital circuits.
Delay in multi-level logic diagrams
How do we decompose 3-input AND gates into 2-input ones?
What about 4-input AND gates?
2.4 Logic network analysis with Boolean algebra.
Logic operations can be separated with parentheses to distinguish order of operations, but this would introduce
unnecessary parentheses. Instead, it is possible to define an implicit order of operations and save parentheses for
situations requiring specific ordering. The implicit order of operations is as follows:
i. parentheses,
ii. NOT,
iii. AND,
iv. OR.
For example, x · y + y' · z + x' · z can be written as (x · y) + ((y') · z) + ((x') · z), but the parentheses are unnecessary.
Let's now use the operations to define Boolean algebra axioms and derive Boolean algebra theorems.
Boolean algebra axioms and theorems. We can carry out Boolean algebra operations for complex logic networks
by considering fundamental axioms, single-variable theorems, and two/three-variable theorems.
Axioms.
1a  0 · 0 = 0
1b  1 + 1 = 1
2a  1 · 1 = 1
2b  0 + 0 = 0
3a  0 · 1 = 1 · 0 = 0
3b  1 + 0 = 0 + 1 = 1
4a  If x = 0, then x' = 1
4b  If x = 1, then x' = 0
Single-variable theorems.
5a  x · 0 = 0
5b  x + 1 = 1
6a  x · 1 = x
6b  x + 0 = x
7a  x · x = x
7b  x + x = x
8a  x · x' = 0
8b  x + x' = 1
9   x'' = x
Two/three-variable theorems.
10a x · y = y · x                                   (Commutative)
10b x + y = y + x                                   (Commutative)
11a x · (y · z) = (x · y) · z                       (Associative)
11b x + (y + z) = (x + y) + z                       (Associative)
12a x · (y + z) = x · y + x · z                     (Distributive)
12b x + y · z = (x + y) · (x + z)                   (Distributive)
13a x + x · y = x                                   (Absorption) proof: x + xy = x(1 + y) = x
13b x · (x + y) = x                                 (Absorption) proof: x(x + y) = xx + xy = x + xy = x(1 + y) = x
14a x · y + x · y' = x                              (Combining) proof: xy + xy' = x(y + y') = x
14b (x + y) · (x + y') = x                          (Combining) proof: (x + y)(x + y') = xx + xy + xy' + yy' = x + x(y + y') = x
15a (x · y)' = x' + y'                              (DeMorgan's) proof: truth table
15b (x + y)' = x' · y'                              (DeMorgan's) proof: truth table
16a x + x' · y = x + y                              (Elimination) proof: x + x'y = x(1 + y) + x'y = x + xy + x'y = x + y(x + x') = x + y
16b x · (x' + y) = x · y                            (Elimination) proof: x(x' + y) = xx' + xy = xy
17a x · y + y · z + x' · z = x · y + x' · z         (Consensus) proof: xy + yz(x + x') + x'z = xy + xyz + x'yz + x'z = xy + x'z
17b (x + y) · (y + z) · (x' + z) = (x + y) · (x' + z)   (Consensus) proof: (x + y)(y + z)(x' + z) = (y + xz)(x' + z) = x'y + yz + xz = (x + y)(x' + z)
Example. Prove the logic identity x1' · x2' + x1 · x2 + x1 · x2' = x1 + x2'.
LHS = x1' · x2' + x1 · x2 + x1 · x2'
LHS = x1' · x2' + x1 · (x2 + x2')
(using 12a)
LHS = x1' · x2' + x1 · 1
(using 8b)
LHS = x1' · x2' + x1
(using 6a)
LHS = x1 + x1' · x2'
(using 10b)
LHS = x1 + x2' = RHS
(using 16a; alternatively: x1 + x1' · x2' = x1 · (x2' + 1) + x1' · x2' = x1 + x2' · (x1 + x1') = x1 + x2')
Example. Prove the logic identity x1 · x3' + x2'· x3' + x1 · x3 + x2'· x3 = x1 + x2'.
LHS = x1 · x3' + x2'· x3' + x1 · x3 + x2'· x3
LHS = x1 · x3' + x1 · x3 + x2'· x3' + x2'· x3
(using 10b)
LHS = x1 · (x3' + x3) + x2'· (x3' + x3)
(using 12a)
LHS = x1 · (1) + x2'· (1)
(using 8b)
LHS = x1 + x2' = RHS
(using 6a)
Example. Prove the logic identity (x1 + x3) · (x1' + x3') = (x1· x3' + x1'· x3).
LHS = (x1 + x3) · x1' + (x1 + x3) · x3'
(using 12a)
LHS = x1 · x1' + x3 · x1' + x1 · x3' + x3 · x3'
(using 12a)
LHS = 0 + x3 · x1' + x1 · x3' + 0
(using 8a)
LHS = x3 · x1' + x1 · x3'
(using 6b)
LHS = x1 · x3' + x1' · x3 = RHS
(using 10a and 10b)
Another way to prove any identity is to form a truth table for the RHS and another table for the LHS and prove
they are of equal values.
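The truth-table proof method mentioned above is easy to automate: evaluate both sides for every input combination and check that they always agree. A Python sketch checking the three identities just proved (our own code; `equivalent` is an assumed helper name):

```python
from itertools import product

# Two expressions are equivalent iff they agree on every row of the truth table.
def equivalent(lhs, rhs, n):
    return all(lhs(*xs) == rhs(*xs) for xs in product([0, 1], repeat=n))

NOT = lambda a: 1 - a

# x1'·x2' + x1·x2 + x1·x2'  =  x1 + x2'
assert equivalent(lambda x1, x2: NOT(x1) & NOT(x2) | x1 & x2 | x1 & NOT(x2),
                  lambda x1, x2: x1 | NOT(x2), 2)

# x1·x3' + x2'·x3' + x1·x3 + x2'·x3  =  x1 + x2'
assert equivalent(lambda x1, x2, x3: x1 & NOT(x3) | NOT(x2) & NOT(x3) | x1 & x3 | NOT(x2) & x3,
                  lambda x1, x2, x3: x1 | NOT(x2), 3)

# (x1 + x3)·(x1' + x3')  =  x1·x3' + x1'·x3
assert equivalent(lambda x1, x3: (x1 | x3) & (NOT(x1) | NOT(x3)),
                  lambda x1, x3: x1 & NOT(x3) | NOT(x1) & x3, 2)

print("all three identities hold")
```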
Example. Express the following logic circuit in terms of Boolean algebra. Then apply Boolean algebra to show that
it can be simplified. Compare the original and simplified circuits (gate count).
Original circuit:
Simplified circuit:
Original circuit: 3 gates.
Simplified circuit: 2 gates.
Algebra:
(x1 + x2) · x1' = (x1· x1' + x2· x1') = (0 + x2· x1') = x2· x1'
Why is simplifying logic circuits important?
• Boolean algebra identities and properties help reduce the size of expressions
• In effect, smaller expressions require fewer logic gates to build the circuit
• As a result, simpler circuits gain the following features: lower cost, smaller size, lower power consumption, and less delay
• The speed of simpler circuits is also higher
2.5 Logic network design.
This section introduces design, i.e., synthesis, of digital logic systems. We will do this by way of an example.
Design a warning system at a train intersection with three tracks. The tracks have input variables x1, x2, and x3. If
a train is present on a track, its input variable has a value of 1; otherwise it has a value of 0. The system must give
a warning, designated by the output function f, if two or more trains are present on the tracks.
We start digital design work with a truth table having all combinations of inputs and their function values.
Row | x1 x2 x3 | f | Minterm label
 0  |  0  0  0 | 0 | m0 = x1' · x2' · x3'
 1  |  0  0  1 | 0 | m1 = x1' · x2' · x3
 2  |  0  1  0 | 0 | m2 = x1' · x2 · x3'
 3  |  0  1  1 | 1 | m3 = x1' · x2 · x3
 4  |  1  0  0 | 0 | m4 = x1 · x2' · x3'
 5  |  1  0  1 | 1 | m5 = x1 · x2' · x3
 6  |  1  1  0 | 1 | m6 = x1 · x2 · x3'
 7  |  1  1  1 | 1 | m7 = x1 · x2 · x3
First, we identify minterms. Minterms are identified for each truth table row, as shown above. (With experience,
it is not necessary to label these terms on your truth tables.) Consider the labels:
I. Minterms are products of all variables, each uncomplemented or complemented. The minterm mi for row i of
the truth table is formed by stating each variable xj uncomplemented in the product if xj = 1 in that row, or
complemented (xj') if xj = 0. You can quickly identify the input values for a minterm mi by looking at the binary
representation of the integer i.
Second, we assemble sum-of-products.
Sum-of-products (SOP) expressions are a sum of all product terms in the rows having f = 1. This fully defines the
design, as (uncomplemented or complemented) variables in each of these product terms must be 1 to yield f = 1.
f(x1, x2, x3) = Σi mi = m3 + m5 + m6 + m7 = x1' · x2 · x3 + x1 · x2' · x3 + x1 · x2 · x3' + x1 · x2 · x3.
SOP/sum-of-minterms expressions are easily implemented by allocating each product term/minterm to an AND gate, in
addition to one OR gate that combines all the AND gate outputs. This topology is called two-level AND-OR.
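Reading the minterm list off a truth table is mechanical, so it is easy to automate. A Python sketch for the train-warning function (the helper name `sop_from_truth_table` is ours):

```python
from itertools import product

# Collect the minterm indices (rows where f = 1) for an n-input function.
# Row i of the truth table corresponds to the binary representation of i.
def sop_from_truth_table(f, n):
    minterms = []
    for i, xs in enumerate(product([0, 1], repeat=n)):
        if f(*xs):
            minterms.append(i)
    return minterms

warning = lambda x1, x2, x3: int(x1 + x2 + x3 >= 2)   # two or more trains
print(sop_from_truth_table(warning, 3))               # [3, 5, 6, 7]
```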
Example. State the canonical (i.e., complete) SOP expression for the following function and simplify it:
f(x1, x2, x3) = Σ m(2, 3, 4, 6, 7).
f = m2 + m3 + m4 + m6 + m7 ,
f = x1 ' x2 x3 '+ x1 ' x2 x3 + x1 x2 ' x3 ' + x1 x2 x3 ' + x1 x2 x3 ,
f = x1 ' x2 x3 ' + x1 ' x2 x3 + x1 x2 ' x3 ' + x1 x2 x3 ' + x1 x2 x3 + x1 x2 x3 ',
f = x1 ' x2 (x3 '+ x3 ) + x1 x3 '(x2 '+ x2 ) + x1 x2 (x3 '+ x3 ),
f = x1 ' x2 + x1 x2 + x1 x3 ' ,
f = x2 (x1 '+ x1 ) + x1 x3 ' ,
f = x2 + x1 x3 '.
Compare both logic diagrams and gate count.
Example. State the canonical (i.e., complete) SOP expression for the following function and simplify it:
f(x1, x2, x3, x4) = Σ m(3, 7, 9, 12, 13, 14, 15).
f = m3 + m7 + m9 + m12 + m13 + m14 + m15 ,
f = x1 ' x2 ' x3 x4 + x1 ' x2 x3 x4 + x1 x2 ' x3 ' x4 + x1 x2 x3 ' x4 ' + x1 x2 x3 ' x4 + x1 x2 x3 x4 ' + x1 x2 x3 x4 ,
f = x1 ' x2 ' x3 x4 + x1 ' x2 x3 x4 + x1 x2 ' x3 ' x4 + x1 x2 x3 ' x4 ' + x1 x2 x3 ' x4 + x1 x2 x3 x4 ' + x1 x2 x3 x4 + x1 x2 x3 ' x4 ,
f = x1 ' x3 x4 (x2 '+ x2 ) + x1 x3 ' x4 (x2 '+ x2 ) + x1 x2 x3 '(x4 + x4 ') + x1 x2 x3 (x4 '+ x4 ),
f = x1 ' x3 x4 + x1 x3 ' x4 + x1 x2 x3 ' + x1 x2 x3 ,
f = x1 ' x3 x4 + x1 x3 ' x4 + x1 x2 (x3 '+ x3 ),
f = x1 ' x3 x4 + x1 x3 ' x4 + x1 x2 .
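A quick sanity check on a simplification like this is to compare the canonical and simplified forms over all 16 input rows. A Python sketch (our own code, with assumed function names):

```python
from itertools import product

NOT = lambda a: 1 - a

# Canonical SOP: sum of minterms 3, 7, 9, 12, 13, 14, 15.
def canonical(x1, x2, x3, x4):
    row = 8*x1 + 4*x2 + 2*x3 + x4          # row index = binary value of inputs
    return int(row in {3, 7, 9, 12, 13, 14, 15})

# Simplified form derived above: x1'·x3·x4 + x1·x3'·x4 + x1·x2.
def simplified(x1, x2, x3, x4):
    return NOT(x1) & x3 & x4 | x1 & NOT(x3) & x4 | x1 & x2

assert all(canonical(*xs) == simplified(*xs) for xs in product([0, 1], repeat=4))
print("simplification verified")
```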
Compare both logic diagrams and gate count.
It is often advantageous to implement digital circuits with NAND and NOR gates, rather than AND and OR gates,
as fewer transistors are needed in NAND and NOR gates. (NOT gates are easily implemented and are denoted in
digital circuits with a small bubble.) With this practical motivation, we can use our knowledge of digital design
with AND and OR and make use of deMorgan's theorem to implement digital designs with NAND and NOR.
Consider the NAND gate. A NAND gate is a NOT-AND gate with two (typically) input variables, x1 and x2. The
output is the complement of the ANDed input variables. The NAND gate operation is denoted by (x1·x2)'. The
NAND gate uses a bubble on the output to signify the complement and an AND gate to signify the AND operation.
Let's apply deMorgan's theorem to this NAND gate. We know that deMorgan's theorem will take the complement
of the inputs, change the AND gate to an OR gate, and yield the complement of our output function. Thus, we can
create an analogy between a NAND gate and an OR gate with complemented inputs.
NAND gate vs. analogous OR gate (with complemented inputs)

x1 x2 | (x1 · x2)' | x1' + x2'
 0  0 |     1      |     1
 0  1 |     1      |     1
 1  0 |     1      |     1
 1  1 |     0      |     0
Consider the NOR gate. A NOR gate is an OR-NOT gate with two (typically) input variables, x1 and x2. The output
is the complement of the ORed input variables. The NOR gate operation is denoted by (x1+x2)'. The NOR gate uses
a bubble on the output to signify the complement and an OR gate to signify the OR operation.
Let's now apply deMorgan's theorem to this NOR gate. We know that deMorgan's theorem will take the
complement of the inputs, change the OR gate to an AND gate, and yield the complement of our output function.
Thus, we can create an analogy between a NOR gate and an AND gate with complemented inputs.
NOR gate vs. analogous AND gate (with complemented inputs)

x1 x2 | (x1 + x2)' | x1' · x2'
 0  0 |     1      |     1
 0  1 |     0      |     0
 1  0 |     0      |     0
 1  1 |     0      |     0
Example. Sketch a digital logic circuit with AND and OR gates and a digital logic circuit with only NAND gates for
the SOP function f = x1 · x2 + x3 · x4 · x5.
AND/OR gates:
AND/OR gates (complemented):
NAND gates:
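Applying deMorgan's theorem, the OR in an SOP expression becomes a NAND of NANDs: f = x1·x2 + x3·x4·x5 = ((x1·x2)' · (x3·x4·x5)')' = NAND(NAND(x1,x2), NAND(x3,x4,x5)). A Python sketch verifying this NAND-only form over all 32 input combinations (our own code):

```python
from itertools import product

# NAND of any number of 0/1 inputs: 0 only when all inputs are 1.
NAND = lambda *xs: 1 - min(xs)

# NAND-only implementation of f via deMorgan: a + b = ((a)'·(b)')'.
def f_nand(x1, x2, x3, x4, x5):
    return NAND(NAND(x1, x2), NAND(x3, x4, x5))

# Reference AND/OR implementation of f = x1·x2 + x3·x4·x5.
def f_andor(x1, x2, x3, x4, x5):
    return (x1 & x2) | (x3 & x4 & x5)

assert all(f_nand(*xs) == f_andor(*xs) for xs in product([0, 1], repeat=5))
print("NAND-only implementation matches")
```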
2.6 Standard logic chips.
The logic gates in the prior sections can be implemented in digital circuits with 7400-series standard logic chips.
The implementation is straightforward, with connections made between appropriate pins of the logic chips.
Four standard logic chips are shown here.
The 7404 chip includes six inverters (hex inverter).
The 7407 chip includes six buffers (hex buffer).
The 7408 chip includes four 2-input AND gates (quad AND).
The 7432 chip includes four 2-input OR gates (quad OR).
Example. Wire together the 7400-series standard logic chips to implement the function f = (x1 + x2')·x3.
PAGE 27
2.7 Logic network implementation technology.
Transistors. A transistor has three terminals:
I. source, which can be thought of as the input terminal,
II. drain, which can be thought of as the output terminal, and
III. gate, the terminal to which an electrical signal is applied to control current flow from the source to the drain.
Consider NMOS transistor realizations of the NOT gate. We state the logic relating the input variable, x, and
output function, f, and sketch the NMOS and PMOS transistor circuits that form this logic relationship.
x | f = x'
0 |   1
1 |   0
NMOS realization (pull-down):
Consider the NMOS transistor realizations of the AND gate and NAND gate. We relate the input variables, x1 and
x2, to the output function, f, for an AND gate and a NAND gate, and then sketch the NMOS transistor circuits that
form these logic relations.
x1 x2 | f = x1 · x2 | f = (x1 · x2)'
 0  0 |      0      |       1
 0  1 |      0      |       1
 1  0 |      0      |       1
 1  1 |      1      |       0

NMOS realization (pull-down):

PAGE 28
Consider the NMOS transistor realizations of the OR gate and NOR gate. We first state the logic relationships
between input variables, x1 and x2, and output function, f, for an OR gate and a NOR gate and then sketch the
NMOS transistor circuits that can form these logic relationships.
x1 x2 | f = x1 + x2 | f = (x1 + x2)'
 0  0 |      0      |       1
 0  1 |      1      |       0
 1  0 |      1      |       0
 1  1 |      1      |       0

NMOS realization (pull-down):

A note on programmable logic.

PAGE 29
MODULE 3.0
3.0 Logic function minimization
3.1 Karnaugh Map (K-Map)
3.2 2-, 3-, and 4-variable K-Map
3.3 Don’t Care Conditions
PAGE 30
3.0 Logic function minimization.
In this module, we will look at optimization techniques for creating minimized designs of logic functions. Before we begin, it is best to define some terminology needed for this minimization.
Literal. A product term may contain several variables, and each appearance of a variable, in either complemented or uncomplemented form, is defined as a literal. The product term x1·x2'·x3 contains three literals, while the product term x1·x2'·x3·x4' contains four literals.
Objective: In our previous analysis we implemented Boolean functions in the form of a sum of minterms (or a minimized SOP) using 2-level AND-OR gates.
• 3-input and 4-input gates are implemented in practice using two, three, or more 2-input gates, so reducing the number of inputs reduces the number of gates.
• Given two or more equivalent Boolean functions for the same design, our goal is to implement the simpler one, i.e., the one with fewer product terms (fewer AND gates and, eventually, fewer inputs to the OR gate).
Therefore, in our minimization analysis, we aim not only to eliminate minterms/product terms, but also to reduce the number of literals in the remaining product terms.
But that is not always easy through algebraic simplifications. In the next sections, we will see that the minimized
SOP logic function can be found easily through the use of a Karnaugh map.
3.1 Multivariable SOP Karnaugh maps.
3.1.1 Two-variable SOP Karnaugh maps.
Consider the truth table and sum of minterms for a function having two input variables, f(x, y) = Σi mi. We can organize the sum of minterms within a grid, and the result is the two-variable Karnaugh map.
The two-variable map is shown below;
There are four minterms for two variables; hence, the map consists of four squares, one for each minterm. The
map is redrawn in to show the relationship between the squares and the two variables x and y.
The 0 and 1 marked in each row and column designate the values of variables. Variable x appears primed in row 0
and unprimed in row 1. Similarly, y appears primed in column 0 and unprimed in column 1. We mark the squares
whose minterms belong to a given function with 1.
PAGE 31
Step 1: Mapping of a given sum of minterms into K-Map
As an example, let us say we want to minimize the function
f(x, y) = m1 + m2 + m3 = x'y + xy' + xy
The marking of xy is shown in Fig. (a) below. Since xy is equal to m3, a 1 is placed inside the square that belongs to m3. Fig. (b) shows the marking of all three terms, with three squares marked with 1's.
Step 2: Combine the maximum number of 1's following these rules:
1. Only adjacent squares can be combined
2. All 1's must be covered
3. Covering rectangles must be of size 1, 2, 4, 8, …, 2^n
4. Check whether all coverings are really needed
We combine 1's in adjacent cells into circled groups. We look for adjacent (horizontal or vertical, but not diagonal) 1's in the grid and circle the 1's in groups of one, two, or four. All 1's must be circled, and we always draw each circled group of 1's with the largest possible size (containing one, two, or four 1's).
Step 3: Write minimized product terms for each circled group. We write a product term for each circled group:
· a circled group with one 1 is specified by a product term containing two input variables;
· a circled group with two 1's is specified by a product term containing one input variable;
· a circled group with four 1's is specified by a product term that is simply 1 (as the output function is always 1).
The horizontal circle lies on the row where x = 1 → x is part of the product term. The other variable, y, changes from 0 to 1 over the circle, so y is not part of the product term. As a result, the grouping of m2 and m3 in one circle simplifies to a product term with the single literal x.
Similarly, the vertical circle that covers m1 and m3 simplifies to a product term with the single literal y.
The simplified SOP is the ORing of both product terms: f = x + y.
Compare the number of gates required to implement both the original sum of minterms and the simplified SOP.
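The K-map result can also be confirmed exhaustively in software. This short Python check (with an illustrative helper name of our own) compares the sum of minterms against the two-literal result x + y:

```python
# f = m1 + m2 + m3 = x'y + xy' + xy should reduce to x + y.
def f_minterms(x, y):
    # sum-of-minterms form, using 1 - v for the complement v'
    return ((1 - x) & y) | (x & (1 - y)) | (x & y)

for x in (0, 1):
    for y in (0, 1):
        assert f_minterms(x, y) == (x | y)   # matches the K-map result
```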
PAGE 32
3.1.2 Three-variable SOP Karnaugh maps.
Consider the truth table and SOP minterms for a function having three input variables, f(x, y, z) = Σi mi. There are eight minterms for three binary variables; therefore, the map consists of eight squares. Note that the minterms are not arranged in binary sequence. The characteristic of this sequence is that only one bit changes in value from one adjacent column to the next. The map drawn in part (b) is marked with numbers in each row and each column to show the relationship between the squares and the three variables.
Step 1: Mapping of a given sum of minterms into K-Map
as before
Step 2: Combine maximum number of 1’s following rules:
As before but we combine 1's in adjacent cells with circled groups. We look for adjacent (horizontal or vertical,
but not diagonal) 1's in the grid and circle groups of the 1's in groups of one, two, four, or eight. All 1's must be
circled, and we always draw a circled group of 1's with the largest possible size (containing one, two, four, or
eight 1's). Circled groupings can wrap off the top, bottom, sides and corners of the grid, onto opposing sides.
Step 3: Write minimized product terms for each circled group.
Third, we write minimized product terms for each circled group. We write a product term for each circled group:
· a circled group with one 1 is specified by a product term containing three input variables;
· a circled group with two 1's is specified by a product term containing two input variables;
· a circled group with four 1's is specified by a product term containing one input variable;
· a circled group with eight 1's is specified by a product term that is simply 1 (as the output function is always 1).
PAGE 33
Example
Simplify the Boolean function f(x, y, z) = Σ(2, 3, 4, 5)
Step 1: Mark 1’s on each square representing a minterm of the function.
Step 2: Circle groups of 8, 4, or 2 ones (only group of 2 ones exist)
Step 3: Write simplified product terms:
For the first circle, on the first row: x is complemented (= 0) → x' is in the product term; y = 1 → y is in the product term; z changes → removed. Therefore, the first product term is x'y.
For the second circle, on the second row: x = 1 → x is in the product term; y is complemented → y' is in the product term; z changes → removed. Therefore, the second product term is xy'.
Therefore, the simplified SOP is x'y + xy'
Is it easy to come to the same result using Boolean algebra? How? Compare the number of gates required to implement both the original sum of minterms and the simplified SOP.
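Whether or not the algebraic route is easy, the K-map result can be verified mechanically. The Python sketch below (helper name is ours) checks x'y + xy' against all eight minterms:

```python
# f(x, y, z) = sum of minterms (2, 3, 4, 5) should equal x'y + xy'.
def f_sop(x, y, z):
    return ((1 - x) & y) | (x & (1 - y))   # z was eliminated by grouping

for m in range(8):
    x, y, z = (m >> 2) & 1, (m >> 1) & 1, m & 1
    assert f_sop(x, y, z) == (1 if m in (2, 3, 4, 5) else 0)
```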
Example
Simplify the Boolean function f(x, y, z) = Σ(3, 4, 6, 7)
Note that m4 and m6 are considered adjacent squares. Why? Because one of the variables stays the same (z) and the other variable changes (y). On the second row: x = 1 → x is in the product term; y changes → removed; z is complemented → z' is in the product term. Therefore, the first product term is xz'.
Is it easy to come to the same result using Boolean algebra? How?
Compare the number of gates required to implement both the original sum of minterms and the simplified SOP.
PAGE 34
Example
Simplify the Boolean function f(x, y, z) = Σ(0, 2, 4, 5, 6)
Note that m0, m2, m4, and m6 are considered one group of 4 adjacent squares. Within this group, x changes → removed; y changes → removed; z is 0 (complemented) → the product term has only one literal, z'. The second product term is xy'.
Therefore, the simplified SOP is z' + xy'
Notice that m4 has been circled in more than one group. You can do that as long as there are no redundant groups. Notice also that m5 has no adjacent squares other than m4.
To better understand this concept, let us say we add m7 to the previous function. What will the group containing m5 look like now? Now m5 and m7 can form a group by themselves. If you grouped m4 and m5 in one group as in the previous example, you would form a redundant group (a group whose 1's are already covered by the other groups).
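The result z' + xy' for the original function can again be confirmed exhaustively (Python sketch, illustrative helper name):

```python
# f(x, y, z) = sum of minterms (0, 2, 4, 5, 6) should equal z' + xy'.
def f_sop(x, y, z):
    return (1 - z) | (x & (1 - y))   # z' covers m0,m2,m4,m6; xy' covers m4,m5

for m in range(8):
    x, y, z = (m >> 2) & 1, (m >> 1) & 1, m & 1
    assert f_sop(x, y, z) == (1 if m in (0, 2, 4, 5, 6) else 0)
```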
What if we are given a function that is not in the form of sum of minterms?
Example
Simplify the Boolean function f(A, B, C) = A'C + A'B + AB'C + BC
The first step is to rewrite the function as a sum of minterms so the terms can be mapped to squares on the K-map. This can be achieved by multiplying each product term by (missing literal + its complement), which is equal to 1.
f(A, B, C) = A'(B + B')C + A'B(C + C') + AB'C + (A + A')BC
= A'BC + A'B'C + A'BC + A'BC' + AB'C + ABC + A'BC
= m1 + m2 + m3 + m5 + m7
Remember that X + X = X; that is why we keep only one copy of m3. Remember also to keep the order of the literals in each minterm as A, followed by B, followed by C.
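The expansion can be double-checked in software: the Python sketch below (the function name is ours) evaluates the original expression over all eight input combinations and collects the minterms where it is 1.

```python
# Expanding f = A'C + A'B + AB'C + BC should give exactly {m1, m2, m3, m5, m7}.
def f(A, B, C):
    return ((1 - A) & C) | ((1 - A) & B) | (A & (1 - B) & C) | (B & C)

minterms = {m for m in range(8)
            if f((m >> 2) & 1, (m >> 1) & 1, m & 1)}
assert minterms == {1, 2, 3, 5, 7}
```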
PAGE 35
3.1.3 Four-variable SOP Karnaugh maps.
The map for Boolean functions of four binary variables, f(w, x, y, z), is shown in the figure below. Fig. (a) lists the 16 minterms and the squares assigned to each. In Fig. (b), the map is redrawn to show the relationship between the squares and the four variables.
Notice the order of the minterms on the third and fourth rows/columns.
Step 1: Mapping of a given sum of minterms into K-Map
as before
Step 2: Combine maximum number of 1’s following rules:
As before but we combine 1's in adjacent cells with circled groups. We look for adjacent (horizontal or vertical,
but not diagonal) 1's in the grid and circle groups of 1's in groups of one, two, four, eight, or sixteen. All 1's must
be circled, and we always draw the circled group of 1's with the largest possible size (with one, two, four, eight, or
sixteen 1's). Circled groupings can wrap off the top, bottom, sides and corners of the grid, onto opposing sides.
Step 3: Write minimized product terms for each circled group.
We write a product term for each circled group:
· a circled group with one 1 is specified by a product term containing four input variables;
· a circled group with two 1's is specified by a product term containing three input variables;
· a circled group with four 1's is specified by a product term containing two input variables;
· a circled group with eight 1's is specified by a product term containing one input variable;
· a circled group with sixteen 1's is specified by a product term that is simply 1 (as the output function is always 1).
PAGE 36
Example
Simplify the Boolean function f(w, x, y, z) = Σ(0, 1, 2, 4, 5, 6, 8, 9, 12, 13, 14)
For the circle of eight ones: w and x change → both are removed; y = 0 → y' is in the product term; z changes → removed. Therefore, one product term is y'. The remaining ones are covered by two groups of four, giving the simplified SOP y' + w'z' + xz'.
Example
Simplify the Boolean function f(A, B, C, D) = Σ(0, 1, 2, 6, 8, 9, 10)
Notice the 4 minterms on the 4 corners can be circled in one group
The simplified SOP = B’C’ + B’D’ + A’CD’
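The four-variable result can be verified the same way as the smaller maps (Python sketch, helper name is ours):

```python
# f(A, B, C, D) = sum of minterms (0, 1, 2, 6, 8, 9, 10)
# should equal B'C' + B'D' + A'CD'.
def f_sop(A, B, C, D):
    return (((1 - B) & (1 - C))          # B'C' covers m0, m1, m8, m9
            | ((1 - B) & (1 - D))        # B'D' covers the four corners
            | ((1 - A) & C & (1 - D)))   # A'CD' covers m2, m6

for m in range(16):
    A, B, C, D = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
    assert f_sop(A, B, C, D) == (1 if m in (0, 1, 2, 6, 8, 9, 10) else 0)
```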
PAGE 37
Don't-care condition.
In some cases, the output of the function (1 or 0) is not specified for certain input combinations, either because:
- the input combination never occurs (for example, the unused codes in BCD), or
- we don't care about the output for that particular combination.
Such a situation leads to a don't-care condition and the unspecified minterms are called don’t cares. While
minimizing a k-map with don’t care minterms, it would be to our advantage to leave the output function value
unspecified and use this unspecified value as a 0 or 1, depending upon whether a 0 or 1 yields a simplified design.
Don't-care values are specified as a set in the shorthand notation of minterms:
f(x1, x2, ..., xn) = Σi mi + Σj dj
Designing with don't-care conditions is straightforward. A don't-care condition is labeled within a Karnaugh map
with a "d". We go about our Karnaugh map creation and optimization using the don't-care d values to our best
advantage. Consider this process in the following example.
Example: Simplify the function with the don’t care conditions;
f(A, B, C) = Σm(1, 3, 7) + Σd(0, 5)
Notice that if we take d5 as 1, then we can circle the four minterms m1, m3, d5, and m7 as one group with product term C. For d0, being 1 or 0 will not simplify the product terms any further, so we ignore it (treat it as 0).
The simplified SOP = C
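We can confirm that the single literal C is a valid cover: it produces a 1 on every required minterm and disagrees with the specification only on don't-care positions (Python sketch):

```python
# f(A, B, C) = sum m(1, 3, 7) + sum d(0, 5); the claimed result is f = C.
required, dont_care = {1, 3, 7}, {0, 5}

for m in range(8):
    C = m & 1                     # the literal C is just the last input bit
    if m in required:
        assert C == 1             # every specified 1 is covered
    elif m not in dont_care:
        assert C == 0             # no stray 1s where the function must be 0
# (d5 is effectively used as a 1, d0 as a 0)
```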
Example: Simplify the function with the don’t care conditions;
f(A, B, C, D) = Σm(1, 3, 7, 11, 15) + Σd(0, 2, 5)
Two possible solutions! Both are acceptable, as all 1's are grouped.
PAGE 38
MODULE 4.0
4.1 Introduction
4.2 Combinational Circuits: Analysis & Design Procedures
4.3 Half- and Full-Adders
4.4 Design Using Standard Building Blocks (Components):
4.4.1 Binary Adders (Ripple Carry Adders)
4.4.2 Subtractors
4.4.3 Decoders
4.4.4 Encoders
4.4.5 Priority Encoders
4.4.6 Multiplexers
PAGE 39
4.1 Introduction
Logic circuits for digital systems may be combinational or sequential. A combinational circuit consists of logic
gates whose outputs at any time are determined from only the present combination of inputs. A combinational
circuit performs an operation that can be specified logically either by a set of Boolean functions or by a truth
table. In contrast, sequential circuits employ storage elements in addition to logic gates. Their outputs are a
function of the inputs and the state of the storage elements. Because the state of the storage elements is a
function of previous inputs, the outputs of a sequential circuit depend not only on present values of inputs, but
also on past inputs, and the circuit behavior must be specified by a time sequence of inputs and internal states.
Sequential circuits are the building blocks of digital systems and are discussed in Modules 5 and 6.
4.2 Combinational Circuits
A combinational circuit consists of an interconnection of logic gates. Combinational logic gates react to the values
of the signals at their inputs and produce the value of the output signal, transforming binary information from the
given input data to a required output data. A block diagram of a combinational circuit is shown below. The n
input binary variables come from an external source; the m output variables are produced by the internal
combinational logic circuit and go to an external destination. Each input and output variable exists as a binary
signal that represents logic 1 and logic 0. The relation between the inputs and outputs is given as an explicit set of Boolean functions or as a word problem that must be mapped to a truth table.
(Block diagram: n inputs → combinational circuit → m outputs.)
4.2.1 Analysis Procedure
The analysis of a combinational circuit requires that we determine the function that the circuit implements. This
task starts with a given logic diagram and it is required to derive the Boolean functions.
Example: What are the functions of the logic diagram shown below?
(Logic diagram: inputs A, B, and C drive two gate networks producing outputs F1 and F2.)
F1 = AB'C' + A'BC' + A'B'C + ABC
F2 = AB + AC + BC
PAGE 40
4.2.2 Design Procedure
The design/implementation of a combinational circuit requires that we draw the logic diagram that implements a
given design specifications/word problem. This task starts with defining and labeling inputs/output, then deriving
the truth table, simplifying the Boolean functions using K-map, and drawing the resultant logic diagram.
1. Specification
• Write a specification for the circuit if one is not already available
• Specify/Label input and output
2. Formulation
• Derive a truth table or initial Boolean equations that define the required relationships between the
inputs and outputs, if not in the specification
• Apply hierarchical design if appropriate
3. Optimization
• Apply multiple-level optimization (K-Map)
• Draw a logic diagram for the resulting circuit using 2-level AND-OR schematic.
4. Verification (Using CAD Tools)
• Verify the correctness of the final design using simulation
Practical Considerations:
• Number of gates (size, area, power, and cost)
• Maximum allowed delay
• Maximum consumed power
• Working conditions (temp., vibration, water resistance, etc.)
PAGE 41
Example: Design a circuit that has a 3-bit input and a single output (F) specified as follows:
• F = 0, when the input is less than (5)10
• F = 1, otherwise
Step 1 (Specification):
 Label the inputs (3 bits) as X, Y, Z
 X is the most significant bit, Z is the least significant bit
 The output (1 bit) is F:
 F = 1 → (101)2, (110)2, (111)2
 F = 0 → all other inputs
Step 2 (Formulation): Obtain the truth table
Step 3 (Optimization):
K-map (rows: X; columns: YZ in the order 00, 01, 11, 10):

YZ:   00  01  11  10
X=0:   0   0   0   0
X=1:   0   1   1   1

F = XY'Z + XYZ' + XYZ = XZ + XY
Logic Diagram: two AND gates (X·Z and X·Y) feeding an OR gate that produces F.
PAGE 42
Example: Design a BCD to Excess-3 Code Converter
• Code converters convert from one code to another (BCD to Excess-3 in this example)
• The inputs are defined by the code that is to be converted (BCD in this example)
• The outputs are defined by the converted code (Excess-3 in this example)
• Excess-3 code is a decimal digit plus three, converted into binary;
  i.e. 0 → 0011, 1 → 0100, and so on
Step 1 (Specification):
• 4-bit BCD input (A, B, C, D)
• 4-bit Excess-3 output (W, X, Y, Z)
Step 2 (Formulation): Obtain the truth table
Step 3 (Optimization)
PAGE 43
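At the word level, the converter simply adds three to the BCD digit. The Python sketch below (the function name is ours; this is not the gate-level design) generates exactly the truth-table rows that the K-maps for W, X, Y, Z must reproduce:

```python
# Behavioral model of the BCD-to-Excess-3 converter: output = digit + 3.
def bcd_to_excess3(a, b, c, d):
    digit = 8 * a + 4 * b + 2 * c + d
    assert 0 <= digit <= 9, "not a valid BCD digit (a don't-care input)"
    e3 = digit + 3
    # unpack the 4-bit result as (W, X, Y, Z), MSB first
    return (e3 >> 3) & 1, (e3 >> 2) & 1, (e3 >> 1) & 1, e3 & 1

assert bcd_to_excess3(0, 0, 0, 0) == (0, 0, 1, 1)   # 0 -> 0011
assert bcd_to_excess3(0, 0, 0, 1) == (0, 1, 0, 0)   # 1 -> 0100
assert bcd_to_excess3(1, 0, 0, 1) == (1, 1, 0, 0)   # 9 -> 1100
```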
Example: Design a BCD-to-Seven-Segment Decoder
 A BCD-to-Seven-Segment decoder is a combinational circuit that:
 Accepts a decimal digit in BCD (input)
 Generates appropriate outputs for the segments to display the input decimal digit (output)
Step 1 (Specification):
 4 inputs (w, x, y, z)
 7 outputs (a, b, c, d, e, f, g)
Step 2 (Formulation):
(Block diagram: inputs w, x, y, z enter the decoder, which drives the seven segment outputs a through g.)
Step 3 (Optimization)
(Seven-segment layout: segment a on top; f and b on the upper sides; g in the middle; e and c on the lower sides; d on the bottom.)
a = w + y + xz + x’z’
w x y z | a b c d e f g
0 0 0 0 | 1 1 1 1 1 1 0
0 0 0 1 | 0 1 1 0 0 0 0
0 0 1 0 | 1 1 0 1 1 0 1
0 0 1 1 | 1 1 1 1 0 0 1
0 1 0 0 | 0 1 1 0 0 1 1
0 1 0 1 | 1 0 1 1 0 1 1
0 1 1 0 | 1 0 1 1 1 1 1
0 1 1 1 | 1 1 1 0 0 0 0
1 0 0 0 | 1 1 1 1 1 1 1
1 0 0 1 | 1 1 1 1 0 1 1
1 0 1 0 | x x x x x x x
1 0 1 1 | x x x x x x x
1 1 0 0 | x x x x x x x
1 1 0 1 | x x x x x x x
1 1 1 0 | x x x x x x x
1 1 1 1 | x x x x x x x
We show the steps required to design/implement output a only. Another 6 K-maps are required; one for each of
the 6 outputs (b through g). They are left as an exercise.
PAGE 44
4.3 Half- and Full-Adders.
Digital computers perform a variety of information-processing tasks. Among the functions encountered are the
various arithmetic operations. The most basic arithmetic operation is the addition of two binary digits. This
simple addition consists of four possible elementary operations: 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, and 1 + 1 = 10. The
first three operations produce a sum of one digit, but when both augend and addend bits are equal to 1, the
binary sum consists of two digits. The higher significant bit of this result is called a carry. When the augend and
addend numbers contain more significant digits, the carry obtained from the addition of two bits is added to the
next higher order pair of significant bits. A combinational circuit that performs the addition of two bits is called a
half adder. One that performs the addition of three bits (two significant bits and a previous carry) is a full adder.
4.3.1 Half Adder (HA)
The HA needs two binary inputs and two binary outputs. The input variables designate the augend and addend
bits; the output variables produce the sum and carry. We assign symbols x and y to the two inputs and S (for sum)
and C (for carry) to the outputs. The truth table for the half adder is listed below.
The C output is 1 only when both inputs are 1. The S output represents the least significant bit of the sum. The
simplified Boolean functions for the two outputs can be obtained directly from the truth table. The simplified SOP
as well as the logic diagram are shown;
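The half adder's behavior can be stated compactly as S = x XOR y and C = x AND y. A Python sketch (helper name is ours) checks this against ordinary integer addition:

```python
# Half adder: S = x XOR y (LSB of the sum), C = x AND y (the carry).
def half_adder(x, y):
    return x ^ y, x & y          # (S, C)

for x in (0, 1):
    for y in (0, 1):
        s, c = half_adder(x, y)
        assert 2 * c + s == x + y   # the two-bit result CS equals x + y
```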
4.3.2 Full Adder (FA)
Addition of n-bit binary numbers requires the use of a full adder, and the process of addition proceeds on a bit-by-bit basis, right to left, beginning with the least significant bit. After the least significant bit, addition at each position adds not only the respective bits of the words, but must also consider a possible carry bit from addition at the previous position.
A full adder is a combinational circuit that forms the arithmetic sum of three bits. It consists of three inputs and
two outputs. Two of the input variables, denoted by x and y, represent the two significant bits to be added. The
third input, z, represents the carry from the previous lower significant position. Two outputs are necessary
because the arithmetic sum of three binary bits ranges in value from 0 to 3, and binary representation of 2 or 3
needs two bits. The two outputs are designated by the symbols S for sum and C for carry. The binary variable S
gives the value of the least significant bit of the sum. The binary variable C gives the output carry formed by
adding the input carry and the bits of the words.
The truth table of the full adder is listed below.
PAGE 45
The simplified Boolean functions of C and S as well as the corresponding logic diagrams are shown below;
The FA can also be implemented by cascading two HAs as shown below;
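The cascaded-HA construction can be checked behaviorally: the first HA adds x and y, the second adds that partial sum to z, and the carries are ORed. A Python sketch (function names are ours):

```python
# Full adder from two half adders plus an OR gate on the carries:
# S = (x XOR y) XOR z, C = (x XOR y).z + x.y
def half_adder(x, y):
    return x ^ y, x & y                  # (sum, carry)

def full_adder(x, y, z):
    s1, c1 = half_adder(x, y)            # first HA: x + y
    s, c2 = half_adder(s1, z)            # second HA: partial sum + carry-in
    return s, c1 | c2                    # (S, C); at most one carry is 1

for m in range(8):
    x, y, z = (m >> 2) & 1, (m >> 1) & 1, m & 1
    s, c = full_adder(x, y, z)
    assert 2 * c + s == x + y + z        # arithmetic sum of three bits
```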
PAGE 46
4.4 Design Using Standard Building Blocks (Components)
4.4.1 Binary Adder (Ripple-Carry Adder)
A binary adder is a digital circuit that produces the arithmetic sum of two binary numbers. It can be constructed
with full adders connected in cascade, with the output carry from each full adder connected to the input carry of the next full adder in the chain. Addition of n-bit numbers requires a chain of n full adders, with the input carry to the least significant position fixed at 0. The figure below shows the interconnection of four full-adder (FA) circuits to provide a four-bit binary ripple-carry adder.
Why don’t we simply use the design procedure we have learned for designing combinational circuits?
The four-bit adder is a typical example of a standard component (or modular design). It can be used in many
applications involving arithmetic operations. Observe that the design of this circuit by the classical method would require a truth table with 2^9 = 512 entries, since there are nine inputs to the circuit. By using an iterative method of cascading a standard function, it is possible to obtain a simple and straightforward implementation.
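The cascade idea can be sketched behaviorally in Python (function names are illustrative): four chained full adders, with C0 fixed at 0, add any pair of 4-bit numbers.

```python
# 4-bit ripple-carry adder: each stage's carry-out feeds the next carry-in.
def full_adder(x, y, cin):
    s = x ^ y ^ cin
    cout = (x & y) | (x & cin) | (y & cin)
    return s, cout

def ripple_adder4(a_bits, b_bits, c0=0):
    # a_bits, b_bits: 4-bit vectors, least significant bit first
    carry, out = c0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry                    # (sum bits LSB-first, carry-out C4)

def to_bits(n):
    return [(n >> i) & 1 for i in range(4)]

for a in range(16):
    for b in range(16):
        s_bits, c4 = ripple_adder4(to_bits(a), to_bits(b))
        total = 16 * c4 + sum(bit << i for i, bit in enumerate(s_bits))
        assert total == a + b            # adder matches integer addition
```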
Carry Propagation
The addition of two binary numbers in parallel implies that all the bits of the augend and addend are available for
computation at the same time. As in any combinational circuit, the signal must propagate through the gates
before the correct output sum is available in the output terminals. The total propagation time is equal to the
propagation delay of a typical gate, times the number of gate levels in the circuit. The longest propagation delay
time in an adder is the time it takes the carry to propagate through the full adders (C4 in the 4-bit adder shown
above). Since each bit of the sum output depends on the value of the input carry, the value of Si at any given stage in the adder will be at its steady-state final value only after the input carry to that stage has been propagated (S3 waits the longest, for C3 to arrive). From the logic diagram of the FA shown before, the signal from the input carry Ci to the output carry Ci+1 propagates through an AND gate and an OR gate, which constitute two gate levels. If there are four FAs in the adder, the output carry C4 would see 2 × 4 = 8 gate levels from C0 to C4. For an n-bit adder, there are 2n gate levels for the carry to propagate from input to output.
PAGE 47
Even though designing binary adders from FAs is very simple and regular, as the size of the adder increases this carry propagation delay becomes a design issue, especially if high performance is required. However, there exist other adder architectures that reduce the delay at the price of increased architectural complexity. Examples are the Carry-Lookahead Adder and the Carry-Select Adder, which will be discussed in detail in the ENGR 468 course.
4.4.2 Binary Subtractor
The subtraction of unsigned binary numbers (A – B) can be done by taking the 2's complement of B and adding it to A. The 2's complement can be obtained by taking the 1's complement and adding 1 at the least significant position. The 1's complement can be implemented with inverters, and the 1 can be added to the sum through the input carry.
The addition and subtraction operations can be combined into one circuit with one common binary adder by
including an exclusive-OR gate with each full adder. A four-bit adder–subtractor circuit is shown below.
Mode input M: 0 = Add, 1 = Subtract
The mode input M controls the operation. When M = 0, the circuit is an adder, and when M = 1, the circuit becomes a subtractor. Each exclusive-OR gate receives input M and one of the inputs of B. When M = 0, we have B ⊕ 0 = B: the full adders receive the value of B, the input carry is 0, and the circuit performs A plus B. When M = 1, we have B ⊕ 1 = B' and C0 = 1. The B inputs are all complemented and a 1 is added through the input carry. The circuit performs the operation A plus the 2's complement of B.
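A behavioral Python sketch of this circuit (function names are ours): each B bit is XORed with M, and M also feeds C0.

```python
# 4-bit adder-subtractor: M = 0 gives A + B; M = 1 gives A + 2's comp of B.
def full_adder(x, y, cin):
    s = x ^ y ^ cin
    return s, (x & y) | (x & cin) | (y & cin)

def add_sub4(a, b, m):
    carry, result = m, 0                 # C0 = M
    for i in range(4):
        b_bit = ((b >> i) & 1) ^ m       # XOR gate on each B input
        s, carry = full_adder((a >> i) & 1, b_bit, carry)
        result |= s << i
    return result                        # 4-bit result; carry-out dropped

assert add_sub4(9, 3, 0) == 12           # A + B
assert add_sub4(9, 3, 1) == 6            # A - B
assert add_sub4(3, 9, 1) == (3 - 9) % 16 # negative difference wraps (2's comp)
```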
PAGE 48
4.4.3 Decoders
In this section, we will consider the operation of decoders. A decoder is a combinational circuit that converts
binary information from n input lines to a maximum of 2^n unique output lines. That is, at any time there will be exactly one output bit set to 1 (activated) while all others are equal to 0. (If it is an active-low-output decoder, one output is equal to 0 at any time while the others are equal to 1.) If the n-bit coded information has unused combinations, the decoder may have fewer than 2^n outputs.
In particular, we will look at the n-bit binary decoder. The n-bit binary decoder has n inputs and 2^n outputs. There is also an enable input, En, controlling the decoder operation, such that En = 0 yields no active output and En = 1 lets the input valuation activate the appropriate output.
Let's analyze the decoder operation for the case of a 2-bit (2-to-4) decoder. The inputs I1 and I0 are treated as a
two-bit integer, the value of which specifies the precise output, Y0, Y1, Y2 or Y3, to be made equal to 1. We can
represent the relationship between the outputs and inputs with a truth table, graphical symbol, and logic circuit:
I1 I0 | Y3 Y2 Y1 Y0
 0  0 |  0  0  0  1
 0  1 |  0  0  1  0
 1  0 |  0  1  0  0
 1  1 |  1  0  0  0
The 3-bit (3-to-8) decoder is shown below.
PAGE 49
Combinational Circuits Implementation (Logic Function Synthesis) Using Decoders
A decoder provides the 2^n minterms of its n input variables. Each asserted output of the decoder is associated with a unique pattern of input bits. Since any Boolean function can be expressed in sum-of-minterms form, a decoder that generates the minterms of the function, together with an external OR gate that forms their logical sum, provides a hardware implementation of the function. In this way, any combinational circuit with n inputs and m outputs can be implemented with an n-to-2^n decoder and m OR gates (one OR gate per output function).
In order to implement a combinational circuit by means of a decoder and OR gates, it is required that the
Boolean function for the circuit be expressed as a sum of minterms. A decoder is then chosen that generates all
the minterms of the input variables. The inputs to each OR gate are selected from the decoder outputs according
to the list of minterms of each function.
This procedure will be illustrated by an example that implements a full-adder circuit. From the truth table of the
full adder shown before, we obtain the functions for the combinational circuit in sum-of-minterms form:
S(x, y, z) = Σ(1, 2, 4, 7)
C(x, y, z) = Σ(3, 5, 6, 7)
Since there are three inputs and a total of eight minterms, we need a three-to-eight-line decoder. The
implementation is shown below. The decoder generates the eight minterms for x , y , and z . The OR gate for
output S forms the logical sum of minterms 1, 2, 4, and 7. The OR gate for output C forms the logical sum of
minterms 3, 5, 6, and 7.
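This decoder-plus-OR synthesis can be modeled directly (Python sketch; the function name is ours): the decoder produces a one-hot minterm vector, and each output simply ORs its listed minterm lines.

```python
# Full adder synthesized from a 3-to-8 decoder plus two OR gates:
# S ORs minterm lines 1, 2, 4, 7 and C ORs minterm lines 3, 5, 6, 7.
def decoder3to8(x, y, z):
    idx = 4 * x + 2 * y + z
    return [1 if i == idx else 0 for i in range(8)]   # one-hot minterm lines

for m in range(8):
    x, y, z = (m >> 2) & 1, (m >> 1) & 1, m & 1
    d = decoder3to8(x, y, z)
    S = d[1] | d[2] | d[4] | d[7]
    C = d[3] | d[5] | d[6] | d[7]
    assert 2 * C + S == x + y + z        # matches the full-adder arithmetic
```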
PAGE 50
4.4.4 Encoders
An encoder is a digital circuit that performs the inverse operation of a decoder. An encoder has 2^n (or fewer) input lines and n output lines. The output lines, as an aggregate, generate the binary code corresponding to the
input value. An example of an encoder is the 8-to-3 encoder whose truth table is given below. It has eight inputs
(one for each of the input digits) and three outputs that generate the corresponding binary number. It is assumed
that only one input has a value of 1 at any given time.
The encoder can be implemented with OR gates whose inputs are determined directly from the truth table.
Output Z is equal to 1 when the input digit is 1, 3, 5, or 7. Output Y is 1 for octal digits 2, 3, 6, or 7, and output X is
1 for digits 4, 5, 6, or 7. These conditions can be expressed by the following Boolean output functions:
Output functions:
X = D4 + D5 + D6 + D7
Y = D2 + D3 + D6 + D7
Z = D1 + D3 + D5 + D7
(Logic diagram: inputs D0 through D7 feed three OR gates that produce outputs X (Y2), Y (Y1), and Z (Y0).)
PAGE 51
4.4.5 Priority Encoders
The encoder described above assumes that only one input has a value of 1 at any given time. But what if two or more inputs happen to have the value 1 at the same time?
A priority encoder is an encoder circuit that includes the priority function. The operation of the priority encoder is
such that if two or more inputs are equal to 1 at the same time, the input having the highest priority will take
precedence. The truth table of a four-input priority encoder is given below.
In addition to the two outputs x and y, the circuit has a third output designated by V; this is a valid bit indicator
that is set to 1 when one or more inputs are equal to 1. If all inputs are 0, there is no valid input and V is equal to
0. The other two outputs are not inspected when V equals 0 and are specified as don’t-care conditions. Note that
whereas X ’s in output columns represent don’t-care conditions, the X ’s in the input columns are useful for
representing a truth table in condensed form. Instead of listing all 16 minterms of four variables, the truth table
uses an X to represent either 1 or 0. For example, X 100 represents the two minterms 0100 and 1100.
According to the table, the higher the subscript number, the higher the priority of the input. Input D3 has the highest
priority, so, regardless of the values of the other inputs, when this input is 1, the output for xy is 11 (binary 3). D2
has the next priority level. The output is 10 if D2 = 1, provided that D3 = 0, regardless of the values of the other
two lower priority inputs. The output for D1 is generated only if higher priority inputs are 0, and so on down the
priority levels.
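The priority behavior described above can be sketched behaviorally (the if-chain models the priority ordering, not the gate-level circuit):

```python
def priority_encoder(d0, d1, d2, d3):
    """4-input priority encoder; D3 has the highest priority.
    Returns (x, y, v), where v is the valid-bit indicator."""
    v = d0 | d1 | d2 | d3            # valid: at least one input is 1
    if d3:
        return 1, 1, v               # D3 wins regardless of the others
    if d2:
        return 1, 0, v
    if d1:
        return 0, 1, v
    return 0, 0, v                   # x, y are don't-cares when v == 0
```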
4.4.6 Multiplexers
A 2^m-to-1 multiplexer has 2^m data inputs and m select inputs. The select inputs route data from one of the
2^m inputs to the single output.
Simple data input selection with multiplexers
We can use our knowledge of SOP expressions and truth tables to characterize the data input selection process
of 2-to-1, 4-to-1, and 16-to-1 multiplexers.

2-to-1 multiplexer:
f = s0'·w0 + s0·w1

s0 | f
 0 | w0
 1 | w1

4-to-1 multiplexer:
f = s1'·s0'·w0 + s1'·s0·w1 + s1·s0'·w2 + s1·s0·w3

s1 s0 | f
 0  0 | w0
 0  1 | w1
 1  0 | w2
 1  1 | w3

16-to-1 multiplexer:
f = s3'·s2'·s1'·s0'·w0 + … + s3·s2·s1·s0·w15

s3 s2 s1 s0 | f
 0  0  0  0 | w0
 0  0  0  1 | w1
 0  0  1  0 | w2
 0  0  1  1 | w3
 0  1  0  0 | w4
 0  1  0  1 | w5
 0  1  1  0 | w6
 0  1  1  1 | w7
 1  0  0  0 | w8
 1  0  0  1 | w9
 1  0  1  0 | w10
 1  0  1  1 | w11
 1  1  0  0 | w12
 1  1  0  1 | w13
 1  1  1  0 | w14
 1  1  1  1 | w15

The data input selection process of a multiplexer can be realized through elementary digital logic circuits, as the
2-to-1 and 4-to-1 multiplexer circuit diagrams show (16-to-1 multiplexer not shown).
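The SOP characterization of data input selection can be sketched behaviorally, here for the 4-to-1 case (an illustrative model, not course-required code):

```python
def mux4(w, s1, s0):
    """4-to-1 multiplexer evaluated from its SOP expression:
    f = s1'·s0'·w0 + s1'·s0·w1 + s1·s0'·w2 + s1·s0·w3"""
    n = lambda b: b ^ 1                      # complement of a single bit
    return (n(s1) & n(s0) & w[0]) | (n(s1) & s0 & w[1]) \
         | (s1 & n(s0) & w[2]) | (s1 & s0 & w[3])
```

Sweeping the select lines reproduces the truth table: the output equals the selected data input for every select combination.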
Logic Function Synthesis with Multiplexers.
It was shown that a decoder can be used to implement Boolean functions by employing external OR gates. An
examination of the logic diagram of a multiplexer reveals that it is essentially a decoder that includes the OR gate
within the unit. The minterms of a function are generated in a multiplexer by the circuit associated with the
selection inputs. The individual minterms can be selected by the data inputs, thereby providing a method of
implementing a Boolean function of n variables with a multiplexer that has n selection inputs and 2^n data inputs,
one for each minterm. So the procedure is:
• For an n-input function, use a 2^n-to-1 multiplexer with n select lines.
• Connect the function's inputs to the select lines in order.
• Hard-wire the output values of your design (the output column of the truth table) to the data inputs.
Example: Implement F(x, y, z) = Σ(1, 2, 6, 7) using multiplexers
Step 1: Form the truth table.

A B C | F
0 0 0 | 0
0 0 1 | 1
0 1 0 | 1
0 1 1 | 0
1 0 0 | 0
1 0 1 | 0
1 1 0 | 1
1 1 1 | 1
Step 2: The function has 3 inputs, so use an 8-to-1 multiplexer.
Step 3: Connect the inputs A, B, C of the function to the select lines S2, S1, S0. Hard-wire the output values
of the output column (0, 1, 1, 0, 0, 0, 1, 1) to the data inputs D0 through D7 of the multiplexer.
[Figure: 8-to-1 multiplexer with D0–D7 tied to 0, 1, 1, 0, 0, 0, 1, 1; select lines S2 S1 S0 driven by A B C; output F.]
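The finished design can be checked with a small behavioral sketch (names such as `mux8` are illustrative, not from the text):

```python
def mux8(d, s2, s1, s0):
    """Behavioral 8-to-1 multiplexer: selects data input D[s2 s1 s0]."""
    return d[(s2 << 2) | (s1 << 1) | s0]

# Hard-wired output column of the truth table for F(x,y,z) = Σ(1, 2, 6, 7)
D = [0, 1, 1, 0, 0, 0, 1, 1]

def F(a, b, c):
    """F realized by connecting A, B, C to the select lines."""
    return mux8(D, a, b, c)
```

Evaluating F on all eight minterms reproduces the truth-table column, i.e., F is 1 exactly on minterms 1, 2, 6, and 7.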
MODULE 5.0
5.1 Introduction
5.2 Flip Flops
5.3 Analysis of Sequential Circuits
5.4 Verilog-HDL of Flip Flops
5.0 Flip-flops, registers, and counters.
In this module we will investigate digital logic circuits that can store data. Flip-flops can be used to store a single
bit, registers can be used to store multiple bits, and counters can be used to store multiple bits and
increment/decrement the values by one. These are the fundamental elements one needs to build a processor.
5.1 Why do we need to store or latch a Signal?
Up to now, our digital logic circuit designs have only used input variables to define the state of output functions.
We do not yet have a way to define output functions according to both the input variables and past/present
states of the digital logic circuit. To hold the past/present states of the digital logic circuit, we apply data storage.
An example of a simple system requiring data storage is a basic alarm system. A basic alarm system has a motion
sensor to monitor the environment by detecting motion. If the motion sensor detects motion, the sensor state
goes from 0 to 1, and this activates an alarm. But there is a critical issue with this. If motion stops, the sensor
state goes from 1 to 0, and the alarm is deactivated. We require the alarm to stay activated, perhaps until a reset
is triggered, and this requirement is met by introducing data storage with a memory element.
A sensor and alarm.
A sensor, memory element and alarm.
5.2 Flip Flops
A flip-flop (FF) is a one-bit memory element that is capable of storing one bit at its output. Flip-flops are
edge-triggered (edge-sensitive) memory elements: they are active only at transitions of the clock signal, i.e., either
from 0 → 1 or from 1 → 0.
A flip-flop is designated by a rectangular block with inputs on the left and outputs on the right. The clock is
designated with an arrowhead. A bubble designates a negative-edge-triggered flip-flop.
There are different types of flip flops. We are going to limit our discussions to the D-FF, JK-FF, and T-FF.
The D flip-flop.
The D-FF is the simplest and the most common FF. It has one data input D and one output Q. It may have another
complemented output Q’. The output-input relation is defined either by the characteristic table or by the
characteristic equation.
A characteristic table defines the operation of a flip-flop in tabular form, similar to the truth table of a
combinational circuit.
Q(t+1) refers to the next state (after the clock edge arrives). It can be defined in terms of the input D or,
as in other types of FFs, in terms of the current state Q(t) (before the clock edge arrives) as well.
You may have noticed that we use the term “state” instead of “output” when we refer to the output of the FF.
This is to indicate the implied time-dependency.
A characteristic equation defines the operation of a flip flop in an algebraic form (similar to a Boolean function in
combinational circuits.)
For the D-FF, the characteristic equation is Q(t+1) = D.
Direct Inputs
Some flip-flops have inputs to set/reset their states. They could be asynchronous independently of the clock, or
synchronous with the clock.
Preset (direct set) sets the flip-flop to 1.
Clear (direct reset) sets the flip-flop to 0.
When power is turned on, a flip-flop's state is unknown. Direct inputs are useful for putting it into a known state. The figure
shows a positive-edge D-FF with active-low asynchronous reset.
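A behavioral sketch of such a flip-flop (class and method names are my own; for simplicity the asynchronous reset is sampled at the modeled clock edge, whereas in hardware it acts immediately):

```python
class DFlipFlop:
    """Positive-edge D-FF with active-low asynchronous reset (sketch)."""
    def __init__(self):
        self.q = 0                # real hardware powers up in an unknown state

    def posedge(self, d, reset_b=1):
        """Evaluate one positive clock edge."""
        # Active-low reset dominates; otherwise Q(t+1) = D.
        self.q = 0 if reset_b == 0 else d
        return self.q
```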
Timing Behaviors of FFs
Example: Draw the output waveform of this positive-edge triggered D-type FF. Assume the state of the FF is
initially reset to 0.
The JK flip-flop.
Q(t+1) = J Q’ + K’ Q
Example: Draw the output waveform of this positive-edge triggered JK-type FF. Assume the state of the FF is
initially reset to 0.
The T flip-flop.
Q(t+1) = T ⊕ Q
Example: Draw the output waveform of this positive-edge triggered T-type FF. Assume the state of the FF is
initially reset to 0.
5.5 Analysis of Sequential Circuits
Analysis describes what a given circuit will do under certain operating conditions. The behavior of a clocked
sequential circuit is determined from the inputs, the outputs, and the state of its flip-flops. The outputs and the
next state are both a function of the inputs and the present state. The analysis of a sequential circuit consists of
obtaining a table or a diagram for the time sequence of inputs, outputs, and internal states.
A logic diagram is recognized as a clocked sequential circuit if it includes flip-flops with clock inputs. The flip-flops
may be of any type, and the logic diagram may or may not include combinational logic gates. In this section, we
introduce an algebraic representation for specifying the next-state condition in terms of the present state and
inputs.
A state table and state diagram are then presented to describe the behavior of the sequential circuit. Examples
are used to illustrate the various procedures.
The State Equations.
The behavior of a clocked sequential circuit can be described algebraically by means of state equations. A state
equation (also called a transition equation ) specifies the next state as a function of the present state and inputs.
Example: Consider the sequential circuit shown below,
It consists of two D flip-flops A and B, an input x and an output y. Since the D input of a flip-flop determines the
value of the next state (i.e., the state reached after the clock transition), it is possible to write a set of state
equations for the circuit:
A state equation is an algebraic expression that specifies the condition for a flip-flop state transition. The left side
of the equation, with (t+1), denotes the next state of the flip-flop one clock edge later. The right side of the
equation is a Boolean expression that specifies the present state and input conditions that make the next state
equal to 1. Since all the variables in the Boolean expressions are a function of the present state, we can omit the
designation (t) after each variable for convenience and can express the state equations in the more compact form
The Boolean expressions for the state equations can be derived directly from the gates that form the
combinational circuit part of the sequential circuit, since the D values of the combinational circuit determine the next
state. Similarly, the present-state value of the output can be expressed algebraically as
By removing the symbol (t) for the present state, we obtain the output Boolean equation:
The State Table.
The time sequence of inputs, outputs, and flip-flop states can be enumerated in a state table (sometimes called a
transition table ). The state table for the previous circuit is shown below
The table consists of four sections labeled present state, input, next state, and output . The present-state section
shows the states of flip-flops A and B at any given time t . The input section gives a value of x for each possible
present state. The next-state section shows the states of the flip-flops one clock cycle later, at time (t+1). The
output section gives the value of y at time t for each present state and input condition.
The derivation of a state table requires listing all possible binary combinations of present states and inputs. In this
case, we have eight binary combinations from 000 to 111. The next-state values are then determined from the
logic diagram or from the state equations. The next state of flip-flop A must satisfy the state equation
The next-state section in the state table under column A has three 1’s where the present state of A and input x
are both equal to 1 or the present state of B and input x are both equal to 1. Similarly, the next state of flip-flop B
is derived from the state equation
and is equal to 1 when the present state of A is 0 and input x is equal to 1. The output column is derived from the
output equation
The State Diagram.
The information available in a state table can be represented graphically in the form of a state diagram. In this
type of diagram, a state is represented by a circle, and the (clock-triggered) transitions between states are
indicated by directed lines connecting the circles. The state diagram of the sequential circuit of the previous
circuit is shown below
The state diagram provides the same information as the state table.
The binary number inside each circle identifies the state of the flip-flops. The directed lines are labeled with two
binary numbers separated by a slash. The input value during the present state is labeled first, and the number
after the slash gives the output during the present state with the given input. A directed line connecting a circle
with itself indicates that no change of state occurs
The steps presented in this example are summarized below:
Circuit diagram → State Equations → State table → State diagram
Example: Analyze the circuit shown below (input equations, state equations, state table, and state diagram),
To analyze a circuit, ask yourself the following questions;
• Is this a sequential circuit? Why?
• How many inputs?
• What type of FFs?
• How many FFs?
• How many states?
• How many outputs?
1. Input Equations
TA = BX
TB = X
y = AB
2. State Equations
Given the characteristic equation of the T FF,
Q(t+1) = T ⊕ Q
which can be rewritten for the A FF and the B FF as
A(t+1) = TA ⊕ A and B(t+1) = TB ⊕ B, respectively.
Replacing TA and TB by their input equations, we have the state equations in the form
A(t+1) = BX ⊕ A
B(t+1) = X ⊕ B
y = AB
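Substituting all eight combinations into the state equations can be automated as a cross-check of the state table (a study aid, not part of the formal procedure):

```python
# State equations of the T-FF circuit: A(t+1) = BX xor A, B(t+1) = X xor B, y = AB
table = []
for A in (0, 1):
    for B in (0, 1):
        for X in (0, 1):
            A_next = (B & X) ^ A          # A(t+1)
            B_next = X ^ B                # B(t+1)
            y = A & B                     # output
            table.append((A, B, X, A_next, B_next, y))
```

Each tuple is one row of the state table: (present A, present B, input X, next A, next B, output y).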
3. State Table
Substituting every combination of A, B, and X in the three state equations, we can derive the associated
values of next state A (A(t+1)) , next state B (B(t+1)), and the output y as shown in the state table below,
4. State Diagram
As explained during the previous lecture, the state diagram can be derived as shown below,
Mealy and Moore Machines
This circuit is an example of a Moore machine (output depends only on current state, i.e. the FF outputs). This can
be seen from the output state equation y = AB. In this case, the output doesn’t depend on the input X. You may
have noticed that on the state diagram, the outputs are shown inside the state bubble, and not on the transition
arrow as shown before.
Mealy machines are the other type (output depends on both the inputs and the current state). The example discussed in
the previous class notes is a Mealy machine: its output was y = (a + b)x', which is a function of x. On
the state diagram, the outputs are shown on the transition arrows, as shown before.
Example: Analyze the circuit shown below (input equations, state equations, state table, and state diagram),
1. Input Equations
JA = B,
KA = B X’
JB = X’,
KB = A ⊕ X
2. State Equations
Given the characteristic equation of the JK FF,
Q(t+1) = JQ’ + K’Q
the state equations can be written in the form
A(t+1) = A’ B + A B’ + A X
B(t+1) = X’ B’ + A X B + A’ X’ B
3. State Table
4. State Diagram
Exercise: Analyze the circuit shown below (input equations, state equations, state table, and state diagram),
5.6 Design of Sequential Circuits
Design procedures or methodologies specify hardware that will implement a desired behavior. The design effort for
small circuits may be manual, but industry relies on automated synthesis tools for designing massive integrated
circuits. The sequential building blocks used by synthesis tools are any of the FFs we have studied, together with
additional logic. In fact, designers generally do not concern themselves with the type of flip-flop; rather, their focus
is on correctly describing the sequential functionality that is to be implemented by the synthesis tool. For the sake
of our course, we will illustrate manual methods using either the D or the T flip-flops only.
The design of a clocked sequential circuit starts from a set of specifications and is complete by drawing the logic
diagram. In contrast to a combinational circuit, which is fully specified by a truth table, a sequential circuit requires
a state table or a state diagram for its specification. The first step in the design of sequential circuits is to obtain a
state table or a state diagram. A synchronous sequential circuit is made up of flip-flops and combinational gates.
The design of the circuit consists of finding the number of FFs, choosing the type of flip-flops, and then
finding a combinational circuit that, together with the flip-flops, produces a circuit which fulfills the stated specifications.
The number of flip-flops is determined from the number of states needed in the circuit and the choice of state
assignment codes. The combinational circuit is derived from the state table by deriving the input equations,
mapping the relation between the current states and the next states onto the excitation tables of the chosen FFs.
The procedure for designing synchronous sequential circuits can be summarized by a list of recommended steps:
1. From the word description and specifications of the desired operation, derive the state diagram for the circuit.
2. Assign binary values to the states.
3. Obtain the binary-coded state table.
4. Choose the type of flip-flops to be used.
5. Derive the input equations using next-state values and excitation tables.
6. Simplify the functions using K-maps.
7. Draw the logic diagram.
Excitation Tables
The flip-flop characteristic tables presented before provide the value of the next state when the inputs and the
present state are known. These tables are useful for the analysis of sequential circuits and for defining the operation
of the flip-flops. During the design process though, we usually know the transition from the current state to the
next state and wish to find inputs to the FFs we could have applied that will cause the required transition. For this
reason, we need a table that lists the required inputs for a given change of state. Such a table is called an excitation
table.
The characteristic tables (used for analysis) give Q(t+1) from the inputs and the present state; the excitation
tables (used for design) give the required input for each transition Q(t) → Q(t+1):

Q(t) → Q(t+1) | D | T
0 → 0         | 0 | 0
0 → 1         | 1 | 1
1 → 0         | 0 | 1
1 → 1         | 1 | 0

(i.e., D = Q(t+1) and T = Q(t) ⊕ Q(t+1).)
Example: Use T-type FFs to design a counter that counts in binary form as follows: 000, 001, 010, …, 111, and
repeats.
To design the circuit, ask yourself the following questions;
• Is this a sequential circuit? Why?
• How many inputs?
• What type of FFs?
• How many FFs?
• How many states?
• How many outputs?
Input Equations
With A as the MSB and C as the LSB, the excitation table of the T FF gives
TA = B·C, TB = C, TC = 1
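Assuming the input equations derived from the excitation table are TA = B·C, TB = C, TC = 1 (A the MSB, C the LSB), a quick simulation confirms the count sequence (a sketch, not a required course technique):

```python
a = b = c = 0                        # start the counter in state 000
seen = []
for _ in range(9):                   # nine edges: 000 ... 111 and back to 000
    seen.append((a << 2) | (b << 1) | c)
    ta, tb, tc = b & c, c, 1         # input equations TA = B·C, TB = C, TC = 1
    a, b, c = a ^ ta, b ^ tb, c ^ tc # each FF obeys Q(t+1) = T xor Q
```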
Example: Design a Mealy state circuit that reads a continuous stream of input bits and generates an output of ‘1’
whenever the sequence (1011) is detected (MSB arrives first).
X → [Sequence Detector] → Y

Input (X):  1 1 1 0 0 1 0 0 1 0 0 1 1 0 1 1 0 1 0 1 1 0 1 1 1 1 1 1
Output (Y): 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0
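A behavioral sketch of the detector reproduces the sample streams above (the state encoding is my own, chosen so the state number equals the length of the pattern prefix matched so far; the course solution derives it via the formal design procedure):

```python
def detect_1011(bits):
    """Mealy detector for the overlapping pattern 1011, MSB first."""
    # state = number of pattern bits matched so far (0..3)
    next_state = {0: {1: 1, 0: 0},
                  1: {1: 1, 0: 2},
                  2: {1: 3, 0: 0},
                  3: {1: 1, 0: 2}}   # after a detection, suffix "11" keeps "1"
    state, out = 0, []
    for x in bits:
        out.append(1 if (state == 3 and x == 1) else 0)  # Mealy output
        state = next_state[state][x]
    return out

X = [1,1,1,0,0,1,0,0,1,0,0,1,1,0,1,1,0,1,0,1,1,0,1,1,1,1,1,1]
Y = [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,1,0,0,0,0]
```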
MODULE 6.0
6.1 Registers
Shift Registers
Universal Shift Registers
6.2 Counters
Ripple Counters
Synchronous counters
Binary Counters with Parallel Load
Other Types of Counters
6.1 Registers
A register is a group of flip‐flops, each one of which shares a common clock and is capable of storing one bit of
information. An n ‐bit register consists of a group of n flip‐flops capable of storing n bits of binary information. In
addition to the flip‐flops, a register may have combinational gates that perform certain data‐processing tasks. In its
broadest definition, a register consists of a group of flip‐flops together with gates that affect their operation. The
flip‐flops hold the binary information, and the gates determine how the information is transferred into the register.
Registers with Parallel Load
Various types of registers are available commercially. The simplest register is one
that consists of only flip-flops, without any gates. The figure shows a 4-bit
register constructed with four D-type flip-flops to form a four-bit data
storage register. The common clock input triggers all flip‐flops on the positive edge
of each pulse, and the binary data available at the four inputs are transferred into
the register. The value of (I3, I2, I1, I0) immediately before the clock edge
determines the value of (A3, A2 , A1, A0) after the clock edge. The four outputs can
be sampled at any time to obtain the binary information stored in the register. The
input Clear_b goes to the active‐low R (reset) input of all four flip‐flops. When this
input goes to 0, all flip‐flops are reset asynchronously. The Clear_b input is useful
for clearing the register to all 0’s prior to its clocked operation. The R inputs must
be maintained at logic 1 (i.e., de-asserted) during normal clocked operation.
Shift Registers
A register capable of shifting the binary information held in each cell to its
neighboring cell, in a selected direction, is called a shift register. The logical
configuration of a shift register consists of a chain of flip‐flops in cascade, with the
output of one flip‐flop connected to the input of the next flip‐flop. All flip‐flops
receive common clock pulses, which activate the shift of data from one stage to the
next.
The simplest possible shift register is one that uses only flip‐flops, as shown in the figure. The output of a given
flip‐flop is connected to the D input of the flip‐flop at its right. This shift register is unidirectional (left‐to‐right).
Each clock pulse shifts the contents of the register one bit position to the right. The configuration does not support
a left shift. The serial input determines what goes into the leftmost flip‐flop during the shift. The serial output is
taken from the output of the rightmost flip‐flop.
Example: A 4-bit shift-left register with initial states 1001. A serial 1010 input is applied LSB first with one bit
per clock cycle. Show the states and the serial output bits by the end of each of the four cycles.
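The example above can be worked through with a short sketch (assuming, as for a shift-left register, that the serial input enters at the rightmost (LSB) cell and the MSB is shifted out):

```python
state = [1, 0, 0, 1]            # initial state 1001; index 0 is the MSB
serial_in = [0, 1, 0, 1]        # 1010 applied LSB first: 0, 1, 0, 1
serial_out = []
for bit in serial_in:
    serial_out.append(state[0])        # MSB shifted out of the register
    state = state[1:] + [bit]          # every cell moves one place left
# state ends as [0, 1, 0, 1]; serial_out collects [1, 0, 0, 1]
```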
Universal Shift Registers
If the flip‐flop outputs of a shift register are accessible, then information entered serially by shifting can be taken
out in parallel from the outputs of the flip‐flops. If a parallel load capability is added to a shift register, then data
entered in parallel can be taken out in serial fashion by shifting the data stored in the register. Some shift registers
provide the necessary input and output terminals for parallel transfer. They may also have both shift‐right and
shift‐left capabilities. A register capable of shifting in both directions is a bidirectional shift register. If the register
has both shifts and parallel‐load capabilities, it is referred to as a universal shift register. The block diagram
symbol and the circuit diagram of a four‐bit universal shift register that has all the capabilities just listed are shown
below,
The circuit consists of four D flip‐flops and four multiplexers. The four multiplexers have two common selection
inputs s1 and s0. Input 0 in each multiplexer is selected when s1s0 = 00, input 1 is selected when s1s0 = 01, and
similarly for the other two inputs. The selection inputs control the mode of operation of the register according to
the function entries shown in the table above. When s1s0 = 00, the present value of the register is applied to the D
inputs of the flip‐flops. This condition forms a path from the output of each flip‐flop into the input of the same
flip‐flop, so that the output recirculates to the input in this mode of operation. The next clock edge transfers into
each flip‐flop the binary value it held previously, and no change of state occurs.
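The four modes of the universal shift register can be sketched behaviorally (the mode encoding below is an assumption in the spirit of the function table: 00 hold, 01 shift right, 10 shift left, 11 parallel load):

```python
def universal_shift(q, s1, s0, msb_in=0, lsb_in=0, parallel=None):
    """One clock edge of a 4-bit universal shift register; q[0] is the MSB."""
    if (s1, s0) == (0, 0):
        return q[:]                      # hold: output recirculates, no change
    if (s1, s0) == (0, 1):
        return [msb_in] + q[:-1]         # shift right, serial input at the MSB end
    if (s1, s0) == (1, 0):
        return q[1:] + [lsb_in]          # shift left, serial input at the LSB end
    return list(parallel)                # parallel load
```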
6.2 Counters
A counter is essentially a register that goes through a predetermined sequence of binary states. The gates in the
counter are connected in such a way as to produce the prescribed sequence of states. Although counters are a
special type of register, it is common to differentiate them by giving them a different name. The sequence of states
may follow the binary number sequence or any other sequence of states. A counter that follows the binary number
sequence is called a binary counter. An n ‐bit binary counter consists of n flip‐flops and can count in binary from 0
through 2^n − 1. We have seen how to design a binary counter in Module 5: Design of Sequential Circuits.
Counters are available in two categories: ripple counters and synchronous counters.
Binary Ripple Counters
A binary ripple counter consists of a series connection of complementing
flip‐flops, with the output of each flip‐flop connected to the C input of the next
higher order flip‐flop. The flip‐flop holding the least significant bit receives the
external clk (count pulses). A complementing flip‐flop can be obtained from a
T flip‐flop as shown in the figure. The output of each flip‐flop is connected to
the C input of the next flip‐flop in sequence. The T inputs of all the flip‐flops
are connected to a permanent logic 1, making each flip flop complement if the
signal in its C input goes through a negative transition. The negative transition
occurs when the output of the previous flip‐flop to which C is connected goes
from 1 to 0.
The timing diagram of such a flip-flop was presented previously in class for
the 3-bit ripple counter. The timing diagram of the 4-bit binary ripple
counter is left as an exercise.
Synchronous Counters
Synchronous counters are different from ripple counters in that clock pulses are
applied to the inputs of all flip‐flops. A common clock triggers all flip‐flops
simultaneously, rather than one at a time in succession as in a ripple counter.
The design procedure for synchronous counters was presented in module 5, and
the design of a three‐bit binary counter was carried out. In this section, we
present some typical synchronous counters and explain their operation.
BCD Counters
A BCD counter counts in binary‐coded decimal from 0000 to 1001 and back to
0000. Because of the return to 0 after a count of 9, a BCD counter does not
have a regular pattern, unlike a straight binary count. To derive the circuit of a
BCD synchronous counter, it is necessary to go through a sequential circuit
design procedure. The flip‐flop input equations can be simplified by means of
K-map. The unused states for minterms 10 to 15 are taken as don’t‐care terms.
Binary Counters with Parallel Load
Counters employed in digital systems quite often require a parallel‐load capability for transferring an initial binary
number into the counter prior to the count operation.
The figure shows the top‐level block diagram symbol as well as the control table of a four‐bit counter. When equal
to 1, the input load control disables the count operation and causes a transfer of data from the four data inputs into
the four flip‐flops. If both control inputs are 0, clock pulses do not change the state of the register. The carry output
becomes a 1 if all the flip‐flops are equal to 1 while the count input is enabled. The carry output is useful for
expanding (cascading) the counter to more than four bits.
The four control inputs—Clear, CLK, Load, and Count —determine the next state. The Clear input is
asynchronous and, when equal to 0, causes the counter to be cleared regardless of the presence of clock pulses or
other inputs. This relationship is indicated in the table by the X entries, which symbolize don’t‐care conditions for
the other inputs. The Clear input must be in the 1 state for all other operations. With the Load and Count inputs
both at 0, the outputs do not change, even when clock pulses are applied. A Load input of 1 causes a transfer from
the 4-bit inputs Data_in into the counter during a positive edge of CLK . The input data are loaded into the register
regardless of the value of the Count input, because the Count input is inhibited when the Load input is enabled. The
Load input must be 0 for the Count input to control the operation of the counter. A counter with a parallel load can
be used to generate any desired count sequence.
Example: Show the external circuitry required to convert the shown 4-bit binary counter to a BCD counter.
Exercise: Show the external circuitry required to convert the shown 4-bit binary counter to a 3-to-12 counter.
Other Common Counters
⚫ Ring Counter
  - 4-bit ring count: 1000 → 0100 → 0010 → 0001 → 1000 …
  - One-hot output that cycles in a ring
⚫ Johnson Counter
  - 4-bit count: 0000 → 1000 → 1100 → 1110 → 1111 → 0111 → 0011 → 0001 → 0000 …
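Both sequences can be sketched behaviorally (illustrative models; function names are my own):

```python
def ring_counter(start=(1, 0, 0, 0), cycles=4):
    """One-hot ring counter: the single 1 circulates around the register."""
    state, seq = list(start), []
    for _ in range(cycles):
        seq.append(''.join(map(str, state)))
        state = [state[-1]] + state[:-1]      # rotate right by one position
    return seq

def johnson_counter(cycles=8):
    """Johnson (twisted-ring) counter: complement of last bit fed back."""
    state, seq = [0, 0, 0, 0], []
    for _ in range(cycles):
        seq.append(''.join(map(str, state)))
        state = [1 - state[-1]] + state[:-1]  # inverted feedback
    return seq
```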
MODULE 7.0
7.1 Introduction
7.2 Random Access Memory (RAM)
7.5 Read-Only Memory (ROM)
7.6 Programmable Logic Arrays (PLAs)
7.7 Programmable Array Logic (PALs)
7.1 Introduction
A memory unit is a device to which binary information is transferred for storage and from which information is
retrieved when needed for processing. When data processing takes place, information from memory is transferred
to selected registers in the processing unit.
Intermediate and final results obtained in the processing unit are transferred back to be stored in memory. Binary
information received from an input device is stored in memory, and information transferred to an output device is
taken from memory. A memory unit is a collection of FFs/cells capable of storing a large quantity of binary
information.
There are two types of memories that are used in digital systems: random‐access memory (RAM) and read‐only
memory (ROM). RAM stores new information for later use. The process of storing new information into memory
is referred to as a memory write operation. The process of transferring the stored information out of memory is
referred to as a memory read operation. RAM can perform both write and read operations. ROM can perform only
the read operation. This means that suitable binary information is already stored inside memory and can be
retrieved or read at any time. However, that information cannot be altered by writing. ROM is a programmable
logic device (PLD). The binary information that is stored within such a device is specified in some fashion and then
embedded within the hardware in a process referred to as programming the device. The word “programming”
here refers to a hardware procedure which specifies the bits that are inserted into the hardware configuration of the
device.
ROM is one example of a PLD. Other such units are the programmable logic array (PLA), programmable array
logic (PAL), and the field‐programmable gate array (FPGA). A PLD is an integrated circuit with internal logic
gates connected through electronic paths that behave similarly to fuses. In the original state of the device, all the
fuses are intact. Programming the device involves blowing those fuses along the paths that must be removed in
order to obtain the particular configuration of the desired logic function.
A typical PLD may have hundreds to millions of gates interconnected through hundreds to thousands of internal
paths. In order to show the internal logic diagram of such a device in a concise form, it is necessary to employ a
special gate symbology applicable to array logic. The figure below shows the conventional and array logic symbols
for a multiple-input OR gate.
Instead of having multiple input lines into the gate, we draw a single line entering the gate. The input lines are
drawn perpendicular to this single line and are connected to the gate through internal fuses. In a similar fashion, we
can draw the array logic for an AND gate. This type of graphical representation for the inputs of gates will be used
throughout the chapter in array logic diagrams.
In this chapter, we introduce the configuration of RAM, ROM, PLAs, and PALs and indicate procedures for their
use in the design of digital systems. FPGAs are covered in detail in the ENGR 468 Advanced Digital System
Design course.
PAGE 78
7.2 Random Access Memory (RAM)
A memory unit is a collection of storage cells (or FFs), together with associated circuits needed to transfer
information into and out of a device. The architecture of memory is such that information can be selectively
retrieved from any of its internal locations. The time it takes to transfer (access) information to or from any desired
“random” location is always the same—hence the name random‐access memory, abbreviated RAM. In contrast, the
time required to retrieve information that is stored on magnetic tape depends on the location of the data.
A memory unit stores binary information in groups of bits called words. A word in memory is an entity of bits that
move in and out of storage as a unit. A memory word is a group of 1’s and 0’s and may represent a number, an
instruction, one or more alphanumeric characters, or any other binary‐coded information. A group of 8 bits is
called a byte. Most computer memories use words that are multiples of 8 bits in length. Thus, a 16‐bit word
contains two bytes, and a 32‐bit word is made up of four bytes. The capacity of a memory unit is usually stated as
the total number of bytes that the unit can store.
Communication between memory and its environment is achieved through data input and output lines, address
selection lines, and control lines that specify the direction of transfer. A block diagram of a memory unit is shown
below.
The n data input lines provide the information to be stored in memory, and the n data output lines supply the
information coming out of memory. The k address lines specify the particular word chosen among the many
available. The two control inputs specify the direction of transfer desired: The Write input causes binary data to be
transferred into the memory, and the Read input causes binary data to be transferred out of memory. The memory
unit is specified by the number of words it contains and the number of bits in each word. The address lines select
one particular word. Each word in memory is assigned an identification number, called an address, starting from 0
up to 2^k - 1, where k is the number of address lines. The selection of a specific word inside memory is done by
applying the k ‐bit address to the address lines. An internal decoder accepts this address and opens the paths needed
to select the word specified. Memories vary greatly in size and may range from 1,024 (2^10 or 1K) words, requiring
an address of 10 bits, to 2^32 (or 4G) words, requiring 32 address bits.
It is customary to refer to the number of words (or bytes) in memory with one of the letters K (kilo), M (mega), and
G (giga). K is equal to 2^10, M is equal to 2^20, and G is equal to 2^30. Thus, 64K = 2^16, 2M = 2^21, and 4G = 2^32.
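These prefix relations are easy to check numerically. A quick Python sanity check (our own illustration, not part of the original notes):

```python
import math

# Powers of two behind the K/M/G size prefixes used for memory capacity.
K, M, G = 2**10, 2**20, 2**30

assert 64 * K == 2**16
assert 2 * M == 2**21
assert 4 * G == 2**32

# A 4G-word memory therefore needs 32 address bits:
assert math.ceil(math.log2(4 * G)) == 32

# And a 1K x 16 memory holds 2K bytes, since 16 bits = 2 bytes:
assert (1 * K * 16) // 8 == 2 * K
```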
Consider, for example, a memory unit with a capacity of 1K words of 16 bits each. Since 1K = 1,024 = 2^10 and 16
bits constitute two bytes, we can say that the memory can accommodate 2,048 = 2K bytes. The organization of
such a memory is shown below,
PAGE 79
Read & Write Operations
The two operations that RAM can perform are the write and read operations. As mentioned earlier, the write signal
specifies a transfer‐in operation and the read signal specifies a transfer‐out operation. On accepting one of these
control signals, the internal circuits inside the memory provide the desired operation.
The steps that must be taken for writing data into memory are as follows:
1. Apply the binary address of the desired word to the address lines.
2. Apply the data bits that must be stored in memory to the data input lines.
3. Activate the write input.
The memory unit will then take the bits from the input data lines and store them in the word specified by the
address lines.
The steps that must be taken for reading data out of memory are as follows:
1. Apply the binary address of the desired word to the address lines.
2. Activate the read input.
The memory unit will then take the bits from the word that has been selected by the address and apply them to the
output data lines. The contents of the selected word do not change after the read operation, i.e., the read operation
is non-destructive.
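The write and read sequences above can be sketched as a small behavioral model. The `RAM` class below is a hypothetical Python illustration of a 2^k x n memory, not a description of real memory-controller hardware:

```python
class RAM:
    """Behavioral model of a 2**k x n random-access memory."""

    def __init__(self, k, n):
        self.k, self.n = k, n
        self.words = [0] * (2 ** k)   # one n-bit word per address

    def write(self, address, data):
        # Write steps: apply the address, apply the data, activate Write.
        self.words[address] = data & ((1 << self.n) - 1)

    def read(self, address):
        # Read steps: apply the address, activate Read (non-destructive).
        return self.words[address]


mem = RAM(k=10, n=16)        # 1K words of 16 bits each
mem.write(0x3FF, 0xBEEF)
assert mem.read(0x3FF) == 0xBEEF
assert mem.read(0x3FF) == 0xBEEF   # reading does not change the word
```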
Commercial memory components are available in integrated‐circuit chips of different sizes and with different access
times.
Types of Memories
Integrated circuit RAM units are available in two operating modes: static and dynamic. Static RAM (SRAM)
consists essentially of internal FFs that store the binary information. The stored information remains valid as long
as power is applied to the unit. Dynamic RAM (DRAM) stores the binary information in the form of electric
charges on capacitors provided inside the chip by MOS transistors. The stored charge on the capacitors tends to
discharge with time, and the capacitors must be periodically recharged by refreshing the dynamic memory.
Refreshing is done by cycling through the words every few milliseconds to restore the decaying charge. DRAM
offers reduced power consumption and larger storage capacity in a single memory chip. SRAM is easier to use and
has shorter read and write cycles (faster but more expensive), and is normally used in cache memories.
Memory units that lose stored information when power is turned off are said to be volatile. CMOS integrated
circuit RAMs, both static and dynamic, are of this category, since the binary cells need external power to maintain
the stored information. In contrast, a nonvolatile memory, such as hard drives and flash memory disks, retains its
stored information after the removal of power. ROM is another nonvolatile memory.
PAGE 80
7.3 Programmable Logic Devices (PLDs)
A Programmable Logic Device (PLD) is an integrated circuit with internal logic gates and/or connections that can
in some way be reconfigured/changed by a programming process.
• Examples:
• ROM, PROM, EPROM, and EEPROM
• Programmable Logic Array (PLA)
• Programmable Array Logic (PAL) device
• Complex Programmable Logic Device (CPLD)
• Field-Programmable Gate Array (FPGA)
• A PLD’s function is not fixed
• Can be programmed to perform different functions
Why PLDs?
• Fact:
• It is most economical to produce an IC in large volumes
• But:
• Many situations require only small volumes of ICs
• Many situations require changes to be done in the field, e.g. Firmware of a product under
development
• A programmable logic device can be:
• Produced in large volumes
• Programmed to implement many different low-volume designs
• In the Factory - Cannot be erased/reprogrammed by user
• Mask programming (changing the VLSI mask) during manufacturing
• Programmable only once
• Fuse
• Anti-fuse
• Reprogrammable (Erased & Programmed many times)
• Volatile - Programming lost if chip power lost
• Single-bit storage element
• Non-Volatile - Programming survives power loss
• UV Erasable
• Electrically Erasable
• Flash (as in Flash Memory)
PAGE 81
Read-Only Memory (ROM)
A read‐only memory (ROM) is essentially a memory device in which permanent binary information is stored. The
binary information must be specified by the designer and is then embedded in the unit to form the required
interconnection pattern. Once the pattern is established, it stays within the unit even when power is turned off and
on again. A block diagram of a ROM consisting of k inputs and n outputs is shown below,
The inputs provide the address for memory, and the outputs give the data bits of the stored word that is selected by
the address. The number of words in a ROM is determined from the fact that k address input lines are needed to
specify 2^k words. Note that ROM does not have data inputs, because it does not have a write operation.
Consider, for example, a 32 × 8 ROM. The unit consists of 32 words of 8 bits each. There are five input lines that
form the binary numbers from 0 through 31 for the address as shown below,
The five inputs are decoded into 32 distinct outputs by means of a 5 × 32 decoder. Each output of the decoder
represents a memory address. The 32 outputs of the decoder are connected to each of the eight OR gates. The
diagram shows the array logic convention used in complex circuits. Each OR gate must be considered as having 32
inputs. Each output of the decoder is connected to one of the inputs of each OR gate. Since each OR gate has 32
input connections and there are 8 OR gates, the ROM contains 32 × 8 = 256 internal connections. In general, a 2^k ×
n ROM will have an internal k × 2^k decoder and n OR gates. Each OR gate has 2^k inputs, which are connected to
each of the outputs of the decoder.
The internal binary storage of a ROM is specified by a truth table that shows the word content in each address. For
example, the content of a 32 × 8 ROM may be specified with a truth table similar to the one shown below,
PAGE 82
The truth table shows the five inputs under which are listed all 32 addresses. Each address stores a word of 8 bits,
which is listed in the outputs columns. The table shows only the first four and the last four words in the ROM. The
complete table must include the list of all 32 words.
The hardware procedure that programs the ROM blows fuse links in accordance with a given truth table. For
example, programming the ROM according to the truth table given by the above table results in the configuration
shown below,
Every 0 listed in the truth table specifies the absence of a connection, and every 1 listed specifies a path that is
obtained by a connection. For example, the table specifies the eight‐bit word 10110010 for permanent storage at
address 3. The four 0’s in the word are programmed by blowing the fuse links between output 3 of the decoder and
the inputs of the OR gates associated with outputs A6, A3, A2, and A0. The four 1’s in the word are marked with an
X to denote a temporary connection, in place of a dot used for a permanent connection in logic diagrams. When the
input of the ROM is 00011, all the outputs of the decoder are 0 except for output 3, which is at logic 1. The signal
equivalent to logic 1 at decoder output 3 propagates through the connections to the OR gate outputs of A7, A5, A4,
and A1. The other four outputs remain at 0. The result is that the stored word 10110010 is applied to the eight data
outputs.
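The fuse pattern just described can be mimicked with a tiny behavioral sketch. The stored words stand in for the fuse configuration, and `rom_read` is our own hypothetical helper, not a real device interface:

```python
# Behavioral sketch of a 32 x 8 ROM. A 1 bit models an intact connection
# (an X in the array diagram); a 0 bit models a blown fuse.
rom = [0] * 32
rom[3] = 0b10110010   # the word stored permanently at address 3

def rom_read(address):
    # The internal decoder raises exactly one word line; the eight
    # OR-gate outputs reproduce that line's connection pattern.
    return rom[address]

word = rom_read(0b00011)
assert word == 0b10110010
# The 1's appear on outputs A7, A5, A4, A1; the 0's on A6, A3, A2, A0.
assert [(word >> i) & 1 for i in (7, 5, 4, 1)] == [1, 1, 1, 1]
assert [(word >> i) & 1 for i in (6, 3, 2, 0)] == [0, 0, 0, 0]
```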
Combinational Circuit Implementation using ROM
We have seen in Chapter 4 how we can use Decoders and Multiplexers in the implementation of combinational
circuits. We can use ROM in a similar way. By choosing connections for those minterms which are included in the
function, the ROM outputs can be programmed to represent the Boolean functions of the output variables in a
combinational circuit. For example, the ROM configuration shown before may be considered to be a combinational
circuit with eight outputs, each a function of the five input variables. Output A7, for example, can be expressed as a
sum of minterms.
A connection marked with X in the figure produces a minterm for the sum. All other crosspoints are not connected
and are not included in the sum. In practice, when a combinational circuit is designed by means of a ROM, it is not
necessary to design the logic or to show the internal gate connections inside the unit. All that the designer has to do
is specify the particular ROM by its IC number and provide the applicable truth table. The truth table gives all the
information for programming the ROM. No internal logic diagram is needed to accompany the truth table.
PAGE 83
Example: Design a combinational circuit using a ROM. The circuit accepts a three‐bit number and outputs a
binary number equal to the square of the input number.
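A sketch of the solution: the ROM needs 2^3 = 8 words, and since 7^2 = 49, each output word must be at least 6 bits wide. The Python check below (our own illustration, not the textbook's table) also verifies that the two low output bits need no stored cells, because the LSB of x^2 equals the LSB of x and bit 1 of any square is always 0:

```python
# ROM truth table for the 3-bit squarer: 8 words of (at least) 6 bits.
rom = [x * x for x in range(8)]   # programmed contents, address = input

assert rom[0b011] == 9
assert rom[0b111] == 49           # largest entry, fits in 6 bits

# B0 equals the input LSB and B1 is always 0, so only the upper four
# output bits actually need ROM cells:
assert all((x * x) & 1 == x & 1 for x in range(8))
assert all(((x * x) >> 1) & 1 == 0 for x in range(8))
```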
PAGE 84
Types of ROM
The required paths in a ROM may be programmed in four different ways.
• The first is called mask programming and is done by the semiconductor company during the last fabrication
process of the unit. The procedure for fabricating a ROM requires that the customer fill out the truth table
he or she wishes the ROM to satisfy. The truth table may be submitted in a special form provided by the
manufacturer or in a specified format on a computer output medium. The manufacturer makes the
corresponding mask for the paths to produce the 1’s and 0’s according to the customer’s truth table. This
procedure is costly because the vendor charges the customer a special fee for custom masking the particular
ROM. For this reason, mask programming is economical only if a large quantity of the same ROM
configuration is to be ordered.
• For small quantities, it is more economical to use a second type of ROM called programmable read‐only
memory, or PROM. When ordered, PROM units contain all the fuses intact, giving all 1’s in the bits of the
stored words. The fuses in the PROM are blown by the application of a high‐voltage pulse to the device
through a special pin. A blown fuse defines a binary 0 state and an intact fuse gives a binary 1 state. This
procedure allows the user to program the PROM in the laboratory to achieve the desired relationship
between input addresses and stored words. Special instruments called PROM programmers are available
commercially to facilitate the procedure. In any case, all procedures for programming ROMs are hardware
procedures, even though the word programming is used. The hardware procedure for programming ROMs
or PROMs is irreversible, and once programmed, the fixed pattern is permanent and cannot be altered.
Once a bit pattern has been established, the unit must be discarded if the bit pattern is to be changed.
• A third type of ROM is the erasable PROM, or EPROM, which can be restructured to the initial state even
though it has been programmed previously. When the EPROM is placed under a special ultraviolet light for
a given length of time, the shortwave radiation discharges the internal floating gates that serve as the
programmed connections. After erasure, the EPROM returns to its initial state and can be reprogrammed to
a new set of values.
• The fourth type of ROM is the electrically erasable PROM (EEPROM or E2PROM ). This device is like the
EPROM, except that the previously programmed connections can be erased with an electrical signal instead
of ultraviolet light. The advantage is that the device can be erased without removing it from its socket.
Flash memory devices are similar to EEPROMs, but have additional built‐in circuitry to selectively
program and erase the device in‐circuit, without the need for a special programmer. They have widespread
application in modern technology in cell phones, digital cameras, set‐top boxes, digital TV,
telecommunications, nonvolatile data storage, and microcontrollers. Their low consumption of power
makes them an attractive storage medium for laptop and notebook computers. Flash memories incorporate
additional circuitry, too, allowing simultaneous erasing of blocks of memory, for example, of size 16 to 64
K bytes. Like EEPROMs, flash memories are subject to fatigue, typically having about 10^5 block erase
cycles.
PAGE 85
Programmable Logic Configurations
⚫ Read Only Memory (ROM) - a fixed array of AND gates and a programmable array of OR gates
⚫ Programmable Array Logic (PAL) - a programmable array of AND gates feeding a fixed array of OR gates.
⚫ Programmable Logic Array (PLA) - a programmable array of AND gates feeding a programmable array of
OR gates.
7.6 Programmable Logic Array (PLA)
The PLA is similar in concept to the PROM, except that the PLA does not provide full decoding of the variables
and does not generate all the minterms. The decoder is replaced by an array of AND gates that can be programmed
to generate any product term of the input variables. The product terms are then connected to OR gates to provide
the sum of products for the required Boolean functions. The internal logic of a PLA with three inputs and two
outputs is shown below,
Such a circuit is too small to be useful commercially, but is presented here to demonstrate the typical logic
configuration of a PLA. The diagram uses the array logic graphic symbols for complex circuits. Each input goes
through a buffer–inverter combination that has both the true and complement outputs. Each input and its
complement are connected to the inputs of each AND gate. The outputs of the AND gates are connected to the
inputs of each OR gate. The output of the OR gate goes to an XOR gate, where the other input can be programmed
to receive a signal equal to either logic 1 or logic 0.
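The PLA structure just described (programmable AND array, programmable OR array, and an output XOR for polarity) can be modeled in a few lines. The `pla` helper below is our own illustrative sketch, not a real CAD routine:

```python
def pla(inputs, and_terms, or_masks, invert):
    """Evaluate a PLA.
    inputs:    dict, variable name -> 0/1
    and_terms: list of product terms, each a dict of var -> required value
               (a variable absent from the dict is not connected)
    or_masks:  per output, the list of product-term indices feeding its OR gate
    invert:    per output, the programmed XOR input (0 = true, 1 = complement)
    """
    products = [all(inputs[v] == val for v, val in term.items())
                for term in and_terms]
    return [int(any(products[i] for i in mask)) ^ inv
            for mask, inv in zip(or_masks, invert)]

# One product term AB, routed to a single true-polarity output:
out = pla({'A': 1, 'B': 1, 'C': 0},
          and_terms=[{'A': 1, 'B': 1}],
          or_masks=[[0]], invert=[0])
assert out == [1]
```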
PAGE 86
Example: Implement the following Boolean function using the PLA shown before,
F1(A, B, C) = Σ m(0, 1, 2, 4)
F2(A, B, C) = Σ m(0, 5, 6, 7)
• The two given functions of this example have 8 different minterms → try K-map first.
• If you group the 0s on the K-map, you minimize F’ instead.
• So for each given function we minimize F (grouping the 1s) and minimize F’ (grouping the 0s).
• Out of the 4 possible combinations (F1 or F1’ together with F2 or F2’), we select the one that needs the fewest
distinct product terms. In this case F1’ and F2 share terms and need only 4 in total: AB, AC, BC, and A’B’C’.
F1 = (AB + AC + BC)’ (the output XOR gate is programmed to complement the OR output)
F2 = AB + AC + A’B’C’
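The chosen solution can be verified exhaustively over all eight minterms. This is an illustrative Python check (F1 is realized by complementing the OR output through the XOR gate):

```python
def bits(m):
    # minterm number -> (A, B, C) bit values
    return (m >> 2) & 1, (m >> 1) & 1, m & 1

F1 = {0, 1, 2, 4}   # minterms of F1
F2 = {0, 5, 6, 7}   # minterms of F2

for m in range(8):
    A, B, C = bits(m)
    AB, AC, BC = A & B, A & C, B & C
    nABC = (1 - A) & (1 - B) & (1 - C)          # the term A'B'C'
    assert 1 - (AB | AC | BC) == (m in F1)      # F1 = (AB + AC + BC)'
    assert (AB | AC | nABC) == (m in F2)        # F2 = AB + AC + A'B'C'
```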
PAGE 87
7.7 Programmable Array Logic (PAL)
The PAL is a programmable logic device with a fixed OR array and a programmable AND array. Because only the
AND gates are programmable, the PAL is easier to program than, but is not as flexible as, the PLA. Figure 7.16
shows the logic configuration of a typical PAL with four inputs and four outputs. Each input has a buffer–inverter
gate, and each output is generated by a fixed OR gate. There are four sections in the unit, each composed of an
AND–OR array that is three wide, the term used to indicate that there are three programmable AND gates in each
section and one fixed OR gate. Each AND gate has 10 programmable input connections, shown in the diagram by
10 vertical lines intersecting each horizontal line. The horizontal line symbolizes the multiple‐input configuration
of the AND gate. One of the outputs is connected to a buffer–inverter gate and then fed back into two inputs of the
AND gates. Commercial PAL devices contain more gates than the one shown in Fig. 7.16 . A typical PAL
integrated circuit may have eight inputs, eight outputs, and eight sections, each consisting of an eight‐wide AND–
OR array. The output terminals are sometimes driven by three‐state buffers or inverters.
In designing with a PAL, the Boolean functions must be simplified to fit into each section. Unlike the situation
with a PLA, a product term cannot be shared among two or more OR gates. Therefore, each function can be
simplified by itself, without regard to common product terms. The number of product terms in each section is
fixed, and if the number of terms in the function is too large, it may be necessary to use two sections to implement
one Boolean function.
PAGE 88
Example: Implement the following Boolean function using the PAL shown before,
Simplifying the functions,
Note that the function for z has four product terms. The logical sum of two of these terms is equal to w . By using
w, it is possible to reduce the number of terms for z from four to three.
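The fitting constraint behind this trick can be stated as a one-line check. This is an illustrative Python sketch of the three-wide section limit; `SECTION_WIDTH` and the helper name are our own, not a real tool:

```python
# Each PAL output section ORs at most three product terms, so a
# four-term function must borrow another section's output (such as w)
# through the feedback path.
SECTION_WIDTH = 3

def fits_in_one_section(product_terms):
    return len(product_terms) <= SECTION_WIDTH

z = ["t1", "t2", "t3", "t4"]        # four terms: does not fit directly
assert not fits_in_one_section(z)

# Replacing two of the terms by the feedback variable w leaves three
# terms, which fits in a single section:
assert fits_in_one_section(["t1", "t2", "w"])
```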
PAGE 89
If the AND gate is not used, we leave all its input fuses intact. Since the corresponding input receives both the true
value and the complement of each input variable, we have AA’ = 0 and the output of the AND gate is always 0.
As with all PLDs, the design with PALs is facilitated by using CAD techniques. The blowing of internal fuses is a
hardware procedure done with the help of special electronic instruments.
PAGE 90