Digital Switching Overview

Introduction to Digital Logic
in Telecom
EETS8320
SMU
Lecture 8
(print slides only, no notes pages)
Page 1
©1997-2005 R.Levine
Digital Hardware
• Typically uses non-linear properties of
electronic devices
– Electro-mechanical relays, vacuum tubes were
used in the past
– Electronic, Dielectric and Magnetic devices
were also used historically
– Semiconductor electronic devices are
used primarily today due to high
component density, rapid operation,
and low cost of integrated circuits
Page 2
©1997-2005 R.Levine
Semiconductor Devices
– Diodes
– Transistors
• Junction
• Field Effect (and Metal Oxide Semiconductor--MOS)
– Advantages:
• Small size
– small, high-functionality in integrated circuit package
• High reliability
– failure mechanisms studied, understood, and avoided
• Fast switching operation
– but faster switching requires higher power!
• Low power consumption vis-à-vis prior art
– No heated filament, as in vacuum tubes
• Low cost of major raw materials
– Silicon is readily available everywhere, some dopant
elements are scarce but are used in very small quantities
Page 3
©1997-2005 R.Levine
Linear Electric Circuit
vout = (R2•R3•v1 + R1•R3•v2) / (R1•R2 + R2•R3 + R1•R3)
[Figure: resistive summing network -- v1 feeds the output node through R1, v2 feeds it through R2, and R3 connects the output node vout to the return.]
With the example component values shown:
vout = (2•v1 + 2•v2) / (4 + 2 + 2)
Page 4
©1997-2005 R.Levine
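A quick numeric check of the superposition formula above. This is a minimal sketch only; the component values R1 = 2, R2 = 2, R3 = 1 are assumed here purely because they reproduce the "4 + 2 + 2" example denominator, and the function name is illustrative.

```python
# Minimal numeric check of the resistive-summing formula above.
# R1 = 2, R2 = 2, R3 = 1 (consistent units) are assumed values that
# reproduce the "4 + 2 + 2" example on this slide.
def vout(v1, v2, r1=2.0, r2=2.0, r3=1.0):
    """Output of the linear resistive summing network, by superposition."""
    denom = r1 * r2 + r2 * r3 + r1 * r3
    return (r2 * r3 * v1 + r1 * r3 * v2) / denom

print(vout(1.0, 0.0))   # contribution of v1 alone: 2/8 = 0.25
print(vout(0.0, 1.0))   # contribution of v2 alone: 2/8 = 0.25
print(vout(4.0, 4.0))   # superposition: (2*4 + 2*4)/8 = 2.0
```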
Non-linear Electric Circuit
Analysis using piece-wise linear model of
diode, illustrated on voltage-current graph
[Figure: diode circuit analyzed with the piece-wise linear model. Ideally vout = v1; with v1 = 10 V the actual output is about 9.37 V because of the diode's forward characteristic. In the reverse-biased case an ideal voltmeter would read 0 volts; the actual reading is -10 µA • 10 kΩ = -0.1 V, due to the non-zero reverse current in the diode.]
Page 5
©1997-2005 R.Levine
One Stage Electronic Amplifiers
• Various graphic symbols are used to represent the amplifier
• The “ground return circuit” for the +5 V power supply is customarily omitted on drawings
[Figure: one-stage amplifier circuit and its transfer characteristic, vout versus vin. The curve shows a cutoff state, an approximately linear amplification region (note the negative slope), and a saturation state. Vin = 0.2 V is the “edge” of cutoff; Vin = 0.9 V is the “edge” of saturation.]
Page 6
©1997-2005 R.Levine
Amplifier Distinctions I
• Applications which use the approximately linear
state:
– Hi-fidelity audio amplifiers
– Radio frequency amplifiers (some types)
– Analog amplifiers in telephone transmission
• First used in early 20th century with vacuum tubes
• Linear state is not quite as linear as is needed for
a chain of numerous amplifier stages
– Cumulative distortion (“flattening” of the peaks of the
waveform) occurred in long distance analog telephone
conversations
• Negative feedback (invented by H.S. Black of Bell
Laboratories, ca. 1927) improves linearity, reduces
internal noise generation
Page 7
©1997-2005 R.Levine
Amplifier Distinctions II
• Non-linear applications intentionally use
primarily the cutoff and saturation states
– Minimal power dissipated in these regions
• power (watts) is the product of voltage and current: P = v•i
• In these operating regions, either v or i is low
– therefore low power is dissipated in the transistor
– design objective is typically to switch through the linear
region as rapidly as possible to minimize power
consumption
– Certain types of amplifiers (so-called Class C
and D, etc.) use non-linear properties for special
applications (not described in this course)
Page 8
©1997-2005 R.Levine
Digital Logic is the Main
Non-Linear Application
• Technology and design knowledge to
make digital logic equipment is
highly developed
• Fabrication in integrated circuit form
is well known, widely available
• Small devices switch between cutoff
and saturation states quickly
– thus minimizing power consumption
Page 9
©1997-2005 R.Levine
Digital “Logic”
• Performs “logical” operations on binary
voltage variables
– Value of binary voltage can be interpreted as
Yes vs. No, True vs. False, etc.
– Perform logical operations such as AND,
inclusive OR, exclusive OR (XOR), NOT, etc.
– Numeric variables can be represented in
binary form (base 2, in contrast to base 10
decimal)
– Ultimately numerical algebra operations (add,
subtract, multiply, divide, etc.) can be
performed using binary logic devices
Page 10
©1997-2005 R.Levine
Digital Components
• Any logical operation that can be
described by a list, table or Boolean*
formula can easily be designed using
simple digital “building blocks”
• We use OR, AND, NOT as the 3 basic
building blocks (other choices are
also possible)
*Boolean logic was invented circa 1854 by George Boole; “reinvented” ca. 1937 with technical improvements applicable
to electronic devices and electro-mechanical relays by
Claude Shannon.
Page 11
©1997-2005 R.Levine
Basic Building Blocks
• In this course, we will use 3 basic building
block devices:
 AND
 OR
 NOT (single input inverter)
• In some system designs, the basic
electronic building block modules may be
NOR or NAND (that is: NotOR, NotAND)
– Corresponds to one-stage output physical devices
– Less hardware and less signal delay in some cases
– Many abstract algebra structures have multiple choices
for their basic building blocks. Further exploration of
alternative building blocks is not done in this course.
Page 12
©1997-2005 R.Levine
Boolean Algebra* Description
•Boolean Algebra is the symbolic “language” of digital
logic system design
– Computers and digital switching systems almost all use
hardware with 2 voltage levels
– Multi-level (e.g. 4 level) hardware devices announced by
Intel Corp. in 1998, but binary symbolism still used
•Theoretical invention by George Boole (British
mathematician, ca. 1854)
– “Re-discovered” by Claude E. Shannon and applied to
relay logic circuit design in his 1937 master’s thesis (published 1938).
•Two-valued variables
– Usually represented by the symbols 0 and 1
– Closely related to binary numbering system as well.
*from the name of the first Arabic book known in Europe on classic algebra, Al'jabr, by Abu Ja’far
Muhammed ibn Musa Al-Khwarizmi. Born approx. 780, he is credited with the introduction of the symbol for
zero into Europe, and his name led to the word “algorithm.” Historically, Al’jabr means “repair of broken
bones.” Today, in general, algebra means “rules for manipulation of symbols”; not limited to high school
algebra using decimal numbers.
Page 13
©1997-2005 R.Levine
Logic Operations Analogous to
Ordinary Arithmetic
• “Sum” (OR) analogous to addition, usually
represented by the + or ∨ sign
• “Product” (AND) analogous to multiplication,
usually represented by • (or * on a typewriter)
• “NOT” has no direct arithmetic analogy.
Represented in writing via an overbar (macron),
apostrophe, or negative sign (x̄; x’; -x)
• Confusion is possible when binary arithmetic is
under discussion vs. binary Boolean logical
operations
– Read with care!
Page 14
©1997-2005 R.Levine
Binary Variables
• Two voltage levels are most widely used
– Called TTL (Transistor-transistor logic) levels
– Older equipment (and some recent Intel memory chips)
use more than two levels in some cases
• Typically zero (actually about 0.2) and 5 volts
– Lately, 3 or 1.5 volts is also used for the “high” level, for
faster switching and compatibility with battery voltage
(~1.5 V per cell)
• High level is symbolized by 1, low by 0
– Other symbols are occasionally used: T,F; H,L; ON, OFF,
etc.
– In some special cases, 0 is used for high and 1 for low.
This is sometimes called “negative” or “active low”
logic. We will not use this method here to avoid
confusion.
Page 15
©1997-2005 R.Levine
One-stage Amplifier forms an
Inverter (graphic symbols)
[Figure: inverter graphic symbols, input x and output y. The JEDEC (Joint Electron Device Engineering Council) symbol is a triangle with a small circle at the output; the International Electro-technical Commission (IEC) symbol is a rectangular box with a small circle at the output. The little circle is the true graphic indicator of inversion.]
x | y
0 | 1
1 | 0
• Signal flow is graphically normally left-to-right
• “Ground” wires are conventionally not drawn
• The small circle represents the negation (logical
negation changes 1 to 0; changes 0 to 1)
• The triangle symbol is also used to represent a
linear amplifier in other contexts.
– Read with care!
Page 16
©1997-2005 R.Levine
Two Chained Stages form a
non-inverting device
[Figure: two inverters in a chain, input x and output y, drawn with both the IEC symbol and the JEDEC symbol.]
x | y
0 | 0
1 | 1
• Logically, this does “nothing,” but in practical circuits it
allows greater “fan-out”
• It produces greater delay
– However, in modern devices, the delay per stage is typically 20
to 50 nanoseconds
– Simple analysis in this course ignores delay
– Intentional delay is desired in some cases to synchronize two
signal paths
Page 17
©1997-2005 R.Levine
Boolean Symbolic
Operations
• Finite number of combinations, well
represented by a table
• Only 2-input forms shown here, but
multiple inputs are used in general
Logical sum (inclusive OR): z = x + y
x y | z
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 1
Logical product (AND): w = x • y
x y | w
0 0 | 0
0 1 | 0
1 0 | 0
1 1 | 1
Note, in this context, that 1+1 is 1, not 2.
Page 18
©1997-2005 R.Levine
Logical OR Devices
• Multiple input non-inverting device with any one input
sufficient to raise the output
– Examples here use only 2 inputs, but multiple input
OR devices are widely used
• Several implementations:
– Transistor with two controlling electrodes “side by side”
– Junction transistor with two emitters
– FET with two gates
– Single control electrode (emitter or gate), with multiple inputs
connected via resistors (used in the past with discrete
components)
• Either controlling electrode alone can turn the anode
current ON, thus lowering Vout.
– Then a second chained inverting stage used to produce binary
1 output.
Page 19
©1997-2005 R.Levine
Logical AND Device
• Non-inverting device constructed so that
all inputs must go ON to switch the output
ON
– Examples here use only 2 inputs, but 3, 4 or
more input AND devices are widely used
• FET Transistor constructed with multiple
controlling electrodes “in series”
– Any one electrode, if OFF, turns the device
anode current OFF
– Followed by chained inverting stage to get
binary 1 output
Page 20
©1997-2005 R.Levine
Other Implementations and Graphic
Symbols
[Figure: AND gate symbols for inputs x and y -- the IEC rectangular symbol labeled “&” and the JEDEC “shaped” symbol with a bullet nose and straight back. OR gate symbols -- the IEC rectangular symbol labeled “≥1” and the JEDEC “shaped” symbol with a curved, pointed nose and concave back.]
• The hollow “box” graphic symbol
represents the amplifier on page 6.
• Via suitable resistor size choices,
the circuit above could act either as
an AND or an OR device (see pp.
30ff for details)
• A multi-input device with special
input transistors can be an AND or
an OR device
Page 21
©1997-2005 R.Levine
Combinatorial Logical Design
• No “ingenuity” is required to make a basic working system
– of course, further optimization is possible
• Desired logical functionality must first be stated in equation
or table form, relating all logical inputs to desired output(s)
• SUM OF PRODUCTS: Each row in the table which produces
binary 1 output can be related to a multi-input AND device,
and the outputs of all such devices can be used as input to
a multi-input OR device.
• PRODUCT OF SUMS: Alternatively, each row in the table
which produces a binary 0 can be related to a multi-input OR
device, and all these outputs can be used as input to a
multi-input AND device.
• Choice between these two methods based on smallest
component count or other criterion (fastest switching time,
minimization of the number of interconnections, etc.)
• Many other synthesis or design methods are also known,
but not described here
Page 22
©1997-2005 R.Levine
Example Logic Table and Formula
Negation of x is denoted x̄ in most documents. We write x’ due to typographic limitations.
row | x y z | u
0   | 0 0 0 | 1
1   | 0 0 1 | 0
2   | 0 1 0 | 1
3   | 0 1 1 | 0
4   | 1 0 0 | 1
5   | 1 0 1 | 0
6   | 1 1 0 | 1
7   | 1 1 1 | 0
u = (x’ • y’ • z’) +
(x’ • y • z’) +
(x • y’ • z’) +
(x • y • z’)
This is a “sum of products” formula.
An equally valid “product of sums”
formula can also be stated. Since this
example has 4 zero and 4 one values
for u, either method is equally “efficient”
using AND and OR building blocks.
Page 23
©1997-2005 R.Levine
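The sum-of-products recipe from the previous slide can be checked mechanically. The sketch below is illustrative only (the function names are not part of the slides): it rebuilds u from the four product terms and confirms that every row with u = 1 is exactly a row with z = 0.

```python
# Verify the sum-of-products formula u = x'y'z' + x'yz' + xy'z' + xyz'
# against the 8-row table on this slide.
from itertools import product

def u_formula(x, y, z):
    nx, ny, nz = 1 - x, 1 - y, 1 - z          # complements x', y', z'
    terms = [nx & ny & nz, nx & y & nz, x & ny & nz, x & y & nz]
    return max(terms)                          # logical OR of the product terms

for row, (x, y, z) in enumerate(product((0, 1), repeat=3)):
    u_table = 1 - z                            # the table gives u = 1 exactly when z = 0
    assert u_formula(x, y, z) == u_table
    print(row, x, y, z, u_formula(x, y, z))
```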
Simple Logic Synthesis
Note that a • or T intersection in the wiring graphics represents a connection.
Small hollow circles represent negation
at the input of a multi-input device.
A multi-input AND (&) output is 1
when all inputs are 1. A multi-input OR
is 1 when one (or more) input(s) is/are 1.
[Figure: four 3-input AND gates, one for each row of the previous page’s table with u = 1, each taking x, y, z with the appropriate input negations; their outputs feed a 4-input OR gate whose output is u. This example corresponds to the previous page’s sum of products.]
This design can be simplified and
redesigned to operate faster (fewer
gates in the signal path) in different
ways depending on the optimization
criteria.
Page 24
©1997-2005 R.Levine
Many Standard Logic Devices
• Many logical functions are required in large
quantities, and are typically mass produced in
special purpose integrated circuit form.
– Hardware logic “works” fast, uses less chip area, has
other desirable properties (vis-à-vis software).
– Desirable way to implement a function if the quantity
needed is large and economically attractive.
• Often Called Application Specific Integrated
Circuits (ASICs). Examples:
– Error protection coding and decoding
– Encryption/decryption devices
– Devices which scan for a special binary bit pattern (e.g.,
for synchronization bit string within a bit stream)
Page 25
©1997-2005 R.Levine
“Wired” vs. Programmable Logic
• Logical operations performed by a programmable computer
usually require more time and power, since operations are
performed sequentially
• Because hardware design (and test) is a large “up-front” cost,
special hardware is usually made only for a large quantity
(mass produced) product
• ASICs often combine special hardware with a CPU for both the
flexibility of software and the speed of wired logic
• Field “Programmable” Logic Arrays (FPLAs) have numerous
multi-input gates on a chip, with interconnect “wiring” (a
surface metallic pattern of connections) applied by the
designer for a particular application. This allows the economy
of mass production with the flexibility of small production
quantities.
Page 26
©1997-2005 R.Levine
Useful Example Digital Devices
• Multiple Output Devices
– Decoder example (used with memory)
– Full Adder (used in CPU)
– Path switch (and tri-state)
• Digital Devices with Feedback
– Output depends on past signal values
– Produces memory devices
• Multi-function Devices
– Arithmetic-Logic Unit (ALU)
Page 27
©1997-2005 R.Levine
Binary Numbering
• Base 10 numbers and arithmetic are customary, but base 2
numbers are appropriate for two-state hardware.
• Binary logic symbols do not inherently represent numbers, but
can be arranged to do so (another reason to use binary symbols)
Decimal value | 2^1 column | 2^0 column
0 | 0 | 0
1 | 0 | 1
2 | 1 | 0
3 | 1 | 1
Extending this table beyond the decimal value 3 requires another digit
column.
Page 28
©1997-2005 R.Levine
Binary, Octal, Hexadecimal
Decimal | Binary | Octal | Hexadecimal
0  | 0000 | 0  | 0
1  | 0001 | 1  | 1
2  | 0010 | 2  | 2
3  | 0011 | 3  | 3
4  | 0100 | 4  | 4
5  | 0101 | 5  | 5
6  | 0110 | 6  | 6
7  | 0111 | 7  | 7
8  | 1000 | 10 | 8
9  | 1001 | 11 | 9
10 | 1010 | 12 | A
11 | 1011 | 13 | B
12 | 1100 | 14 | C
13 | 1101 | 15 | D
14 | 1110 | 16 | E
15 | 1111 | 17 | F
• Alternate display/documentation forms use integral powers of 2 as
the base for convenience (the physical internal form is still binary)
Binary joke: technical people confuse Halloween with Christmas,
because Oct 31 equals Dec 25.
(In the early Roman calendar, September, October, November
and December were the 7th through 10th months respectively.)
Page 29
©1997-2005 R.Levine
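The table above is easy to reproduce with Python's built-in base conversions; this is a small checking sketch, not part of the original slides.

```python
# Print decimal, 4-bit binary, octal, and hexadecimal forms of 0..15,
# reproducing the table above.
for n in range(16):
    print(f"{n:2d}  {n:04b}  {n:o}  {n:X}")

# Going the other way: int() accepts an explicit base.
assert int("1010", 2) == 10
assert int("17", 8) == 15
assert int("F", 16) == 15
```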
Multiple Outputs
• Decoder example for a 3-bit binary number
[Figure: the 2^0 (least significant bit), 2^1, and 2^2 (most significant bit) input lines, with graphic symbols marking connections (•) and no-connections, drive four multi-input AND (&) gates whose outputs are n=0, n=1, n=2 and n=3.]
Note: this figure illustrates only 4 of the 8 distinct decoder outputs
which are possible with a 3-bit input.
Page 30
©1997-2005 R.Levine
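Behaviorally, a decoder raises exactly one of its 2^n outputs: the one whose index equals the binary input value. A minimal sketch of the 3-bit case (the function name is chosen for illustration):

```python
# Behavioral model of a 3-bit decoder: exactly one of the 8 outputs is 1.
def decode3(b2, b1, b0):
    """b2 is the most significant bit (2^2), b0 the least significant (2^0)."""
    n = 4 * b2 + 2 * b1 + b0
    return [1 if i == n else 0 for i in range(8)]

print(decode3(0, 0, 0))   # output n=0 selected: [1, 0, 0, 0, 0, 0, 0, 0]
print(decode3(0, 1, 1))   # output n=3 selected: [0, 0, 0, 1, 0, 0, 0, 0]
```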
Decoder Uses
• Addressing Memory
• Choosing one of several operations in a
Central Processing Unit (CPU)
• Sending signals to one selected device on
a line or data bus
– Example: IEEE-488 (HP-IB Lab) data bus
– Internal data bus in a computer between CPU,
memory and peripheral devices
– Using a decoder, all the printed wiring cards
on a shelf can be connected to the same
“back-plane” bus, but only one card receives
the electrical bus signals when a selection
code for that particular card is sent.
Page 31
©1997-2005 R.Levine
1-Line “Switch” or Router
• Steers or directs “high” signal to desired output
• Unselected output is always 0 (low)
• Multiple AND devices used for parallel signal lines (bus)
[Figure: the binary data signal feeds two AND (&) gates. A steering or control signal (0 for output A, 1 for output B) enables one AND directly and the other through a negation, so only one gate is enabled at a time.]
When one of the two
outputs has been selected, it mimics the data input,
while the other output is continuously zero.
Page 32
©1997-2005 R.Levine
Three-state Line Drivers
• We cannot connect multiple ordinary
device outputs to the same line
– Contradictory high vs. low voltage outputs on
the same line are logically “illegal”
• Output value of +5 V from one device and 0
V from another device cannot exist on the
same wire in our abstract model
– In practice, one or both devices will overheat
from excessive output current
• Instead, a special device type is used,
called a 3-state or Tri-state driver
– Has a transistor “current switch” in the output
Page 33
©1997-2005 R.Levine
Multiple Drivers
[Figure: conceptual simplified internal diagram of a simplified output stage. Two drivers (signal 1 with control 1, signal 2 with control 2) share a common output line. Each consists of an ordinary device with constant output impedance followed by a current switch; the FET channel is pinched off by a “low” control signal, producing a “high output impedance.” The graphic symbol may differ: it may be drawn in a rectangular outline and may have circles to indicate negation on the control line.]
Page 34
©1997-2005 R.Levine
3-state Uses
• Allows shared use of a single output wire
when different devices do not produce
useful output at the same time
– “pin count” (quantity of external connection
“wires”) is a major cost and complexity factor
with many integrated circuits
– “connections” often cost more than silicon
components in integrated circuits!
• Allows use of same wire(s) for opposite
direction signal flow at different times
– Example: Memory input/output (I/O Bus)
Page 35
©1997-2005 R.Levine
Binary Arithmetic
• 2-input forms shown here:
Arithmetic sum (exclusive OR): s = x XOR y
x y | s
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 0
Also called “ring sum” or “modulo-2 sum.” The symbol ⊕ is sometimes used.
Arithmetic carry (AND): c = x • y
x y | c
0 0 | 0
0 1 | 0
1 0 | 0
1 1 | 1
Page 36
©1997-2005 R.Levine
2-input (“half”) adder
[Figure: inputs x and y feed a network of AND (&) gates and an OR gate that forms the exclusive OR for the sum output s; a separate AND gate forms the carry output c.]
Page 37
©1997-2005 R.Levine
3-input, 2-output “full” adder
• Table definitions (3 inputs, 2 outputs)
carry input | x | y | c out | s out
0 | 0 | 0 | 0 | 0
0 | 0 | 1 | 0 | 1
0 | 1 | 0 | 0 | 1
0 | 1 | 1 | 1 | 0
1 | 0 | 0 | 0 | 1
1 | 0 | 1 | 1 | 0
1 | 1 | 0 | 1 | 0
1 | 1 | 1 | 1 | 1
Page 38
©1997-2005 R.Levine
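The table can be generated directly from gate-level definitions: the sum output is the exclusive OR of the three inputs, and the carry output is 1 whenever at least two inputs are 1. A small checking sketch (not from the slides; names are illustrative):

```python
# Full adder built from the logic operations defined earlier:
# s = x XOR y XOR cin, cout = 1 if two or more inputs are 1.
from itertools import product

def full_adder(x, y, cin):
    s = x ^ y ^ cin                          # XOR ("modulo-2 sum")
    cout = (x & y) | (x & cin) | (y & cin)   # carry when at least two inputs are 1
    return cout, s

for cin, x, y in product((0, 1), repeat=3):
    print(cin, x, y, *full_adder(x, y, cin))   # reproduces the 8-row table above
```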
Full Arithmetic Sum
[Figure: sum-of-products network for the sum output. Four AND (&) gates, one for each input combination giving s = 1, combine x, y and cin (with the appropriate negations); their outputs feed an OR gate whose output is s.]
Page 39
©1997-2005 R.Levine
Full Arithmetic Carry
[Figure: sum-of-products network for the carry output. AND (&) gates combining x, y and cin feed an OR gate whose output is cout.]
Note that the bottom AND gate
is common on both pages.
Page 40
©1997-2005 R.Levine
Combined Symbol
• Two previous structures in one box
• Common elements such as lower AND
gate need not be repeated
– This minimizes the component count
• This is available as a medium-scale-integrated circuit (MSI)
• Also available as a “standard cell” which
can be placed in a large-scale-integrated
(LSI) circuit overall design layout, as
needed.
Page 41
©1997-2005 R.Levine
Graphic Symbols
[Figure: a single ADD box with inputs x, y and cin at the top and outputs cout and s at the bottom.]
A diagram showing data flow from the top input to the bottom
output is convenient here to show the separate bits of a
binary number, with the least significant bit at the viewer’s
right.
In the 3-bit full adder below, cin0 is set to 0.
[Figure: three ADD boxes side by side, least significant stage at the right. Stage 0 adds x0 and y0 with cin0 = 0; its cout feeds cin1 of stage 1 (inputs x1, y1), whose cout feeds cin2 of stage 2 (inputs x2, y2). The outputs are s2, s1, s0 and the final cout.]
Page 42
©1997-2005 R.Levine
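Chaining ADD boxes as in the figure gives a ripple-carry adder: cin0 is tied to 0 and each stage's carry out feeds the next stage's carry in. A sketch under those assumptions (the compact full_adder helper is redefined here so the example stands alone):

```python
# 3-bit ripple-carry adder, chaining full adders as in the figure above.
def full_adder(x, y, cin):
    return (x & y) | (x & cin) | (y & cin), x ^ y ^ cin   # (cout, s)

def ripple_add3(x_bits, y_bits):
    """x_bits, y_bits are (x2, x1, x0); bit 0 is least significant, cin0 = 0."""
    x2, x1, x0 = x_bits
    y2, y1, y0 = y_bits
    c1, s0 = full_adder(x0, y0, 0)      # stage 0
    c2, s1 = full_adder(x1, y1, c1)     # stage 1
    cout, s2 = full_adder(x2, y2, c2)   # stage 2
    return cout, s2, s1, s0

print(ripple_add3((0, 1, 1), (0, 0, 1)))   # 3 + 1 = 4  -> (0, 1, 0, 0)
print(ripple_add3((1, 1, 1), (0, 0, 1)))   # 7 + 1 = 8  -> (1, 0, 0, 0)
```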
ADDer is a Simple Arithmetic-Logic Unit
(ALU) Example
• “Real” ALU has at least 4 parallel bits.
– Also 8, 12*, 16, 18*, 32, 60* and 64 bit units
have been made
• Real ALU has multiple functions aside
from ADDing. Here is an example
– Arithmetic subtraction performed by means of
ADDing 2’s complement negative values
– Complementation is another operation
• 1’s complement: comp1(0001) is 1110
• 2’s complement: comp2(0001) is 1111. Sum of 1111
with 0001 produces 0000, and carry out.
* these values are rarely used today, only for special-purpose computers
Page 43
©1997-2005 R.Levine
Negative Numbers via 2’s Complement
• 2’s complement of binary number formed in 2 steps:
– Form the 1’s complement (0→1; 1→0)
– Add ...000001
– example: 3 in binary is 0011. 1’s comp value is 1100. Add 1, giving
1101, which is the 2’s comp representation of -3. Check by adding 1101
with 0011. Result is 0000 (with carry out).
– Most modern computers (PCs, Macintosh, etc.) use 2’s
complement for negative numbers
• Some older computers, for example Cray or CDC (or
modern emulations, e.g. via Sun Sparc, for backward
software compatibility) used 1’s complement
– telecom PCM* binary values do not use 2’s complement
for negative values. (First bit is sign, remaining bits are
magnitude of encoded value. Conversion is required
before doing 2’s complement arithmetic with PCM
samples.)
*PCM=pulse code modulation. Examples are µ-law and A-law.
Page 44
©1997-2005 R.Levine
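The two-step recipe above is easy to check; the sketch below assumes a 4-bit width to match the example on this slide.

```python
# Two's complement of a 4-bit value: invert every bit, then add 1
# (the result is kept to 4 bits, discarding any carry out).
def twos_complement(value, bits=4):
    ones = value ^ ((1 << bits) - 1)          # 1's complement: flip every bit
    return (ones + 1) & ((1 << bits) - 1)     # add 1, keep only 'bits' bits

neg3 = twos_complement(0b0011)                # 3 -> 1101, the code for -3
print(f"{neg3:04b}")                          # 1101
print(f"{(neg3 + 0b0011) & 0b1111:04b}")      # adding 3 back gives 0000 (carry discarded)
```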
Logic Functions
• ALU could perform logical AND on its
inputs by using only the lowest device in
previous diagrams (pp. 14,15).
• Requires additional AND gates in the 4
lines entering the final box.
• The other input to the new AND gate is a
function control
• This control line can be an output of a
decoder.
Page 45
©1997-2005 R.Levine
Graphic ALU Symbol
[Figure: ALU box with parallel inputs x and y at the top, an operation control code (op-code) input at the side, and the cout and main output at the bottom. Thick lines represent parallel wires (4, 8, etc.); for hand-drawn pictures, an alternative graphic symbol uses a diagonal mark and sometimes a number (e.g. 8) to show the width of a parallel path. The ALU typically contains a decoder, not shown explicitly but represented by the red box.]
Page 46
©1997-2005 R.Levine
Early Electronic Computers
• Prior to 1950s
• Many single function logic processor units were
available (Add, Multiply, etc)
– few were multiple function ALUs
• Computer was “programmed” by wiring output
from one processor into input of next step
– Output of a multiplier was wired into input of an adder,
etc.
• “Program” was saved on a removable plug-board
with wire jumpers
Page 47
©1997-2005 R.Levine
“Von Neumann” Structure
• In the late 1940s, John (“Johnny”) von Neumann, a
noted Hungarian-born American mathematician, proposed
– Use a multiple function ALU
– Store the program steps as binary operation codes in
random access memory (RAM)
– Also store the data to be processed and (temporarily)
output data in RAM.
– Known as “Von Neumann” computer architecture
• Magnetic devices were most popular for memory
in that era
– Magnetic memory is still used today in disks, tape, etc.,
but not described in this course
Page 48
©1997-2005 R.Levine
Early Software
• Computers designed since about 1948
thus use stored programs
• A sequence of
– operation step binary codes, and
– RAM address values indicating
• where to get input data
• where to store results temporarily (ultimately the
results are typically sent out to a “peripheral” device:
a printer, display screen, or other device at a later
step in the program)
• Historically, the program was stored on punched
cards or magnetic media (typically tape) when not
in use.
Page 49
©1997-2005 R.Levine
“Memory” is Needed
• RAM or main memory
– Stores data
– Stores programs (physically separate RAM or ROM used for data
vs. program in some computers, particularly high reliability units)
– Access to a particular part of memory is organized according to a
numbering scheme (addresses)
• Many bit-storage devices in the central processor unit (CPU)
also hold data, control codes, etc.
– Registers hold a “byte” or other “chunk” of bits
– Single bits indicating overflow, etc.
• “Mass” memory: usually disk, tape, etc. for more storage
“space” at lower cost. Non-volatile data, not lost if power off.
– Mass memory has slower access to a designated address
than RAM
• For example, one must wait for disk to rotate to proper
angular sector location, etc.
Page 50
©1997-2005 R.Levine
Digital Memory Devices
• Many digital memory devices today use internal
feedback to produce a bi-stable electronic device
(so-called “static” RAM).
• Some digital memory also is based on a tiny
capacitor which retains electric charge
• Called “dynamic” RAM memory
• It requires auxiliary electronic devices to scan the
electronic capacitors to “refresh” any charge that leaks
away due to imperfect insulation
• Its cycle time is faster, but it requires more standby power
to operate.
– Magnetic memory devices are little used today in internal
computer circuits:
• Still extensively used for mass memory and backup.
Page 51
©1997-2005 R.Levine
Digital Memory Devices
• Feedback of a digital output to an input can
produce different effects:
1. Bi-stable conditions (two consistent stable values of the
voltage)
– Actual physical value then depends on history of electric signals -thus: “memory”
2. Mono-stable conditions (like a non-feedback structure)
3. Inconsistent conditions (no logically consistent voltage,
neither high nor low)
- In physical implementations, inconsistent logical conditions
typically cause oscillation
- In the humorous American folk song “The Arkansas Traveler”
a donkey is described in the second verse with two
limitations on its movement: “He couldn’t go ahead and he
couldn’t stand still.” The conclusion was that the donkey’s
position oscillated since he couldn’t remain in either stable
condition: “So he went up and down like an old saw mill.”
Page 52
©1997-2005 R.Levine
“Inconsistent” Device
• NOT inverter device with output connected to
input is “inconsistent”
– Diagram implies x = y. Table implies x ≠ y.
[Figure: inverter with output y connected back to input x.]
x | y
0 | 1
1 | 0
– Actual voltage oscillates with frequency equal to
reciprocal of device delay time (typically 20 ns delay time
corresponds to 50 MHz frequency)
– Delay time was never previously specified as a significant
parameter in this course!
– A doorbell buzzer is an electromechanical oscillator
device of this general type.
Page 53
©1997-2005 R.Levine
Consistent Bi-stable Device
• Two consistent stable states: x=0,y=1; or x=1,y=0
[Figure: two inverters in a chain -- x into the first stage, its output y into the second stage, output z -- with z connected back to x. The labels show one stable assignment, x=0, y=1, z=0 (the other is x=1, y=0, z=1).]
• Only one state exists at a given time
• Actual state depends on history (previous time
values of x, y)
• In fact, there are 3 consistent states, but only two
of them are stable.
Page 54
©1997-2005 R.Levine
Physical Description
The right graph is the composite, or chained, result of the “two-stage amplifier.”
[Figure: the upper left graph plots vy versus vx (first stage); the lower left graph is the same as the upper left but rotated clockwise 90° for convenience, so that its vy axis aligns with the second-stage plot of vz versus vy. Tracing points a, b, c, d between the graphs yields the composite vz-versus-vx curve at the right, drawn together with the 45° line (vx = vz) and the intersection points a, e, f.]
• Trace points a,b,c,d to find composite relationship between
vx, vz
• Connecting x with z on previous page makes vx=vz. Three
consistent values: points a, e, f… but e is unstable in region
of linear amplification. Any small disturbance will drive
voltage to either of points a or f (reasons not shown here).
Page 55
©1997-2005 R.Levine
NOR (NOT OR) Gate
• One stage inverting amplifier with two
ORed inputs
• First internal part of practical noninverting OR gate
[Figure: inputs x and y feed an IEC OR symbol (“≥1”) with an inversion circle at the output z.]
x y | z
0 0 | 1
0 1 | 0
1 0 | 0
1 1 | 0
Page 56
©1997-2005 R.Levine
“Flip Flop” from NOR Gates
• Value of the two input signals controls the
state of this binary “flip flop”
[Figure: two cross-coupled NOR gates. The s input and the feedback of q drive the left NOR, whose output is the middle node q’; the r input and q’ drive the right NOR, whose output is q. The middle node is often designated q’ since it has the opposite value from q in all consistent states except the bottom row.]
s r | q
0 0 | 1/0
0 1 | 0
1 0 | 1
1 1 | 0
1/0 in the table indicates that either 1 or 0 is a
consistent value. We check each row of the
chart by assuming a value for q and tracing
around the circuit to verify the assumption of consistency.
(A small consistency-check sketch follows this slide.)
Page 57
©1997-2005 R.Levine
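The row-by-row consistency check described above can be automated: assume a value for q, recompute the two NOR outputs around the loop, and see whether the assumption survives. A small illustrative sketch (names chosen here, not from the slides) that mirrors the analyses on the following pages:

```python
# Consistency check of the cross-coupled NOR flip-flop:
# q' = NOR(s, q) and q = NOR(r, q'); an assumed q is "consistent"
# if recomputing around the loop returns the same q.
def nor(a, b):
    return 1 - (a | b)

def consistent_states(s, r):
    states = []
    for q_assumed in (0, 1):
        q_prime = nor(s, q_assumed)
        q_recomputed = nor(r, q_prime)
        if q_recomputed == q_assumed:
            states.append((q_assumed, q_prime))
    return states           # list of consistent (q, q') pairs

for s, r in ((0, 0), (0, 1), (1, 0), (1, 1)):
    print(s, r, consistent_states(s, r))
# s=r=0 keeps both q=0 and q=1 (memory); s=0,r=1 forces q=0;
# s=1,r=0 forces q=1; s=r=1 forces q=0 with q'=0 as well.
```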
Flip Flop Analysis 1
• Check consistency of row 2 (row 3
test not shown but similar)
[Figure: the NOR flip-flop redrawn with the row-2 input values s=0, r=1 and the assumed and computed node values; the “!!” annotation marks the node value that contradicts the alternative assumption q=1. The s-r-q table is repeated with row 2 marked as the starting point.]
Method: Begin by assuming q=0 and inputs s=0, r=1. Then q and s are
inputs to the left NOR, and its output (q’) is 1 according to the chart on the earlier
page. With 1 and 1 as inputs to the right NOR, the output is 0, which is consistent
with the original assumptions. Assuming q=1 is inconsistent (it gives a q=0 result).
Page 58
©1997-2005 R.Levine
Flip Flop Analysis 2
• Check consistency of row 1
[Figure: the NOR flip-flop redrawn with the row-1 input values s=0, r=0 and the assumed and computed node values. The s-r-q table is repeated with row 1 marked as the starting point.]
Method: Begin by assuming q=0 (or 1) and inputs s=0, r=0. Then compute
q’ from left NOR using NOR chart. Result of r and q’ inputs to right NOR is
consistent with both original assumptions. In a physical device, the actual
state will also be dependent on previous time value of q. This is the bi-stable
state of the flip flop.
Page 59
©1997-2005 R.Levine
Flip Flop Analysis 3
• Check consistency of row 4
[Figure: the NOR flip-flop redrawn with the row-4 input values s=1, r=1 and the assumed and computed node values; the “!!” annotation marks the contradiction with an assumed q=1. The s-r-q table is repeated with row 4 marked as the starting point.]
Method: Begin by assuming q=0 (or 1) and inputs s=1, r=1. Then compute
q’ output from left NOR using NOR chart (0 for both initial assumptions!).
Result of r and q’ inputs to right NOR is 0 in both cases, which is inconsistent
with original assumption of 1 for q value, but consistent with q=0. This is not a
good state to use at a time just before the s=0, r=0 state because it presents a history
which does not properly set up the bistable state of the flip flop.
Page 60
©1997-2005 R.Levine
Conclusions
• NOR flip flop has “memory” when s=0,r=0. It
“remembers” previous state of q corresponding
to opposite values of s vs. r.
• The state s=1, r=1 is consistent, but causes both
q and q’ to be zero. It is to be avoided just before
setting s=0, r=0.
– In a real device, if we pulse both s and r, and then turn
them both off (0 value), the actual flip flop will always
take a specific state (like q=1, q’=0) which depends on
small differences in the component values (resistance,
etc.) of the two NOR devices, rather than on the history.
This is not helpful, since we then can’t use the device as
a memory element.
Page 61
©1997-2005 R.Levine
Physical Memory
• The flip flop “remembers” because electric
charge stored in each transistor (in
depletion layers, etc.), like an electric
capacitor, prevents the voltage from
changing in a discontinuous jump
• A large current pulse is required to make
the voltage change suddenly in a capacitor,
and such a pulse does not occur here.
• Therefore, the voltage across various
semiconductor junctions does not
suddenly change, but “remembers” its
immediately previous value
Page 62
©1997-2005 R.Levine
Partial Model Gives Partial Solution to
the Problem
• Because our abstract model of the electronic
components does not include delay or storage of
electric charge, we cannot calculate the time
needed to switch from (symbolic) 0 to 1.
– We also cannot accurately determine which of the three
consistent states is stable vs. unstable.
– If we included some capacitors in the correct places in our
circuit diagram, to represent the storage of electric charge
in the depletion layers inside the transistors, we could get
at least an approximate answer to these questions.
• However, even with a simplified model, we can still
determine which output states are consistent, and
we can infer which previous inputs will produce
those consistent outputs.
Page 63
©1997-2005 R.Levine
Memory Device Applications
• Digital memory devices (memory
registers) can be used in the central
processor unit (CPU) of a computer to
hold data used as input or output of the
ALU. Organized into “registers”
• Digital memory devices can be used in
large arrays as random access memory
(RAM) to store data and program code in a
computer.
• A “shift register” can be used to convert
data presentation between serial and
parallel formats.
Page 64
©1997-2005 R.Levine
Memory Cell Devices
• NOR flip-flop used with added
modules can make several useful
devices:
– Gated or clocked memory cell
– D-type memory cell (sampling gate)
• Arrays of D units make up static RAM memory
– Sequential pulse counters
– Serial-parallel converter (shift register)
Page 65
©1997-2005 R.Levine
Gated Flip Flop
• Gated devices only respond to inputs when the gate/clock
signal is high. Device “remembers” status at instant the
gate/clock signal goes low.
• Jargon note: a whole module is also called a “gate” in
some cases. This module is called an S-R (set-reset) ff.
[Figure: the Reset (or clear) input and the Set input each pass through an AND (&) gate whose other input is the Gate or clock pulse; the two AND outputs drive a cross-coupled NOR pair producing q and q’.]
G S R | q
0 0 0 | 1/0
0 0 1 | 1/0
0 1 0 | 1/0
0 1 1 | 1/0
1 0 0 | 1/0
1 0 1 | 0
1 1 0 | 1
1 1 1 | 0
1/0 in the table describes the
last value of the s input at the
time before the state of
the row indicated.
Page 66
©1997-2005 R.Levine
D (latch) Flip Flop
• “Samples” the data line and saves the value in
place when the gate signal drops
[Figure: the data input d and the gate or clock pulse g are combined in two AND (&) gates (one taking d directly, one its negation); the AND outputs drive a cross-coupled NOR pair producing q and q’. One type of graphic symbol is a box with d and g inputs and a q output.]
g d | q
0 0 | d(t-)
0 1 | d(t-)
1 0 | 0
1 1 | 1
d(t-) denotes the value of d just before the gate dropped.
Page 67
©1997-2005 R.Levine
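Behaviorally, the D latch is transparent while the gate is high and holds its last value when the gate drops. A minimal behavioral sketch (the class name is chosen only for illustration):

```python
# Behavioral model of the D latch: transparent when the gate is high,
# holds the previous value d(t-) when the gate is low.
class DLatch:
    def __init__(self):
        self.q = 0
    def step(self, d, g):
        if g == 1:
            self.q = d        # follow the data line while gated
        return self.q         # gate low: keep the stored value

latch = DLatch()
print(latch.step(d=1, g=1))   # 1: gate high, latch follows d
print(latch.step(d=0, g=0))   # 1: gate low, d ignored, value held
print(latch.step(d=0, g=1))   # 0: gate high again, captures the new d
```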
Memory Cell
• A 64 Kbit (8 Kbyte) memory uses 65536 of these cells*
• Address decoder (not shown here) produces the enable
signal for each cell. Eight such cells store a byte/octet.
Note that both data input and output use the same line,
due to 3-state output, and they are never simultaneous.
[Figure: the data I/O line connects both to the D input of a D latch and to the output of a 3-state driver fed by the latch’s Q output. A “cycle” signal (a short pulse to cause a read or write), a write signal (1 = write, 0 = read; a write stores data in the D latch), and the cell enable signal are combined in AND (&) gates: one gates the latch’s G input for a write, the other enables the 3-state driver for a read.]
When a cycle pulse
occurs with the write
input high, the D latch
captures the data on the
I/O line. When the write
is low (indicating read)
the 3-state driver pulses
the I/O line with the
output from the D latch.
*Capital K represents 1024 while small k represents 1000.
Page 68
©1997-2005 R.Levine
Static RAM Structure
•
Static RAM is an array of D cells. Write/read and cycle pulse signals are
connected to each cell but not shown.
[Figure: an array of cells, 8 columns wide (data I/O lines bit 7 down to bit 0; 5 columns of cells omitted from this diagram for simplicity) and 8192 rows deep (8190 rows of cells omitted from this diagram). A decoder drives cell enable lines 0 through 8191 (decimal), one per row. The address is presented on 13 parallel lines for this 8 Kbyte memory.]
Page 69
©1997-2005 R.Levine
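Functionally, the array behaves like 8192 bytes selected by a 13-bit address, with one D-latch cell per bit and a shared data line per bit column. A behavioral sketch only (names are illustrative; this is not the gate-level structure in the figure):

```python
# Behavioral model of the 8 Kbyte static RAM array: a 13-bit address
# selects one row of eight 1-bit cells; a write stores a byte, a read returns it.
class StaticRAM:
    def __init__(self, address_bits=13):
        self.cells = [0] * (1 << address_bits)   # one byte per cell enable line

    def cycle(self, address, write, data=0):
        if write:                                 # write = 1: store the data byte
            self.cells[address] = data & 0xFF
            return None
        return self.cells[address]                # write = 0: read

ram = StaticRAM()
ram.cycle(8191, write=1, data=0b10110001)
print(f"{ram.cycle(8191, write=0):08b}")          # 10110001
```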
Dynamic RAM
• Dynamic RAM uses an insulated gate MOS transistor and
other components as a capacitor data storage cell to store
electric charge (logical high voltage) or store no charge
(logical low voltage) on the gate electrode, rather than using
a D storage cell
– Switches faster, and most designs use less power than static
RAM when storage is short term anyway
• Because the charged electrode is not perfectly insulated
from its surroundings, charge can gradually flow away if not
“refreshed.” To refresh the binary data:
– An internal address sequence generator rapidly scans each
data storage cell, and performs a read operation
– If the detected analog voltage is near the high voltage level, the
gate is then recharged to full voltage (a write operation)
– Special circuits prevent undesired interaction of the refresh
operation with a “genuine” read or write.
Page 70
©1997-2005 R.Levine
“Flash” ROM/RAM
• So-called FLASH memory uses isolated
electrodes to store electric charge in a
capacitor structure.
• It retains stored binary data for months or
more without power.
• It is “slow” compared to traditional
dynamic or static RAM with regard to the
cycle time to read or write data, and
requires considerable power to “write.”
• Useful for semi-permanent data such as a
list of frequently used telephone numbers
in a cell ‘phone.
Page 71
©1997-2005 R.Levine
Chain of RS Flip Flops
•
Also called a shift register
– Note right-to-left signal flow in diagram. Some shift registers are
constructed with AND gates and additional cross-wiring to permit flow
in either chosen direction
[Figure: five flip-flop cells in a chain. The serial input enters the rightmost cell and the serial output is taken from the leftmost cell (signal flows right to left in this diagram). Each cell has s and r inputs, q and q’ outputs, and parallel preset (p) and parallel clear (c) inputs.]
Notes: Gate/clock not shown but connected to all cells. Additional signals:
p is parallel preset, sets q=1 regardless of gate/clock. c is parallel clear, “clears”
(makes q=0) regardless of clock. Details of p and c not shown here.
When the gate/clock is pulsed, the q value at the left side of a cell is “copied”
into the cell to its immediate left. The internal signal delays in each cell allow
the cell to assume its new value shortly after its old value is sensed by the next cell.
Page 72
©1997-2005 R.Levine
Shift Register Uses
• Serial-parallel inter-conversion
– When gate/clock pulses are synchronized with a serial
input bit stream,
• one bit is “captured” at input for each clock pulse
• bit values shift (to left in our figure) with each clock pulse
• the q outputs show a historical sequence of the past input
pulses, “oldest” on the left
– When each cell is first parallel preset (or cleared) from a
simultaneous parallel signal source, and then the
gate/clock signal begins repetitive pulsing
• serial output shows each bit in sequence, starting with the
left one first
• output bit rate is controlled by the gate/clock pulse rate
Page 73
©1997-2005 R.Levine
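A behavioral sketch of the 5-cell shift register used for serial-to-parallel conversion: each gate/clock pulse captures one serial input bit and shifts the stored bits one place to the left, oldest bit at the left as in the figure (class and method names are illustrative).

```python
# Behavioral 5-cell shift register: on each clock pulse the serial input bit
# enters at the right and every stored bit moves one cell to the left.
class ShiftRegister:
    def __init__(self, length=5):
        self.q = [0] * length          # q[0] is the leftmost (oldest) bit

    def clock(self, serial_in):
        serial_out = self.q[0]         # leftmost bit leaves as the serial output
        self.q = self.q[1:] + [serial_in]
        return serial_out

sr = ShiftRegister()
for bit in (1, 0, 1, 1, 0):            # shift in a serial bit stream
    sr.clock(bit)
print(sr.q)                            # parallel view of the last 5 bits: [1, 0, 1, 1, 0]
```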
“Elastic” Buffer
•
A structure (not shown) using two shift registers can be designed
to take in a serial bit string at one end and deliver it later as output
at the other end, although the instantaneous bit rates at the two
ends do not always match.
– One register holds the data, the second register indicates which bits
are active storage.
– The bits not yet output are stored in serial order inside one of the shift
registers (or some designs use a parallel byte access memory)
– The long term average bit rates must agree, or the amount of stored
data would grow without bound (if input rate > output rate), a
condition called “overflow.” Alternatively, the stored data would be
totally emptied, and there would be a “gap” in the output bit stream (if
input rate < output rate), called “underflow.”
•
First In First Out (FIFO) elastic buffers are used extensively in
conjunction with serial multiplexers that make use of bit or byte
stuffing (examples: DS-3, SONET/SDH, etc.) to produce a uniform
output bit rate even though internal bit rate has short-term “jitter”
for various reasons.
Page 74
©1997-2005 R.Levine
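The elastic (FIFO) buffer behavior can be sketched with a simple queue: bits go in at the write rate and come out at the read rate, and the stored count grows or shrinks with the short-term rate mismatch. This is a behavioral sketch only, not the two-shift-register structure described above, and the names are illustrative.

```python
# Behavioral FIFO elastic buffer: bits enter at one rate and leave at another;
# overflow or underflow occurs if the long-term rates do not match.
from collections import deque

class ElasticBuffer:
    def __init__(self, depth=16):
        self.bits = deque()
        self.depth = depth

    def write(self, bit):
        if len(self.bits) >= self.depth:
            raise OverflowError("input rate exceeded output rate for too long")
        self.bits.append(bit)

    def read(self):
        if not self.bits:
            return None        # underflow: a "gap" in the output stream
        return self.bits.popleft()

buf = ElasticBuffer()
for b in (1, 1, 0, 1):
    buf.write(b)
print([buf.read() for _ in range(5)])   # [1, 1, 0, 1, None]; the last read underflows
```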
“Register” Uses
• Bit storage devices used in the CPU are historically called
“registers.” This jargon term dates from days of mechanical
“cash registers” to store/display numbers.
• To store output from 4-bit arithmetic, 5-bit registers are
needed to include possible carry-out (“overflow”) value
• In the previous diagram, the right bit is conventionally considered the
2^0 bit, and the left bit (of 5) is the 2^4 bit (called carry or overflow)
– left shift by 1 bit is symbolic equivalent of: “multiply by 2”
– general “long” multiplication can be done by repeated shifting,
AND, and adding
– multiplying two n-bit numbers requires storage of a product
which may be as long as 2n+1 bits
• Input and output of parallel signals can be controlled by
use of AND gates on p,c input and 3-state drivers on q
outputs (but not shown in our figures)
Page 75
©1997-2005 R.Levine
Parallel vs. Serial
• Almost all computers and telecom signal processors,
switches, etc. are internally parallel
– Parallel connections are used within one room, one
cabinet or shorter distances
• Intermediate distances (shelf-to-shelf or peripheral-- printer
or disk -- in same room) may be serial or parallel
• Longer distance digital transmission (across the building
floor to across the world) are almost all serial connections.
• Parallel structure allows “parallel processing”* -- usually
faster
• Example: small computers have evolved from 8-bit parallel
internal connections to 16 and 32 and now 64 bits
*In recent years, the term “parallel processing” has been used to describe algorithmic
processes utilizing separate CPUs operating simultaneously on different parts of the
data. On this page, the term only implies separate bits processed by separate
hardware modules simultaneously.
Page 76
©1997-2005 R.Levine
Parallel vs. Serial
•
Parallel structure requires more components,
– but this is less economically significant today due to low cost,
very large scale integrated (VLSI) circuits
• Serial transmission is used over long distances because:
– Very difficult to maintain parallel bit synchronization on separate
lines over long distance, due to jitter and related factors
– Failure of one line of a parallel group renders data from the entire
parallel group useless
• Bad economic result to the owner or renter/leaser of transmission
facilities
– Practical parallel transmission systems (such as distinct
wavelengths in wavelength division multiplexing -WDM- via fiber
optics) usually transmit unrelated serial signal waveforms on
each parallel link
Page 77
©1997-2005 R.Levine
Computer Structure
• A general purpose computer, and a special purpose computer
for control of a telephone switch have similar major elements:
– Central Processor Unit (CPU), comprising:
• Arithmetic Logic Unit (ALU)
– Choice of operations, controlled by a parallel op-code input used in
conjunction with a decoder to activate the desired signal paths for each
operation. A sequence controller is also used when a sequence of operations
is needed
– For this purpose, a special op-code (instruction) register is used, to hold the
op-code value for the duration of the operation,
– in our simple example, we will use a 4-bit op-code (16 code choices)
• Register(s) to temporarily hold the inputs and the output of the ALU
– in our simple example we use only one dedicated input or output
holding register
– in modern integrated circuit CPUs, often 16 or 32 (or more) registers
are provided (or even multiple sets of registers), and they may be
used flexibly for various ALU-related input or output purposes
– Random Access Memory (RAM)
– Direct Memory Access (DMA) input/output module(s)
Page 78
©1997-2005 R.Levine
Computer Block Diagram
[Figure: the Central Processor Unit (CPU), the Direct Memory Access unit (DMA), the Random Access Memory (RAM) and the data Input/Output (I/O) are interconnected by an address bus and a data bus; DMA control signals and the Int line run between CPU and DMA, and R/W control runs to the RAM.]
Key:
• Int = interrupt (“I'm finished”) signal from
DMA to CPU
• R/W (read/write) control signals from
CPU or DMA to RAM
• clock and some other signals not
shown
Page 79
©1997-2005 R.Levine
Simplified CPU Diagram
• Clock, power and other wires not shown
[Figure: simplified CPU contents -- an instruction register (op-code part and numeric part); an op-code decoder producing control signals “to everything” plus the read/write control to RAM; a holding register; the ALU with inputs Ain and Bin and output C; and two RAM address registers with incrementing circuits, etc. The main data bus runs to/from RAM and the address bus runs to RAM.]
Although almost all parts are connected,
signals only pass from one part to another
when control signals from the op-code
decoder cause specific signal transfers.
Page 80
©1997-2005 R.Levine
Typical ALU Operations
• ALU has 2 inputs and 1 output*. Typical
operations:
– ADD two inputs
• One input is the holding register
• Other input is usually a designated data byte in RAM
• Output goes back into holding register.
– Internal time delay prevents premature erasure of the input!
– SUBtract, by first 2’s complementing one input and then
adding
– AND, OR, XOR the corresponding bits from the inputs
– Other arithmetic operations, such as multiply, by
repeated shift and add. (Requires special long output
holding register – not shown here.)
*Inputs and output are each a parallel bus. We use 4-bit bus in our
examples.
Page 81
©1997-2005 R.Levine
Modern Computers
• Modern computers typically have multiple holding registers, while
our example shows only one.
– Each operation also includes a code value specifying which of the
holding registers is used, e.g. for the 1 or 2 inputs, and which one
holds the output (we don’t show this)
– Some registers can be automatically incremented or decremented by
1 without using the main ALU (we don’t show this)
• These registers have a separate dedicated mini-add/subtract circuit with
wired input to add or subtract 1 upon command
• Later we will discuss “interrupt processing”-- responding to an
external electrical signal by running a special program
– this occurs when a subscriber lifts a handset or dials a digit in a small
telephone switch
– a multiple-register computer can save all the intermediate results of prior
computations so it can return to the “background” program after the program
required by the interrupt is completed. Depending on the specific design, the
register contents may be saved in a pre-determined part of the RAM memory,
in a separate set of “backup” registers in the CPU, or elsewhere.
Page 82
©1997-2005 R.Levine
More CPU Stuff
• CPU also contains:
– “Steering” circuits, which cause signal or bit flow to/from the
RAM, to/from particular registers by means of 3-state drivers
and gate/clock signals on the bit-storage registers involved.
– Two “address” registers:
• Instruction Address: holds the address value used to extract a
program step instruction from RAM.
– Usually this register is “incremented” by adding 1 after each program
step, using special dedicated add-only hardware. (Not all computer
designs add 1 -- this depends on the number of bytes in an instruction
“word.”)
– When a program “go to” occurs, a new program address is loaded into
this register rather than adding 1 to the previous value
• Data Address: holds the address used to extract or store data
from/to RAM
– Clock signals from an oscillator (clock), which cause a predetermined sequence of events, such as reading a new
instruction from RAM
Page 83
©1997-2005 R.Levine
Further CPU Stuff
• The CPU includes a module to decode the
op-codes and produce output signals
which cause the correct actions in
response to each op-code
• This control module may itself be “microprogrammed” to perform a sequence of
operations. For example: “multiply”
operation requires a sequence of several
left-shifts and ADDs. This requires more
clock cycles than a simple one-step
operation.
Page 84
©1997-2005 R.Levine
Control Module
• Certain operations are controlled by the control decoder:
– LOAD a data value from a designated byte in RAM to the
holding register (this name only used for putting data into a register)
– STORE the data from the holding register to a designated byte
in memory (this name only used for putting data into RAM)
– JUMP (change the next instruction step address in the address
register) instead of incrementing that register
• corresponds to GO TO in certain programming languages
– Conditional JUMP, with specified test bit:
• corresponds to IF(… test condition…) THEN GOTO… in program
• bit tested (in holding register) could be
– test the most significant bit (sign bit) of a 2’s complement number for
“jump if result of last calculation is negative.”
– test overflow (carry-out) bit
– test if all bits are zero
– etc., etc.
Page 85
©1997-2005 R.Levine
Random Access Memory (RAM)
• RAM in general contains both stored program steps
(object code) and data
– Program steps include an op-code and relevant address
• In our simple example computer, the 8-bit storage byte/octet
contains a 4-bit op-code and a 4-bit address
• In many telecom computers, the program steps are
in a physically separate memory module from the
data
– This program memory can be altered only when loading a
new version of the program code (requiring certain
mechanical switches to be used). This protects against
inadvertent alteration of the program code while operating.
Page 86
©1997-2005 R.Levine
DMA I/O
•
•
Direct Memory Access (DMA) I/O modules are widely used in
computers for “fast” data transfer to/from disks, screen display or
printer (also low bit rate devices like keyboards, etc.) since the
1960s.
DMA module can transfer data to/from a section of RAM
independent of what the CPU is then doing
– During part of an op-code execution cycle, the CPU accesses
RAM, during another part it does internal operations. The DMA
module acts during this second part of the cycle.
• Comment: A special type of DMA module is used with a
separate buffer memory to perform time switching or time-space
switching on PCM samples in a multiplexed
transmission bit stream for telecommunications switching.
– Serial-parallel conversion via shift registers is used as an
interface between such a switch and serial wire links like T-1
or E-1 or SONET/SDH optical fiber.
– Very similar to computer memory hardware… more on this in a
future session.
Page 87
©1997-2005 R.Levine
A Few Example Op-codes
Mnemonic | Binary | Hex
L    | 0110 | 6
ST   | 0111 | 7
HLT  | 0001 | 1
ADD  | 1001 | 9
• In a simple example we will use these op-codes*:
– Load from designated RAM address to holding register
– STore data from the holding register to the designated address
in RAM
– HaLT: stop the clock and the incrementation of the instruction
address register
• In some multi-user computers, this operation is “privileged” so
one ordinary user can’t stop the computer.
– Add the contents of the holding register to the contents of the
designated byte in RAM. Result is in holding register
*op-code values here are patterned after Motorola 68000 family CPU. L and ST
operations are both “copy data” operations, but have distinct jargon names.
Page 88
©1997-2005 R.Levine
A Simple Program
• Assume that we can put data into memory
and start the instruction address register at
an address we choose (choose address 5)
• Consider a “starting value” pre-setting for instruction register
(here we don’t explain how this is done)
• Consider that a separate DMA device loads the RAM, to give
one method of getting program into memory
• Next page shows contents of memory
before and after program runs. How we get
the result out of RAM is not described here.
Page 89
©1997-2005 R.Levine
What is in RAM?
address (decimal) | contents before (binary) | (hex) | (decimal) | contents after (binary)
0 | 00000111 | 07 | 7   | 00000111
1 | 00001001 | 09 | 9   | 00001001
2 | 00000000 | 00 | 0   | 00010000
3 | 00000000 | 00 | 0   | 00000000
4 | 00000000 | 00 | 0   | 00000000
5 | 01100001 | 61 | 97  | 01100001
6 | 10010000 | 90 | 144 | 10010000
7 | 01110010 | 72 | 114 | 01110010
8 | 00010000 | 10 | 16  | 00010000
Page 90
©1997-2005 R.Levine
Interpretation
•
This example program starts at address 5. The “input” data is at
addresses 0 and 1, and the result will be stored at address 2
(which is the only byte in memory which has a different stored
value after the program run ends).
• Address and meaning of each step:
5  Load the holding reg. with the contents of address 1
6  Add the contents of address 0
7  Store the result from the holding reg. to address 2
8  Halt the computer.
• If you identify addresses 0, 1, 2 with the symbolic names x, y, z
respectively, this corresponds to a programming statement:
z := y + x
in a language like Pascal, C language, Fortran, etc.
• In “assembly language” this program is written:
L   y
ADD x
ST  z
HLT
Page 91
©1997-2005 R.Levine
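The whole example can be simulated in a few lines. The sketch below is illustrative only: it uses just the four op-codes defined above (L = 6, ADD = 9, ST = 7, HLT = 1), steps through RAM starting at address 5, and ends with 0x10 (decimal 16) stored at address 2.

```python
# Simulation of the example machine: 8-bit words, upper 4 bits = op-code,
# lower 4 bits = RAM address. Op-codes: L=6, ADD=9, ST=7, HLT=1.
ram = [0x07, 0x09, 0x00, 0x00, 0x00, 0x61, 0x90, 0x72, 0x10]

pc = 5            # instruction address register starts at 5
holding = 0       # the single holding register

while True:
    instr = ram[pc]
    op, addr = instr >> 4, instr & 0x0F
    pc += 1                                  # increment the instruction address
    if op == 0x6:                            # L: load from RAM into the holding reg.
        holding = ram[addr]
    elif op == 0x9:                          # ADD: holding reg. + designated RAM byte
        holding = (holding + ram[addr]) & 0xFF
    elif op == 0x7:                          # ST: store the holding reg. into RAM
        ram[addr] = holding
    elif op == 0x1:                          # HLT: stop the clock
        break

print(ram[2], hex(ram[2]))                   # 16 0x10, i.e. z := y + x = 9 + 7
```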
Operating Systems
• In our previous example, we did not show how to put/get
programs or data into, or out of, RAM
• This is usually done by means of operating system (called
OS, or just “system”) software. Examples: Windows, Unix
or Linux, MS-DOS, SOS, DMERT, etc.
– Uses prepared programs (“drivers”) to operate specific
displays and printers, or accept input from keyboards and
disks
• The internal binary numbers must be converted to a decimal
representation, and then each decimal digit must be represented
as an ASCII code character, to operate a printer
• System input/output programs are generally written in the form of
subroutines (to be discussed) which are called by the application
programmer by means of statements like PRINT x,y,z
• System software also manages and organizes data and programs
stored on disks, and schedules programs to run in a
predetermined sequence
Page 92
©1997-2005 R.Levine
Software to Make Software
• Writing programs in binary code (or even
in Hexadecimal code) is difficult, tedious,
and prone to error
• Several “translator” programs exist to
make writing the program easier. For
historical reasons none of them are called
“translators”:
• Assembly language “assembler”
• One-to-one correspondence between internal
computer operation codes and program steps
• Mnemonic alphabetic names may be used for opcodes and for addresses
Page 93
©1997-2005 R.Levine
Assembler Program Properties
• Object code runs only on the CPU for which it
was written (uses specific CPU op-codes)
– Programmer needs detailed knowledge of that one CPU
• Can exploit special capabilities to make a very
compact and fast running program
– Example: Higher level programming languages
sometimes un-necessarily store intermediate results in
RAM. Intermediate results need not be stored in RAM if
programmer knows they have no permanent
significance. They can be stored temporarily in CPU
register(s) only.
– When run-time “speed” and small program size are
essential, assembly language is still used extensively
– Some higher level languages have special extensions to
allow writing in-line assembly steps, or to designate
specific registers to hold data, etc.
Page 94
©1997-2005 R.Levine
Higher Level Languages
• Also called application programming languages
– Examples: Algol, Basic, C, C++, Fortran, Pascal, Protel,
Java, PERL, etc.
• Translation from “source” coding to “object” (opcodes and addresses). One source statement
typically corresponds to many object operations.
Conversion performed by “compiler” software
• Complicated algebra-like expressions are
“parsed” (analyzed) into an equivalent sequence
of machine steps.
• Mostly target-CPU independent
– Many compilers exist for a given “source” language to
produce object code for various CPUs
• Far less tedious and intricate for programmers
Page 95
©1997-2005 R.Levine
Interpreter vs. Compiler
• Many languages are also supported by interpreter system
software (as well as compilers)
– A compiler translates source code into object code once, and
object code can then be put into RAM and run/executed as
many times as desired
– An interpreter translates source code into object code each
time the program is run
• Interpreter implementations are good for programs which
must frequently be modified in small ways for particular
applications
– But much more computer resources (memory and CPU
operations) are needed to “run” an interpreter
• Some languages (e.g. Basic, JAVA, PERL) are traditionally
implemented only as an interpreter
– Some are available either way
• Use the interpreter for test and debug, the compiler for the “final”
version
Page 96
©1997-2005 R.Levine
Datafill vs. Programming
• In many cases program code cannot be modified, but it
makes extensive use of tables for different purposes to
control a variety of actions
• Example: the choice of a particular outgoing link (trunk) to
use from a telephone switch is a result of the area code (or
telephone central office code) dialed. This is expressed in a
table with the area code as an index and the trunk/link
number as the “output” data
• The entry or writing/modification of these tables is often
loosely described as “programming,” but more accurately
as “data fill”
– The use of different values in the data fill tables does not
qualitatively change what the software can do, but makes a
choice from among a finite number of distinct choices
Page 97
©1997-2005 R.Levine