II MSC AE
PC BASED SYSTEM DESIGN
UNIT I HARDWARE AND MOTHER BOARD ORGANIZATION OF IBM PC
Introduction to computer organization – components of IBM PC: system unit –
monitor – input devices – printers – interfaces – I/O buses – parallel and serial ports – USB –
motherboard logic.
I/O data transfer – DMA channels – peripheral interfaces and controllers – memory space –
memory refresh – POST sequence.
UNIT II DRIVES
Introduction – principles of magnetic storage – floppy disk drive – hard disk drive –
drive formatting – physical & logical formatting – IDE interface – SCSI interface – CD-ROM
drive – BIOS disk drive devices – FAT details.
UNIT III PERIPHERALS
Introduction – video display system – video adapter – colour graphics adapter – CRT
display controller – keyboard – keyboard interface – mouse – printer.
UNIT IV I/O BUSES AND PORTS
Introduction – ISA bus – MCA bus – EISA bus – local buses – VL bus – PCI bus –
AGP.
Introduction – parallel port – serial port – introduction to USB – features of USB – USB
transfer – USB controller.
UNIT V TROUBLESHOOTING
Introduction – computer faults – nature of faults – types of faults – diagnostic
programs and tools – fault elimination process – systematic troubleshooting procedure –
motherboard problems – FDD, FDC problems - HDD, HDC problems – monitor problems –
serial port problems – keyboard problems – SMPS problems-printer problems.
REFERENCE BOOKS
1. B. Govindarajulu, "IBM PC and Clones", Tata McGraw-Hill, 2nd Edition.
2. N. Mathivanan, "Microprocessors, PC Hardware and Interfacing", PHI.
3. Robert C. Brenner, "IBM PC Troubleshooting and Repair Guide", BPB Publishers.
4. Peter Norton, "Inside the IBM PC and PS/2", PHI Publishers, Fourth Edition.
UNIT I HARDWARE AND MOTHER BOARD ORGANIZATION OF IBM PC
Introduction to computer organization – components of IBM PC: system unit –
monitor – input devices – printers – interfaces – I/O buses – parallel and serial ports – USB –
motherboard logic.
I/O data transfer – DMA channels – peripheral interfaces and controllers – memory space –
memory refresh – POST sequence.
1. Organization and Architecture
In describing computer systems, a distinction is often made between computer architecture and
computer organization.
Computer architecture refers to those attributes of a system visible to a programmer, or put
another way, those attributes that have a direct impact on the logical execution of a program.
Computer organization refers to the operational units and their interconnection that realize the
architecture specification.
Examples of architectural attributes include the instruction set, the number of bits used to
represent various data types (e.g., numbers and characters), I/O mechanisms, and techniques for
addressing memory.
Examples of organization attributes include those hardware details transparent to the
programmer, such as control signals, interfaces between the computer and peripherals, and the
memory technology used.
As an example, it is an architectural design issue whether a computer will have a multiply
instruction. It is an organizational issue whether that instruction will be implemented by a special
multiply unit or by a mechanism that makes repeated use of the add unit of the system. The
organization decision may be based on the anticipated frequency of use of the multiply
instruction, the relative speed of the two approaches, and the cost and physical size of a special
multiply unit.
Historically, and still today, the distinction between architecture and organization has been an
important one. Many computer manufacturers offer a family of computer models, all with the
same architecture but with differences in organization. Consequently, the different models in the
family have different price and performance characteristics. Furthermore, an architecture may
survive many years, but its organization changes with changing technology.
2. Structure and Function
A computer is a complex system; contemporary computers contain millions of elementary
electronic components. How, then, can one clearly describe them? The key is to recognize the
hierarchical nature of most complex systems. A hierarchical system is a set of interrelated
subsystems, each of the latter, in turn, hierarchical in structure until we reach some lowest level
of elementary subsystems.
The hierarchical nature of complex systems is essential to both their design and their description.
The designer need only deal with a particular level of the system at a time. At each level, the
system consists of a set of components and their interrelationships. The behavior at each level
depends only on a simplified, abstracted characterization of the system at the next lower level. At
each level, the designer is concerned with structure and function:
* Structure: The way in which the components are interrelated.
* Function: The operation of each individual component as part of the structure.
In terms of description, we have two choices: starting at the bottom and building up to a complete
description, or beginning with a top view and decomposing the system, describing its structure
and function, and proceeding to successively lower layers of the hierarchy. The approach taken in
this course follows the latter.
2.1 Function
In general terms, there are four main functions of a computer:
* Data processing
* Data storage
* Data movement
* Control
Figure 1.1 A functional view of the computer
The computer, of course, must be able to process data. The data may take a wide variety of
forms, and the range of processing requirements is broad. However, we shall see that there are
only a few fundamental methods or types of data processing.
It is also essential that a computer store data. Even if the computer is processing data on the fly
(i.e., data come in and get processed, and the results go out immediately), the computer must
temporarily store at least those pieces of data that are being worked on at any given moment.
Thus, there is at least a short-term data storage function. Files of data are stored on the computer
for subsequent retrieval and update.
The computer must be able to move data between itself and the outside world. The computer’s
operating environment consists of devices that serve as either sources or destinations of data.
When data are received from or delivered to a device that is directly connected to the computer,
the process is known as input-output (I/O), and the device is referred to as a peripheral. When
data are moved over longer distances, to or from a remote device, the process is known as data
communications.
Finally, there must be control of these three functions. Ultimately, this control is exercised by the
individual who provides the computer with instructions. Within the computer system, a control
unit manages the computer’s resources and orchestrates the performance of its functional parts in
response to those instructions.
At this general level of discussion, the number of possible operations that can be performed is
few. Figure 1.2 depicts the four possible types of operations.
The computer can function as a data movement device (Figure 1.2a), simply transferring data
from one peripheral or communications line to another. It can also function as a data storage
device (Figure 1.2b), with data transferred from the external environment to computer storage
(read) and vice versa (write). The final two diagrams show operations involving data processing,
on data either in storage or en route between storage and the external environment.
Figure 1.2 Possible computer operations
2.2 Structure
Figure 1.3 is the simplest possible depiction of a computer. The computer is an entity that
interacts in some fashion with its external environment. In general, all of its linkages to the
external environment can be classified as peripheral devices or communication lines. We will
have something to say about both types of linkages.
* Central Processing Unit (CPU): Controls the operation of the computer and performs its data
processing functions. Often simply referred to as the processor.
* Main Memory: Stores data.
* I/O: Moves data between the computer and its external environment.
* System Interconnection: Some mechanism that provides for communication among CPU,
main memory, and I/O.
Figure 1.3: The computer: top-level structure
There may be one or more of each of the above components. Traditionally, there has been just a
single CPU. In recent years, there has been increasing use of multiple processors in a single
system. Each of these components will be examined in some detail in later lectures. However, for
our purpose, the most interesting and in some ways the most complex component is the CPU; its
structure is depicted in Figure 1.4. Its major structural components are:
* Control Unit (CU): Controls the operation of the CPU and hence the computer.
* Arithmetic and Logic Unit (ALU): Performs the computer's data processing functions.
* Registers: Provide storage internal to the CPU.
* CPU Interconnection: Some mechanism that provides for communication among the control
unit, ALU, and registers.
Each of these components will be examined in some detail in next lectures.
Figure 1.4 The CPU
3. A Brief History of Computers
3.1 The first Generation: Vacuum Tubes
ENIAC
The ENIAC (Electronic Numerical Integrator And Computer), designed by and constructed
under the supervision of John Mauchly and John Presper Eckert at the University of
Pennsylvania, was the world's first general-purpose electronic digital computer. The project was
a response to U.S. wartime needs. Mauchly, a professor of electrical engineering at the
University of Pennsylvania, and Eckert, one of his graduate students, proposed to build a
general-purpose computer using vacuum tubes. In 1943, this proposal was accepted by the Army,
and work began on the ENIAC. The resulting machine was enormous, weighing 30 tons,
occupying 15,000 square feet of floor space, and containing more than 18,000 vacuum tubes.
When operating, it consumed 140 kilowatts of power. It was also substantially faster than any
electromechanical computer, being capable of 5000 additions per second.
The ENIAC was a decimal rather than a binary machine. That is, numbers were represented in
decimal form and arithmetic was performed in the decimal system. Its memory consisted of 20
“accumulators”, each capable of holding a 10-digit decimal number. Each digit was represented
by a ring of 10 vacuum tubes. At any time, only one vacuum tube was in the ON state,
representing one of the 10 digits. The major drawback of the ENIAC was that it had to be
programmed manually by setting switches and plugging and unplugging cables.
The ENIAC was completed in 1946, too late to be used in the war effort. Instead, its first task
was to perform a series of complex calculations that were used to help determine the feasibility
of the H-bomb. The ENIAC continued to be used until 1955.
The von Neumann Machine
The programming process could be facilitated if the program could be represented in a form
suitable for storing in memory alongside the data. Then, a computer could get its instructions by
reading them from memory, and a program could be set or altered by setting the values of a
portion of memory.
This idea, known as the Stored-program concept, is usually attributed to the ENIAC designers,
most notably the mathematician John von Neumann, who was a consultant on the ENIAC
project. The idea was also developed at about the same time by Turing. The first publication of
the idea was in a 1945 proposal by von Neumann for a new computer, the EDVAC (Electronic
Discrete Variable Automatic Computer).
In 1946, von Neumann and his colleagues began the design of a new stored-program computer,
referred to as the IAS computer, at the Princeton Institute for Advanced Studies. The IAS
computer, although not completed until 1952, is the prototype of all subsequent general-purpose
computers. Figure 1.5 shows the general structure of the IAS computer. It consists of:
* A main memory, which stores both data and instructions.
* An arithmetic-logical unit (ALU) capable of operating on binary data.
* A control unit, which interprets the instructions in memory and causes them to be executed.
* Input and output (I/O) equipment operated by the control unit.
Figure 1.5 Structure of the IAS computer
Commercial Computers
The 1950s saw the birth of the computer industry with two companies, Sperry and IBM,
dominating the marketplace.
In 1947, Eckert and Mauchly formed the Eckert-Mauchly Computer Corporation to manufacture
computers commercially. Their first successful machine was the UNIVAC I (Universal
Automatic Computer), which was commissioned by the Bureau of the Census for the 1950
census calculations. The Eckert-Mauchly Computer Corporation became part of the UNIVAC
division of the Sperry-Rand Corporation, which went on to build a series of successor machines.
The UNIVAC II, which had greater memory capacity and higher performance than the UNIVAC
I, was delivered in the late 1950s and illustrates several trends that have remained characteristic
of the computer industry. First, advances in technology allow companies to continue to build
larger, more powerful computers. Second, each company tries to make its new machines upward
compatible with the older machines. This means that the programs written for the older machines
can be executed on the new machine. This strategy is adopted in the hopes of retaining the
customer base; that is, when a customer decides to buy a newer machine, he is likely to get it
from the same company to avoid losing the investment in programs.
The UNIVAC division also began development of the 1100 series of computers, which was to be
its bread and butter. This series illustrates a distinction that existed at one time. The first model,
the UNIVAC 1103, and its successors for many years were primarily intended for scientific
applications, involving long and complex calculations. Other companies concentrated on
business applications, which involved processing large amounts of text data. This split has
largely disappeared but it was evident for a number of years.
IBM, which was then the major manufacturer of punched-card processing equipment, delivered
its first electronic stored-program computer, the 701, in 1953. The 701 was intended primarily
for scientific applications. In 1955, IBM introduced the companion 702 product, which had a
number of hardware features that suited it to business applications. These were the first of a long
series of 700/7000 computers that established IBM as the overwhelmingly dominant computer
manufacturer.
3.2 The Second Generation: Transistors
The first major change in the electronic computer came with the replacement of the vacuum tube
by the transistor. The transistor is smaller, cheaper, and dissipates less heat than a vacuum tube,
but can be used in the same way as a vacuum tube to construct computers. Unlike the vacuum
tube, which requires wires, metal plates, a glass capsule, and a vacuum, the transistor is a
solid-state device, made from silicon.
The transistor was invented at Bell Labs in 1947 and by the 1950s had launched an electronic
revolution. It was not until the late 1950s, however, that fully transistorized computers were
commercially available. IBM again was not the first company to deliver the new technology.
NCR and, more successfully, RCA were the front-runners with some small transistor machines.
IBM followed shortly with the 7000 series.
The use of the transistor defines the second generation of computers. It has become widely
accepted to classify computers into generations based on the fundamental hardware technology
employed. Each new generation is characterized by greater processing performance, larger
memory capacity, and smaller size than the previous one.
3.3 The Third Generation: Integrated Circuits
A single, self-contained transistor is called a discrete component. Throughout the 1950s and
early 1960s, electronic equipment was composed largely of discrete components: transistors,
resistors, capacitors, and so on. Discrete components were manufactured separately, packaged in
their own containers, and soldered or wired together onto circuit boards, which were then
installed in computers, oscilloscopes, and other electronic equipment. Whenever an electronic
device called for a transistor, a little tube of metal containing a pinhead-sized piece of silicon had
to be soldered to a circuit board. The entire manufacturing process, from transistor to circuit
board, was expensive and cumbersome.
These facts of life were beginning to create problems in the computer industry. Early
second-generation computers contained about 10,000 transistors. This figure grew to the
hundreds of thousands, making the manufacture of newer, more powerful machines increasingly
difficult.
In 1958 came the achievement that revolutionized electronics and started the era of
microelectronics: the invention of the integrated circuit. It is the integrated circuit that defines the
third generation of computers. Perhaps the two most important members of the third generation
are the IBM System/360 and the DEC PDP-8.
3.4 Later Generations
Beyond the third generation there is less general agreement on defining generations of
computers. There have been a fourth and a fifth generation, based on advances in integrated
circuit technology. With the introduction of large-scale integration (LSI), more than 1,000
components could be placed on a single integrated circuit chip. Very-large-scale integration
(VLSI) achieved more than 10,000 components per chip, and later VLSI chips contain more
than 100,000 components.
3.5 The Summary of the Generations of Computers
* Vacuum tube: 1946-1957
* Transistor: 1958-1964
* Small-scale integration: 1965
Up to 100 devices on a chip
* Medium-scale integration: 1965-1971
100 - 3,000 devices on a chip
* Large-scale integration: 1971-1977
3,000 - 100,000 devices on a chip
* Very-large-scale integration: 1978-1991
100,000 - 100,000,000 devices on a chip
* Ultra-large-scale integration: 1991 onward
Over 100,000,000 devices on a chip
The IBM Personal Computer, commonly known as the IBM PC, is the original version and
progenitor of the IBM PC compatible hardware platform. It is IBM model number 5150, and was
introduced on August 12, 1981. It was created by a team of engineers and designers under the
direction of Don Estridge of the IBM Entry Systems Division in Boca Raton, Florida.
Alongside "microcomputer" and "home computer", the term "personal computer" was already in
use before 1981. It was used as early as 1972 to characterize Xerox PARC's Alto. However,
because of the success of the IBM Personal Computer, the term PC came to mean more
specifically a microcomputer compatible with IBM's PC products.
Desktop sized programmable calculators by Hewlett Packard had evolved into the HP 9830
BASIC language computer by 1972, with IBM releasing its own IBM 5100 in 1975. It was a
complete system programmable in BASIC or APL, with a small built-in CRT monitor, keyboard,
and tape drive for data storage. It was also very expensive — up to $20,000 USD. It was
specifically designed for professional and scientific problem-solvers, not business users or
hobbyists.[1] When the PC was introduced in 1981, it was originally designated as the IBM 5150,
putting it in the "5100" series, though its architecture was not directly descended from the IBM
5100.
The original line of PCs was part of an IBM strategy to get into the small personal computer
market then dominated by the Commodore PET, Atari 8-bit family, Apple II, Tandy
Corporation's TRS-80s, and various CP/M machines.[2]
New products at IBM typically required about four years for development. The company
recognized that to compete with other personal computers it needed to develop its offering much
more quickly.[3] Rather than going through the usual IBM design process, a special team was
assembled with authorization to bypass normal company restrictions and get something to
market rapidly. This project was given the code name Project Chess at the IBM Entry Systems
Division in Boca Raton, Florida. The team consisted of twelve people directed by Don Estridge
with Chief Designer Lewis Eggebrecht.[4] They developed the PC in about a year. To achieve
this they first decided to build the machine with "off-the-shelf" parts from a variety of different
original equipment manufacturers (OEMs) and countries. Previously IBM had always developed
its own components. Secondly, for scheduling and cost reasons, rather than developing unique
IBM PC monitor and printer designs, project management decided to utilize an existing
"off-the-shelf" IBM monitor developed earlier in IBM Japan as well as an existing Epson printer
model.
Consequently, the unique IBM PC industrial design elements were relegated to the system unit
and keyboard.[5] They also decided on an open architecture, so that other manufacturers could
produce and sell peripheral components and compatible software without purchasing licenses.
IBM also sold an IBM PC Technical Reference Manual that included complete circuit
schematics, a listing of the ROM BIOS source code, and other engineering and programming
information.[6] IBM announced the PC on August 12, 1981. Six weeks later at COMDEX Fall,
Tecmar had 20 PC products available for sale. These products included memory expansion,
IEEE-488, data acquisition, and PC expansion chassis.[7][8][9][10][11] Pricing for the IBM PC
started at $1,565 for a bare-bones configuration without disk drives.[12]
At the time, Don Estridge and his team considered using the IBM 801 processor (an early RISC
CPU) and its operating system that had been developed at the Thomas J. Watson Research
Center in Yorktown Heights, New York. The 801 processor was more than an order of
magnitude more powerful than the Intel 8088, and the operating system more advanced than the
DOS 1.0 operating system from Microsoft, which was finally selected. Ruling out an in-house
solution made the team’s job much easier and may have avoided a delay in the schedule, but the
ultimate consequences of this decision for IBM were far-reaching. IBM had recently developed
the Datamaster business microcomputer, which used an Intel processor and peripheral ICs;
familiarity with these chips and the availability of the Intel 8088 processor was a deciding factor
in the choice of processor for the new product. Even the 62-pin expansion bus slots were
designed to be similar to the Datamaster slots. Delays due to in-house development of the
Datamaster software also influenced the design team to a fast-track development process for the
PC, with publicly available technical information to encourage third-party developers.[13]
Other manufacturers soon reverse engineered the BIOS to produce their own non-infringing
functional copies. Columbia Data Products introduced the first IBM-PC compatible computer in
June 1982. In November 1982, Compaq Computer Corporation announced the Compaq
Portable, the first portable IBM PC compatible. The first models were shipped in March 1983.
Once the IBM PC became a commercial success, the product came back under the more usual
tight IBM management control. IBM's tradition of "rationalizing" product lines, deliberately
restricting the performance of lower-priced models in order to prevent them from
"cannibalizing" profits from higher-priced models, worked against them.
IBM PC as standard
Main article: Influence of the IBM PC on the personal computer market
The success of the IBM computer led other companies to develop IBM compatibles, which in
turn led to branding such as diskettes being advertised as "IBM format". An IBM PC clone could be
built with off-the-shelf parts, but the BIOS required some reverse-engineering. Companies like
Phoenix Software Associates, American Megatrends, Award, and others achieved workable
versions of the BIOS, allowing companies like Dell, Compaq, and HP to manufacture PCs that
worked like IBM's product. The IBM PC became the industry standard.
Third-party distribution
ComputerLand and Sears Roebuck partnered with IBM from the beginning of development.
IBM's head of sales and marketing, H.L. ('Sparky') Sparks, relied on these retail partners for
important knowledge of the marketplace. ComputerLand and Sears became the main outlets for
the new product. More than 190 ComputerLand stores already existed, while Sears was in the
process of creating a handful of in-store computer centers for sale of the new product. This
guaranteed IBM widespread distribution across the U.S.
Although IBM targeted the new PC at the home market through Sears Roebuck, sales there
failed to live up to expectations. This unfavorable outcome revealed that targeting the office
market was the key to higher sales.
Models
The IBM PC line
* PC (Model 5150, introduced August 1981, CPU 8088): Floppy disk or cassette[14] system.
* XT (Model 5160, March 1983, 8088): First IBM PC to come with an internal hard drive as
standard.
* XT/370 (Model 5160/588, October 1983, 8088): 5160 with XT/370 Option Kit and 3277
Emulation Adapter.
* 3270 PC (Model 5271, October 1983, 8088): With 3270 terminal emulation, 20-function-key
keyboard.
* PCjr (Model 4860, November 1983, 8088): Floppy-based home computer, infrared keyboard.
* Portable (Model 5155, February 1984, 8088): Floppy-based portable.
* AT (Model 5170, August 1984, 80286): Faster processor, faster system bus (6 MHz, later
8 MHz, vs 4.77 MHz), jumperless configuration, real-time clock.
* AT/370 (Model 5170/599, October 1984, 80286): 5170 with AT/370 Option Kit and 3277
Emulation Adapter.
* 3270 AT (Model 5281, June 1985,[15] 80286): With 3270 terminal emulation.
* Convertible (Model 5140, April 1986, 8088): Microfloppy laptop portable.
* XT 286 (Model 5162, September 1986, 80286): Slow hard disk, but zero-wait-state memory
on the motherboard; this 6 MHz machine was actually faster than the 8 MHz ATs (when using
planar memory) because of the zero wait states.
All IBM personal computers are software backwards-compatible with each other in general, but
not every program will work in every machine. Some programs are time sensitive to a particular
speed class. Older programs will not take advantage of newer higher-resolution and higher-color
display standards, while some newer programs require newer display adapters. (Note that as the
display adapter was an adapter card in all of these IBM models, newer display hardware could
easily be, and often was, retrofitted to older models.) A few programs, typically very early ones,
are written for and require a specific version of the IBM PC BIOS ROM. Most notably,
BASICA, which was dependent on the BIOS ROM, had a sister program called GW-BASIC
which supported more functions, was 100% backwards compatible, and could run independently
of the BIOS ROM.
PC
The CGA video card, with a suitable modulator, could use an NTSC television set or an RGB
monitor for display; IBM's RGB monitor was their display model 5153. The other option that
was offered by IBM was an MDA and their monochrome display model 5151. It was possible to
install both an MDA and a CGA card and use both monitors concurrently,[16] if supported by the
application program. For example, AutoCAD, Lotus 1-2-3 and others allowed use of a CGA
Monitor for graphics and a separate monochrome monitor for text menus. Some model 5150 PCs
with CGA monitors and a printer port also included the MDA adapter by default, because IBM
provided the MDA port and printer port on the same adapter card; it was in fact an MDA/printer
port combo card.
The most commonly used storage medium was the floppy disk, though cassette tape was
originally envisioned by IBM as a low-budget alternative. Accordingly, the IBM 5150 PC was
available with one or two 5-1/4" floppy drives or without any drives or storage medium; in the
latter case IBM intended a user to connect his own cassette recorder via the 5150's cassette port.
The cassette tape port was mechanically identical to, and located next to, the keyboard port on
the 5150's motherboard. A hard disk could not be installed into the 5150's system unit without
retrofitting a more powerful power supply, but an "Expansion Unit," a.k.a. the "IBM 5161
Expansion Chassis," was available, which came with one 10 MB hard disk and also allowed the
installation of a second hard disk.[17] The system unit had five expansion slots, and the expansion
unit had eight; however, one of the system unit's slots and one of the expansion unit's slots had to
be occupied by the Extender Card and Receiver Card, respectively, which were needed to
connect the expansion unit to the system unit and make the expansion unit's other slots available,
for a total of 11 slots. A working configuration required that some of the slots be occupied by
display, disk, and I/O adapters, as none of these were built in to the 5150's motherboard; the only
motherboard external connectors were the keyboard and cassette ports. The simple PC speaker
sound hardware was also on-board. The original PC's maximum memory using IBM parts was
256 kB, 64 kB on the motherboard and three 64 kB expansion cards. The processor was an Intel
8088 running at 4.77 MHz (4/3 the standard NTSC color burst frequency of 3.579545 MHz). (In
early units, the Intel 8088 used was a 1978 version; later units used 1978/81/82 versions of the
Intel chip, and second-sourced AMD parts were used after 1983.) Some owners replaced the
8088 with an NEC V20 for a slight increase in processing speed and support for real mode
80186 instructions. An Intel 8087 co-processor could also be added for hardware floating-point
arithmetic. IBM sold the first IBM PCs in configurations with 16 or 64 kB of RAM preinstalled
using either nine or thirty-six 16-kbit DRAM chips. (The ninth bit was used for parity checking
of memory.) After the IBM XT shipped, the IBM PC motherboard was configured more like the
XT's motherboard, with 8 narrower slots and the same RAM configuration as the IBM XT
(64 kB in one bank, expandable to 256 kB by populating the other 3 banks).
Although the TV-compatible video board, cassette port and Federal Communications
Commission Class B certification were all aimed at making it a home computer,[18] the original
PC proved too expensive for the home market. At introduction, a PC with 64 kB of RAM and a
single 5.25-inch floppy drive and monitor sold for US$3,005 ($7,682 in today's dollars), while
the cheapest configuration (US$1,565), which had no floppy drives, only 16 kB RAM, and no
monitor (again, under the expectation that users would connect their existing TV sets and
cassette recorders), proved too unattractive and low-spec, even for its time (cf. footnotes to the
above IBM PC range table).[19][20] While the 5150 did not become a top-selling home computer,
its floppy-based configuration became an unexpectedly large success with businesses.
XT
Main article: IBM Personal Computer XT
The "IBM Personal Computer XT", IBM's model 5160, was an enhanced machine that was
designed for diskette and hard drive storage, introduced two years after the introduction of the
"IBM Personal Computer". It had eight expansion slots and a 10 MB hard disk (later versions
20 MB). Unlike the model 5150 PC, the model 5160 XT no longer had a cassette jack, but still
contained the Cassette Basic interpreter in ROMs. The XT could take 256 kB of memory on the
main board (using 64 kbit DRAM); later models were expandable to 640 kB. (The BIOS ROM
and adapter ROM and RAM space, including video RAM space [since the video hardware was
always an adapter] filled the remaining 384 kB of the one megabyte address space of the 8088
CPU.) It was usually sold with a Monochrome Display Adapter (MDA) video card.
The processor was a 4.77 MHz Intel 8088 and the expansion bus 8-bit XT bus architecture (later
called 8-bit Industry Standard Architecture (ISA) by IBM's competitors). The XT's expansion
slots were placed closer together[21] than with the original PC;[22] this rendered the XT's case and
mainboard incompatible with the model 5150's case and mainboard. The slots themselves and
the peripheral cards however were compatible, unless a rare card designed for the PC happened
to use the extra width of the 5150's slots, in which case the card might require two slots in the
XT. The XT's expansion slot mechanical design, including the slot spacing and the design of the
case openings and expansion card retaining screws, was identical to the design that was later
used in the IBM PC AT and is still used as of 2011, though (since the phase-out of ISA slots)
with different actual slot connectors and bus standards.
XT/370
The IBM Personal Computer XT/370 was an XT with three custom 8-bit cards. The processor
card (370PC-P) contained a modified Motorola 68000 chip microcoded to execute System/370
instructions, a second 68000 to handle bus arbitration and memory transfers, and a modified
8087 to emulate the S/370 floating-point instructions. The second card (370PC-M), connected to
the first, contained 512 kB of memory. The third card (PC3277-EM) was a 3270 terminal
emulator, required for installing the system software that the VM/PC software needed to run the
processors. The computer booted into DOS, then ran the VM/PC Control Program.[23][24]
PCjr
Main article: IBM PCjr
The IBM PCjr was IBM's first attempt to enter the market for relatively inexpensive educational
and home-use personal computers. The PCjr, IBM model number 4860, retained the IBM PC's
8088 CPU and BIOS interface for compatibility, but its cost and differences in the PCjr's
architecture, as well as other design and implementation decisions, eventually led the PCjr to be
a commercial failure.
Portable
Main article: IBM Portable Personal Computer
The IBM Portable Personal Computer 5155 model 68 was an early portable computer developed
by IBM after the success of Compaq's suitcase-size portable machine (the Compaq Portable). It
was released in February 1984 and was eventually replaced by the IBM Convertible.
The Portable was an XT motherboard, transplanted into a Compaq-style luggable case. The
system featured 256 kilobytes of memory (expandable to 512 kB), an added CGA card
connected to an internal monochrome (amber) composite monitor, and one or two half-height
5.25" 360K floppy disk drives. Unlike the Compaq Portable, which used a dual-mode monitor
and special display card, IBM used a stock CGA board and a composite monitor, which had
lower resolution. It could, however, display color if connected to an external monitor or
television.
AT
Main article: IBM Personal Computer/AT
The "IBM Personal Computer/AT" (model 5170), announced August 15, 1984, used an Intel
80286 processor originally running at 6 MHz. It had a 16-bit ISA bus and a 20 MB hard drive. A
faster model, running at 8 MHz and sporting a 30-megabyte hard disk,[25] was introduced in
1986.[26]
The AT was designed to support multitasking; the new SysRq (System request key), little noted
and often overlooked, is part of this design, as is the 80286 itself, the first Intel 16-bit processor
with multitasking features (i.e. the 80286 protected mode). IBM made some attempt at marketing
the AT as a multi-user machine, but it sold mainly as a faster PC for power users. For the most
part, IBM PC/ATs were used as more powerful DOS (single-tasking) personal computers, in the
literal sense of the PC name.
Early PC/ATs were plagued with reliability problems, in part because of some software and
hardware incompatibilities, but mostly related to the internal 20 MB hard disk and the high-density
floppy disk drive.[27]
While some people blamed IBM's hard disk controller card and others blamed the hard disk
manufacturer Computer Memories Inc. (CMI), the IBM controller card worked fine with other
drives, including CMI's 33-MB model. The problems introduced doubt about the computer and,
for a while, even about the 286 architecture in general, but after IBM replaced the 20 MB CMI
drives, the PC/AT proved reliable and became a lasting industry standard.
The IBM AT's drive parameter table listed the CMI-33 as having 615 cylinders instead of the
640 the drive was designed with, so as to make the size an even 30 MB. Those who re-used
the drives mostly found the 616th cylinder to be bad, because it had been used as a landing
area.
AT/370
The "IBM Personal Computer AT/370" was an AT with two custom 16-bit cards, running almost
exactly the same setup as the XT/370.
Convertible
Main article: IBM PC Convertible
The IBM PC Convertible, released April 3, 1986, was IBM's first laptop computer and was also
the first IBM computer to utilize the 3.5" floppy disk, which went on to become the standard.
Like modern laptops, it featured power management and the ability to run from batteries. It was
the follow-up to the IBM Portable and was model number 5140. The concept and the design of
the body was made by the German industrial designer Richard Sapper.
It utilized an Intel 80C88 CPU (a CMOS version of the Intel 8088) running at 4.77 MHz, 256 kB
of RAM (expandable to 640 kB), dual 720 kB 3.5" floppy drives, and a monochrome CGA-compatible LCD screen, at a price of $2,000. It weighed 13 pounds (5.8 kg) and featured a built-in carrying handle.
The PC Convertible had expansion capabilities through a proprietary ISA bus-based port on the
rear of the machine. Extension modules, including a small printer and a video output module,
could be snapped into place. The machine could also take an internal modem, but there was no
room for an internal hard disk.
Next Generation IBM PS/2
The IBM PS/2 line was introduced in 1987. The Model 30 at the bottom end of the lineup was
very similar to earlier models: it used an 8086 processor and an ISA bus. The Model 30 was not
"IBM compatible" in that it did not have standard 5.25" drive bays; it came with a 3.5" floppy
drive and, optionally, a 3.5"-sized hard disk. Most models in the PS/2 line departed further from
"IBM compatible" by replacing the ISA bus completely with Micro Channel Architecture.
UNIT II DRIVES
Introduction – principles of magnetic storage – floppy disk drive – hard disk drive –
drive formatting – physical & logical formatting – IDE interface – SCSI interface – CD-ROM
drive – bios disk drive devices – fat details.
Most of us now use magnetic recording and information storage technology in one way or another on a
daily basis. Billions of bytes of digital information storage space can be accessed at the touch of a
fingertip and for very little cost. This feat is made possible by the ingenious creativity and hard work of
the many scientists and engineers who have devoted themselves to magnetic information technology
over the years. The technology involves the interaction of many different scientific disciplines and is
progressing at lightning speed, reaching an economic level of billions of dollars in investments and
revenues. Moreover, the economy derived from magnetic and nonmagnetic storage products is global
and deepening. As the information industrial revolution continues, so does the development of the
magnetic recording industry based on information storage technology.
The study of magnetic information storage technology calls for a treatment that is self-contained and
emphasizes both experimental and theoretical concepts, including the important developments of the
1990s. The material here is designed to help readers gain an appreciation for the science and
technology involved in magnetic information storage, and to give them the ability to solve problems in
magnetic recording using the basic techniques and models introduced.
Topics include: Fundamentals of inductive magnetic head and medium, read and write processes in
magnetic recording, inductive magnetic process, channel coding and error correction, noises, nonlinear
distortions, peak detection channel, PRML channel, decision feedbacks channel, off-track performance,
head-disk assembly servo, fundamental limitations of magnetic recording, alternative information
storage technologies.
A floppy disk is a disk storage medium composed of a disk of thin and flexible magnetic storage
medium, sealed in a rectangular plastic carrier lined with fabric that removes dust particles. They
are read and written by a floppy disk drive (FDD).
Floppy disks, initially produced as 8-inch (200 mm) media and later in 5.25-inch (133 mm) and
3.5-inch (89 mm) sizes, were a ubiquitous form of data storage and exchange from the mid-1970s
well into the first decade of the 21st century.[1]
By 2010, computer motherboards were rarely manufactured with floppy drive support; 3 1⁄2"
floppies could still be used via an external USB floppy drive, but 5 1⁄4", 8", and non-standard
disks could only be handled by old equipment.
While floppy disk drives still have some limited uses, especially with legacy industrial computer
equipment, they have been superseded by data storage methods with much greater capacity, such
as USB flash drives, portable external hard disk drives, optical discs, memory cards, and
computer networks.
A hard disk drive (HDD; also hard drive, hard disk, or disk drive)[2] is a device for storing
and retrieving digital information, primarily computer data. It consists of one or more rigid
(hence "hard") rapidly rotating discs (often referred to as platters), coated with magnetic material
and with magnetic heads arranged to write data to the surfaces and read it from them.
Hard drives are classified as non-volatile, random access, digital, magnetic, data storage devices.
Introduced by IBM in 1956, hard disk drives have decreased in cost and physical size over the
years while dramatically increasing in capacity and speed.
Hard disk drives have been the dominant device for secondary storage of data in general purpose
computers since the early 1960s.[3] They have maintained this position because advances in their
recording capacity, cost, reliability, and speed have kept pace with the requirements for
secondary storage.[3]
Floppy disk format and density refer to the logical and physical layout of data stored on a
floppy disk. Since their introduction, there have been many popular and rare floppy disk types,
densities, and formats used in computing, leading to much confusion over their differences. In
the early 2000s, most floppy disk types and formats became obsolete, leaving the 3½ inch disk,
using an IBM PC compatible format of 1440 KB, as the only remaining popular format.
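The 1440 KB figure quoted above follows directly from the format's geometry: 80 cylinders per side, 2 sides, 18 sectors of 512 bytes per track. A small sketch (the helper function name is my own):

```python
def chs_capacity(cylinders, heads, sectors_per_track, bytes_per_sector=512):
    """Raw data capacity of a CHS-addressed floppy format, in bytes."""
    return cylinders * heads * sectors_per_track * bytes_per_sector

# Standard 3.5" high-density PC format: 80 tracks per side, 2 sides,
# 18 sectors of 512 bytes per track.
print(chs_capacity(80, 2, 18))          # 1474560 bytes
print(chs_capacity(80, 2, 18) // 1024)  # 1440 (KB)
```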
Different floppy disk types had different recording characteristics, with varying magnetic
coercivity (measured in oersteds, or in modern SI units in amperes per meter), ferrite grain size,
and tracks per inch (TPI). TPI was not a part of the physical manufacturing process; it was a
certification of how closely tracks of data could be spaced on the medium safely.
The term density has a double meaning for floppy disks. Originally, single density and double
density indicated a difference in logical encoding on the same type of physical media: FM for
single, MFM for double. Later use of the term "density" referred to physical characteristics
of the media, with MFM assumed as the logical format. GCR was also used on some
platforms, but typically in a "double"-density form.
8 and 5¼ inch floppy disks were available with both soft sectoring and hard sectoring. Because
of the similarity in magnetic characteristics between some disk types, it was possible to use an
incorrectly certified disk in a soft sectored drive. Quad density 5¼ inch disks were rare, so it was
not uncommon to use higher quality double density disks, which were usually capable of
sustaining the 96 TPI formatting of quad density, in drives such as the Commodore 8050.
Disks were available in both single and double sided forms, with double sided formats providing
twice the storage capacity. Like TPI, "double sided" was mostly a certification indicator, as the
magnetic media was usually recordable on both sides. Many (but not all) certified "double sided"
8 and 5¼ inch floppies had an index hole on both sides of the disk sleeve to make them usable as
flippy disks.
A combination floppy disk and optical disc, known as the Floptical disk, also exists. The size of a
90 mm (3.5 in) disk, Flopticals are capable of holding close to 20.8 MB,[1] but they need a special drive.

Logical formatting
Formatted disk capacity is always less than the nominal capacity provided for each type of disk.
Leaving some space empty between sectors and tracks provides some more reliability by
preventing bits from being stored too close together in the magnetic film.
Most floppy disks in common use are formatted in the FAT12 file system, though some disks
use a more exotic file system and/or are "superformatted" to accommodate slightly more data.
Some floppy-based Linux distributions utilize such techniques.
The capacity numbers given in this section assume FAT12 formatting unless otherwise noted.
Single Sided, Double Density
SSDD originally referred to Single Sided, Double Density, a format of (usually 5¼") floppy disk
which could typically hold 35-40 tracks of nine 512-byte (or 18 256-byte) sectors each. Only one
side of the disc was used, although some users did discover that punching additional holes into
the disc jacket would allow the creation of a "flippy" disc which could be manually turned over
to store additional data on the reverse side.
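A quick check of the figures above, assuming the common 40-track case:

```python
# The two quoted sector layouts store the same number of bytes per track:
print(9 * 512 == 18 * 256)    # True

# 40 tracks, single-sided, nine 512-byte sectors per track:
print(40 * 9 * 512 // 1024)   # 180 (kB) -- the usual SSDD capacity
```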
Single-sided disks began to become obsolete soon after the introduction of the original IBM
5150 PC in 1981; the PC line soon moved to 360 kB double-sided, double-density drives.
Ironically, that same year Commodore released a floppy disk system that could store 1 MB of
data, but it was not well received, in part because its users felt that it was overkill.
Parallel ATA (PATA), originally AT Attachment, is an interface standard for the connection
of storage devices such as hard disks, solid-state drives, floppy drives, and optical disc drives in
computers. The standard is maintained by the X3/INCITS committee.[1] It uses the underlying AT
Attachment (ATA) and AT Attachment Packet Interface (ATAPI) standards.
The Parallel ATA standard is the result of a long history of incremental technical development,
which began with the original AT Attachment interface, developed for use in early PC AT
equipment. The ATA interface itself evolved in several stages from Western Digital's original
Integrated Drive Electronics (IDE) interface. As a result, many near-synonyms for
ATA/ATAPI and its previous incarnations are still in common informal use. After the
introduction of Serial ATA in 2003, the original ATA was renamed Parallel ATA, PATA for
short.
Parallel ATA cables have a maximum allowable length of only 18 in (457 mm).[2][3] Because of
this limit, the technology normally appears as an internal computer storage interface. For many
years ATA provided the most common and the least expensive interface for this application. It
has largely been replaced by Serial ATA (SATA) in newer systems.
Small Computer System Interface (SCSI, /ˈskʌzi/ SKUZ-ee)[1] is a set of standards for
physically connecting and transferring data between computers and peripheral devices. The SCSI
standards define commands, protocols, and electrical and optical interfaces. SCSI is most
commonly used for hard disks and tape drives, but it can connect a wide range of other devices,
including scanners and CD drives, although not all controllers can handle all devices. The SCSI
standard defines command sets for specific peripheral device types; the presence of "unknown"
as one of these types means that in theory it can be used as an interface to almost any device, but
the standard is highly pragmatic and addressed toward commercial requirements.
SCSI is an intelligent, buffered, peer-to-peer peripheral interface. It hides the complexity of the
physical format, and every device attaches to the SCSI bus in a similar manner. Up to 8 or 16
devices can be attached to a single bus. There can be any number of hosts and peripheral devices,
but there must be at least one host. SCSI uses handshake signals between devices; SCSI-1 and
SCSI-2 have the option of parity error checking. Starting with SCSI-U160 (part of SCSI-3), all
commands and data are error-checked by a CRC32 checksum. The SCSI protocol defines
communication from host to host, from host to peripheral device, and from peripheral device to
peripheral device. However, most peripheral devices are exclusively SCSI targets, incapable of
acting as SCSI initiators, i.e. unable to initiate SCSI transactions themselves. Peripheral-to-peripheral
communications are therefore uncommon, though possible in most SCSI applications. The Symbios
Logic 53C810 chip is an example of a PCI host interface that can act as a SCSI target.
A CD-ROM (/ˌsiːˌdiːˈrɒm/, an acronym of "Compact Disc Read-Only Memory") is a pre-pressed
compact disc that contains data accessible to, but not writable by, a computer, for data
storage and music playback. The 1985 "Yellow Book" standard developed by Sony and Philips
adapted the format to hold any form of binary data.[2]
CD-ROMs are popularly used to distribute computer software, including video games and
multimedia applications, though any data can be stored (up to the capacity limit of a disc). Some
CDs hold both computer data and audio with the latter capable of being played on a CD player,
while data (such as software or digital video) is only usable on a computer (such as ISO 9660
format PC CD-ROMs). These are called enhanced CDs.
Even though many people use lowercase letters in this acronym, proper presentation is in all
capital letters, with a hyphen between CD and ROM. At the time of the technology's introduction,
a CD-ROM had more capacity than the computer hard drives common at the time. The reverse is
now true, with hard drives far exceeding CDs, DVDs, and Blu-ray discs, though some experimental
descendants, such as HVDs, may offer more space and faster data rates than today's largest hard drives.
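To put the capacity comparison in numbers: a standard 74-minute disc read as CD-ROM Mode 1 carries 2048 data bytes per sector at 75 sectors per second (these Mode 1 figures are standard values, not taken from the text above):

```python
SECTORS_PER_SECOND = 75       # CD sector (frame) rate
DATA_BYTES_PER_SECTOR = 2048  # user data per sector in CD-ROM Mode 1
minutes = 74                  # standard 74-minute disc

sectors = SECTORS_PER_SECOND * 60 * minutes
print(sectors)                          # 333000 sectors
print(sectors * DATA_BYTES_PER_SECTOR)  # 681984000 bytes, roughly 650 MiB
```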
UNIT III PERIPHERALS
Introduction – video display system – video adapter – colour graphic adapter – CRT
display controller – keyboard – keyboard interface – mouse – printer.
A peripheral is a device connected to a host computer but not part of it; it is more or less dependent
on the host. Peripherals are also called input/output devices. A peripheral expands the host's
capabilities, but does not form part of the core computer architecture.
A monitor or display (also called a screen or visual display unit) is an electronic visual display
for computers. The monitor comprises the display device, circuitry, and an enclosure. The
display device in modern monitors is typically a thin-film-transistor liquid crystal display (TFT-LCD) panel, while older monitors use a cathode ray tube about as deep as the screen size.
Originally, computer monitors were used for data processing while television receivers were
used for entertainment. From the 1980s onwards, computers (and their monitors) have been used
for both data processing and entertainment, while televisions have implemented some computer
functionality. The common aspect ratio of televisions, and then computer monitors, has also
changed from 4:3 to 16:9 (and 16:10).
A video card, display card, graphics card, or graphics adapter is an expansion card which
generates a feed of output images to a display. Most video cards offer various functions such as
accelerated rendering of 3D scenes and 2D graphics, MPEG-2/MPEG-4 decoding, TV output, or
the ability to connect multiple monitors (multi-monitor).
Video hardware can be integrated into the motherboard or, more recently, into the CPU itself;
however, all modern motherboards, and even motherboards from the 1990s, provide
expansion ports to which a video card can be attached. In this configuration it is sometimes
referred to as a video controller or graphics controller. Modern low-end to mid-range
motherboards often include a graphics chipset manufactured by the developer of the northbridge
(e.g. an nForce chipset with Nvidia graphics or an Intel chipset with Intel graphics) on the
motherboard. This graphics chip usually has a small quantity of embedded memory and takes
some of the system's main RAM, reducing the total RAM available. This is usually called
integrated graphics or on-board graphics, and it is usually low in performance and undesirable for
those wishing to run 3D applications; however, the newer Ivy Bridge CPUs contain graphics
capable of running 3D applications. A dedicated graphics card, on the other hand, has its own
random access memory (RAM) and processor specifically for processing video images, and
thus offloads this work from the CPU and system RAM. Almost all such motherboards allow
the integrated graphics chip to be disabled in the BIOS, and have an AGP, PCI, or PCI Express slot
for adding a higher-performance graphics card in place of the integrated graphics.
The Color Graphics Adapter (CGA), originally also called the Color/Graphics Adapter or IBM
Color/Graphics Monitor Adapter,[1] introduced in 1981, was IBM's first color graphics card, and
the first color computer display standard for the IBM PC.
The standard IBM CGA graphics card was equipped with 16 kilobytes of video memory and
could be connected either to an NTSC-compatible monitor or television via an RCA connector for
composite video, or to a dedicated 4-bit "RGBI"[2] interface CRT monitor, such as the IBM 5153
color display.[3]
Built around the Motorola MC6845 display controller, the CGA card featured several graphics
and text modes. The highest display resolution of any mode was 640×200, and the highest color
depth supported was 4-bit (16 colors).
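A quick check shows why 16 kB of video memory suffices for these modes. The mode/bit-depth pairings below are standard CGA behavior (640×200 at 2 colors is 1 bit per pixel; 320×200 at 4 colors is 2 bits per pixel), assumed here rather than stated verbatim in the text:

```python
def framebuffer_bytes(width, height, bits_per_pixel):
    """Bytes needed for a packed-pixel framebuffer of the given mode."""
    return width * height * bits_per_pixel // 8

# Both standard CGA graphics modes fit in the card's 16 kB (16384 bytes):
print(framebuffer_bytes(640, 200, 1))  # 16000
print(framebuffer_bytes(320, 200, 2))  # 16000
```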
A Video Display Controller (VDC) is an integrated circuit that is the main component in a
video signal generator, a device responsible for the production of a TV video signal in a
computing or game system. Some VDCs also generate an audio signal, but in that case it is not
their main function.
VDCs were most often used in the home computers of the 1980s, but also in some early video
game systems.
The VDC is always the main component of the video signal generator logic, but sometimes other
supporting chips are used as well, such as RAM to hold the pixel data, ROM to hold character
fonts, or discrete logic such as shift registers, as necessary to build a complete
system. In any case, it is the VDC's responsibility to generate the timing of the necessary video
signals, such as the horizontal and vertical synchronisation signals and the blanking interval
signal.
Most often the VDC chip is completely integrated into the logic of the main computer system (its
video RAM appears in the memory map of the main CPU), but sometimes it functions as a
coprocessor that can manipulate the video RAM contents independently.
In computing, a keyboard is a typewriter-style device which uses an arrangement of buttons
or keys that act as mechanical levers or electronic switches. Following the decline of punch cards
and paper tape, interaction via teleprinter-style keyboards became the main input method for
computers.
Despite the development of alternative input devices, such as the mouse, touchscreen, pen
devices, character recognition, and voice recognition, the keyboard remains the most commonly
used and most versatile device for direct (human) input into computers.
A keyboard typically has characters engraved or printed on the keys and each press of a key
typically corresponds to a single written symbol. However, to produce some symbols requires
pressing and holding several keys simultaneously or in sequence. While most keyboard keys
produce letters, numbers or signs (characters), other keys or simultaneous key presses can
produce actions or computer commands.
In normal usage, the keyboard is used to type text and numbers into a word processor, text editor
or other program. In a modern computer, the interpretation of key presses is generally left to the
software. A computer keyboard distinguishes each physical key from every other and reports all
key presses to the controlling software. Keyboards are also used for computer gaming, either
with regular keyboards or by using keyboards with special gaming features, which can expedite
frequently used keystroke combinations. A keyboard is also used to give commands to the
operating system of a computer, such as Windows' Control-Alt-Delete combination, which
brings up a task window or shuts down the machine. Keyboards are the only way to enter
commands on a command-line interface.
Abstract:
This article tries to cover every aspect of AT and PS/2 keyboards. It includes information on the
low-level signals and protocol, scan codes, the command set, initialization, compatibility issues,
and other miscellaneous information. Since it's closely related, I've also included information on
the PC keyboard controller. All code samples involving the keyboard encoder are written in
assembly for Microchip's PIC microcontrollers; all code samples related to the keyboard
controller are written in x86 assembly.
A History Lesson:
The most popular keyboards in use today include:

- USB keyboards - The latest keyboards, supported by all new computers (Macintosh and
  IBM/compatible). These are relatively complicated to interface and are not covered in
  this article.
- IBM/compatible keyboards - Also known as "AT keyboards" or "PS/2 keyboards"; all
  modern PCs support this device. They're the easiest to interface, and are the subject of
  this article.
- ADB keyboards - Connect to the Apple Desktop Bus of older Macintosh systems. These
  are not covered in this article.
IBM introduced a new keyboard with each of its major desktop computer models. The original
IBM PC, and later the IBM XT, used what we call the "XT keyboard." These are obsolete and
differ significantly from modern keyboards; the XT keyboard is not covered in this article. Next
came the IBM AT system and later the IBM PS/2. They introduced the keyboards we use today,
and are the topic of this article. AT keyboards and PS/2 keyboards were very similar devices,
but the PS/2 device used a smaller connector and supported a few additional features.
Nonetheless, it remained backward compatible with AT systems and few of the additional
features ever caught on (since software also wanted to remain backward compatible.) Below is a
summary of IBM's three major keyboards.
IBM PC/XT Keyboard (1981):

- 83 keys
- 5-pin DIN connector
- Simple uni-directional serial protocol
- Uses what we now refer to as scan code set 1
- No host-to-keyboard commands
IBM AT Keyboard (1984) - Not backward compatible with XT systems(1).

- 84-101 keys
- 5-pin DIN connector
- Bi-directional serial protocol
- Uses what we now refer to as scan code set 2
- Eight host-to-keyboard commands
IBM PS/2 Keyboard (1987) - Compatible with AT systems, not compatible with XT systems(1).

- 84-101 keys
- 6-pin mini-DIN connector
- Bi-directional serial protocol
- Offers optional scan code set 3
- 17 host-to-keyboard commands
The PS/2 keyboard was originally an extension of the AT device. It supported a few additional
host-to-keyboard commands and featured a smaller connector. These were the only differences
between the two devices. However, computer hardware has never been about standards as much
as compatibility. For this reason, any keyboard you buy today will be compatible with PS/2 and
AT systems, but it may not fully support all the features of the original devices.
Today, "AT keyboard" and "PS/2 keyboard" refer only to the connector size. Which
settings/commands any given keyboard does or does not support is anyone's guess. For example,
the keyboard I'm using right now has a PS/2-style connector but only fully supports seven
commands, partially supports two, and merely "acknowledges" the rest. In contrast, my "Test"
keyboard has an AT-style connector but supports every feature/command of the original PS/2
device (plus a few extra.) It's important that you treat modern keyboards as compatible, not
standard. If you design a keyboard-related device that relies on non-general features, it may
work on some systems, but not on others...
Modern PS/2 (AT) compatible keyboards

- Any number of keys (usually 101 or 104)
- 5-pin or 6-pin connector; adaptor usually included
- Bi-directional serial protocol
- Only scan code set 2 guaranteed.
- Acknowledges all commands; may not act on all of them.
Footnote 1) XT keyboards use a completely different protocol than that used by AT and PS/2
systems, making them incompatible with newer PCs. However, there was a transition period
where some keyboard controllers supported both XT and AT (PS/2) keyboards (through a switch,
jumper, or auto-sense), and some keyboards were made to work on both types of systems
(again, through the use of a switch or auto-sensing). If you've owned such a PC or keyboard,
don't let it fool you--XT keyboards are NOT compatible with modern computers.
General Description:
Keyboards consist of a large matrix of keys, all of which are monitored by an on-board processor
(called the "keyboard encoder".) The specific processor(1) varies from keyboard to keyboard
but they all basically do the same thing: Monitor which key(s) are being pressed/released and
send the appropriate data to the host. This processor takes care of all the debouncing and buffers
any data in its 16-byte buffer, if needed. Your motherboard contains a "keyboard controller"(2)
that is in charge of decoding all of the data received from the keyboard and informing your
software of what's going on. All communication between the host and the keyboard uses an IBM
protocol.
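The encoder's job described above can be sketched in a few lines. This is a toy model, not real firmware: the matrix size, the read_row callback, and the class name are hypothetical stand-ins, and debouncing is omitted.

```python
from collections import deque

class Encoder:
    """Toy keyboard encoder: scans a key matrix and queues make/break events."""
    def __init__(self, rows, cols):
        # Last known state of every key in the matrix (False = released).
        self.state = [[False] * cols for _ in range(rows)]
        self.buffer = deque(maxlen=16)   # the encoder's 16-byte event buffer

    def scan(self, read_row):
        """read_row(r) returns a list of booleans, one per column (True = pressed)."""
        for r, row_state in enumerate(self.state):
            sampled = read_row(r)
            for c, pressed in enumerate(sampled):
                if pressed != row_state[c]:          # key changed state
                    row_state[c] = pressed
                    self.buffer.append(("make" if pressed else "break", r, c))
```

Feeding the scanner a simulated press and release of the key at row 0, column 0 yields one make event followed by one break event in the buffer.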
Footnote 1) Originally, IBM used the Intel 8048 microcontroller as its keyboard encoder. There
are now a wide variety of keyboard encoder chips available from many different manufacturers.
Footnote 2) Originally, IBM used the Intel 8042 microcontroller as its keyboard controller. This
has since been replaced with compatible devices integrated into motherboards' chipsets. The
keyboard controller is covered later in this article.
Electrical Interface / Protocol:
The AT and PS/2 keyboards use the same protocol as the PS/2 mouse.
Scan Codes:
Your keyboard's processor spends most of its time "scanning", or monitoring, the matrix of
keys. If it finds that any key is being pressed, released, or held down, the keyboard will send a
packet of information known as a "scan code" to your computer. There are two different types of
scan codes: "make codes" and "break codes". A make code is sent when a key is pressed or held
down. A break code is sent when a key is released. Every key is assigned its own unique make
code and break code so the host can determine exactly what happened to which key by looking at
a single scan code. The set of make and break codes for every key comprises a "scan code set".
There are three standard scan code sets, named one, two, and three. All modern keyboards
default to set two.(1)
So how do you figure out what the scan codes are for each key? Unfortunately, there's no simple
formula for calculating this. If you want to know what the make code or break code is for a
specific key, you'll have to look it up in a table. I've composed tables for all make codes and
break codes in all three scan code sets:

- Scan Code Set 1 - Original XT scan code set; supported by some modern keyboards
- Scan Code Set 2 - Default scan code set for all modern keyboards
- Scan Code Set 3 - Optional PS/2 scan code set--rarely used
Footnote 1) Originally, the AT keyboard only supported set two, and the PS/2 keyboard would
default to set two but supported all three. Most modern keyboards behave like the PS/2 device,
but I have come across a few that didn't support set one, set three, or both. Also, if you've ever
done any low-level PC programming, you've probably noticed the keyboard controller supplies
set ONE scan codes by default. This is because the keyboard controller converts all incoming
scan codes to set one (this stems from retaining compatibility with software written for XT
systems.) However, it's still set two scan codes being sent down the keyboard's serial line.
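The controller's set two-to-set one conversion described in this footnote can be sketched as follows. This is an illustration in Python, not controller firmware; the three translation entries cover only the example keys used later in this article ("A", "5", "F10"), and a real controller translates every code.

```python
# Sketch of the 8042's set 2 -> set 1 scan code translation (partial table).
# In set 2, a break code is the make code prefixed with F0h; in set 1, a
# break code is the make code with bit 7 set.

SET2_TO_SET1 = {0x1C: 0x1E,   # "A"
                0x2E: 0x06,   # "5"
                0x09: 0x44}   # "F10"

def translate(set2_bytes):
    """Translate a stream of set 2 scan codes to set 1 equivalents."""
    out = []
    breaking = False
    for b in set2_bytes:
        if b == 0xF0:              # set 2 break prefix
            breaking = True
            continue
        code = SET2_TO_SET1[b]     # look up the set 1 make code
        out.append(code | 0x80 if breaking else code)
        breaking = False
    return out

# Pressing then releasing "A": set 2 sends 1C, F0 1C
print([hex(b) for b in translate([0x1C, 0xF0, 0x1C])])  # ['0x1e', '0x9e']
```

So software reading the controller sees 1Eh on press and 9Eh on release, even though the keyboard itself transmitted set two codes.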
Make Codes, Break Codes, and Typematic Repeat:
Whenever a key is pressed, that key's make code is sent to the computer. Keep in mind that a
make code only represents a key on a keyboard--it does not represent the character printed on
that key. This means that there is no defined relationship between a make code and an ASCII
code. It's up to the host to translate scan codes to characters or commands.
Although most set two make codes are only one byte wide, there are a handful of "extended
keys" whose make codes are two or four bytes wide. These make codes can be identified by the
fact that their first byte is E0h.
Just as a make code is sent to the computer whenever a key is pressed, a break code is sent
whenever a key is released. In addition to every key having its own unique make code, they all
have their own unique break code(1). Fortunately, however, you won't always have to use
lookup tables to figure out a key's break code--certain relationships do exist between make codes
and break codes. Most set two break codes are two bytes long where the first byte is F0h and the
second byte is the make code for that key. Break codes for extended keys are usually three bytes
long where the first two bytes are E0h, F0h, and the last byte is the last byte of that key's make
code. As an example, I have listed below the set two make codes and break codes for a few
keys:
Key            Set 2 Make Code   Set 2 Break Code
"A"            1C                F0, 1C
"5"            2E                F0, 2E
"F10"          09                F0, 09
Right Arrow    E0, 74            E0, F0, 74
Right "Ctrl"   E0, 14            E0, F0, 14
Example: What sequence of make codes and break codes should be sent to your computer for
the character "G" to appear in a word processor? Since this is an upper-case letter, the sequence
of events that need to take place are: press the "Shift" key, press the "G" key, release the "G"
key, release the "Shift" key. The scan codes associated with these events are the following:
make code for the "Shift" key (12h), make code for the "G" key (34h), break code for the "G"
key(F0h,34h), break code for the "Shift" key (F0h,12h). Therefore, the data sent to your
computer would be: 12h, 34h, F0h, 34h, F0h, 12h.
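The "G" example can be expressed as a small helper that builds the byte sequence for typing one shifted key. This is an illustrative sketch; the function name is mine, and it assumes the press/release ordering used in the example above (set 2 codes, "Shift" = 12h).

```python
# Build the set 2 make/break sequence for Shift + one key,
# pressed and released in the order: Shift down, key down, key up, Shift up.

SHIFT = 0x12  # set 2 make code for the (left) "Shift" key

def typed_sequence(make):
    seq = [SHIFT, make]                # press Shift, press key
    seq += [0xF0, make, 0xF0, SHIFT]   # release key (F0+make), release Shift
    return seq

# "G" has set 2 make code 34h:
print([hex(b) for b in typed_sequence(0x34)])
# matches the example: 12h, 34h, F0h, 34h, F0h, 12h
```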
If you press a key, its make code is sent to the computer. When you press and hold down a key,
that key becomes typematic, which means the keyboard will keep sending that key's make code
until the key is released or another key is pressed. To verify this, open a text editor and hold
down the "A" key. When you first press the key, the character "a" immediately appears on your
screen. After a short delay, another "a" will appear followed by a whole stream of "a"s until you
release the "A" key. There are two important parameters here: the typematic delay, which is the
short delay between the first and second "a", and the typematic rate, which is how many
characters per second will appear on your screen after the typematic delay. The typematic delay
can range from 0.25 seconds to 1.00 second and the typematic rate can range from 2.0 cps
(characters per second) to 30.0 cps. You may change the typematic rate and delay using the "Set
Typematic Rate/Delay" (0xF3) command.
Typematic data is not buffered within the keyboard. In the case where more than one key is held
down, only the last key pressed becomes typematic. Typematic repeat then stops when that key
is released, even though other keys may be held down.
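The typematic model above (one character on the initial press, then repeats at the typematic rate after the typematic delay) can be sketched numerically. The function name and the simple "delay then constant rate" arithmetic are my illustration of the behavior described, not a timing-exact model of any particular keyboard.

```python
import math

def typematic_chars(hold_seconds, delay=0.5, rate=10.9):
    """Characters produced by holding one key: 1 on the initial make code,
    then `rate` repeats per second once `delay` seconds have elapsed.
    Defaults are the keyboard's power-on values (500 ms, 10.9 cps)."""
    if hold_seconds < 0:
        return 0
    repeats = max(0.0, hold_seconds - delay) * rate
    return 1 + math.floor(repeats)

# Holding "A" for 2 seconds at the default settings:
print(typematic_chars(2.0))   # 1 + floor(1.5 * 10.9) = 17
```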
Footnote 1) Actually, the "Pause/Break" key does not have a break code in scan code sets one
and two. When this key is pressed, its make code is sent; when it's released, it doesn't send
anything. So how do you tell when this key has been released? You can't.
Reset:
At power-on or software reset (see the "Reset" command) the keyboard performs a diagnostic
self-test referred to as BAT (Basic Assurance Test) and loads the following default values:
Typematic delay 500 ms.
Typematic rate 10.9 cps.
Scan code set 2.
Set all keys typematic/make/break.
When entering BAT, the keyboard enables its three LED indicators, and turns them off when
BAT has completed. At this time, a BAT completion code of either 0xAA (BAT successful) or
0xFC (Error) is sent to the host. This BAT completion code must be sent 500~750 milliseconds
after power-on.
Many of the keyboards I've tested ignore their CLOCK and DATA lines until after the BAT
completion code has been sent. Therefore, an "Inhibit" condition (CLOCK line low) may not
prevent the keyboard from sending its BAT completion code.
Command Set:
A few notes regarding commands the host can issue to the keyboard:
The keyboard clears its output buffer when it receives any command.
If the keyboard receives an invalid command or argument, it must respond with "resend"
(0xFE).
The keyboard must not send any scancodes while processing a command.
If the keyboard is waiting for an argument byte and it instead receives a command, it
should discard the previous command and process this new one.
Below are all the commands the host may send to the keyboard:
0xFF (Reset) - Keyboard responds with "ack" (0xFA), then enters "Reset" mode. (See
"Reset" section.)
0xFE (Resend) - Keyboard responds by resending the last-sent byte. The exception to
this is if the last-sent byte was "resend" (0xFE). If this is the case, the keyboard resends
the last non-0xFE byte. This command is used by the host to indicate an error in
reception.
The next six commands can be issued when the keyboard is in any mode, but they only affect the
behavior of the keyboard when in "mode 3" (ie, set to scan code set 3.)
*0xFD (Set Key Type Make) - Disable break codes and typematic repeat for specified
keys. Keyboard responds with "ack" (0xFA), then disables scanning (if enabled) and
reads a list of keys from the host. These keys are specified by their set 3 make codes.
Keyboard responds to each make code with "ack". Host terminates this list by sending
an invalid set 3 make code (eg, a valid command.) The keyboard then re-enables
scanning (if previously disabled).
*0xFC (Set Key Type Make/Break) - Similar to previous command, except this one only
disables typematic repeat.
*0xFB (Set Key Type Typematic) - Similar to previous two, except this one only disables
break codes.
*0xFA (Set All Keys Typematic/Make/Break) - Keyboard responds with "ack" (0xFA).
Sets all keys to their normal setting (generate scan codes on make, break, and typematic
repeat)
*0xF9 (Set All Keys Make) - Keyboard responds with "ack" (0xFA). Similar to 0xFD,
except applies to all keys.
*0xF8 (Set All Keys Make/Break) - Keyboard responds with "ack" (0xFA). Similar to
0xFC, except applies to all keys.
*0xF7 (Set All Keys Typematic) - Keyboard responds with "ack" (0xFA). Similar to
0xFB, except applies to all keys.
0xF6 (Set Default) - Load default typematic rate/delay (10.9cps / 500ms), key types (all
keys typematic/make/break), and scan code set (2).
0xF5 (Disable) - Keyboard stops scanning, loads default values (see "Set Default"
command), and waits for further instructions.
0xF4 (Enable) - Re-enables keyboard after disabled using previous command.
0xF3 (Set Typematic Rate/Delay) - Host follows this command with one argument byte
that defines the typematic rate and delay as follows:
Repeat Rate (argument bits 0-4):

Bits 0-4  Rate(cps)   Bits 0-4  Rate(cps)   Bits 0-4  Rate(cps)   Bits 0-4  Rate(cps)
00h       30.0        08h       15.0        10h       7.5         18h       3.7
01h       26.7        09h       13.3        11h       6.7         19h       3.3
02h       24.0        0Ah       12.0        12h       6.0         1Ah       3.0
03h       21.8        0Bh       10.9        13h       5.5         1Bh       2.7
04h       20.7        0Ch       10.0        14h       5.0         1Ch       2.5
05h       18.5        0Dh       9.2         15h       4.6         1Dh       2.3
06h       17.1        0Eh       8.6         16h       4.3         1Eh       2.1
07h       16.0        0Fh       8.0         17h       4.0         1Fh       2.0

Delay (argument bits 5-6):

Bits 5-6  Delay (seconds)
00b       0.25
01b       0.50
10b       0.75
11b       1.00
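Packing the tables above into the 0xF3 argument byte can be sketched as follows. The `RATES` and `DELAYS` names are mine; `RATES` simply lists the 32 rate codes in order.

```python
# Encode the argument byte for "Set Typematic Rate/Delay" (0xF3):
# bits 0-4 select the rate, bits 5-6 select the delay, bit 7 is zero.

RATES = [30.0, 26.7, 24.0, 21.8, 20.7, 18.5, 17.1, 16.0,
         15.0, 13.3, 12.0, 10.9, 10.0,  9.2,  8.6,  8.0,
          7.5,  6.7,  6.0,  5.5,  5.0,  4.6,  4.3,  4.0,
          3.7,  3.3,  3.0,  2.7,  2.5,  2.3,  2.1,  2.0]   # index = bits 0-4
DELAYS = [0.25, 0.50, 0.75, 1.00]                           # index = bits 5-6

def typematic_arg(rate_cps, delay_s):
    return (DELAYS.index(delay_s) << 5) | RATES.index(rate_cps)

# The power-on default, 10.9 cps with a 500 ms delay:
print(hex(typematic_arg(10.9, 0.50)))   # 0x2b
```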
*0xF2 (Read ID) - The keyboard responds by sending a two-byte device ID of 0xAB,
0x83. (0xAB is sent first, followed by 0x83.)
*0xF0 (Set Scan Code Set) - Keyboard responds with "ack", then reads argument byte
from the host. This argument byte may be 0x01, 0x02, or 0x03 to select scan code set 1,
2, or 3, respectively. The keyboard responds to this argument byte with "ack". If the
argument byte is 0x00, the keyboard responds with "ack" followed by the current scan
code set.
0xEE (Echo) - The keyboard responds with "Echo" (0xEE).

0xED (Set/Reset LEDs) - The host follows this command with one argument byte, that
specifies the state of the keyboard's Num Lock, Caps Lock, and Scroll Lock LEDs. This
argument byte is defined as follows:
MSb                                                                        LSb
| Always 0 | Always 0 | Always 0 | Always 0 | Always 0 | Caps Lock | Num Lock | Scroll Lock |

"Scroll Lock" - Scroll Lock LED off(0)/on(1)
"Num Lock" - Num Lock LED off(0)/on(1)
"Caps Lock" - Caps Lock LED off(0)/on(1)
*Originally available in PS/2 keyboards only.
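Building the 0xED argument byte from the bit layout above is a one-liner; the sketch below (function name mine) shows the packing.

```python
# Build the argument byte for "Set/Reset LEDs" (0xED):
# bit 0 = Scroll Lock, bit 1 = Num Lock, bit 2 = Caps Lock, bits 3-7 = 0.

def led_arg(scroll=False, num=False, caps=False):
    return (caps << 2) | (num << 1) | int(scroll)

print(hex(led_arg(num=True)))   # 0x2 - turns on only the Num Lock LED
```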
Emulation:
Keyboard/mouse emulation routines, with source in MPASM for PIC microcontrollers, are
available with the original version of this article.
The i8042 Keyboard Controller:
Up to this point in the article, all information has been presented from a hardware point-of-view.
However, if you're writing low-level keyboard-related software for the host PC, you won't be
communicating directly with the keyboard. Instead, a keyboard controller provides an interface
between the keyboard and the peripheral bus. This controller takes care of all the signal-level
and protocol details, as well as providing some conversion, interpretation, and handling of scan
codes and commands.
An Intel 8042/compatible microcontroller is used as the PC's keyboard controller. In modern
computers, this microcontroller is hidden within the motherboard's chipset, which integrates
many controllers in a single package. Nonetheless, this device is still there, and the keyboard
controller is still commonly referred to as "the 8042".
Depending on the motherboard, the keyboard controller may operate in one of two modes: "AT-
compatible" mode or "PS/2-compatible" mode. The latter is used if a PS/2 mouse is supported
by the motherboard. If this is the case, the 8042 acts as both the keyboard controller and the
mouse controller. The keyboard controller auto-detects which mode it is to use according to how
it's wired to the keyboard port.
The 8042 contains the following registers:
A one-byte input buffer - contains byte read from keyboard; read-only
A one-byte output buffer - contains byte to-be-written to keyboard; write-only
A one-byte status register - 8 status flags; read-only
A one-byte control register - 7 control flags; read/write
The first three registers (input, output, status) are directly accessible via ports 0x60 and 0x64.
The last register (control) is read using the "Read Command Byte" command, and written using
the "Write Command Byte" command. The following table shows how the peripheral ports are
used to interface the 8042:
Port   Read/Write   Function
0x60   Read         Read Input Buffer
0x60   Write        Write Output Buffer
0x64   Read         Read Status Register
0x64   Write        Send Command
Writing to port 0x64 doesn't write to any specific register, but sends a command for the 8042 to
interpret. If the command accepts a parameter, this parameter is sent to port 0x60. Likewise,
any results returned by the command may be read from port 0x60.
When describing the 8042, I may occasionally refer to its physical I/O pins. These pins are
defined below:
AT-compatible mode

Port 1 (Input Port):

Pin  Name  Function
0    P10   Undefined
1    P11   Undefined
2    P12   Undefined
3    P13   Undefined
4    P14   External RAM (1: Enable external RAM; 0: Disable external RAM)
5    P15   Manufacturing Setting (1: Setting enabled; 0: Setting disabled)
6    P16   Display Type Switch (1: Color display; 0: Monochrome)
7    P17   Keyboard Inhibit Switch (1: Keyboard enabled; 0: Keyboard inhibited)

Port 2 (Output Port):

Pin  Name  Function
0    P20   System Reset (1: Normal; 0: Reset computer)
1    P21   Gate A20
2    P22   Undefined
3    P23   Undefined
4    P24   Input Buffer Full
5    P25   Output Buffer Empty
6    P26   Keyboard Clock (1: Pull Clock low; 0: High-Z)
7    P27   Keyboard Data (1: Pull Data low; 0: High-Z)

Port 3 (Test Port):

Pin  Name  Function
0    T0    Keyboard Clock (Input)
1    T1    Keyboard Data (Input)
2    --    Undefined
3    --    Undefined
4    --    Undefined
5    --    Undefined
6    --    Undefined
7    --    Undefined
PS/2-compatible mode

Port 1 (Input Port):

Pin  Name  Function
0    P10   Keyboard Data (Input)
1    P11   Mouse Data (Input)
2    P12   Undefined
3    P13   Undefined
4    P14   External RAM (1: Enable external RAM; 0: Disable external RAM)
5    P15   Manufacturing Setting (1: Setting enabled; 0: Setting disabled)
6    P16   Display Type Switch (1: Color display; 0: Monochrome)
7    P17   Keyboard Inhibit Switch (1: Keyboard enabled; 0: Keyboard disabled)

Port 2 (Output Port):

Pin  Name  Function
0    P20   System Reset (1: Normal; 0: Reset computer)
1    P21   Gate A20
2    P22   Mouse Data (1: Pull Data low; 0: High-Z)
3    P23   Mouse Clock (1: Pull Clock low; 0: High-Z)
4    P24   Keyboard IBF interrupt (1: Assert IRQ 1; 0: De-assert IRQ 1)
5    P25   Mouse IBF interrupt (1: Assert IRQ 12; 0: De-assert IRQ 12)
6    P26   Keyboard Clock (1: Pull Clock low; 0: High-Z)
7    P27   Keyboard Data (1: Pull Data low; 0: High-Z)

Port 3 (Test Port):

Pin  Name  Function
0    T0    Keyboard Clock (Input)
1    T1    Mouse Clock (Input)
2    --    Undefined
3    --    Undefined
4    --    Undefined
5    --    Undefined
6    --    Undefined
7    --    Undefined
(Note: Reading keyboard controller datasheets can be confusing--they refer to the "input
buffer" as the "output buffer" and vice versa. This makes sense from the point-of-view of
someone writing firmware for the controller, but for somebody used to interfacing the controller,
this can cause problems. Throughout this document, I only refer to the "input buffer" as the one
containing input from the keyboard, and the "output buffer" as the one that contains output to be
sent to the keyboard.)
Status Register:
The 8042's status flags are read from port 0x64. They contain error information, status
information, and indicate whether or not data is present in the input and output buffers. The
flags are defined as follows:
MSb                                                            LSb
AT-compatible mode:   | PERR | RxTO | TxTO | INH | A2 | SYS | IBF | OBF |
PS/2-compatible mode: | PERR | TO   | MOBF | INH | A2 | SYS | IBF | OBF |
OBF (Output Buffer Full) - Indicates when it's okay to write to output buffer.
0: Output buffer empty - Okay to write to port 0x60
1: Output buffer full - Don't write to port 0x60

IBF (Input Buffer Full) - Indicates when input is available in the input buffer.
0: Input buffer empty - No unread input at port 0x60
1: Input buffer full - New input can be read from port 0x60

SYS (System flag) - POST reads this to determine if power-on reset, or software reset.
0: Power-up value - System is in power-on reset.
1: BAT code received - System has already been initialized.

A2 (Address line A2) - Used internally by the keyboard controller
0: A2 = 0 - Port 0x60 was last written to
1: A2 = 1 - Port 0x64 was last written to

INH (Inhibit flag) - Indicates whether or not keyboard communication is inhibited.
0: Keyboard Clock = 0 - Keyboard is inhibited
1: Keyboard Clock = 1 - Keyboard is not inhibited

TxTO (Transmit Timeout) - Indicates keyboard isn't accepting input (kbd may not be
plugged in).
0: No Error - Keyboard accepted the last byte written to it.
1: Timeout error - Keyboard didn't generate clock signals within 15 ms of "request-to-send".

RxTO (Receive Timeout) - Indicates keyboard didn't respond to a command (kbd
probably broke)
0: No Error - Keyboard responded to last byte.
1: Timeout error - Keyboard didn't generate clock signals within 20 ms of command
reception.

PERR (Parity Error) - Indicates communication error with keyboard (possibly noisy/loose
connection)
0: No Error - Odd parity received and proper command response received.
1: Parity Error - Even parity received or 0xFE received as command response.

MOBF (Mouse Output Buffer Full) - Similar to OBF, except for the PS/2 mouse.
0: Output buffer empty - Okay to write to the auxiliary device's output buffer
1: Output buffer full - Don't write to the auxiliary device's output buffer

TO (General Timeout) - Indicates timeout during command write or response. (Same as
TxTO + RxTO.)
0: No Error - Keyboard received and responded to last command.
1: Timeout Error - See TxTO and RxTO for more information.
[EG: On my PC, the normal value of the 8042's "Status" register is 14h = 00010100b. This
indicates keyboard communication is not inhibited, and the 8042 has already completed its
self-test ("BAT"). The "Status" register is accessed by reading from port 64h ("IN AL, 64h")]
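The flag definitions above can be checked against that example value with a small decoder. This sketch (names mine) uses the AT-compatible bit layout, bit 0 = OBF through bit 7 = PERR.

```python
# Decode the 8042's AT-mode Status register into named flags.

FLAGS = ["OBF", "IBF", "SYS", "A2", "INH", "TxTO", "RxTO", "PERR"]  # bits 0..7

def decode_status(value):
    return {name: bool(value >> bit & 1) for bit, name in enumerate(FLAGS)}

s = decode_status(0x14)               # the 14h = 00010100b example
print(s["SYS"], s["INH"], s["OBF"])   # True True False
# SYS set (BAT complete), INH set (not inhibited), buffers empty
```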
Reading keyboard input:
When the 8042 receives a valid scan code from the keyboard, it is converted to its set 1
equivalent. The converted scan code is then placed in the input buffer, the IBF (Input Buffer
Full) flag is set, and IRQ 1 is asserted. Furthermore, when any byte is received from the
keyboard, the 8042 inhibits further reception (by pulling the "Clock" line low), so no other scan
codes will be received until the input buffer is emptied.
If enabled, IRQ 1 will activate the keyboard driver, pointed to by interrupt vector 0x09. The
driver reads the scan code from port 0x60, which causes the 8042 to de-assert IRQ 1 and reset
the IBF flag. The scan code is then processed by the driver, which responds to special key
combinations and updates an area of the system RAM reserved for keyboard input.
If you don't want to patch into interrupt 0x09, you may poll the keyboard controller for input.
This is accomplished by disabling the 8042's IBF Interrupt and polling the IBF flag. This flag is
set (1) when data is available in the input buffer, and is cleared (0) when data is read from the
input buffer. Reading the input buffer is accomplished by reading from port 0x60, and the IBF
flag is at port 0x64, bit 1. The following assembly code illustrates this:
kbRead:
WaitLoop:
    in   al, 64h    ; Read Status byte
    and  al, 10b    ; Test IBF flag (Status<1>)
    jz   WaitLoop   ; Wait for IBF = 1
    in   al, 60h    ; Read input buffer
Writing to keyboard:
When you write to the 8042's output buffer (via port 0x60), the controller sets the OBF ("Output
Buffer Full") flag and processes the data. The 8042 will send this data to the keyboard and wait
for a response. If the keyboard does not accept or generate a response within a given amount of
time, the appropriate timeout flag will be set (see Status register definition for more info.) If an
incorrect parity bit is read, the 8042 will send the "Resend" (0xFE) command to the keyboard. If
the keyboard continues to send erroneous bytes, the "Parity Error" flag is set in the Status
register. If no errors occur, the response byte is placed in the input buffer, the IBF ("Input Buffer
Full") flag is set, and IRQ 1 is activated, signaling the keyboard driver.
The following assembly code shows how to write to the output buffer. (Remember, after you
write to the output buffer, you should use int 9h or poll port 64h to get the keyboard's response.)
kbWrite:
WaitLoop:
    in   al, 64h    ; Read Status byte
    and  al, 01b    ; Test OBF flag (Status<0>)
    jnz  WaitLoop   ; Wait for OBF = 0
    out  60h, cl    ; Write data to output buffer
Keyboard Controller Commands:
Commands are sent to the keyboard controller by writing to port 0x64. Command parameters
are written to port 0x60 after the command is sent. Results are returned on port 0x60. Always
test the OBF ("Output Buffer Full") flag before writing commands or parameters to the 8042.
0x20 (Read Command Byte) - Returns command byte. (See "Write Command Byte"
below).
0x60 (Write Command Byte) - Stores parameter as command byte. Command byte
defined as follows:
MSb                                                               LSb
AT-compatible mode:   | -- | XLAT | PC   | _EN | OVR | SYS | --   | INT |
PS/2-compatible mode: | -- | XLAT | _EN2 | _EN | --  | SYS | INT2 | INT |
o INT (Input Buffer Full Interrupt) - When set, IRQ 1 is generated when data is
available in the input buffer.
0: IBF Interrupt Disabled - You must poll STATUS<IBF> to read input.
1: IBF Interrupt Enabled - Keyboard driver at software int 0x09 handles input.
o SYS (System Flag) - Used to manually set/clear SYS flag in Status register.
0: Power-on value - Tells POST to perform power-on tests/initialization.
1: BAT code received - Tells POST to perform "warm boot" tests/initialization.
o OVR (Inhibit Override) - Overrides keyboard's "inhibit" switch on older
motherboards.
0: Inhibit switch enabled - Keyboard inhibited if pin P17 is high.
1: Inhibit switch disabled - Keyboard not inhibited even if P17 = high.
o _EN (Disable keyboard) - Disables/enables keyboard interface.
0: Enable - Keyboard interface enabled.
1: Disable - All keyboard communication is disabled.
o PC ("PC Mode") - ???Enables keyboard interface somehow???
0: Disable - ???
1: Enable - ???
o XLAT (Translate Scan Codes) - Enables/disables translation to set 1 scan codes.
0: Translation disabled - Data appears at input buffer exactly as read from
keyboard
1: Translation enabled - Scan codes translated to set 1 before put in input buffer
o INT2 (Mouse Input Buffer Full Interrupt) - When set, IRQ 12 is generated when
mouse data is available.
0: Auxiliary IBF Interrupt Disabled
1: Auxiliary IBF Interrupt Enabled
o _EN2 (Disable Mouse) - Disables/enables mouse interface.
0: Enable - Auxiliary PS/2 device interface enabled
1: Disable - Auxiliary PS/2 device interface disabled
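The PS/2-mode command byte layout can be decoded the same way as the Status register. This sketch (names mine) maps each defined bit to its flag; the value 0x47 is purely an illustrative example, not a value any particular chipset is guaranteed to use.

```python
# Decode the 8042's PS/2-mode command byte:
# bit 0 = INT, bit 1 = INT2, bit 2 = SYS, bit 4 = _EN, bit 5 = _EN2,
# bit 6 = XLAT; bits 3 and 7 are unused.

PS2_BITS = {0: "INT", 1: "INT2", 2: "SYS", 4: "_EN", 5: "_EN2", 6: "XLAT"}

def decode_command_byte(value):
    return {name: bool(value >> bit & 1) for bit, name in PS2_BITS.items()}

# 0x47 = 01000111b: both IBF interrupts on, SYS set, translation on,
# both interfaces enabled (the _EN bits are "disable" bits).
print(decode_command_byte(0x47))
```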
?0x90-0x9F (Write to output port) - Writes command's lower nibble to lower nibble of
output port (see Output Port definition.)
?0xA1 (Get version number) - Returns firmware version number.
?0xA4 (Get password) - Returns 0xFA if password exists; otherwise, 0xF1.
?0xA5 (Set password) - Set the new password by sending a null-terminated string of scan
codes as this command's parameter.
?0xA6 (Check password) - Compares keyboard input with current password.
0xA7 (Disable mouse interface) - PS/2 mode only. Similar to "Disable keyboard
interface" (0xAD) command.
0xA8 (Enable mouse interface) - PS/2 mode only. Similar to "Enable keyboard
interface" (0xAE) command.
0xA9 (Mouse interface test) - Returns 0x00 if okay, 0x01 if Clock line stuck low, 0x02 if
clock line stuck high, 0x03 if data line stuck low, and 0x04 if data line stuck high.
0xAA (Controller self-test) - Returns 0x55 if okay.
0xAB (Keyboard interface test) - Returns 0x00 if okay, 0x01 if Clock line stuck low,
0x02 if clock line stuck high, 0x03 if data line stuck low, and 0x04 if data line stuck high.
0xAD (Disable keyboard interface) - Sets bit 4 of command byte and disables all
communication with keyboard.
0xAE (Enable keyboard interface) - Clears bit 4 of command byte and re-enables
communication with keyboard.
0xAF (Get version)
0xC0 (Read input port) - Returns values on input port (see Input Port definition.)
0xC1 (Copy input port LSn) - PS/2 mode only. Copy input port's low nibble to Status
register (see Input Port definition)
0xC2 (Copy input port MSn) - PS/2 mode only. Copy input port's high nibble to Status
register (see Input Port definition.)
0xD0 (Read output port) - Returns values on output port (see Output Port definition.)
0xD1 (Write output port) - Write parameter to output port (see Output Port definition.)
0xD2 (Write keyboard buffer) - Parameter written to input buffer as if received from
keyboard.
0xD3 (Write mouse buffer) - Parameter written to input buffer as if received from mouse.
0xD4 (Write mouse Device) - Sends parameter to the auxiliary PS/2 device.
0xE0 (Read test port) - Returns values on test port (see Test Port definition.)
0xF0-0xFF (Pulse output port) - Pulses command's lower nibble onto lower nibble of
output port (see Output Port definition.)
Modern Keyboard Controllers:
So far, I've only discussed the 8042 keyboard controller. Although modern keyboard controllers
remain compatible with the original device, compatibility is their only requirement (and their
goal.)
My motherboard's keyboard controller is a great example of this. I connected a
microcontroller+LCD in parallel to my keyboard to see what data is sent by the keyboard
controller. At power-up, the keyboard controller sent the "Set LED state" command to turn off
all LEDs, then read the keyboard's ID. When I tried writing data to the output buffer, I found
the keyboard controller only forwards the "Set LED state" command and "Set Typematic
Rate/Delay" command. It does not allow any other commands to be sent to the keyboard.
However, it does emulate the keyboard's response by placing "acknowledge" (0xFA) in the input
buffer when appropriate (or 0xEE in response to the "Echo" command.) Furthermore, if the
keyboard sends it an erroneous byte, the keyboard controller takes care of error handling (it
sends the "Resend" command; if the byte is still erroneous, it sends an error code to the
keyboard and places an error code in the input buffer.)
Once again, keep in mind chipset designers are more interested in compatibility than
standardization.
Initialization:
The following is the communication between my computer and keyboard when it boots up. I
believe the first three commands were initiated by the keyboard controller, the next command
(which enables the Num Lock LED) was sent by the BIOS, and the rest of the commands were
sent by the OS (Win98SE). Remember, these results are specific to my computer, but they
should give you a general idea as to what happens at startup.
Keyboard: AA    Self-test passed               ;Keyboard controller init
Host:     ED    Set/Reset Status Indicators
Keyboard: FA    Acknowledge
Host:     00    Turn off all LEDs
Keyboard: FA    Acknowledge
Host:     F2    Read ID
Keyboard: FA    Acknowledge
Keyboard: AB    First byte of ID
Host:     ED    Set/Reset Status Indicators    ;BIOS init
Keyboard: FA    Acknowledge
Host:     02    Turn on Num Lock LED
Keyboard: FA    Acknowledge
Host:     F3    Set Typematic Rate/Delay       ;Windows init
Keyboard: FA    Acknowledge
Host:     20    500 ms / 30.0 reports/sec
Keyboard: FA    Acknowledge
Host:     F4    Enable
Keyboard: FA    Acknowledge
Host:     F3    Set Typematic Rate/Delay
Keyboard: FA    Acknowledge
Host:     00    250 ms / 30.0 reports/sec
Keyboard: FA    Acknowledge