Semiconductor Memory


Computer Architecture 8 : Semiconductor Memory


8.1 Introduction
8.2 Vocabulary
8.3 Memory Types
8.3.1 Volatile Memory
8.3.2 Non-Volatile Memory - ROM
8.4 Cache Memory
8.5 Virtual Memory
Answers to SAQs

8.1 Introduction

This unit concentrates on semiconductor memory and its physical and functional characteristics. ROM, RAM, static and dynamic memory, and cache memory are all covered.

A computer memory must be able to temporarily store the patterns of bits with which the processor is working and provide it immediate access to any location it requests. The storage of information is accomplished by using collections of individual storage elements, each of which is capable of maintaining a single bit. For a device to be useful as a memory element it must have two stable states, a mechanism for setting the device to one state or the other, and a mechanism for reading the state. Memory systems have evolved through a variety of devices that match this characteristic, from relays, vacuum tubes, delay lines, ferrite cores to semiconductor materials.

Ferrite cores were used for many years to provide the main memory for most computers. The cost and size of these memories, as well as their speed, became a disadvantage as semiconductor memories were developed. The technique of storing information by the magnetic orientation of a ferrous material is now used for other types of storage (backup devices) rather than for main memory.

All microcomputers now use semiconductor memory which consists of RAM and ROM, made in the form of LSI circuits. The principal features of such circuits are low cost, high density and ease of use. Considerable differences exist in the types of semiconductor memory due to the wide range of manufacturing processes available. These differences manifest themselves in the form of:


© LC Simmonds, Edinburgh. Page 1 of 17


- power consumption
- packing density
- speed of operation
- internal methods of storage
- interface
- cost

8.2 Vocabulary

There are many different types of memory devices available, each with its own characteristics.

The following are terms used most frequently when dealing with memory technology.

Memory cell

A memory cell is the smallest amount of information storage, holding either a 1 or 0.

Memory cells are often grouped together to form words.

Access time (tacc)

Access time is one of the most important parameters of any memory component. It is the time taken to read data from a given memory location, measured from the start of a read cycle. Access time is made up of two parts: the time taken to locate the required memory location, and the time taken for data to become available from the memory cell (i.e. valid on the data bus). Many semiconductor memories have identical read and write access times.

Cycle time (trcyc)

This is the time which must elapse between 2 successive read or write accesses.

Random access

This is when a memory is configured so that the access time of any cell within it is constant and independent of the physical location of the cell. As far as a processor is concerned random access implies the access time to read from any memory location is constant. If a memory is random access for read cycles, it will usually be random access for write cycles also. Unfortunately the term RAM is now commonly used to indicate a memory which is both read and write. This has nothing to do with the property of random access which indicates an identical access time for all memory cells.

Serial access


Serial access refers to a memory where the time taken to access data depends on the physical location of data within the memory. Examples of serial access memories are magnetic tapes and magnetic discs, where the access time depends on the time taken for the data to be moved to the read/write head.

Volatile memory

This describes a memory which loses its contents when the source of power is removed.

Read only memory

The contents of a read only memory may be read, but cannot be modified (written to).

Static memory

Semiconductor memory is either static or dynamic. Once data is written to a static memory cell it stays there until overwritten with new data, or the power is removed.

Dynamic memory

Semiconductor memory is either dynamic or static. Once data is written to a dynamic memory cell it must be refreshed (rewritten) periodically otherwise the electrical charge which represents the bit ‘leaks’ away. Dynamic memory, unlike static memory, requires a considerable amount of circuitry to control it. Despite this, it is still much cheaper than static memory.

Internal Organisation

All computer memory is regarded as a sequence of memory locations, where each location is identified by a unique address. In practice, however, the memory will consist of a number of memory chips suitably connected together to make up the memory capacity. Within any chip the memory is organised as a matrix of storage cells (see figure 7.1). Any cell in the matrix can be accessed by specifying its row and column co-ordinates. The memory chip circuitry has to translate any memory address into the corresponding co-ordinates.

In some devices a location holds only 1 bit. Others are organised such that a location holds a group of bits, typically 4 or 8. The internal storage organisation is one of the characteristics quoted when referring to a memory chip. A device which consists of 1024 memory locations each capable of storing 1 bit has an internal organisation of 1024 x 1 bit (1K x 1). Other typical organisations are 1024 x 8 bit (1K x 8) and 16384 x 1 (16K x 1). Chips organised with 1-bit locations must be suitably connected together to make up the desired word size.
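The arithmetic of making up a memory from chips of a given organisation can be sketched in a few lines. The `chips_needed` helper below is illustrative, not from the unit, and assumes both dimensions divide evenly (as they do for typical power-of-two parts):

```python
# Sketch: how many chips of a given organisation are needed to build
# a memory of a desired capacity and word size.

def chips_needed(chip_locations, chip_bits, mem_locations, mem_bits):
    chips_per_word = mem_bits // chip_bits   # chips in parallel to widen each location
    banks = mem_locations // chip_locations  # groups of chips to extend the address range
    return chips_per_word * banks

# Eight 1K x 1 chips make a 1K x 8 memory:
print(chips_needed(1024, 1, 1024, 8))    # 8
# A 16K x 8 memory from 16384 x 1 parts also needs 8 chips:
print(chips_needed(16384, 1, 16384, 8))  # 8
```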



Figure 7.1 A memory chip containing a matrix of 256 addressable locations or cells arranged as 16 rows and 16 columns. Any cell may be accessed by specifying its row and column coordinates. To simplify the diagram, only the address lines are shown. There will also be data lines to transfer data in and out of the cells.

As an example, the CPU requests access to address 240 (i.e. 1111 0000 in binary). This binary pattern is placed on the address bus. The four least significant bits (0000) determine the column co-ordinate - in this case, column 0. The four most significant bits (1111) are used by the row decoder to determine the row co-ordinate - in this case, row 15. The row and column address lines then access the single unique cell which corresponds to the supplied address.
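The row/column decoding just described can be sketched as follows; the `decode` helper is illustrative, not part of the unit:

```python
# Sketch of the decoding described above: an 8-bit address is split into
# a 4-bit row (high nibble) and a 4-bit column (low nibble), selecting
# one cell in a 16 x 16 matrix.

def decode(address):
    row = (address >> 4) & 0xF   # four most significant bits
    col = address & 0xF          # four least significant bits
    return row, col

print(decode(0b11110000))  # (15, 0) - address 240 selects row 15, column 0
```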

Semiconductor memory is fabricated on silicon chips in the same fashion as microprocessors and other digital devices. The low cost of semiconductor memory (as compared to other memory devices) is the main reason for the ready availability and low cost of microcomputers nowadays. The main characteristics of semiconductor memory are low cost, high density (bits per chip), and ease of use. Apart from these characteristics, memory can be graded in terms of capacity and speed of access.

8.3 Memory Types

A range of memory products exist, with differing characteristics. However, there are only two basic types:

Those whose contents can be read and also written to (volatile). Examples of this type are DRAM and SRAM.

Those whose contents can only be read (non-volatile). Some memories' contents may be permanent, while other memory chips may be removed from the computer and reprogrammed. Examples of this type are ROM, PROM, EPROM and EEPROM.


Figure 7.2 illustrates the classifications of the various types of semiconductor memory.

[Figure 7.2: Classification of semiconductor memory types. Semiconductor memory divides into volatile memory (static and dynamic) and ROM; ROM divides into masked ROM, PROM and EPROM, with the erasable types comprising UV EPROM and EEPROM.]

8.3.1 Volatile Memory

Computers use two types of RAM. These are termed Dynamic RAM (DRAM) and Static RAM (SRAM), and they have differing constructions and characteristics, including speed, complexity and cost. The speed of a chip is termed its 'access time' and is measured in nanoseconds (thousand-millionths of a second). Both types use arrays of transistor switches to store the binary data. The main difference lies in how the transistors are switched, and it is this which affects the chips' characteristics.

Both types use different circuitry and are therefore not interchangeable. Static RAM cannot be plugged into sockets intended for Dynamic RAM and vice versa.

Memory constructed from MOS devices requires so little power that only a battery back-up is needed to hold the data for days. The memory for a computer will be made up from a number of memory chips of a given organisation, suitably connected to the system buses to provide the desired memory capacity. These collections of chips on printed circuit boards have given rise to the phrase 'memory banks'.

Static Memory

Static memory is normally found only in smaller memories due to its high cost, and is easy to use from a designer's point of view. The internal arrangement is shown in figure 7.3.


The memory locations are arranged into a memory matrix, one location of which will be accessed according to the signals on the row and column decoders (determined from the split address bus).

[Figure 7.3: The internal arrangement of a static RAM chip, showing the control logic, the row and column decoders, the memory matrix, and the logic gates which open or close the data paths under the direction of the control logic unit according to a read or write operation.]

The address bus is split into a high part and a low part, as inputs to the row and column decoders. The control logic unit will open or close the gates in the data lines. The address decoding and read/write electronics are located on the chip, greatly simplifying the memory system design.

There are 3 control pins:

- CS (Chip Select)
- R/W (Read/Write)
- OE (Output Enable)

To indicate that a logic state of 0 enables an operation, an overline (bar) is placed above the pin's name. On the R/W pin, a read corresponds to logic 1 and a write corresponds to logic 0. The CS pin allows a number of memory chips to share the data bus, enabling only one to be active at any time.


The OE pin is used to turn on the chip's bus drivers when data is to be read from the chip.

A static RAM chip consists of a number of bistable elements called flip-flops. A flip-flop is a circuit consisting of 4 transistors, which can store 1 bit of data. The requirement of 4 transistors per bit of storage is the reason why the packing density of static RAM is so low. This is the major drawback of SRAM: a high component count per bit of storage, making it very expensive with a high power consumption (as compared to dynamic RAM). Nevertheless, SRAM offers very fast access times, typically about 10ns (a nanosecond is a thousand-millionth of a second). It is unlikely to be used as a main computer memory, being used instead as a fast cache memory.

An important parameter in memory circuit design is the timing diagram, which indicates the access and cycle times of a memory chip. Such a diagram illustrates the sequence of actions which take place during a read or write cycle. A simplified timing diagram for a static memory chip during a read cycle is shown in figure 7.4.

[Figure 7.4: A read-cycle timing diagram, showing the address lines, the CS line and the data lines against time. Reference points A to F mark the points at which the address becomes valid, the chip is selected and the data becomes valid; the read cycle time (trcyc) and read access time (tacc) are indicated.]

The address waveform appears as two parallel lines. The use of parallel lines is conventional and indicates that some lines will be in a logic 0 state while the others will be in a logic 1 state.

At the start of a read cycle some address lines will be going from low to high, while others are moving in the opposite direction. It is not the state of the address lines which is of interest, but the times at which a changeover takes place and the duration of valid signals on the lines.


At time A the contents of the address bus have fully changed from their previous value and are now stable. This time is the start of a read cycle and is taken as the reference point for a chip's timing measurements. The address will remain valid on the address lines until time C. The R/W line must be at a logic 1 state from time A to time B.

At time D the memory circuits have decoded the signals on the address lines, causing the chip select line to be placed in a logic 0 state. This has the effect of enabling this particular memory chip, while all others remain disabled. From this time the requested internal location will be accessed and its data will appear on the data terminals of the chip. At time E this operation is complete and the data has become valid on the data bus. The time from address valid, A, to data valid, E, is the read access time of the chip.

In summary, static memory

- is simplest from the designer's point of view
- has a high cost
- consumes more power (than dynamic RAM)
- has a low storage density (bits per unit area)
- is used mainly for small fast (cache) memory

Dynamic Memory

The necessity for higher memory capacities has led to the development of dynamic memory.

Compared to static RAM, it has the advantages of high storage density, low cost and low power consumption. Standard dynamic RAM chips are available with capacities of about 256K x 1 bit, and typical access times of 70ns.

One bit of information is stored as an electrical charge on one of the legs of a FET (field effect transistor). Compare this with the 4 transistors per bit, for static RAM. A dynamic RAM chip contains all of the necessary electronics to access a given location, to write a 0 or 1 in a write cycle and to read its contents in a read cycle.

There is a problem: the charge on the FET leg leaks away. Most dynamic chips are guaranteed to retain their data for 2 thousandths of a second (2ms). The answer is to rewrite the data to every location periodically; this is known as refreshing. In practice, it is only necessary to access a location to refresh its data. But nothing is ever that simple: dynamic memories require a lot of complex external electronics to make them work. We do not go into that here, except to note that one round in the battle goes to static RAM, since it involves considerably less complex control circuitry than dynamic RAM. As a consequence, dynamic


memory tends to be much less reliable than static memory. Bits are easily corrupted, requiring further electronics for error detection.

The necessity of regularly refreshing every memory cell makes DRAM a slow memory; access times of 50ns are considered fast. The timing diagrams are rather complex, and so we do not show them.
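The leak-and-refresh behaviour can be caricatured in a few lines. The `ToyDRAM` class below is a toy model built only on the figures quoted above (a 2ms retention window, and the fact that merely accessing a location refreshes it); it is not a description of real refresh circuitry:

```python
# A toy model of DRAM retention: any cell not rewritten, read or
# refreshed within the 2 ms window loses its bit.

RETENTION_MS = 2.0

class ToyDRAM:
    def __init__(self, size):
        self.bits = [0] * size
        self.last_touch = [0.0] * size  # time each cell's charge was last topped up

    def write(self, addr, bit, now_ms):
        self.bits[addr] = bit
        self.last_touch[addr] = now_ms

    def read(self, addr, now_ms):
        if now_ms - self.last_touch[addr] > RETENTION_MS:
            self.bits[addr] = 0          # charge has leaked away
        self.last_touch[addr] = now_ms   # accessing the cell refreshes it
        return self.bits[addr]

    def refresh(self, now_ms):
        for addr in range(len(self.bits)):
            self.read(addr, now_ms)      # a refresh is just a periodic access

mem = ToyDRAM(2)
mem.write(0, 1, now_ms=0.0)
mem.write(1, 1, now_ms=0.0)
mem.read(0, now_ms=1.5)          # cell 0 is touched (refreshed) in time
print(mem.read(0, now_ms=3.0))   # 1 - refreshed at 1.5 ms, still valid
print(mem.read(1, now_ms=3.0))   # 0 - 3 ms without a refresh: bit lost
```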

In summary, dynamic memory

- is complex to use from the designer's point of view
- has a low cost
- consumes less power (than static RAM)
- has a high storage density (bits per unit area)
- is used in large memory systems
- is not as reliable as SRAM

SAQ 7.1 What is the difference between ROM and RAM? Are there any exceptions to these differences?

8.3.2 Non-Volatile Memory - ROM

The main benefit of such chips is that their contents will not be lost if the power is removed.

These chips are used in a wide variety of electronic control circuits, from industrial machine tools to domestic washing machines. They are also the ideal choice for computer control. A computer's control programs need to be non-volatile. By placing part of the operating system software into a ROM chip (the system BIOS), the basic machine control programs are available to be run as soon as the computer is switched on. The programs in the ROM provide the machine's basic input and output functions, to allow application programs to be loaded and run.

Unfortunately, if the system is to be updated, the BIOS chip has to be replaced with a new chip which contains the new program routines. This requires opening the computer case and is a job for experienced support staff or technicians.

ROM chips are only capable of holding pre-determined programs. Due to the cost of manufacturing ROMs, they are only used in large quantity runs. This, in turn, means that


they are only used when the manufacturer is certain that the programs they contain are debugged.

MASKED ROM

The very first ROMs were hardwired devices that contained a preprogrammed set of data or instructions. The contents of the ROM had to be specified before chip production, so the actual data could be used to arrange the transistors inside the chip. Hardwired memories are still used, though they are now called masked ROMs to distinguish them from other types of ROM. The primary advantage of a masked ROM is its low production cost. Unfortunately, the cost is low only when large quantities of the same ROM are required.

PROM

The initials stand for ‘Programmable Read Only Memory’.

With ROM, the program was dedicated at the production stage; the program itself determined the physical construction of the ROM chip. A cheaper method for small and medium scale use is a ROM-type chip that can be programmed after the construction stage. Such chips are mass produced by a chip manufacturer, who has no idea of the use to which they will be put. Once the chip is purchased by a computer manufacturer, the company's programs can be embedded in it. This is achieved by 'blowing' fusible links inside the chip, to form the binary codes representing the program's machine code instructions, using a special piece of equipment called a device programmer. Every intact link represents a binary 1, with a blown link representing a binary 0.

Like the ROM, the PROM chip is also non-volatile. If the code or data stored in the PROM must be changed, the current device must be discarded. As a result, PROMs are also known as one-time programmable (OTP) devices.

EPROM

The initials stand for 'Erasable Programmable Read Only Memory', and it was introduced as a development tool. The problem with ROM and a programmed PROM was that, once produced, they were unalterable. This is perfectly fine for computer manufacture, once the program contents are fully debugged.

The EPROM is used to test an embedded program. Like a PROM, its links are blown to match the needs of the test program. The EPROM can then be used in the test computer. If the program is satisfactory, it can be used to create mass ROM or PROM versions.



If the program needs alteration, the EPROM is subjected to ultra-violet light for a few minutes. This 'heals' the ruptured links, allowing the chip to be blown to the next test program. The blowing and wiping-clean process can be repeated many times over before the chip starts to degenerate.

An EPROM chip is easily identified, as it has a glass window on top to allow entry of the ultra-violet light. Due to its expensive construction, it is only a viable alternative to ROM and PROM for small scale use.

Example EPROM chips would be the 2764-20 (64k, i.e. 8k x 8 bits) and the 27512-20 (512k, i.e. 64k x 8 bits). For continual development use, the EPROM is often replaced with a 'ROM emulator'. This is a piece of equipment which plugs into the ROM socket and acts like an EPROM. It contains RAM to avoid the program-erase cycles. Since it is self-powered, it appears to the main computer as a piece of ROM.

EEPROM

A variation on the EPROM is the EEPROM, the 'Electrically Erasable and Programmable Read Only Memory'. Like the EPROM, it has the benefit of holding its contents when the power is removed. However, its contents can be overwritten without resorting to prior cleaning with ultra-violet light. It is currently significantly more expensive than other memory devices, but is a likely candidate for future use in computers.

Many palmtop computers use ROM to store application programs, to overcome the storage problems associated with small machines. Due to their size, there is no space for a hard disc to store application software, so the machine stores a word processor, spreadsheet, personal organiser, etc. in ROM.

In most computers, however, the application software is loaded into, and run from, main memory.

Flash Memory

Flash memory is a form of non-volatile memory (EEPROM) that can be electrically erased and reprogrammed. It is erased and programmed in blocks consisting of multiple locations (usually 512 bytes in size). Flash memory costs far less than EEPROM and therefore has become the dominant technology wherever a significant amount of non-volatile, solid-state storage is needed.

Here are a few examples of Flash memory:

- Your computer's BIOS chip
- USB flash drives
- CompactFlash (most often found in digital cameras)


- SmartMedia (most often found in digital cameras)
- Memory Stick (most often found in digital cameras)

The characteristics of flash memory vary according to its type: either NOR or NAND.

NOR-based flash has long erase and write times, but has a full address/data (memory) interface that allows random access to any location. This makes it suitable for storage of program code that needs to be infrequently updated, such as a computer's BIOS or the firmware of set-top boxes. Its endurance is 10,000 to 1,000,000 erase cycles.

NAND flash has faster erase and write times, higher density, and lower cost per bit than NOR flash, and ten times the endurance. However, its I/O interface allows only sequential access to data. This makes it suitable for mass-storage devices such as PC cards and various memory cards, and somewhat less useful for computer memory.

A blank flash memory has all cells set to 1. It can be read or programmed a byte or word at a time in a random fashion, but it can only be erased a block at a time. Once a byte has been programmed, it cannot be changed again until the entire block is erased. Erasing is applied to one or more blocks by the application of a high voltage that returns all cells to a 1 state.
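These rules can be sketched as a toy model. The `FlashBlock` class and its 512-byte block size are illustrative (the block size echoes the figure quoted above), not a real device interface:

```python
# Sketch of the flash behaviour described above: programming can only
# clear bits (1 -> 0), and returning any bit to 1 requires erasing the
# whole block.

BLOCK_SIZE = 512

class FlashBlock:
    def __init__(self):
        self.data = [0xFF] * BLOCK_SIZE   # blank flash: all cells 1

    def program(self, offset, value):
        # the result is the AND of the old contents and the new value,
        # so bits can be cleared but never set
        self.data[offset] &= value

    def erase(self):
        # a high voltage returns every cell in the block to 1
        self.data = [0xFF] * BLOCK_SIZE

blk = FlashBlock()
blk.program(0, 0xA5)
print(hex(blk.data[0]))   # 0xa5
blk.program(0, 0xFF)      # attempting to 'change it back' does nothing...
print(hex(blk.data[0]))   # 0xa5
blk.erase()               # ...only a block erase restores the 1s
print(hex(blk.data[0]))   # 0xff
```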

Note that flash memory is not the same as flash RAM (which requires some power to retain its contents).

8.4 Cache Memory

Analysis of typical programs shows that most of their execution time is spent in a few main routines. Groups of instructions in a few localised areas are repeatedly executed while the remainder of the program is accessed relatively infrequently. This phenomenon is referred to as the 'locality of reference'.

If the active segments of a program and the variables used can be kept in a small fast memory, then the total execution time can be greatly reduced. Such a memory is known as a cache memory.



The cache is placed between the CPU and the main memory. It is built from a technology with a higher access rate than main memory (SRAM). Their relative access times usually differ by a factor of 5 to 10. It is quicker to fetch an instruction or piece of data from cache into the CPU than from main memory into the CPU.

The result of using cache memory is to reduce wasted CPU time and increase computer efficiency. Of course, the block of fast SRAM is likely to be substantially smaller than the computer's main memory, so the cache can only hold a portion of the data which is resident in main memory. The aim is to ensure that only the data most likely to be required is stored in cache memory.

When the content of a memory location is required to be read into the CPU, the cache is accessed and if the material is present there then it is transferred from the cache into the CPU without reference to the main memory. If the material is not present in the cache then the contents of the block of memory words containing the location specified are transferred into the cache from the main memory one word at a time, and then the required word is transferred into the CPU. Due to the locality of reference principle it is then likely that during later read requests the required material may already be in cache.

Usually a cache memory can store a number of blocks at any given time. The correspondence between main memory blocks and those in cache is specified by means of a “mapping function”.

When the cache is full and a new block is to be placed there then a decision must be made as to which block to remove. The rules for making this decision constitute the ‘replacement algorithm’. There are a variety of mapping functions and replacement algorithms possible. The most common is the Least Recently Used (LRU) algorithm which will overwrite the block which has gone the longest time without being referenced.
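The LRU rule can be sketched as follows. The `LRUCache` class, its 4-block capacity and the block numbers are all illustrative; an `OrderedDict` keeps the block that has gone longest without being referenced at the front, ready for eviction:

```python
# Sketch of the LRU replacement algorithm described above.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # block number -> block contents

    def access(self, block, load=lambda b: f"data for block {b}"):
        if block in self.blocks:
            self.blocks.move_to_end(block)     # hit: mark as most recently used
            return self.blocks[block]
        if len(self.blocks) >= self.capacity:  # miss with the cache full:
            self.blocks.popitem(last=False)    # evict the least recently used block
        self.blocks[block] = load(block)       # fetch the block from main memory
        return self.blocks[block]

cache = LRUCache(4)
for b in [0, 1, 2, 3, 0, 4]:   # accessing block 4 evicts block 1, not block 0
    cache.access(b)
print(list(cache.blocks))      # [2, 3, 0, 4]
```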

During a write operation two possibilities exist:

1. if the block containing the location is not present in the cache then the data is written directly to main memory and the cache is ignored. Note that blocks are not moved from memory to cache during a write.

2. if the block is in the cache then either both the cache location and the main memory location are updated simultaneously, or the cache location is updated and is flagged in some way. In the latter case, when the block is eventually overwritten in the cache, the location in main memory will be updated.

When a block of data is transferred from main memory into cache memory the one-off transfer will take place at the slowest speed - i.e. that of the main memory, wait states and all. Any subsequent requests for data to be transferred to the CPU will be at the higher cache memory speed.


When another area of data is requested - one not already stored in cache - the data is transferred from main memory into cache memory, along with the contiguous data in main memory. The fetch of the first piece of data, in this case, is actually slower than normal, since an entire block of data was transferred at a wait-state speed. However, since subsequent fetches from that memory block will be faster, the overall effect is to speed up processing.

If the cache contains data transferred due to a previous CPU request, there is no guarantee that the next CPU request will be for data from the same block. In that circumstance, there will be no cache hit i.e. the requested data is not to be found in cache memory. The requested data, as part of a new block of data, will be transferred from main memory to cache. This means that the time taken to transfer the previous block was largely wasted and the efficiency of the computer has been reduced. The benefits of caching vary with the type of application in use. A program which uses a lot of data transfers will benefit from a large cache memory. On the other hand, a program which is processor intensive (any number-crunching application) would require less data transfers and would not benefit to the same extent.

The benefits of caching will vary with the efficiency of the application in use. Many large programs are divided into smaller sections of code, called ‘overlays’, which are paged into main memory when required. Well-written programs will make as few jumps between overlays as possible. This, in turn, means less data transfers between main memory and cache memory.

Badly written programs may require more overlay activity. This will result in more redundant caching activity and less processor efficiency. In this case, the larger the cache memory, the worse will be the effect. A small cache may still be beneficial.

Despite the limitations listed, the net effect of using cache memory is to improve the computer's processing times. Intel state that their cache tests show a hit rate of almost 90%. In other words, only about 12% of data accesses result in wait states, with most accesses involving no wait states. This produces a much improved throughput.
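Figures like these translate into an average access time via the usual weighted sum. The 10ns and 70ns values below simply reuse the SRAM and DRAM access times quoted earlier in the unit; treat them as illustrative:

```python
# Effective (average) access time of a cache + main memory system:
#   t_avg = hit_rate * t_cache + (1 - hit_rate) * t_miss

def effective_access_ns(hit_rate, t_cache_ns, t_miss_ns):
    return hit_rate * t_cache_ns + (1 - hit_rate) * t_miss_ns

print(effective_access_ns(0.90, 10, 70))  # 16.0 - close to cache speed
print(effective_access_ns(0.50, 10, 70))  # 40.0 - poor locality erodes the benefit
```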

8.5 Virtual Memory

In a virtual memory system the processor sees a very large array of memory, which appears to be entirely composed of high speed main memory. In reality the physical memory is a small high-speed RAM and a much slower disk system. The advantages offered by virtual memory are

• it allows the execution of programs much larger than the physical memory would normally permit

• the programmer is freed from all concerns to do with a small main memory


Virtual memory systems divide the main memory into pages of 1K to 16K bytes. This allows several pages of a virtual program to be resident in main memory at any time.

As an example, consider a virtual memory space of 256K, corresponding to the logical address space of a processor with an 18-bit address bus (see figure 7.13). The virtual memory space is divided into 64 pages of 4K each. The main memory is divided into 16 pages of 4K each, giving a total physical memory of 64K. Each of these 4K blocks is called a page-frame because it holds one page of the virtual memory. The processor is thus able to directly address data anywhere in one of its 64 pages, but only 16 may be in main memory at any one time. The rest reside, where else, but on disk.

[Figure 7.13: The relationship between virtual and physical address space. 256K of virtual memory space (64 pages of 4K, addresses 00000 to 3FFFF) maps onto 64K of physical memory (16 page-frames of 4K, addresses 0000 to FFFF).]

As an example, referring to figure 7.13, suppose that the processor generates the address 03DFF. The page containing the data for the logical addresses 03000 to 03FFF is actually stored in main memory in addresses F000 to FFFF (page-frame 15). Therefore the logical address 03DFF must be translated into the physical address FDFF. This raises two questions:

1. how is a logical address, which is generated by the processor, translated into a physical address?

2. what happens if the logical address has no corresponding physical address because the appropriate physical address is not in the main memory?


The processor maintains a page-table (see figure 7.14), which maps the pages in main memory onto the processor's own logical address space.

[Figure 7.14: The page table and address mapping corresponding to figure 7.13. The six most significant bits of the 18-bit logical address select one of the 64 page-table entries; each entry holds an availability bit and a 4-bit page-frame number (the four msb's of the physical address). The 12 least significant bits of the logical address select a location within a 4K page frame. If the availability bit shows the page is not in RAM, a page fault is raised.]

The page table has three entries for every page in the virtual memory: a page number (0 to 63, in this case), an availability bit, which is 1 if the page is in main memory and 0 if it is not, and the page-frame number (0 to 15, in this case).

The 18 bit logical address generated by the processor consists of two fields, a six bit address which selects a particular page, and a 12 bit address which selects a location within a page.

From the table you can see that the logical address 03DFF causes the entry for page 3 of the page table to be interrogated. This returns page-frame address 15, and the data is accessed from location FDFF in the physical memory.
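The translation walked through above can be sketched as follows. The `translate` helper and the sample page table (with page 3 resident in frame 15, as in the example) are illustrative:

```python
# Sketch of the mapping described above: an 18-bit logical address splits
# into a 6-bit page number and a 12-bit offset; the page table maps page
# numbers to 4-bit page-frame numbers, or None when the page is on disk.

PAGE_BITS = 12          # 4K pages

def translate(logical, page_table):
    page = logical >> PAGE_BITS               # six most significant bits
    offset = logical & ((1 << PAGE_BITS) - 1) # location within the page
    frame = page_table.get(page)
    if frame is None:
        raise LookupError(f"page fault: page {page} not in main memory")
    return (frame << PAGE_BITS) | offset

page_table = {0: 0, 1: 1, 3: 15}   # resident pages (illustrative)
print(hex(translate(0x03DFF, page_table)))  # 0xfdff
```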

Sometimes a virtual address will be generated and the page in which the data lies will not be in the main memory. In this case the availability bit in the page table will be zero and a page-fault is generated. This is similar to an interrupt causing the OS to intervene. The OS fetches the missing page from disk, loads it into main memory and updates the page table accordingly.

This is called 'demand-paging' because a new page is not brought in until needed. Once the main memory is full, a new page must overwrite an old one. It is usually the least recently used page which is sacrificed, every time a new page is to be loaded in.


Virtual memory systems are complex and require expensive hardware and software. They do, however, offer many advantages.

TAA 7.1

Explain why both RAM and ROM are required within a microprocessor-based system. Many systems contain static RAM, dynamic RAM and masked ROM. Describe briefly the electronic configuration of each of these types of semiconductor memory and explain how each would typically be used in a PC.

Answers to SAQs

SAQ 7.1

The contents of ROM (read only memory) are permanently placed into the chip by the manufacturer. It is non-volatile and cannot be overwritten. It holds the data necessary to begin operation of a PC.

RAM (random access memory) is volatile. It holds the application and data currently being worked upon. It can be overwritten. RAM can be either static or dynamic. The CMOS memory in a PC is an example of RAM that does not lose its data when the power is removed, because it is maintained by a battery contained in the computer.
