Virtual Memory
Many years ago people were first confronted with programs that were too big to fit in the available memory. The solution usually
adopted was to split the program into pieces, called overlays. Overlay 0 would start running first. When it was done, it would call
another overlay. Some overlay systems were highly complex, allowing multiple overlays in memory at once. The overlays were
kept on the disk and swapped in and out of memory by the operating system, dynamically, as needed.
Although the actual work of swapping overlays in and out was done by the system, the decision of how to split the program into
pieces had to be done by the programmer. Splitting up large programs into small, modular pieces was time consuming and
boring. It did not take long before someone thought of a way to turn the whole job over to the computer.
The method that was devised has come to be known as virtual memory. The basic idea behind virtual memory is that the
combined size of the program, data, and stack may exceed the amount of physical memory available for it. The operating
system keeps those parts of the program currently in use in main memory, and the rest on the disk. For example, a 512-MB
program can run on a 256-MB machine by carefully choosing which 256 MB to keep in memory at each instant, with pieces of
the program being swapped between disk and memory as needed.
Virtual memory can also work in a multiprogramming system, with bits and pieces of many programs in memory at once. While a
program is waiting for part of itself to be brought in, it is waiting for I/O and cannot run, so the CPU can be given to another
process, the same way as in any other multiprogramming system.
Most virtual memory systems use a technique called paging, which we will now describe. On any computer, there exists a set of
memory addresses that programs can produce. When a program uses an instruction like
MOV REG,1000
it does this to copy the contents of memory address 1000 to REG (or vice versa, depending on the computer). Addresses can be
generated using indexing, base registers, segment registers, and other ways.
These program-generated addresses are called virtual addresses and form the virtual address space. On computers without
virtual memory, the virtual address is put directly onto the memory bus and causes the physical memory word with the same
address to be read or written. When virtual memory is used, the virtual addresses do not go directly to the memory bus. Instead,
they go to an MMU (Memory Management Unit) that maps the virtual addresses onto the physical memory addresses as
illustrated in Figure 1.
Figure 1. The position and function of the MMU. Here the MMU is shown as being a part of the
CPU chip because it commonly is nowadays.
Paging avoids the considerable problem of fitting memory chunks of varying sizes onto the backing store; most memory-management schemes used before the introduction of paging suffered from this problem. The problem arises because, when some code fragments or data residing in main memory need to be swapped out, space must be found on the backing store.
The backing store also has the same fragmentation problems that appear in connection with main memory, except that access is much slower, so compaction is impossible. Because of its advantages over earlier methods, paging in its various forms is commonly used in most operating systems.
Traditionally, support for paging has been handled by hardware. However, recent designs have implemented paging by closely
integrating the hardware and operating system, especially on 64-bit microprocessors.
Basic Method
The basic method for implementing paging involves breaking physical memory into fixed-sized blocks called frames and
breaking logical memory into blocks of the same size called pages. When a process is to be executed, its pages are loaded
into any available memory frames from the backing store. The backing store is divided into fixed-sized blocks that are of the
same size as the memory frames.
The hardware support for paging is illustrated in Figure 2. Every address generated by the CPU is divided into two parts: a
page number (p) and a page offset (d). The page number is used as an index into a page table. The page table contains the
base address of each page in physical memory. This base address is combined with the page offset to define the physical
memory address that is sent to the memory unit.
Figure 2. Paging hardware
Figure 3. Paging model of logical and physical memory
The paging model of memory is shown in Figure 3.
The page size (like the frame size) is defined by the hardware. The size of a page is typically a power of 2, varying between 512
bytes and 16 MB per page, depending on the computer architecture. The selection of a power of 2 as a page size makes the
translation of a logical address into a page number and page offset particularly easy. If the size of the logical address space is 2^m, and a page size is 2^n addressing units (bytes or words), then the high-order m - n bits of a logical address designate the page number, and the n low-order bits designate the page offset. Thus, the logical address is as follows:
    | page number p | page offset d |
    |  m - n bits   |    n bits     |

where p is an index into the page table and d is the displacement within the page.
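Because the page size is a power of 2, this split is just a shift and a mask. A minimal sketch, using the 16-bit address and 4-KB page parameters of the examples that follow (m = 16, n = 12):

```python
# Split a logical address into page number (p) and offset (d).
# Assumes m = 16 address bits and n = 12 offset bits (4-KB pages),
# matching the examples in the text.
M, N = 16, 12

def split(addr):
    p = addr >> N                  # high-order m - n bits: page number
    d = addr & ((1 << N) - 1)      # low-order n bits: offset
    return p, d

print(split(8196))   # (2, 4): page 2, offset 4
print(split(20500))  # (5, 20): page 5, offset 20
```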
A very simple example of how this mapping works is shown in Figure 4. In this example, we have a computer that can
generate 16-bit addresses, from 0 up to 64K. These are the virtual addresses. This computer, however, has only 32 KB of
physical memory, so although 64-KB programs can be written, they cannot be loaded into memory in their entirety and run.
A complete copy of a program's memory image, up to 64 KB, must be present on the disk, however, so that pieces can be
brought in as needed.
Figure 4. The relation between virtual addresses and physical memory addresses is given by the page table.
When the program tries to access address 0, for example, using the instruction
MOV REG,0
virtual address 0 is sent to the MMU. The MMU sees that this virtual address falls in page 0 (0 to 4095), which according to its
mapping is page frame 2 (8192 to 12287). It thus transforms the address to 8192 and outputs address 8192 onto the bus. The
memory knows nothing at all about the MMU and just sees a request for reading or writing address 8192, which it honors. Thus,
the MMU has effectively mapped all virtual addresses between 0 and 4095 onto physical addresses 8192 to 12287.
Similarly, an instruction
MOV REG,8192
is effectively transformed into
MOV REG,24576
because virtual address 8192 is in virtual page 2 and this page is mapped onto physical page frame 6 (physical addresses 24576
to 28671). As a third example, virtual address 20500 is 20 bytes from the start of virtual page 5 (virtual addresses 20480 to 24575)
and maps onto physical address 12288 + 20 = 12308.
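The mapping in these three examples can be sketched as a lookup table. The entries below are only the pieces of the Fig. 4 map that the text actually states (page 0 maps to frame 2, page 2 to frame 6, and page 5 to frame 3, since 12288 = 3 x 4096); the rest of the figure is not reproduced:

```python
PAGE_SIZE = 4096
# Partial MMU map reconstructed from the text's Fig. 4 examples.
page_table = {0: 2, 2: 6, 5: 3}

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[page]       # a missing key would mean a page fault
    return frame * PAGE_SIZE + offset

print(translate(0))      # 8192
print(translate(8192))   # 24576
print(translate(20500))  # 12308
```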
By itself, this ability to map the 16 virtual pages onto any of the eight page frames by setting the MMU's map appropriately does
not solve the problem that the virtual address space is larger than the physical memory. Since we have only eight physical page
frames, only eight of the virtual pages in Figure 4, are mapped onto physical memory. The others, shown as crosses in the figure,
are not mapped. In the actual hardware, a present/absent bit keeps track of which pages are physically present in memory.
What happens if the program tries to use an unmapped page, for example, by using the instruction
MOV REG,32780
which is byte 12 within virtual page 8 (starting at 32768)? The MMU notices that the page is unmapped (indicated by a cross in the
figure) and causes the CPU to trap to the operating system. This trap is called a page fault. The operating system picks a little-used page frame and writes its contents back to the disk. It then fetches the page just referenced into the page frame just freed,
changes the map, and restarts the trapped instruction.
For example, if the operating system decided to evict page frame 1, it would load virtual page 8 at physical address 4K and make
two changes to the MMU map. First, it would mark virtual page 1's entry as unmapped, to trap any future accesses to virtual
addresses between 4K and 8K. Then it would replace the cross in virtual page 8's entry with a 1, so that when the trapped
instruction is re-executed, it will map virtual address 32780 onto physical address 4108.
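This fault-and-evict sequence can be sketched as follows. The eviction policy here (always reclaim whatever page occupies frame 1) mirrors the text's illustrative choice, not a real replacement algorithm, and the initial map is hypothetical apart from the stated entries:

```python
PAGE_SIZE = 4096
# Initial map: page 0 -> frame 2 and page 2 -> frame 6 as in Fig. 4;
# virtual page 1 sits in frame 1, as in the eviction example.
page_table = {0: 2, 1: 1, 2: 6}

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:                     # present/absent bit 0: page fault
        victim = next(p for p, f in page_table.items() if f == 1)
        page_table[page] = page_table.pop(victim)  # unmap victim, reuse frame 1
    return page_table[page] * PAGE_SIZE + offset

print(translate(32780))  # 4108 = 4096 + 12: page 8 now occupies frame 1
```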
Now let us look inside the MMU to see how it works and why we have chosen to use a page size that is a power of 2. In Fig. 5
we see an example of a virtual address, 8196 (0010000000000100 in binary), being mapped using the MMU map of Fig. 4. The
incoming 16-bit virtual address is split into a 4-bit page number and a 12-bit offset. With 4 bits for the page number, we can have
16 pages, and with 12 bits for the offset, we can address all 4096 bytes within a page.
Figure 5. The internal operation of the MMU with 16 4-KB pages.
The page number is used as an index into the page table, yielding the number of the page frame corresponding to that virtual
page. If the present/absent bit is 0, a trap to the operating system is caused. If the bit is 1, the page frame number found in the
page table is copied to the high-order 3 bits of the output register, along with the 12-bit offset, which is copied unmodified from
the incoming virtual address. Together they form a 15-bit physical address. The output register is then put onto the memory bus
as the physical memory address.
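The same bit manipulation can be sketched directly, using the page 2 to frame 6 mapping from Fig. 4:

```python
# Bit-level view of the MMU operation: a 16-bit virtual address becomes
# a 4-bit page number and 12-bit offset; the 3-bit frame number replaces
# the page number, forming a 15-bit physical address.
page_table = {2: 6}   # from Fig. 4: virtual page 2 -> page frame 6

def mmu(vaddr16):
    page = vaddr16 >> 12          # top 4 bits
    offset = vaddr16 & 0xFFF      # bottom 12 bits
    frame = page_table[page]      # present/absent bit assumed 1 here
    return (frame << 12) | offset

print(mmu(8196))   # 24580: frame 6 (110 binary) followed by offset 4
```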
As another concrete (although minuscule) example, consider the memory in Figure 6. Using a page size of 4 bytes and a
physical memory of 32 bytes (8 pages), we show how the user's view of memory can be mapped into physical memory. Logical
address 0 is page 0, offset 0. Indexing into the page table, we find that page 0 is in frame 5. Thus, logical address 0 maps to
physical address 20 (= (5 x 4) + 0). Logical address 3 (page 0, offset 3) maps to physical address 23 (= (5 x 4) + 3). Logical address 4 is page 1, offset 0; according to the page table, page 1 is mapped to frame 6. Thus, logical address 4 maps to physical address 24 (= (6 x 4) + 0). Logical address 13 (page 3, offset 1) maps to frame 2, giving physical address 9 (= (2 x 4) + 1).
Figure 6. Paging example for a 32-byte memory with 4-byte pages
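The four lookups above can be checked mechanically. The page table below is reconstructed from the stated mappings (page 0 in frame 5, page 1 in frame 6, and address 13 mapping to 9 implies page 3 in frame 2); frame 1 for page 2 is an assumption, since the text never translates an address on page 2:

```python
# The 32-byte example: 4-byte pages, 8 frames.
PAGE_SIZE = 4
page_table = [5, 6, 1, 2]   # index = page number, value = frame number

def translate(logical):
    p, d = divmod(logical, PAGE_SIZE)
    return page_table[p] * PAGE_SIZE + d

for la in (0, 3, 4, 13):
    print(la, '->', translate(la))   # 20, 23, 24, 9
```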
Page Tables
In the simplest case, the mapping of virtual addresses onto physical addresses is as we have just described it. The virtual
address is split into a virtual page number (high-order bits) and an offset (low-order bits). For example, with a 16-bit address
and a 4-KB page size, the upper 4 bits could specify one of the 16 virtual pages and the lower 12 bits would then specify the
byte offset (0 to 4095) within the selected page. However, a split with 3 or 5 or some other number of bits for the page is also
possible. Different splits imply different page sizes.
The virtual page number is used as an index into the page table to find the entry for that virtual page. From the page table entry,
the page frame number (if any) is found. The page frame number is attached to the high-order end of the offset, replacing the
virtual page number, to form a physical address that can be sent to the memory.
The purpose of the page table is to map virtual pages onto page frames. Mathematically speaking, the page table is a function,
with the virtual page number as argument and the physical frame number as result. Using the result of this function, the virtual
page field in a virtual address can be replaced by a page frame field, thus forming a physical memory address.
Despite this simple description, two major issues must be faced:
1. The page table can be extremely large.
2. The mapping must be fast.
The first point follows from the fact that modern computers use virtual addresses of at least 32 bits. With, say, a 4-KB page
size, a 32-bit address space has 1 million pages, and a 64-bit address space has more than you want to contemplate. With 1
million pages in the virtual address space, the page table must have 1 million entries. And remember that each process needs
its own page table (because it has its own virtual address space).
The second point is a consequence of the fact that the virtual-to-physical mapping must be done on every memory reference. A
typical instruction has an instruction word, and often a memory operand as well. Consequently, it is necessary to make one,
two, or sometimes more page table references per instruction. If an instruction takes, say, 1 nsec, the page table lookup must
be done in under 250 psec to avoid becoming a major bottleneck.
The need for large, fast page mapping is a significant constraint on the way computers are built. Although the problem is most
serious with top-of-the-line machines that must be very fast, it is also an issue at the low end, where cost and the price/performance ratio are critical. In this section and the following ones, we will look at page table design in detail and show a number of hardware solutions that have been used in actual computers.
The simplest design (at least conceptually) is to have a single page table consisting of an array of fast hardware registers, with
one entry for each virtual page, indexed by virtual page number, as shown in Fig. 5. When a process is started up, the
operating system loads the registers with the process' page table, taken from a copy kept in main memory. During process
execution, no more memory references are needed for the page table. The advantages of this method are that it is
straightforward and requires no memory references during mapping. A disadvantage is that it is potentially expensive (if the
page table is large). Also, having to load the full page table at every context switch hurts performance.
You may have noticed that paging itself is a form of dynamic relocation. Every logical address is bound by the paging hardware
to some physical address. Using paging is similar to using a table of base (or relocation) registers, one for each frame of memory.
When we use a paging scheme, we have no external fragmentation: Any free frame can be allocated to a process that needs it.
However, we may have some internal fragmentation. Notice that frames are allocated as units. If the memory requirements of a
process do not happen to coincide with page boundaries, the last frame allocated may not be completely full. For example, if
page size is 2,048 bytes, a process of 72,766 bytes would need 35 pages plus 1,086 bytes. It would be allocated 36 frames,
resulting in an internal fragmentation of 2,048 - 1,086 = 962 bytes. In the worst case, a process would need n pages plus 1 byte.
It would be allocated n + 1 frames, resulting in an internal fragmentation of almost an entire frame.
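The arithmetic in this example, sketched:

```python
# Internal fragmentation for the example above: a 72,766-byte process
# with 2,048-byte pages.
import math

page_size = 2048
process_size = 72_766

frames = math.ceil(process_size / page_size)     # whole frames allocated
frag = frames * page_size - process_size         # unused bytes in last frame

print(frames, frag)   # 36 frames, 962 bytes of internal fragmentation
```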
If process size is independent of page size, we expect internal fragmentation to average one-half page per process. This
consideration suggests that small page sizes are desirable. However, overhead is involved in each page-table entry, and this
overhead is reduced as the size of the pages increases. Also, disk I/O is more efficient when the amount of data being transferred is larger. Today, pages typically are between 4 KB and 8 KB in size, and some systems support even larger page
sizes. Some CPUs and kernels even support multiple page sizes. For instance, Solaris uses page sizes of 8 KB and 4 MB,
depending on the data stored by the pages. Usually, each page-table entry is 4 bytes long, but that size can vary as well. A 32-bit
entry can point to one of 232 physical page frames. If frame size is 4 KB, then a system with 4-byte entries can address 244 bytes
(or 16 TB) of physical memory. When a process arrives in the system to be executed, its size, expressed in pages, is examined.
Each page of the process needs one frame. Thus, if the process requires n pages, at least n frames must be available in
memory. If n frames are available, they are allocated to this arriving process. The first page of the process is loaded into one of
the allocated frames, and the frame number is put in the page table for this process. The next page is loaded into another frame,
and its frame number is put into the page table, and so on (Figure 7).
Figure 7. Free frames (a) before allocation and (b) after allocation
Multilevel Page Tables
To get around the problem of having to store huge page tables in memory all the time, many computers use a multilevel page
table. A simple example is shown in Figure 8. In Fig. 8(a) we have a 32-bit virtual address that is partitioned into a 10-bit PT1
field, a 10-bit PT2 field, and a 12-bit Offset field. Since offsets are 12 bits, pages are 4 KB, and there are a total of 2^20 of them.
Figure 8. (a) A 32-bit address with two page table fields. (b) Two-level page tables.
The secret to the multilevel page table method is to avoid keeping all the page tables in memory all the time. In particular, those
that are not needed should not be kept around. Suppose, for example, that a process needs 12 megabytes, the bottom 4
megabytes of memory for program text, the next 4 megabytes for data, and the top 4 megabytes for the stack. In between the
top of the data and the bottom of the stack is a gigantic hole that is not used.
In Fig. 8 we see how the two-level page table works in this example. On the left we have the top-level page
table, with 1024 entries, corresponding to the 10-bit PT1 field. When a virtual address is presented to the
MMU, it first extracts the PT1 field and uses this value as an index into the top-level page table. Each of these
1024 entries represents 4M because the entire 4-gigabyte (i.e., 32-bit) virtual address space has been
chopped into 1024 chunks of 4M each.
The entry located by indexing into the top-level page table yields the address or the page frame number of a
second-level page table. Entry 0 of the top-level page table points to the page table for the program text, entry
1 points to the page table for the data, and entry 1023 points to the page table for the stack. The other
(shaded) entries are not used. The PT2 field is now used as an index into the selected second-level page
table to find the page frame number for the page itself.
As an example, consider the 32-bit virtual address 0x00403004 (4,206,596 decimal), which is 12,292 bytes
into the data. This virtual address corresponds to PT1 = 1, PT2 = 3, and Offset = 4. The MMU first uses PT1 to
index into the top-level page table and obtain entry 1, which corresponds to addresses 4M to 8M. It then uses
PT2 to index into the second-level page table just found and extract entry 3, which corresponds to addresses
12,288 to 16,383 within its 4M chunk (i.e., absolute addresses 4,206,592 to 4,210,687). This entry contains
the page frame number of the page containing virtual address 0x00403004. If that page is not in memory, the
present/absent bit in the page table entry will be zero, causing a page fault. If the page is in memory, the page
frame number taken from the second-level page table is combined with the offset (4) to construct a physical
address. This address is put on the bus and sent to memory.
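The walk just described can be sketched as follows. The page-table contents are reconstructed from the text (top-level entry 1 points to the data's second-level table, whose entry 3 holds the frame); the frame number itself (14) is a hypothetical value, since the text does not give one:

```python
# Two-level translation of the example address 0x00403004.
second_level_data = {3: 14}            # PT2 index -> frame number (14 is made up)
top_level = {1: second_level_data}     # PT1 index -> second-level table

def walk(vaddr):
    pt1 = vaddr >> 22                  # top 10 bits
    pt2 = (vaddr >> 12) & 0x3FF        # next 10 bits
    offset = vaddr & 0xFFF             # low 12 bits
    frame = top_level[pt1][pt2]        # absence at either level = page fault
    return (frame << 12) | offset

print(hex(walk(0x00403004)))   # 0xe004: frame 14 shifted up, plus offset 4
```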
The interesting thing to note about Fig. 8 is that although the address space contains over a million pages,
only four page tables are actually needed: the top-level table, the second-level tables for 0 to 4M, 4M to 8M,
and the top 4M. The present/absent bits in 1021 entries of the top-level page table are set to 0, forcing a page
fault if they are ever accessed. Should this occur, the operating system will notice that the process is trying to
reference memory that it is not supposed to and will take appropriate action, such as sending it a signal or
killing it. In this example we have chosen round numbers for the various sizes and have picked PT1 equal to
PT2 but in actual practice other values are also possible, of course.
The two-level page table system of Fig. 8 can be expanded to three, four, or more levels. Additional levels
give more flexibility, but it is doubtful that the additional complexity is worth it beyond two levels.
Structure of a Page Table Entry
Let us now turn from the structure of the page tables in the large, to the details of a single page table entry. The exact layout of
an entry is highly machine dependent, but the kind of information present is roughly the same from machine to machine. In
Figure 9 we give a sample page table entry. The size varies from computer to computer, but 32 bits is a common size. The most
important field is the page frame number. After all, the goal of the page mapping is to locate this value. Next to it we have the
present/absent bit. If this bit is 1, the entry is valid and can be used. If it is 0, the virtual page to which the entry belongs is not
currently in memory. Accessing a page table entry with this bit set to 0 causes a page fault.
Figure 9. A typical page table entry.
The protection bits tell what kinds of access are permitted. In the simplest form, this field contains 1 bit, with 0 for read/write and
1 for read only. A more sophisticated arrangement is having 3 independent bits, one bit each for individually enabling reading,
writing, and executing the page.
The modified and referenced bits keep track of page usage. When a page is written to, the hardware automatically sets the
modified bit. This bit is used when the operating system decides to reclaim a page frame. If the page in it has been modified (i.e.,
is "dirty"), it must be written back to the disk. If it has not been modified (i.e., is "clean"), it can just be abandoned, since the disk
copy is still valid. The bit is sometimes called the dirty bit, since it reflects the page's state.
The referenced bit is set whenever a page is referenced, either for reading or writing. Its value is to help the operating system
choose a page to evict when a page fault occurs. Pages that are not being used are better candidates than pages that are, and
this bit plays an important role in several of the page replacement algorithms that we will study later in this chapter.
Finally, the last bit allows caching to be disabled for the page. This feature is important for pages that map onto device registers
rather than memory. If the operating system is sitting in a tight loop waiting for some I/O device to respond to a command it was
just given, it is essential that the hardware keep fetching the word from the device, and not use an old cached copy. With this bit,
caching can be turned off. Machines that have a separate I/O space and do not use memory mapped I/O do not need this bit.
Note that the disk address used to hold the page when it is not in memory is not part of the page table. The reason is simple.
The page table holds only that information the hardware needs to translate a virtual address to a physical address. Information
the operating system needs to handle page faults is kept in software tables inside the operating system. The hardware does not
need it.
Hardware Support
Each operating system has its own methods for storing page tables. Most allocate a page table for each process. A pointer to
the page table is stored with the other register values (like the instruction counter) in the process control block. When the
dispatcher is told to start a process, it must reload the user registers and define the correct hardware page-table values from
the stored user page table.
The CPU dispatcher reloads these registers, just as it reloads the other registers. Instructions to load or modify the page-table
registers are, of course, privileged, so that only the operating system can change the memory map. The DEC PDP-11 is an
example of such an architecture. The address consists of 16 bits, and the page size is 8 KB. The page table thus consists of
eight entries that are kept in fast registers.
The use of registers for the page table is satisfactory if the page table is reasonably small (for example, 256 entries). Most
contemporary computers, however, allow the page table to be very large (for example, 1 million entries). For these machines,
the use of fast registers to implement the page table is not feasible. Rather, the page table is kept in main memory, and a page-table base register (PTBR) points to the page table. Changing page tables requires changing only this one register,
substantially reducing context-switch time.
The problem with this approach is the time required to access a user memory location. If we want to access location i, we must
first index into the page table, using the value in the PTBR offset by the page number. This task requires a memory access. It
provides us with the frame number, which is combined with the page offset to produce the actual address. We can then access
the desired place in memory. With this scheme, two memory accesses are needed to access a byte (one for the page-table
entry, one for the byte). Thus, memory access is slowed by a factor of 2. This delay would be intolerable under most
circumstances. We might as well resort to swapping!
The standard solution to this problem is to use a special, small, fast-lookup hardware cache, called a translation look-aside
buffer (TLB). The TLB is associative, high-speed memory. Each entry in the TLB consists of two parts: a key (or tag) and a
value. When the associative memory is presented with an item, the item is compared with all keys simultaneously. If the item is
found, the corresponding value field is returned. The search is fast; the hardware, however, is expensive. Typically, the number
of entries in a TLB is small, often numbering between 64 and 1,024.
The TLB is used with page tables in the following way. The TLB contains only a few of the page-table entries. When a logical
address is generated by the CPU, its page number is presented to the TLB. If the page number is found, its frame number is
immediately available and is used to access memory. The whole task may take less than 10 percent longer than it would if an
unmapped memory reference were used.
If the page number is not in the TLB (known as a TLB miss), a memory reference to the page table must be made. When the
frame number is obtained, we can use it to access memory (Figure 10). In addition, we add the page number and frame
number to the TLB, so that they will be found quickly on the next reference. If the TLB is already full of entries, the operating
system must select one for replacement. Replacement policies range from least recently used (LRU) to random. Furthermore,
some TLBs allow entries to be wired down, meaning that they cannot be removed from the TLB. Typically, TLB entries for
kernel code are wired down.
Figure 10. Paging hardware with TLB
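The hit/miss path in Figure 10 can be sketched as a small cache in front of the page table. The page-table contents, the TLB size, and the FIFO replacement policy here are all illustrative choices (real TLBs use LRU, random, or other hardware policies):

```python
from collections import OrderedDict

page_table = {0: 5, 1: 6, 2: 1, 3: 2}   # hypothetical page -> frame map
TLB_SIZE = 2
tlb = OrderedDict()                      # small fully associative cache

def lookup(page):
    if page in tlb:                      # TLB hit: frame available at once
        return tlb[page]
    frame = page_table[page]             # TLB miss: consult the page table
    if len(tlb) >= TLB_SIZE:
        tlb.popitem(last=False)          # full: evict the oldest entry
    tlb[page] = frame                    # cache for the next reference
    return frame

lookup(0); lookup(1)        # two misses fill the TLB
print(list(tlb))            # [0, 1]
lookup(2)                   # third miss evicts page 0's entry
print(list(tlb))            # [1, 2]
```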
Some TLBs store address-space identifiers (ASIDs) in each TLB entry. An ASID uniquely identifies each process and is used to
provide address-space protection for that process. When the TLB attempts to resolve virtual page numbers, it ensures that the
ASID for the currently running process matches the ASID associated with the virtual page. If the ASIDs do not match, the attempt
is treated as a TLB miss. In addition to providing address-space protection, an ASID allows the TLB to contain entries for several
different processes simultaneously. If the TLB does not support separate ASIDs, then every time a new page table is selected (for
instance, with each context switch), the TLB must be flushed (or erased) to ensure that the next executing process does not use
the wrong translation information. Otherwise, the TLB could include old entries that contain valid virtual addresses but have
incorrect or invalid physical addresses left over from the previous process.
The percentage of times that a particular page number is found in the TLB is called the hit ratio. An 80-percent hit ratio means that
we find the desired page number in the TLB 80 percent of the time. If it takes 20 nanoseconds to search the TLB and 100
nanoseconds to access memory, then a mapped-memory access takes 120 nanoseconds when the page number is in the TLB. If
we fail to find the page number in the TLB (20 nanoseconds), then we must first access memory for the page table and frame
number (100 nanoseconds) and then access the desired byte in memory (100 nanoseconds), for a total of 220 nanoseconds. To
find the effective memory-access time, we weight each case by its probability:
effective access time = 0.80 x 120 + 0.20 x 220 = 140 nanoseconds.
In this example, we suffer a 40-percent slowdown in memory-access time (from 100 to 140 nanoseconds).
For a 98-percent hit ratio, we have
effective access time = 0.98 x 120 + 0.02 x 220 = 122 nanoseconds.
This increased hit rate produces only a 22 percent slowdown in access time.
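The effective-access-time calculation can be expressed directly:

```python
# Effective access time, using the timing figures from the text:
# 20 ns TLB search, 100 ns per memory access.
def eat(hit_ratio, tlb_ns=20, mem_ns=100):
    hit = tlb_ns + mem_ns              # hit: one memory access
    miss = tlb_ns + 2 * mem_ns         # miss: page-table access + byte access
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(round(eat(0.80), 1))   # 140.0 ns
print(round(eat(0.98), 1))   # 122.0 ns
```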
The TLB is usually inside the MMU and consists of a small number of entries, eight in this example, but rarely more than 64.
Each entry contains information about one page, including the virtual page number, a bit that is set when the page is modified,
the protection code (read/write/execute permissions), and the physical page frame in which the page is located. These fields
have a one-to-one correspondence with the fields in the page table. Another bit indicates whether the entry is valid (i.e., in use)
or not.
Fig. 11. A TLB to speed up paging. Each entry holds a valid bit, the virtual page number, the modified bit, the protection code, and the page frame.
An example that might generate the TLB of Fig. 11. is a process in a loop that spans virtual pages 19, 20, and 21, so these TLB
entries have protection codes for reading and executing. The main data currently being used (say, an array being processed)
are on pages 129 and 130. Page 140 contains the indices used in the array calculations. Finally, the stack is on pages 860 and 861.
When a virtual address is presented to the MMU for translation, the hardware first checks to see if its virtual page number is
present in the TLB by comparing it to all the entries simultaneously (i.e., in parallel). If a valid match is found and the access
does not violate the protection bits, the page frame is taken directly from the TLB, without going to the page table. If the virtual
page number is present in the TLB but the instruction is trying to write on a read-only page, a protection fault is generated, the
same way as it would be from the page table itself.
The interesting case is what happens when the virtual page number is not in the TLB. The MMU detects the miss and does an
ordinary page table lookup. It then evicts one of the entries from the TLB and replaces it with the page table entry just looked
up. Thus if that page is used again soon, the second time around it will result in a hit rather than a miss. When an entry is
purged from the TLB, the modified bit is copied back into the page table entry in memory. The other values are already there.
When the TLB is loaded from the page table, all the fields are taken from memory.
Hashed Page Tables
A common approach for handling address spaces larger than 32 bits is to use a hashed page table, with the hash value being
the virtual page number. Each entry in the hash table contains a linked list of elements that hash to the same location (to handle
collisions). Each element consists of three fields: (1) the virtual page number, (2) the value of the mapped page frame, and (3) a
pointer to the next element in the linked list.
The algorithm works as follows: The virtual page number in the virtual address is hashed into the hash table. The virtual page
number is compared with field 1 in the first element in the linked list. If there is a match, the corresponding page frame (field 2) is
used to form the desired physical address. If there is no match, subsequent entries in the linked list are searched for a matching
virtual page number. This scheme is shown in Figure 12.
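A minimal sketch of this structure, using a simple mod hash (the real hash function and table size are implementation choices):

```python
# Hashed page table: each bucket holds a chain of (virtual page, frame)
# pairs whose page numbers hash to the same slot.
NBUCKETS = 8
table = [[] for _ in range(NBUCKETS)]

def insert(vpn, frame):
    table[vpn % NBUCKETS].append((vpn, frame))   # chain on collision

def lookup(vpn):
    for entry_vpn, frame in table[vpn % NBUCKETS]:
        if entry_vpn == vpn:        # field 1 matches -> field 2 is the frame
            return frame
    return None                     # no match anywhere in the chain: fault

insert(3, 7)
insert(11, 2)       # 11 % 8 == 3: collides with vpn 3 and is chained
print(lookup(11))   # 2
print(lookup(19))   # None (unmapped)
```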
A variation of this scheme that is favorable for 64-bit address spaces has been proposed. This variation uses clustered page
tables, which are similar to hashed page tables except that each entry in the hash table refers to several pages (such as 16)
rather than a single page. Therefore, a single page-table entry can store the mappings for multiple physical-page frames.
Clustered page tables are particularly useful for sparse address spaces, where memory references are noncontiguous and
scattered throughout the address space.
Fig. 12. Hashed page table
Inverted Page Tables
Traditional page tables of the type described so far require one entry per virtual page, since they are indexed by virtual page
number. If the address space consists of 2^32 bytes, with 4096 bytes per page, then over 1 million page table entries are needed. As a bare minimum, the page table will have to be at least 4 megabytes. On large systems, this size is probably doable.
However, as 64-bit computers become more common, the situation changes drastically. If the address space is now 2^64 bytes, with 4-KB pages, we need a page table with 2^52 entries. If each entry is 8 bytes, the table is over 30 million gigabytes. Tying up
30 million gigabytes just for the page table is not doable, not now and not for years to come, if ever. Consequently, a different
solution is needed for 64-bit paged virtual address spaces.
One such solution is the inverted page table. In this design, there is one entry per page frame in real memory, rather than one
entry per page of virtual address space. For example, with 64-bit virtual addresses, a 4-KB page, and 256 MB of RAM, an
inverted page table only requires 65,536 entries. The entry keeps track of which (process, virtual page) is located in the page frame.
Although inverted page tables save vast amounts of space, at least when the virtual address space is much larger than the
physical memory, they have a serious downside: virtual-to-physical translation becomes much harder. When process n
references virtual page p, the hardware can no longer find the physical page by using p as an index into the page table. Instead,
it must search the entire inverted page table for an entry (n, p). Furthermore, this search must be done on every memory
reference, not just on page faults. Searching a 64K table on every memory reference is definitely not a good way to make your
machine blindingly fast.
The way out of this dilemma is to use the TLB. If the TLB can hold all of the heavily used pages, translation can happen just as
fast as with regular page tables. On a TLB miss, however, the inverted page table has to be searched in software. One feasible
way to accomplish this search is to have a hash table hashed on the virtual address. All the virtual pages currently in memory
that have the same hash value are chained together, as shown in Figure 13. If the hash table has as many slots as the machine
has physical pages, the average chain will be only one entry long, greatly speeding up the mapping. Once the page frame
number has been found, the new (virtual, physical) pair is entered into the TLB and the faulting instruction restarted.
Fig. 13. Inverted page table
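A sketch of an inverted table plus hash lookup. Here a Python dict stands in for the hash-table-with-chains structure (the dict resolves collisions internally), and the table size and process/page numbers are illustrative:

```python
# Inverted page table: one entry per physical frame holding
# (process, virtual page); a hash keyed on that pair finds the frame
# without scanning the whole table.
NFRAMES = 8
inverted = [None] * NFRAMES         # frame -> (process, virtual page)
hash_index = {}                     # (process, virtual page) -> frame

def map_page(pid, vpn, frame):
    inverted[frame] = (pid, vpn)
    hash_index[(pid, vpn)] = frame

def translate(pid, vpn, offset, page_size=4096):
    frame = hash_index.get((pid, vpn))
    if frame is None:
        raise LookupError("page fault")   # not resident: OS must handle it
    return frame * page_size + offset

map_page(pid=7, vpn=3, frame=5)
print(translate(7, 3, 20))   # 20500 = 5 * 4096 + 20
```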
Inverted page tables are currently used on IBM, Sun, and Hewlett-Packard workstations and will become more common as 64-bit machines become widespread. Inverted page tables are essential on these machines.