Chapter 8: Memory Management
 In uni-processor systems, main memory is divided
into two parts: one for the resident operating system
and one for the user program.
 In multiprogramming systems, the user part of memory
must be subdivided further to accommodate multiple
processes.
Prepared by Dr. Amjad Mahmood
Memory Management
 Main memory is a resource that must be allocated
and deallocated
 Memory Management Techniques determine:
 How is the memory to be (logically) subdivided?
 Where and how does a process reside in memory?
 How is addressing performed?
 How can a process be relocated?
 How is memory to be protected?
 How can memory be shared by processes?
 How is memory to be logically and physically organized?
Addressing Requirements of a Process
The Basics
 Program must be brought into memory and placed
within a process for it to be run.
 Input queue – collection of processes on the disk
that are waiting to be brought into memory to run
the program.
 Normally, a process is selected from the input
queue and is brought into the memory for execution.
During execution of a process, it accesses
instruction and data from the memory.
 Most systems allow a user process to be loaded in
any part of the memory. This affects the addresses
that a user program can access.
 User programs go through several steps before
being run.
 Addresses may be represented in different ways
during these steps.
 Addresses in the source program are generally
symbolic (e.g., a variable name).
 A compiler typically binds these addresses to
relocatable addresses (in terms of offsets).
 The linkage editor or loader binds relocatable
addresses to absolute (physical) addresses.
Loading and Loader
 A loader is responsible to place a load module in
main memory at some starting address.
 There are three approaches that can be used for loading:
 Absolute loading – A given module is always
loaded into the same memory location. All
references in the load module must be absolute
memory addresses. The address binding can
be done at programming time, compile time or
assembly time.
 Relocatable loading – The loader places a
module in any desired location of the main
memory. To make it possible, the compiler or
assembler must generate relative addresses.
 Dynamic loading - Routine is not loaded until it
is called resulting in better memory-space
utilization (unused routine is never loaded). It is
useful when large amounts of code are needed
to handle infrequently occurring cases. No
special support from the operating system is
required; dynamic loading is implemented
through program design.
Linking and Linker
 The function of a linker is to take as input a
collection of object modules and produce a load
module consisting of an integrated set of programs
and data modules to be passed to the loader.
 In each object module, there may be symbolic
references to locations in other modules.
 The linking can be done either statically or dynamically:
 Static linking – A linker generally creates a
single load module that is the contiguous
joining of all the object modules with references
properly changed. This is called static linking.
 Dynamic linking – Linking is postponed until
execution time. External references are not
resolved until the CPU executes the external call.
 A small piece of code, the stub, is used to locate
the appropriate memory-resident library routine.
 Stub replaces itself with the address of the
routine, and executes the routine.
 The operating system is needed to check whether
the routine is in the process’s memory address space.
 Dynamic linking is particularly useful for shared libraries.
Binding of Instructions and Data to Memory
 Address binding of instructions and data to memory
addresses can happen at three different stages.
 Compile time: If memory location known a
priori, absolute code can be generated; must
recompile code if starting location changes.
 Load time: The compiler must generate
relocatable code if memory location is not
known at compile time. The final binding is
delayed until load time.
 Execution time: Binding delayed until run time
if the process can be moved during its
execution from one memory segment to
another. Need hardware support for address
maps (e.g., base and limit registers).
Logical vs. Physical Address Space
 The concept of a logical address space that is
bound to a separate physical address space is
central to proper memory management.
 Logical address – generated by the CPU; also
referred to as virtual address.
 Physical address – address seen by the
memory unit.
 Logical and physical addresses are the same in
compile-time and load-time address-binding
schemes; logical (virtual) and physical addresses
differ in the execution-time address-binding scheme.
Memory Management Unit - (MMU)
 MMU is a hardware device that maps virtual to
physical address at run-time.
 In a simple MMU scheme, the value in the
relocation register is added to every address
generated by a user process at the time the
address is sent to memory.
never sees the real physical addresses.
Dynamic relocation using a relocation register
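As a sketch of the relocation-register scheme just described (the base value 14000 and the limit 3000 are made-up numbers for illustration, not from the text):

```python
# Sketch: an MMU with a single relocation register adds the register's
# value to every logical address; a limit check rejects addresses
# outside the process's space. Values below are hypothetical.

RELOCATION = 14000   # assumed base where the process was loaded
LIMIT = 3000         # assumed size of the process image

def mmu_translate(logical: int) -> int:
    """Map a logical address to a physical one, trapping on overflow."""
    if not 0 <= logical < LIMIT:
        raise MemoryError(f"trap: logical address {logical} out of range")
    return logical + RELOCATION

print(mmu_translate(346))   # 14346
```

The user program only ever sees logical addresses 0..LIMIT-1; the addition happens in hardware on every memory reference.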
Overlays
 Overlaying is one of the techniques used to run a
program that is bigger than the size of physical memory.
 The idea of overlays is to keep in memory only
those instructions and data that are needed at any
given time.
 The basic idea is to divide the program into modules
in such a way that not all modules need to be in the
memory at the same time.
 Programmer specifies which modules can overlay
each other
 The linker inserts commands to invoke the loader
when modules are referenced
 Used in DOS as an alternative to Expanded Memory.
Overlays for a two-pass assembler
 Advantages
 Reduced memory requirements
 Disadvantages
 Overlap map must be specified by programmer
 Programmer must know memory requirements
 Overlapped modules must be completely disjoint.
Swapping
 A process needs to be in memory to be
executed. However, a process can be swapped
temporarily out of memory to a backing store, and
then brought back into memory for continued
execution.
 Swapping needs a backing store – fast disk large
enough to accommodate copies of all memory
images for all users; must provide direct access to
these memory images.
 Roll out, roll in – swapping variant used for priority-based scheduling algorithms; a lower-priority process
is swapped out so higher-priority process can be
loaded and executed.
 Major part of swap time is transfer time; total
transfer time is directly proportional to the amount of
memory swapped. Note that context switching time
in this scheme is quite high.
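The linear relationship between swap time and image size can be illustrated with assumed numbers (the 10 MB image and 40 MB/s transfer rate below are hypothetical, not from the text):

```python
# Illustrative arithmetic: transfer time grows linearly with the amount
# of memory swapped, and a full swap is one transfer out plus one in.
# The image size and disk rate are assumed values for illustration.

def swap_time_ms(image_mb: float, rate_mb_per_s: float) -> float:
    one_way_ms = image_mb / rate_mb_per_s * 1000   # one transfer
    return 2 * one_way_ms                          # swap out + swap in

print(swap_time_ms(10, 40))   # 500.0 ms for a 10 MB image at 40 MB/s
```

Doubling the image size doubles the swap time, which is why context switching under swapping is so expensive.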
 If we want to swap out a process, we must be sure
that it is completely idle.
 Modified versions of swapping are found on many
systems, e.g., UNIX, Linux, and Windows.
Swapping of two processes using a disk as a backing store
Contiguous Allocation Techniques
 Main memory usually divided into two partitions:
 Resident operating system, usually held in low
memory with interrupt vector.
 User processes then held in high memory.
 Every process is loaded into a single
contiguous partition of the memory.
Single-partition Allocation
 A single process is loaded into memory at a time.
 Advantages:
 Simplicity
 No special hardware required
 Disadvantages:
 CPU wasted
 Main memory not fully used
 Limited job size
Multiple-Partition Allocation – Fixed Partitions
 Desirable in multiprogramming environment.
 A simple multi-partition scheme is to divide the
memory into fixed size partitions.
 Each process is loaded into a single partition.
 In this scheme, the degree of multiprogramming is
bounded by the number of partitions.
 This scheme was originally used in IBM OS/360. No
longer in use now.
 There might be a separate process queue (need
absolute addressing) for each partition or a single
queue for all partitions (need dynamic addressing).
Multiple-partition Allocation – Dynamic Partitions
 Partitions are of variable length and number
 Process is allocated exactly as much memory as it requires.
 MVT, a dynamic partition scheme, works as follows:
 Operating system maintains information about:
 Allocated partitions
 Free partitions (hole). Initially, all the
memory is available and is considered as a
single block (one big hole).
 When a process arrives, we search for a hole
large enough for the process. If a hole is found,
it is allocated to the process. If we find a hole,
which is too large, it is split into two: one part is
allocated to the arriving process, the other is
returned to the set of holes.
 When a process completes, the memory is
freed and placed into the set of holes. If the
new hole is adjacent to other holes, they are
merged into one larger hole. At this point, it is
checked whether there is a process waiting for
memory and whether this newly freed hole and
recombined memory could satisfy the demands
of any of the waiting processes.
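The hole management described above can be sketched as follows. This is a simplified model, not the actual MVT implementation: holes are assumed to be (start, size) pairs, and a first-fit search is used.

```python
# Sketch of MVT-style dynamic partitioning: a hole is split on
# allocation, and adjacent holes are merged when memory is freed.

def allocate(holes, size):
    """First-fit: split the first hole >= size; return (start, new holes)."""
    for i, (start, hsize) in enumerate(holes):
        if hsize >= size:
            rest = [] if hsize == size else [(start + size, hsize - size)]
            return start, holes[:i] + rest + holes[i + 1:]
    return None                               # no hole big enough

def free(holes, start, size):
    """Return a freed block to the hole set, merging adjacent holes."""
    holes = sorted(holes + [(start, size)])
    merged = [holes[0]]
    for s, sz in holes[1:]:
        last_s, last_sz = merged[-1]
        if last_s + last_sz == s:             # adjacent: coalesce
            merged[-1] = (last_s, last_sz + sz)
        else:
            merged.append((s, sz))
    return merged

holes = [(0, 1000)]                 # initially one big hole
a, holes = allocate(holes, 300)     # a = 0,   holes = [(300, 700)]
b, holes = allocate(holes, 200)     # b = 300, holes = [(500, 500)]
holes = free(holes, a, 300)         # [(0, 300), (500, 500)]
holes = free(holes, b, 200)         # coalesces back into [(0, 1000)]
print(holes)
```

After both processes finish, the adjacent holes recombine into the original single block, as the scheme requires.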
Merging of holes
Pros and Cons of Dynamic Allocation
 Advantages
 Efficient use of memory
 Disadvantages
 Partition management
 External fragmentation (or the cost of compaction)
 Internal fragmentation
Dynamic Allocation Placement Algorithms
 How to satisfy a request of size n bytes from a list of
free holes?
 There are many solutions to this problem, but most
common are as follows:
 First-fit: Allocate the first hole that is big
enough. The search can start either at the
beginning of the set of holes or where the
previous first-fit search ended.
 Best-fit: Allocate the smallest hole that is big
enough; must search entire list, unless ordered
by size. Produces the smallest leftover hole.
 Worst-fit: Allocate the largest hole; must also
search entire list. Produces the largest leftover hole.
 First-fit and best-fit better than worst-fit in
terms of speed and storage utilization.
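A minimal sketch of the three policies over the same list of hole sizes (the hole list and the 212-byte request are illustrative values, not from the text):

```python
# Sketch of the three placement policies, operating on hole sizes only.

def first_fit(holes, n):
    """First hole big enough, in list order."""
    return next((h for h in holes if h >= n), None)

def best_fit(holes, n):
    """Smallest hole big enough (smallest leftover)."""
    fits = [h for h in holes if h >= n]
    return min(fits) if fits else None

def worst_fit(holes, n):
    """Largest hole (largest leftover)."""
    fits = [h for h in holes if h >= n]
    return max(fits) if fits else None

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))   # 500  (first one big enough)
print(best_fit(holes, 212))    # 300  (smallest leftover hole)
print(worst_fit(holes, 212))   # 600  (largest leftover hole)
```

Note best-fit and worst-fit must scan the whole list unless it is kept sorted by size, which is why first-fit is generally faster.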
Fragmentation
 External Fragmentation – total memory space
exists to satisfy a request, but it is not contiguous.
 Internal Fragmentation – allocated memory may
be slightly larger than requested memory; this size
difference is memory internal to a partition, but not
being used.
 Reduce external fragmentation by compaction.
 Shuffle memory contents to place all free
memory together in one large block.
 Compaction is possible only if relocation is
dynamic, and is done at execution time.
 I/O problem
 Latch job in memory while it is involved in I/O.
 Do I/O only into OS buffers.
 High cost of compaction.
Paging
 Another possible solution to external fragmentation
is paging.
 In paging, logical address space of a process can
be noncontiguous; process is allocated physical
memory whenever the latter is available.
 Physical memory is divided into fixed-sized blocks
called frames (size is power of 2, between 512
bytes and 8192 bytes).
 Logical memory is also divided into blocks of same
size called pages. The page size is defined by the
hardware and is typically a power of 2 varying
between 512 bytes and 16 megabytes per page.
The selection of a power of 2 as a page size makes
the translation of logical address into a page
number and offset easy.
 To run a program of size n pages, need to find n
free frames and load program.
 Set up a page table (for each process) to translate
logical to physical addresses.
 Internal fragmentation is possible.
Memory Allocation in Paging
 When a new process arrives, its size, expressed in pages, is examined.
 If the process has n pages then n frames must
be available in the physical memory.
 The first page of the process is then loaded into the
first available frame, the next into next available
frame and so on.
 Since OS is managing the physical memory, it must
be aware of the allocation details of physical
memory (allocated and free frames). This
information is generally kept in a data structure
known as the frame table, which has one entry for
each frame, indicating whether it is free and, if
allocated, to which page of which process.
Free frames – (a) before allocation (b) after allocation
Address Translation
 Address generated by CPU is divided into:
 Page number (p) – used as an index into a
page table which contains base address of
each page in physical memory.
 Page offset (d) – Byte within the page
Address translation architecture
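The split into page number and offset can be sketched as follows, assuming a hypothetical 4 KB page size and a dict-based page table (the mapping shown is illustrative):

```python
# Sketch: with a power-of-two page size, the CPU address splits into a
# page number (high bits) and an offset (low bits) by shift and mask.

PAGE_SIZE = 4096                       # 2**12, assumed for illustration
OFFSET_BITS = 12

def translate(logical, page_table):
    p = logical >> OFFSET_BITS         # page number: index into page table
    d = logical & (PAGE_SIZE - 1)      # offset within the page
    frame = page_table[p]              # base frame from the page table
    return frame * PAGE_SIZE + d

page_table = {0: 5, 1: 6, 2: 1}        # hypothetical page -> frame mapping
print(translate(0x1ABC, page_table))   # page 1, offset 0xABC -> 27324
```

This is why the hardware chooses a power of 2 for the page size: the split costs only a shift and a mask, with no division.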
Implementation of Page Table
 Page table for each process is kept in main memory.
 Page-table base register (PTBR) points to the page table.
 Page-table length register (PTLR) indicates size of
the page table.
 In this scheme every data/instruction access
requires two memory accesses. One for the page
table and one for the data/instruction.
 The two memory access problem can be solved by
the use of a special fast-lookup hardware cache
called associative memory or translation look-aside
buffers (TLBs).
Associative memory
 Associative memory allows parallel search
 Contains page table entries that have been most
recently used
 Functions same way as a memory cache
 Given a virtual address, the processor examines the TLB.
 If page table entry is present (a hit), the frame
number is retrieved and the real address is formed
 If page table entry is not found in the TLB (a miss),
the page number is used to index the process page table.
 First checks if page is already in main memory
 if not in main memory a page fault is issued
 The TLB is updated to include the new page entry
 Note that the TLB must be flushed every time a new
page table is selected (during context switching, for example).
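The lookup path above can be modeled with a toy TLB. This is a dict-based sketch with a naive eviction policy assumed for illustration, not real TLB hardware:

```python
# Toy model of the TLB path: hit -> frame directly; miss -> walk the
# page table and install the entry; flush on context switch.

class TLB:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = {}              # page -> frame

    def lookup(self, page, page_table):
        if page in self.entries:       # TLB hit
            return self.entries[page], True
        frame = page_table[page]       # TLB miss: index the page table
        if len(self.entries) >= self.capacity:
            # naive eviction of the oldest entry (real TLBs vary)
            self.entries.pop(next(iter(self.entries)))
        self.entries[page] = frame     # update TLB with the new entry
        return frame, False

    def flush(self):                   # on context switch
        self.entries.clear()

tlb = TLB()
pt = {0: 5, 1: 6}
print(tlb.lookup(1, pt))   # (6, False)  miss on first reference
print(tlb.lookup(1, pt))   # (6, True)   hit on re-reference
```

After a flush, even previously cached pages miss again, which is the cost paid on every context switch.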
Effective Access Time
 Associative lookup = ε time units
 Assume memory cycle time is 1 microsecond
 Hit ratio – percentage of times that a page number
is found in the associative registers; the ratio is
related to the number of associative registers.
 Hit ratio = α
 Effective Access Time (EAT)
EAT = (1 +) α + (2 +)(1 – α)
Memory Protection
 Memory protection is implemented by associating
protection bits with each frame. Normally these bits
are kept in the page table. One bit can define a
page to be read-write or read-only. Since every
reference to a memory location goes through the
page table, the protection bits can be checked at
address-translation time. An illegal operation
causes a trap.
 One more bit, Valid-invalid bit, is also attached to
each entry in the page table:
 “valid” indicates that the associated page is in
the process’ logical address space, and is thus
a legal page.
 “invalid” indicates that the page is not in the
process’ logical address space.
Valid and invalid bit in a page table
Page Table Structure
 Hierarchical Paging
 Hashed Page Tables
 Inverted Page Tables
Hierarchical Page Tables
 Break up the logical address space into multiple
page tables.
 A simple technique is a two-level page table.
Two Level Paging Example
 A logical address (on 32-bit machine with 4K page
size) is divided into:
 a page number consisting of 20 bits.
 a page offset consisting of 12 bits.
 Since the page table is paged, the page number is
further divided into:
 a 10-bit page number.
 a 10-bit page offset.
 Thus, a logical address is as follows:

   page number p1 (10 bits) | page number p2 (10 bits) | page offset d (12 bits)

where p1 is an index into the outer page table, and p2 is
the displacement within the page of the outer page table.
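The 10/10/12 split can be sketched with shifts and masks (the example address is arbitrary):

```python
# Sketch of the two-level split on a 32-bit address with 4K pages:
# p1 indexes the outer page table, p2 the page of the outer page
# table, and d is the offset within the final page.

def split(addr32):
    p1 = addr32 >> 22            # top 10 bits
    p2 = (addr32 >> 12) & 0x3FF  # next 10 bits
    d = addr32 & 0xFFF           # low 12 bits
    return p1, p2, d

print(split(0x00403ABC))   # (1, 3, 2748)
```

Translation then takes two table lookups: outer_table[p1] locates an inner page table, and inner_table[p2] gives the frame combined with d.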
Address Translation Scheme
Address-translation scheme for a two-level 32-bit paging
Hashed Page Tables
 Common in address spaces > 32 bits.
 The virtual page number is hashed into a page
table. This page table contains a chain of elements
hashing to the same location.
 Virtual page numbers are compared in this chain
searching for a match. If a match is found, the
corresponding physical frame is extracted.
Hashed page table
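A minimal sketch of a hashed page table with chaining; the bucket count, the modulo "hash", and the mappings are illustrative assumptions:

```python
# Sketch: each bucket holds a chain of (virtual page, frame) pairs;
# a lookup hashes the page number and walks the chain for a match.

BUCKETS = 8   # assumed table size for illustration

def insert(table, vpage, frame):
    table[vpage % BUCKETS].append((vpage, frame))

def lookup(table, vpage):
    for v, f in table[vpage % BUCKETS]:  # walk the chain
        if v == vpage:
            return f                     # match: physical frame
    return None                          # not mapped

table = [[] for _ in range(BUCKETS)]
insert(table, 3, 17)
insert(table, 11, 42)      # 11 % 8 == 3: chains with page 3
print(lookup(table, 11))   # 42
print(lookup(table, 19))   # None (hashes to same bucket, no match)
```

Pages 3, 11, and 19 all hash to the same bucket here, which is exactly the collision case the chain resolves.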
Inverted Page Table
 One entry for each real page of memory.
 Entry consists of the virtual address of the page
stored in that real memory location with information
about the process that owns that page.
 Decreases memory needed to store each page
table, but increases time needed to search the table
when a page reference occurs.
 Use hash table to limit the search to one — or at
most a few — page-table entries.
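A toy model of an inverted page table, where the index of the matching entry is itself the frame number; the process IDs and mappings below are illustrative:

```python
# Sketch: one entry per physical frame, holding (pid, virtual page).
# Translation searches the table; the matching index is the frame.

inverted = [("p1", 0), ("p2", 0), ("p1", 2), (None, None)]  # frame -> owner

def frame_of(pid, vpage):
    for frame, entry in enumerate(inverted):
        if entry == (pid, vpage):
            return frame
    return None          # not resident in physical memory

print(frame_of("p1", 2))   # 2
print(frame_of("p2", 5))   # None
```

The linear search shown is what makes the scheme slow without help; the hash table mentioned above narrows the search to one or a few entries.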
Inverted page table architecture
Shared Pages
 Shared code
 One copy of read-only (reentrant) code shared
among processes (i.e., text editors, compilers,
window systems).
 Shared code must appear in same location in
the logical address space of all processes.
 Private code and data
 Each process keeps a separate copy of the
code and data.
 The pages for the private code and data can
appear anywhere in the logical address space.
Pros and Cons of Paging
 Advantages:
 Efficient memory use
 Simple memory management due to
discontinuous loading and fixed partition size
 No compaction is necessary
 Easy to share pages
 Disadvantages
 Job size <= memory size
 Internal fragmentation
 Need special hardware for address translation
 Some main memory is used for page table
 Address translation lengthens memory cycle
Segmentation
 Memory-management scheme that supports the user
view of memory.
 A program is a collection of segments. A segment
is a logical unit such as:
main program,
local variables, global variables,
common block,
symbol table, arrays
Address Mapping in Segmentation
 The user specifies each address by two quantities:
segment name/number and offset.
<segment-number, offset>,
 Mapping from logical address to physical address is
done with the help of a segment table.
 Segment table – maps two-dimensional user-defined
addresses into one-dimensional physical
addresses; each table entry has:
 base – contains the starting physical address
where the segments reside in memory.
 limit – specifies the length of the segment.
 Segment-table base register (STBR) points to the
segment table’s location in memory.
 Segment-table length register (STLR) indicates
number of segments used by a program;
 Segment number s is legal if s < STLR.
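A sketch of the translation described above; the base/limit pairs are illustrative values:

```python
# Sketch: segment-table translation with the two checks described
# above: s < STLR, and offset < limit. Table values are illustrative.

segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]  # (base, limit)

def seg_translate(s, offset):
    if s >= len(segment_table):       # s must be < STLR
        raise MemoryError("trap: bad segment number")
    base, limit = segment_table[s]
    if offset >= limit:               # offset must be within the segment
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(seg_translate(2, 53))   # 4353
```

Unlike a page table, the limit differs per segment, so the bounds check is an explicit comparison rather than a fixed-width mask.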
Segmentation Example
Comparison of Paging and Segmentation