Virtual Memory - Computer System Laboratory

Computer System
Chapter 10. Virtual Memory
Lynn Choi
Korea University
A System with Physical Memory Only
Examples:
Most Cray machines, early PCs, nearly all embedded systems, etc.
[Figure: the CPU emits physical addresses 0 through N-1 that index directly into memory]
Addresses generated by the CPU correspond directly to bytes in physical memory
A System with Virtual Memory
Examples:
Workstations, servers, modern PCs, etc.
[Figure: the CPU emits virtual addresses; a page table maps each one to a physical address in memory or to a location on disk]
Address Translation: Hardware converts virtual addresses to physical addresses
via OS-managed lookup table (page table)
Page Faults (like “Cache Misses”)
What if an object is on disk rather than in memory?
Page table entry indicates virtual address not in memory
OS exception handler invoked to move data from disk into memory
Current process suspends, others can resume
OS has full control over placement, etc.
[Figure: before the fault, the page table entry for the virtual address points to disk; after handling, the page resides in memory and the entry points to its physical address]
Servicing a Page Fault
(1) Initiate block read: the processor signals the I/O controller — "read a block of length P starting at disk address X and store it starting at memory address Y."
(2) DMA transfer: the read occurs as a direct memory access (DMA), under control of the I/O controller, over the memory-I/O bus; the processor (registers, cache) is not involved.
(3) Read done: the I/O controller signals completion by interrupting the processor; the OS resumes the suspended process.
Memory Management
Multiple processes can reside in physical memory.
How do we resolve address conflicts?
What if two processes access something at the same address?
[Figure: Linux/x86 process memory image — kernel virtual memory (invisible to user code) at the top; then the user stack (%esp), the memory-mapped region for shared libraries, the runtime heap (grown via malloc up to the "brk" pointer), uninitialized data (.bss), initialized data (.data), program text (.text), and a forbidden region at address 0]
Solution: Separate Virt. Addr. Spaces
Virtual and physical address spaces divided into equal-sized blocks
Blocks are called “pages” (both virtual and physical)
Each process has its own virtual address space
Operating system controls how virtual pages are assigned to physical memory
[Figure: Process 1's virtual pages (VP 1, VP 2, ..., up to N-1) and Process 2's virtual pages each map through address translation into physical pages (PP 2, PP 7, PP 10, ...) of the physical address space (DRAM, 0 to M-1); a physical page such as read-only library code can be shared by mapping it into both address spaces]
Protection
Page table entry contains access rights information
Hardware enforces this protection (trap into OS if violation occurs)
Page Tables
Process i:   Read?  Write?  Physical Addr
VP 0:        Yes    No      PP 9
VP 1:        Yes    Yes     PP 4
VP 2:        No     No      XXXXXXX

Process j:   Read?  Write?  Physical Addr
VP 0:        Yes    Yes     PP 6
VP 1:        Yes    No      PP 9
VP 2:        No     No      XXXXXXX
Address Translation Symbols
Virtual Address Components
VPO: virtual page offset
VPN: virtual page number
TLBI: TLB index
TLBT: TLB tag
Physical Address Components
PPO: physical page offset
PPN: physical page number
CO: byte offset within cache block
CI: cache index
CT: cache tag
Simple Memory System Example
Addressing
14-bit virtual addresses
12-bit physical addresses
Page size = 64 bytes
Virtual address (14 bits): VPN = bits 13-6 (virtual page number), VPO = bits 5-0 (virtual page offset)
Physical address (12 bits): PPN = bits 11-6 (physical page number), PPO = bits 5-0 (physical page offset)
Simple Memory System Page Table
Only the first 16 entries are shown.

VPN  PPN  Valid     VPN  PPN  Valid
00   28   1         08   13   1
01   –    0         09   17   1
02   33   1         0A   09   1
03   02   1         0B   –    0
04   –    0         0C   –    0
05   16   1         0D   2D   1
06   –    0         0E   11   1
07   –    0         0F   0D   1
Simple Memory System TLB
TLB: 16 entries, 4-way set associative
TLBT = VA bits 13-8, TLBI = VA bits 7-6 (VPN = bits 13-6, VPO = bits 5-0)

Set   Tag  PPN  Valid    Tag  PPN  Valid    Tag  PPN  Valid    Tag  PPN  Valid
0     03   –    0        09   0D   1        00   –    0        07   02   1
1     03   2D   1        02   –    0        04   –    0        0A   –    0
2     02   –    0        08   –    0        06   –    0        03   –    0
3     07   –    0        03   0D   1        0A   34   1        02   –    0
Simple Memory System Cache
Cache: 16 lines, 4-byte line size, direct mapped
CT = PA bits 11-6, CI = PA bits 5-2, CO = PA bits 1-0 (PPN = bits 11-6, PPO = bits 5-0)

Idx  Tag  Valid  B0  B1  B2  B3     Idx  Tag  Valid  B0  B1  B2  B3
0    19   1      99  11  23  11     8    24   1      3A  00  51  89
1    15   0      –   –   –   –      9    2D   0      –   –   –   –
2    1B   1      00  02  04  08     A    2D   1      93  15  DA  3B
3    36   0      –   –   –   –      B    0B   0      –   –   –   –
4    32   1      43  6D  8F  09     C    12   0      –   –   –   –
5    0D   1      36  72  F0  1D     D    16   1      04  96  34  15
6    31   0      –   –   –   –      E    13   1      83  77  1B  D3
7    16   1      11  C2  DF  03     F    14   0      –   –   –   –
Address Translation Example #1
Virtual Address 0x03D4
(Bit fields: TLBT = VA bits 13-8, TLBI = VA bits 7-6, VPN = VA bits 13-6, VPO = VA bits 5-0; CT = PA bits 11-6, CI = PA bits 5-2, CO = PA bits 1-0)
VPN ___   VPO ___   TLBI ___   TLBT ____
TLB Hit? __   Page Fault? __   PPN: ____
Physical address:
CO ___   CI ___   CT ____
Cache Hit? __   Byte: ____
Address Translation Example #2
Virtual Address 0x0B8F
(Bit fields: TLBT = VA bits 13-8, TLBI = VA bits 7-6, VPN = VA bits 13-6, VPO = VA bits 5-0; CT = PA bits 11-6, CI = PA bits 5-2, CO = PA bits 1-0)
VPN ___   VPO ___   TLBI ___   TLBT ____
TLB Hit? __   Page Fault? __   PPN: ____
Physical address:
CO ___   CI ___   CT ____
Cache Hit? __   Byte: ____
Address Translation Example #3
Virtual Address 0x0040
(Bit fields: TLBT = VA bits 13-8, TLBI = VA bits 7-6, VPN = VA bits 13-6, VPO = VA bits 5-0; CT = PA bits 11-6, CI = PA bits 5-2, CO = PA bits 1-0)
VPN ___   VPO ___   TLBI ___   TLBT ____
TLB Hit? __   Page Fault? __   PPN: ____
Physical address:
CO ___   CI ___   CT ____
Cache Hit? __   Byte: ____
Program Start Scenario
Before starting the process
Load the page directory into physical memory
Load the PDBR (page directory base register) with the beginning of the page directory
Load the PC with the start address of code
When the first instruction fetch occurs:
iTLB miss (translation failed for the instruction address)
Exception handler looks up PTE1
dTLB miss (translation failed for the address of PTE1)
Exception handler looks up PTE2
Look up the page directory and find PTE2
Add PTE2 to the dTLB
dTLB hit, but page miss (the page containing PTE1 is not in memory)
Load the page containing PTE1
Look up the page table and find PTE1
Add PTE1 to the iTLB
iTLB hit, but page miss (the code page is not present in memory)
Load the instruction page
Cache miss, but memory returns the instruction
P6 Memory System
32 bit address space
DRAM
4 KB page size
L1, L2, and TLBs
[Figure: processor package — instruction fetch unit with L1 i-cache and inst TLB, L1 d-cache with data TLB, unified L2 cache on a dedicated cache bus, and a bus interface unit to the external system bus (e.g., PCI) and DRAM]
Inst TLB: 32 entries, 8 sets, 4-way set associative
Data TLB: 64 entries, 16 sets, 4-way set associative
L1 i-cache and d-cache: 16 KB each, 32 B line size, 128 sets, 4-way set associative
L2 cache: unified, 128 KB – 2 MB, 4-way set associative
Overview of P6 Address Translation
[Figure: the CPU emits a 32-bit virtual address (VA), split into VPN (20 bits) and VPO (12 bits). The VPN splits into TLBT (16 bits) and TLBI (4 bits) to probe the TLB (16 sets, 4 entries/set). On a TLB miss, the MMU walks the page tables in L2 and DRAM: VPN1 and VPN2 (10 bits each) index the page directory (rooted at the PDBR) and a page table, yielding the PDE and PTE. The resulting PPN (20 bits) joins the PPO (12 bits) to form the physical address (PA), interpreted as CT (20 bits), CI (7 bits), and CO (5 bits) to access the L1 cache (128 sets, 4 lines/set)]
P6 2-level Page Table Structure
Page directory
1024 4-byte page directory entries (PDEs) that point to page tables
One page directory per process
The page directory must be in memory when its process is running
Always pointed to by the PDBR
Page tables:
1024 4-byte page table entries (PTEs) that point to pages
Page tables can be paged in and out
[Figure: one page directory of 1024 PDEs fans out to up to 1024 page tables, each holding 1024 PTEs]
P6 Page Directory Entry (PDE)
PDE format (P=1):
bits 31-12: page table physical base address (the 20 most significant bits of the physical page table address; forces page tables to be 4 KB aligned)
bits 11-9 (Avail): available for system programmers
bit 8 (G): global page (don't evict from TLB on task switch)
bit 7 (PS): page size 4 KB (0) or 4 MB (1)
bit 5 (A): accessed (set by MMU on reads and writes, cleared by software)
bit 4 (CD): cache disabled (1) or enabled (0)
bit 3 (WT): write-through or write-back cache policy for this page table
bit 2 (U/S): user or supervisor mode access
bit 1 (R/W): read-only or read-write access
bit 0 (P): page table is present in memory (1) or not (0)
PDE format (P=0):
bits 31-1: available for OS (page table location in secondary storage)
bit 0 (P): 0
P6 Page Table Entry (PTE)
PTE format (P=1):
bits 31-12: page physical base address (the 20 most significant bits of the physical page address; forces pages to be 4 KB aligned)
bits 11-9 (Avail): available for system programmers
bit 8 (G): global page (don't evict from TLB on task switch)
bit 6 (D): dirty (set by MMU on writes)
bit 5 (A): accessed (set by MMU on reads and writes)
bit 4 (CD): cache disabled or enabled
bit 3 (WT): write-through or write-back cache policy for this page
bit 2 (U/S): user/supervisor
bit 1 (R/W): read/write
bit 0 (P): page is present in physical memory (1) or not (0)
PTE format (P=0):
bits 31-1: available for OS (page location in secondary storage)
bit 0 (P): 0
How P6 Page Tables Map Virtual Addresses to Physical Ones
The virtual address splits into VPN1 (10 bits), VPN2 (10 bits), and VPO (12 bits). The PDBR holds the physical address of the page directory; VPN1 is the word offset into the page directory, whose PDE (if P=1) gives the physical address of the page table base. VPN2 is the word offset into that page table, whose PTE (if P=1) gives the 20-bit PPN of the page base. The VPO, which equals the PPO (the offset into both the virtual and the physical page), is appended to the PPN to form the physical address.
Representation of Virtual Address Space
[Figure: simplified example with a 16-page virtual address space. Each entry carries two flags — P: is the entry in physical memory? M: has this part of the VA space been mapped? A page is in one of three states: in memory (P=1, M=1, with a memory address), on disk (P=0, M=1, with a disk address), or unmapped (P=0, M=0)]
P6 TLB Translation
[Figure: same translation datapath as in the overview — the VPN (20 bits), split into TLBT (16 bits) and TLBI (4 bits), probes the TLB (16 sets, 4 entries/set); on a miss, the PDBR-rooted walk through PDE and PTE in L2 and DRAM supplies the PPN (20 bits), which joins the PPO (12 bits) to form the physical address (CT 20, CI 7, CO 5 bits) for the L1 cache (128 sets, 4 lines/set)]
P6 TLB
TLB entry (not all documented, so this is speculative):
[Figure: TLB entry — PDE/PTE (32 bits) | Tag (16 bits) | PD (1 bit) | V (1 bit)]
V: indicates a valid (1) or invalid (0) TLB entry
PD: is this entry a PDE (1) or a PTE (0)?
Tag: disambiguates entries cached in the same set
PDE/PTE: page directory or page table entry
Structure of the data TLB: 16 sets, 4 entries/set
[Figure: set 0 through set 15, each holding 4 entries]
Translating with the P6 Page Tables (case 1/1)
Case 1/1: page table and page present.
MMU action: builds the physical address (PPN from the PTE, plus PPO) and fetches the data word.
OS action: none.
[Figure: PDBR → page directory (PDE p=1) → page table (PTE p=1) → data page in memory; VPN (20 bits, split into VPN1 and VPN2) + VPO (12 bits) translate to PPN (20 bits) + PPO (12 bits)]
Translating with the P6 Page Tables (case 1/0)
Case 1/0: page table present but page missing.
MMU action: raises a page fault exception.
The handler receives the following args:
the VA that caused the fault
whether the fault was caused by a non-present page or a page-level protection violation
read/write
user/supervisor
[Figure: PDBR → page directory (PDE p=1) → page table (PTE p=0); the data page is on disk]
Translating with the P6 Page Tables (case 1/0)
OS action:
Check for a legal virtual address.
Read the PTE through the PDE.
Find a free physical page (swapping out the current page if necessary).
Read the virtual page from disk and copy it into the physical page.
Restart the faulting instruction by returning from the exception handler.
[Figure: after handling — PDBR → page directory (PDE p=1) → page table (PTE p=1) → data page now in memory; VPN (20 bits) + VPO (12 bits) translate to PPN (20 bits) + PPO (12 bits)]
Translating with the P6 Page Tables (case 0/1)
Case 0/1: page table missing but page present.
This introduces a consistency issue: potentially every page-out requires an update of the on-disk page table.
Linux disallows this: if a page table is swapped out, its data pages are swapped out too.
[Figure: PDBR → page directory (PDE p=0), page table on disk, while its PTE (p=1) points at a data page in memory]
Translating with the P6 Page Tables (case 0/0)
Case 0/0: page table and page missing.
MMU action: raises a page fault exception.
[Figure: PDBR → page directory (PDE p=0); both the page table (PTE p=0) and the data page are on disk]
Translating with the P6 Page Tables (case 0/0)
OS action:
Swap in the page table.
Restart the faulting instruction by returning from the handler.
The fault then becomes case 1/0 (PDE p=1, PTE p=0) and is handled like case 1/0 from here on.
[Figure: after the swap-in — PDBR → page directory (PDE p=1) → page table (PTE p=0); the data page is still on disk]
P6 L1 Cache Access
[Figure: same datapath again, highlighting L1 access — the physical address's CI (7 bits) and CO (5 bits) select the set and byte in the L1 cache (128 sets, 4 lines/set), while CT (20 bits) is compared to determine an L1 hit or miss]
Speeding Up L1 Access
[Figure: address translation maps VPN (20 bits) to PPN for the tag check, while VPO (12 bits) passes through unchanged as PPO; the physical address splits into CT (20 bits), CI (7 bits), CO (5 bits)]
Observation
The bits that determine CI are identical in the virtual and physical address (they lie within the page offset)
The cache can be indexed while address translation is taking place
The tag (CT) from the physical address is then checked
"Virtually indexed, physically tagged"
The cache is carefully sized to make this possible
Linux Organizes VM as Collection of “Areas”
Area
Contiguous chunk of (allocated) virtual memory whose pages are related
Examples: code segment, data segment, heap, shared library segment, etc.
Any existing virtual page is contained in some area.
Any virtual page that is not part of some area does not exist and cannot be referenced!
Thus, the virtual address space can have gaps.
The kernel does not keep track of virtual pages that do not exist.
task_struct
The kernel maintains a distinct task structure for each process
It contains all the information that the kernel needs to run the process
PID, pointer to the user stack, name of the executable object file, program counter, etc.
mm_struct
One of the entries in the task structure; it characterizes the current state of virtual memory
pgd – base of the page directory table
mmap – points to a list of vm_area_struct
Linux Organizes VM as Collection of “Areas”
[Figure: task_struct's mm field points to an mm_struct (pgd, mmap); mmap points to a chain of vm_area_struct nodes (vm_end, vm_start, vm_prot, vm_flags, vm_next), one per area of the process's virtual memory — shared libraries (0x40000000), data (0x0804a020), text (0x08048000)]
vm_prot: read/write permissions for this area
vm_flags: shared with other processes or private to this process
Linux Page Fault Handling
[Figure: the process's vm_area_struct chain — text (r/o), data (r/w), shared libraries (r/o) — with three example accesses: (1) a read outside any area, (2) a write to a read-only area, (3) a legal read that faults]
Is the VA legal?
i.e., is it in an area defined by a vm_area_struct?
If not, then signal a segmentation violation (e.g., (1))
Is the operation legal?
i.e., can the process read/write this area?
If not, then signal a protection violation fault (e.g., (2))
If OK, handle the page fault (e.g., (3))
Memory Mapping
Linux (also, UNIX) initializes the contents of a virtual memory area by
associating it with an object on disk
Create new vm_area_struct and page tables for area
Areas can be mapped to one of two types of objects (i.e., an area gets its initial values from):
Regular file on disk (e.g., an executable object file)
The file is divided into page-sized pieces.
The initial contents of each virtual page come from the corresponding piece.
If the area is larger than the file section, the remainder of the area is padded with zeros.
Anonymous file (e.g., .bss)
An area can be mapped to an anonymous file, created by the kernel.
The initial contents of these pages are zeros.
Such pages are also called demand-zero pages.
Key point: no virtual pages are copied into physical memory until they are
referenced!
Known as “demand paging”
Crucial for time and space efficiency
User-Level Memory Mapping
void *mmap(void *start, size_t len,
           int prot, int flags, int fd, off_t offset)
Map len bytes starting at offset offset of the file specified by file descriptor fd, preferably at address start (usually NULL for "don't care").
prot: PROT_EXEC, PROT_READ, PROT_WRITE
flags: MAP_PRIVATE, MAP_SHARED, MAP_ANON
MAP_PRIVATE indicates a private copy-on-write object
MAP_SHARED indicates a shared object
MAP_ANON with fd = -1 indicates an anonymous file (demand-zero pages)
Returns a pointer to the mapped area, or MAP_FAILED on error.
int munmap(void *start, size_t len)
Deletes the mapping starting at virtual address start with length len
Shared Objects
Why shared objects?
Many processes need to share identical read-only text areas. For example,
Each tcsh process has the same text area.
Standard library functions such as printf
It would be extremely wasteful for each process to keep duplicate copies in physical memory
An object can be mapped as either a shared object or a private object
Shared object
Any write to that area is visible to any other processes that have also mapped the shared object.
The changes are also reflected in the original object on disk.
A virtual memory area into which a shared object is mapped is called a shared area.
Private object
Any write to that area is not visible to other processes.
The changes are not reflected back to the object on disk.
Private objects are mapped into virtual memory using copy-on-write.
Only one copy of the private object is stored in physical memory.
The page table entries for the private area are flagged as read-only.
Any write to some page in the private area triggers a protection fault.
The handler creates a new copy of the page in physical memory and then restores write permission to the page.
After the handler returns, the process proceeds normally.
Shared Object
[Figure: two processes map the same shared object; both virtual memories point at a single copy of the object in physical memory, both before and after the second mapping]
Private Object
[Figure: two processes map a private copy-on-write object and initially share one physical copy. When one process writes to a private copy-on-write page, the protection fault handler copies just that page, and the write proceeds on the new copy]
Exec() Revisited
To run a new program p in the current process using exec():
Free the vm_area_struct's and page tables for the old areas.
Create new vm_area_struct's and page tables for the new areas:
stack, bss, data, text, shared libs.
text and data are backed by the ELF executable object file.
bss and stack are demand-zero, initialized to zero.
Set the PC to the entry point in .text.
Linux will swap in code and data pages as needed.
[Figure: the process-specific data structures (page tables, task and mm structs) are rebuilt to describe the new image — kernel code/data/stack at the top (the same for each process), user stack (%esp), memory-mapped region for shared libraries (libc.so), runtime heap (via malloc, bounded by brk), demand-zero .bss, .data and .text mapped from p, and a forbidden region at address 0]
Fork() Revisited
To create a new process using fork():
Make copies of the old process's mm_struct, vm_area_struct's, and page tables.
At this point the two processes share all of their pages.
How do we get separate spaces without copying all the virtual pages from one space to another?
The "copy-on-write" technique:
Make the pages of writeable areas read-only.
Flag the vm_area_struct's for these areas as private "copy-on-write".
Writes by either process to these pages will cause page faults.
The fault handler recognizes copy-on-write, makes a copy of the page, and restores write permissions.
Net result:
Copies are deferred until absolutely necessary (i.e., when one of the processes tries to modify a shared page).
Dynamic Memory Allocation
Heap
An area of demand-zero memory that begins immediately after the bss area.
Allocator
Maintains the heap as a collection of various sized blocks.
Each block is a contiguous chunk of virtual memory that is either allocated or
free.
An explicit allocator requires the application to both allocate and free space
E.g., malloc and free in C
An implicit allocator requires the application to allocate, but not to free, space
The allocator needs to detect when an allocated block is no longer being used
Implicit allocators are also known as garbage collectors
The process of automatically freeing unused blocks is known as garbage collection
E.g., garbage collection in Java, ML, or Lisp
Heap
[Figure: process memory image — kernel virtual memory (invisible to user code) at the top, then the user stack (%esp), the memory-mapped region for shared libraries, the run-time heap (via malloc) with the "brk" pointer marking the top of the heap, uninitialized data (.bss), initialized data (.data), and program text (.text), down to address 0]
Malloc Package
#include <stdlib.h>
void *malloc(size_t size)
If successful:
Returns a pointer to a memory block of at least size bytes
(Typically) aligned to an 8-byte boundary so that any kind of data object can be contained in the block
If size == 0, returns NULL
If unsuccessful (e.g., the request exceeds available virtual memory): returns NULL and sets errno
Two other variations: calloc (initializes the allocated memory to zero) and realloc
Internally, the allocator obtains heap memory with the mmap and munmap functions, or with the sbrk function
void *realloc(void *p, size_t size)
Changes the size of the block pointed to by p and returns a pointer to the new block
Contents of the new block are unchanged up to the minimum of the old and new sizes
void free(void *p)
Returns the block pointed to by p to the pool of available memory
p must come from a previous call to malloc, calloc, or realloc
Malloc Example
void foo(int n, int m) {
    int i, *p;

    /* allocate a block of n ints */
    if ((p = (int *) malloc(n * sizeof(int))) == NULL) {
        perror("malloc");
        exit(0);
    }
    for (i = 0; i < n; i++)
        p[i] = i;

    /* add m ints to the end of the p block */
    if ((p = (int *) realloc(p, (n + m) * sizeof(int))) == NULL) {
        perror("realloc");
        exit(0);
    }
    for (i = n; i < n + m; i++)
        p[i] = i;

    /* print the new array */
    for (i = 0; i < n + m; i++)
        printf("%d\n", p[i]);

    free(p); /* return p to the available memory pool */
}
Allocation Examples
p1 = malloc(4)
p2 = malloc(5)
p3 = malloc(6)
free(p2)
p4 = malloc(2)
Requirements (Explicit Allocators)
Applications:
Can issue arbitrary sequence of allocation and free requests
Free requests must correspond to an allocated block
Allocators
Can’t control the number or the size of allocated blocks
Must respond immediately to all allocation requests
i.e., can’t reorder or buffer requests
Must allocate blocks from free memory
i.e., can only place allocated blocks in free memory
Must align blocks so they satisfy all alignment requirements
8 byte alignment for GNU malloc (libc malloc) on Linux boxes
Can only manipulate and modify free memory
Can’t move the allocated blocks once they are allocated
i.e., compaction is not allowed
Goals of Allocators
Maximize throughput
Throughput: number of completed requests per unit time
Example:
5,000 malloc calls and 5,000 free calls in 10 seconds
Throughput is 1,000 operations/second
Maximize memory utilization
Need to minimize “fragmentation”.
Fragmentation (holes) – unused area
There is a tradeoff between throughput and memory utilization
Need to balance these two goals
Good locality properties
“Similar” objects should be allocated close in space
Internal Fragmentation
Poor memory utilization is caused by fragmentation.
It comes in two forms: internal and external fragmentation.
Internal fragmentation
For some block, internal fragmentation is the difference between the block size and the payload size.
[Figure: block = internal fragmentation + payload + internal fragmentation]
Caused by the overhead of maintaining heap data structures, e.g., padding for alignment purposes.
Any virtual memory allocation policy using fixed-size blocks, such as paging, can suffer from internal fragmentation.
External Fragmentation
Occurs when there is enough aggregate heap memory, but no single
free block is large enough
p1 = malloc(4)
p2 = malloc(5)
p3 = malloc(6)
free(p2)
p4 = malloc(6)
oops!
External fragmentation depends on the pattern of future requests, and
thus is difficult to measure.
Implementation Issues
Free block organization
How do we know the size of a free block?
How do we keep track of the free blocks?
Placement
How do we choose an appropriate free block in which to place a newly allocated
block?
Splitting
What do we do with the extra space after the placement?
Coalescing
What do we do with small blocks that have been freed?
p1 = malloc(1)
How do we know the size of a block?
Standard method
Keep the length of a block in the word preceding the block.
This word is often called the header field or header
Requires an extra word for every allocated block
Format of a simple heap block
[Figure: a one-word header with the block size in bits 31-3 and the allocated flag a in bit 0 (a = 1: allocated, a = 0: free), followed by the payload (allocated block only) and optional padding. malloc returns a pointer to the beginning of the payload]
The block size includes the header, payload, and any padding.
Example
[Figure: p0 = malloc(4) returns p0 pointing at the data, preceded by a header word holding block size 5; free(p0) reads the block size from that header]
Keeping Track of Free Blocks
Method 1: Implicit list using lengths -- links all blocks
[Figure: blocks of sizes 5, 4, 6, 2 chained by their length fields]
Method 2: Explicit list among the free blocks, using pointers within the free blocks
[Figure: free blocks of sizes 5, 4, 6, 2 linked by pointers stored in their payloads]
Method 3: Segregated free list
Different free lists for different size classes
Placement Policy
First fit:
Search list from the beginning, choose the first free block that fits
Can take linear time in total number of blocks (allocated and free)
(+) Tend to retain large free blocks at the end
(-) Leave small free blocks at beginning
Next fit:
Like first fit, but start the search where the previous search finished
(+) Runs faster than first fit
(-) Worse memory utilization than first fit
Best fit:
Search the list, choose the free block with the closest size that fits
(+) Keeps fragments small – better memory utilization than the other two
(-) Will typically run slower – requires an exhaustive search of the heap
Splitting
Allocating in a free block — splitting
Since the allocated space might be smaller than the free space, we might want to split the block
[Figure: blocks 4, 4, 6, 2 with p pointing at the free 6-block; after addblock(p, 2), the 6-block is split into an allocated 4-block and a free 2-block: 4, 4, 4, 2, 2]
Coalescing
When the allocator frees a block, there might be other free blocks that are adjacent.
Such adjacent free blocks cause false fragmentation: there is enough free space in aggregate, but it is chopped up into small, unusable pieces.
Need to coalesce the next and/or previous block if they are free.
Coalescing with the next block
[Figure: blocks 4, 4, 4, 2, 2 with p pointing at the middle allocated 4-block; after free(p), it merges with the following free block: 4, 4, 6, 2]
But how do we coalesce with the previous block?
Bidirectional Coalescing
Boundary tags [Knuth73]
Replicate the size/allocated word (called the footer) at the bottom of each block
Allows us to traverse the "list" backwards, but requires extra space
Important and general technique! — allows constant-time coalescing
[Figure: format of allocated and free blocks — a 1-word header (size | a), payload and padding, and a 1-word boundary tag (footer: size | a). a = 1: allocated block, a = 0: free block; size: total block size; payload: application data (allocated blocks only). Example heap: blocks of sizes 4, 4, 6, 4, each carrying a matching header and footer]
Constant Time Coalescing
[Figure: the block being freed has four cases based on its neighbors — Case 1: previous allocated, next allocated; Case 2: previous allocated, next free; Case 3: previous free, next allocated; Case 4: previous free, next free]
Constant Time Coalescing (Case 1)
[Figure: before — m1/1 (allocated), n/1 (being freed), m2/1 (allocated); after — only n's header and footer flip to free: m1/1, n/0, m2/1]
Constant Time Coalescing (Case 2)
[Figure: before — m1/1 (allocated), n/1 (being freed), m2/0 (free); after — n and m2 merge into one free block: m1/1, (n+m2)/0]
Constant Time Coalescing (Case 3)
[Figure: before — m1/0 (free), n/1 (being freed), m2/1 (allocated); after — m1 and n merge into one free block: (n+m1)/0, m2/1]
Constant Time Coalescing (Case 4)
[Figure: before — m1/0 (free), n/1 (being freed), m2/0 (free); after — all three merge into one free block: (n+m1+m2)/0]
Implicit Lists: Summary
Implementation is very simple
Allocate takes linear time in the worst case
Free takes constant time in the worst case -- even with coalescing
Memory usage will depend on placement policy
First fit, next fit or best fit
Not used in practice for malloc/free because allocation takes linear time.
Used for special-purpose applications where the total number of blocks is known beforehand to be small.
However, the concepts of splitting and boundary-tag coalescing are general to all allocators.
Keeping Track of Free Blocks
Method 1: Implicit list using lengths — links all blocks
[Figure: blocks of sizes 5, 4, 6, 2]
Method 2: Explicit list among the free blocks, using pointers within the free blocks
[Figure: free blocks of sizes 5, 4, 6, 2 linked by pointers]
Method 3: Segregated free lists
Different free lists for different size classes
Explicit Free Lists
[Figure: free blocks A, B, C chained by forward and back links stored in their payloads; each block still carries its size fields in header and footer]
Use the data space of free blocks for the pointers
Typically doubly linked
Still need boundary tags for coalescing
Format of Doubly-Linked Heap Blocks
[Figure: Allocated block — header (block size | a/f), payload, optional padding, footer (block size | a/f). Free block — header (block size | a/f), pred (predecessor) pointer, succ (successor) pointer, the old payload, optional padding, footer (block size | a/f)]
Freeing With Explicit Free Lists
Insertion policy: Where in the free list do you put a newly freed block?
LIFO (last-in-first-out) policy
Insert freed block at the beginning of the free list
(+) Simple and freeing a block can be performed in constant time.
If boundary tags are used, coalescing can also be performed in constant time.
Address-ordered policy
Insert freed blocks so that free list blocks are always in address order
i.e. addr(pred) < addr(curr) < addr(succ)
(-) Freeing a block requires linear-time search
(+) Studies suggest address-ordered first fit enjoys better memory utilization than
LIFO-ordered first fit.
Explicit List Summary
Comparison to implicit list:
Allocation takes time linear in the number of free blocks instead of the total number of blocks
Much faster allocation when most of the memory is full
Slightly more complicated allocate and free, since blocks must be spliced in and out of the list
Extra space for the links (2 extra words needed per block)
This results in a larger minimum block size, and can potentially increase the degree of internal fragmentation
The main use of linked lists is in conjunction with segregated free lists
Keep multiple linked lists of different size classes, or possibly for different types of objects
Keeping Track of Free Blocks
Method 1: Implicit list using lengths — links all blocks
[Figure: blocks of sizes 5, 4, 6, 2]
Method 2: Explicit list among the free blocks, using pointers within the free blocks
[Figure: free blocks of sizes 5, 4, 6, 2 linked by pointers]
Method 3: Segregated free list
Different free lists for different size classes
Can be used to reduce the allocation time compared to a single linked-list organization
Segregated Storage
Partition the set of all free blocks into equivalence classes called size classes
The allocator maintains an array of free lists, one free list per size class, ordered by increasing size
Example size classes: {1-2}, {3}, {4}, {5-8}, {9-16}, ...
Often there is a separate size class for every small size (2, 3, 4, ...)
Larger sizes typically have a size class for each power of 2
Variations of segregated storage
They differ in how they define size classes, when they perform coalescing, when they request additional heap memory from the OS, whether they allow splitting, and so on
Examples: simple segregated storage, segregated fits
Simple Segregated Storage
Separate heap and free list for each size class
The free list for each size class contains same-sized blocks of the largest element size
For example, the free list for size class {17-32} consists entirely of blocks of size 32
To allocate a block of size n:
If the free list for size n is not empty, allocate the first block in its entirety
If the free list is empty, get a new page from the OS, create a free list from all the blocks in the page, and then allocate the first block on the list
To free a block:
Simply insert the free block at the front of the appropriate free list
(+) Both allocating and freeing blocks are fast constant-time operations
(+) Little per-block memory overhead: no splitting and no coalescing
(-) Susceptible to internal and external fragmentation
Internal fragmentation: since free blocks are never split
External fragmentation: since free blocks are never coalesced
Segregated Fits
Array of free lists, each one for some size class
The free list for each size class contains potentially different-sized blocks
To allocate a block of size n:
Do a first-fit search of the appropriate free list
If an appropriate block is found:
Optionally split the block and place the fragment on the appropriate list
If no block is found, try the next larger size class; repeat until a block is found
If none of the free lists yields a block that fits, request additional heap memory from the OS, allocate the block out of this new memory, and place the remainder in the largest size class
To free a block:
Coalesce and place the result on the appropriate list
(+) Fast
Searches are limited to part of the heap rather than the entire heap area
However, coalescing can increase search times
(+) Good memory utilization
A simple first-fit search approximates a best-fit search of the entire heap
A popular choice for production-quality allocators such as GNU malloc
Garbage Collection
Garbage collector: dynamic storage allocator that automatically frees
allocated blocks that are no longer used
Implicit memory management: an application never has to free
void foo() {
int *p = malloc(128);
return; /* p block is now garbage */
}
Common in functional languages, scripting languages, and modern object
oriented languages:
Lisp, ML, Java, Perl, Mathematica, etc.
Variants (conservative garbage collectors) exist for C and C++
Cannot collect all garbage
Garbage Collection
How does the memory manager know when memory can be freed?
In general we cannot know what is going to be used in the future since it depends
on conditionals
But we can tell that certain blocks cannot be used if there are no pointers to them
Need to make certain assumptions about pointers
The memory manager needs to distinguish pointers from non-pointers
Garbage Collection
A garbage collector views memory as a reachability graph and periodically reclaims
the unreachable nodes
Classical GC Algorithms
Mark and sweep collection (McCarthy, 1960)
Does not move blocks (unless you also “compact”)
Reference counting (Collins, 1960)
Does not move blocks (not discussed)
Copying collection (Minsky, 1963)
Moves blocks (not discussed)
Memory as a Graph
Reachability graph: we view memory as a directed graph
Each block is a node in the graph
Each pointer is an edge in the graph
Locations not in the heap that contain pointers into the heap are called root nodes
e.g. registers, locations on the stack, global variables
[Figure: root nodes pointing into heap nodes; some heap nodes are reachable, the rest are not reachable (garbage)]
A node (block) is reachable if there is a path from any root to that node.
Non-reachable nodes are garbage (never needed by the application)
Mark and Sweep Garbage Collectors
A Mark&Sweep garbage collector consists of a mark phase followed by a
sweep phase
Use extra mark bit in the head of each block
When out of space:
Mark: Start at roots and set mark bit on all reachable memory blocks
Sweep: Scan all blocks and free blocks that are not marked
[Figure: heap before mark, after mark (mark bits set on all blocks reachable from the root), and after sweep (unmarked blocks freed)]
Mark and Sweep (cont.)
Mark using depth-first traversal of the memory graph
ptr mark(ptr p) {
   if (!is_ptr(p)) return;         // do nothing if not a pointer
   if (markBitSet(p)) return;      // check if already marked
   setMarkBit(p);                  // set the mark bit
   for (i = 0; i < length(p); i++)
      mark(p[i]);                  // mark all children
   return;
}
Sweep using lengths to find next block
ptr sweep(ptr p, ptr end) {
   while (p < end) {
      if (markBitSet(p))
         clearMarkBit(p);
      else if (allocateBitSet(p))
         free(p);
      p += length(p);
   }
}
Functions
is_ptr(p): if p is a pointer into an allocated block, return a pointer to the
beginning of that block; return NULL otherwise
markBitSet(b): return true if block b is already marked
setMarkBit(b) / clearMarkBit(b): set or clear the mark bit of block b
allocateBitSet(b): return true if block b is allocated
length(b): return the length of block b
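To make the pseudocode concrete, here is a toy mark-and-sweep over a fixed array standing in for the heap. The node layout, the allocated[] bitmap, and the two-children limit are all simplifications invented for this sketch, not the scheme a real collector uses.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_NODES 8
#define MAX_KIDS  2

/* Toy heap node: a mark bit plus up to two outgoing pointers. */
typedef struct node {
    int marked;
    struct node *kids[MAX_KIDS];
} node_t;

static node_t heap[MAX_NODES];             /* the whole "heap" */
static int allocated[MAX_NODES];           /* which slots are in use */

/* Mark phase: depth-first traversal from a root. */
static void mark(node_t *p) {
    if (p == NULL || p->marked) return;    /* not a pointer / already marked */
    p->marked = 1;                         /* set the mark bit */
    for (int i = 0; i < MAX_KIDS; i++)     /* mark all children */
        mark(p->kids[i]);
}

/* Sweep phase: free unmarked blocks, clear surviving marks; return count freed. */
static int sweep(void) {
    int freed = 0;
    for (int i = 0; i < MAX_NODES; i++) {
        if (!allocated[i]) continue;
        if (heap[i].marked)
            heap[i].marked = 0;            /* survivor: reset for next cycle */
        else {
            allocated[i] = 0;              /* unreachable: reclaim */
            freed++;
        }
    }
    return freed;
}
```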
Common Memory-Related Bugs in C
Dereferencing bad pointers
Reading uninitialized memory
Stack buffer overflow
Assuming pointers and the objects they point to are the same size
Making Off-by-One errors
Referencing a pointer instead of the object
Misunderstanding pointer arithmetic
Referencing nonexistent variables
Freeing blocks multiple times
Referencing freed blocks
Memory leaks
Dereferencing Bad Pointers
Bad pointers
There are large holes in the virtual address space of a process that are not mapped
to any meaningful data.
If we attempt to dereference a pointer into one of these holes, the process will
cause a segmentation exception
The classic scanf bug
Read an integer from stdin into a variable
scanf("%d", val);
In the best case, the program terminates immediately with an exception
In the worst case, the content of val corresponds to some valid read/write area, and we
overwrite memory, usually with disastrous consequences much later
The correct form is
scanf("%d", &val);
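A small illustration of why the & matters. sscanf replaces scanf here only so the input is a fixed string rather than stdin; parse_int is a name invented for this sketch.

```c
#include <assert.h>
#include <stdio.h>

/* Parse an integer from a string. The & is essential: scanf-family
   functions need the ADDRESS of val to write into; passing val would
   hand them its (garbage) value as a bogus pointer. */
int parse_int(const char *s) {
    int val = 0;
    sscanf(s, "%d", &val);   /* correct: &val, not val */
    return val;
}
```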
Reading Uninitialized Memory
Assuming that heap data is initialized to zero
While .bss sections are always initialized to zeros by the loader, this is not true for
heap memory.
/* return y = Ax */
int *matvec(int **A, int *x) {
   int *y = malloc(N*sizeof(int));
   int i, j;
   for (i=0; i<N; i++)
      for (j=0; j<N; j++)
         y[i] += A[i][j]*x[j];
   return y;
}
Should use 'calloc' instead of 'malloc'
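The fix in code: calloc zero-fills y before the accumulation begins. N is shrunk to 2 here just for illustration.

```c
#include <assert.h>
#include <stdlib.h>

#define N 2   /* small dimension for illustration */

/* return y = Ax, with y zero-initialized by calloc */
int *matvec(int **A, int *x) {
    int *y = calloc(N, sizeof(int));   /* every y[i] starts at zero */
    if (y == NULL) return NULL;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            y[i] += A[i][j] * x[j];
    return y;
}
```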
Stack Overflow
Buffer overflow
A program can run into a buffer overflow bug if it writes to a target buffer on the
stack without examining the size of the input string
void bufoverflow()
{
   char buf[64];
   gets(buf);
   return;
}
The gets function copies an arbitrarily long string into the buffer.
To fix this, use fgets instead, which limits the size of the input string.
Basis for classic buffer overflow attacks
1988 Internet worm
Modern attacks on Web servers
AOL/Microsoft IM war
Pointers and the Objects are Different in Size
Allocating the (possibly) wrong sized object
Create an array of n pointers, each of which points to an array of m ints.
int **p;
p = malloc(N*sizeof(int));
for (i=0; i<N; i++) {
p[i] = malloc(M*sizeof(int));
}
If we run this code on an Alpha processor, where a pointer is larger than an int,
the for loop will write past the end of the p array.
Should use sizeof(int *) for the first malloc
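A corrected version: sizeof *p asks the compiler for the pointer size, so the code stays right even where sizeof(int *) != sizeof(int). make_table is a name invented for this sketch.

```c
#include <assert.h>
#include <stdlib.h>

/* Allocate an n-by-m table of ints: n row pointers, each to m ints. */
int **make_table(size_t n, size_t m) {
    int **p = malloc(n * sizeof *p);     /* sizeof *p == sizeof(int *) */
    if (p == NULL) return NULL;
    for (size_t i = 0; i < n; i++)
        p[i] = malloc(m * sizeof(int));  /* each row holds m ints */
    return p;
}
```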
Off-by-One Errors
Off-by-one error
Try to initialize n+1 elements instead of n
int **p;
p = malloc(N*sizeof(int *));
for (i=0; i<=N; i++) {
p[i] = malloc(M*sizeof(int));
}
Pointer vs Object
Referencing a pointer instead of the object it points to
int *BinheapDelete(int **binheap, int *size) {
int *packet;
packet = binheap[0];
binheap[0] = binheap[*size - 1];
*size--;
Heapify(binheap, *size, 0);
return(packet);
}
The two unary operators -- and * have the same precedence and associate from right to left
So *size-- will decrement the pointer and then dereference its old value
The correct form is
(*size)--
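The precedence fix in isolation; shrink is a name invented for this sketch.

```c
#include <assert.h>

/* Decrement the int that size points to. Without the parentheses,
   *size-- decrements the POINTER and dereferences its old value. */
void shrink(int *size) {
    (*size)--;   /* parentheses force "decrement the object" */
}
```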
Pointer Arithmetic
Misunderstanding pointer arithmetic
int *search(int *p, int val) {
while (*p && *p != val)
p += sizeof(int);
return p;
}
p += sizeof(int) advances p by four elements (pointer arithmetic already scales by the
element size), so the loop incorrectly scans only every fourth integer in the array
The correct form is p++
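The corrected search, scanning a zero-terminated int array one element at a time:

```c
#include <assert.h>

/* Scan a zero-terminated int array for val. p++ already advances by
   one element (the compiler scales by sizeof(int)); p += sizeof(int)
   would skip ahead four elements at a time. */
int *search(int *p, int val) {
    while (*p && *p != val)
        p++;
    return p;
}
```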
Referencing Nonexistent Variables
Forgetting that local variables disappear when a function returns
Later, if the program assigns some value to the pointer, it might modify an entry
in another function’s stack frame
int *foo () {
int val;
return &val;
}
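One fix is to return heap memory instead of the address of a local, so the object outlives the call. make_int is a name invented for this sketch; the caller owns the block and must free it.

```c
#include <assert.h>
#include <stdlib.h>

/* Unlike &val above, this pointer stays valid after the function
   returns, because the int lives on the heap, not the stack frame. */
int *make_int(int v) {
    int *p = malloc(sizeof *p);
    if (p != NULL) *p = v;
    return p;   /* caller must free */
}
```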
Referencing Freed Blocks
Evil!
Reference data in heap blocks that have already been freed!
x = malloc(N*sizeof(int));
<manipulate x>
free(x);
...
y = malloc(M*sizeof(int));
for (i=0; i<M; i++)
y[i] = x[i]++;
Failing to Free Blocks (Memory Leaks)
Slow, long-term killer!
foo() {
int *x = malloc(N*sizeof(int));
...
return;
}
Memory leaks are particularly serious for programs such as daemons
and servers, which by definition never terminate.
Homework 7
Read Chapter 8 from Computer System Textbook
Exercise
9.11
9.13
9.15
9.17
9.19