Dynamic Memory Allocation
Topics

Simple explicit allocators
 Data structures
 Mechanisms
 Policies
Harsh Reality

Memory Matters
 Memory is not unbounded
  It must be allocated and managed
 Many applications are memory dominated
  Especially those based on complex graph algorithms
 Memory referencing bugs are especially pernicious
  Effects are distant in both time and space
 Memory performance is not uniform
  Cache and virtual memory effects can greatly affect program performance
  Adapting a program to the characteristics of the memory system can lead to major speed improvements
Dynamic Memory Allocation

[Figure: the application sits on top of the dynamic memory allocator, which manages the heap memory.]

Explicit vs. implicit memory allocator
 Explicit: application allocates and frees space
  E.g., malloc and free in C
 Implicit: application allocates, but does not free space
  E.g., garbage collection in Java, ML, or Lisp

Allocation
 In both cases the memory allocator provides an abstraction of memory as a set of blocks
 Doles out free memory blocks to the application
Process Memory Image

[Figure: process address space from address 0 upward -- program text (.text), initialized data (.data), uninitialized data (.bss), the run-time heap (grown via malloc) ending at the “brk” pointer, a memory-mapped region for shared libraries, the stack (top marked by %esp), and kernel virtual memory that is invisible to user code.]

Allocators request additional heap memory from the operating system using the sbrk function.
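
A minimal sketch (not part of the original slides) of how an allocator might ask the OS for more heap with sbrk; the helper name extend_heap and the 4 KB growth unit are assumptions for illustration:

   #include <unistd.h>    /* sbrk */
   #include <stdio.h>

   #define CHUNK 4096     /* assumed growth unit: one page */

   /* Grow the heap by one chunk and return the start of the new region. */
   static void *extend_heap(size_t bytes) {
       void *old_brk = sbrk(bytes);    /* old break = start of new space */
       if (old_brk == (void *) -1)
           return NULL;                /* the OS refused to grow the heap */
       return old_brk;
   }

   int main(void) {
       void *p = extend_heap(CHUNK);
       printf("new heap region starts at %p\n", p);
       return 0;
   }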
Malloc Package

#include <stdlib.h>

void *malloc(size_t size)
 If successful:
  Returns a pointer to a memory block of at least size bytes, (typically) aligned to an 8-byte boundary
  If size == 0, returns NULL
 If unsuccessful: returns NULL (0) and sets errno

void free(void *p)
 Returns the block pointed at by p to the pool of available memory
 p must come from a previous call to malloc or realloc

void *realloc(void *p, size_t size)
 Changes the size of block p and returns a pointer to the new block
 Contents of the new block are unchanged up to the min of the old and new size
Malloc Example

   void foo(int n, int m) {
      int i, *p;

      /* allocate a block of n ints */
      if ((p = (int *) malloc(n * sizeof(int))) == NULL) {
         perror("malloc");
         exit(0);
      }
      for (i = 0; i < n; i++)
         p[i] = i;

      /* grow the p block to hold m more ints */
      if ((p = (int *) realloc(p, (n+m) * sizeof(int))) == NULL) {
         perror("realloc");
         exit(0);
      }
      for (i = n; i < n+m; i++)
         p[i] = i;

      /* print new array */
      for (i = 0; i < n+m; i++)
         printf("%d\n", p[i]);

      free(p); /* return p to available memory pool */
   }
Assumptions

Assumptions made in this lecture
 Memory is word addressed (each word can hold a pointer)

[Figure: an allocated block of 4 words next to a free block of 3 words; allocated and free words are shaded differently.]
Allocation Examples

[Figure: heap state after each request in the sequence]
 p1 = malloc(4)
 p2 = malloc(5)
 p3 = malloc(6)
 free(p2)
 p4 = malloc(2)
Constraints

Applications:
 Can issue arbitrary sequences of allocation and free requests
 Free requests must correspond to an allocated block

Allocators:
 Can’t control the number or size of allocated blocks
 Must respond immediately to all allocation requests
  i.e., can’t reorder or buffer requests
 Must allocate blocks from free memory
  i.e., can only place allocated blocks in free memory
 Must align blocks so they satisfy all alignment requirements
  8-byte alignment for GNU malloc (libc malloc) on Linux boxes
 Can only manipulate and modify free memory
 Can’t move the allocated blocks once they are allocated
  i.e., compaction is not allowed
Goals of Good malloc/free

Primary goals
 Good time performance for malloc and free
  Ideally should take constant time (not always possible)
  Should certainly not take time linear in the number of blocks
 Good space utilization
  User-allocated structures should be a large fraction of the heap
  Want to minimize “fragmentation”

Some other goals
 Good locality properties
  Structures allocated close in time should be close in space
  “Similar” objects should be allocated close in space
 Robustness
  Can check that free(p1) is on a valid allocated object p1
  Can check that memory references are to allocated space
Performance Goals: Throughput

Given some sequence of malloc and free requests:
 R_0, R_1, ..., R_k, ..., R_{n-1}

Want to maximize throughput and peak memory utilization
 These goals are often conflicting

Throughput:
 Number of completed requests per unit time
 Example:
  5,000 malloc calls and 5,000 free calls in 10 seconds
  Throughput is 1,000 operations/second
Performance Goals: Peak Memory Utilization

Given some sequence of malloc and free requests:
 R_0, R_1, ..., R_k, ..., R_{n-1}

Def: Aggregate payload P_k
 malloc(p) results in a block with a payload of p bytes.
 After request R_k has completed, the aggregate payload P_k is the sum of currently allocated payloads.

Def: Current heap size is denoted by H_k
 Assume that H_k is monotonically nondecreasing

Def: Peak memory utilization
 After k requests, peak memory utilization is:
  U_k = ( max_{i<k} P_i ) / H_k
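
As a made-up illustration: if the largest aggregate payload seen over the first k requests is 512 KB and the heap has grown to 640 KB, then U_k = 512/640 = 0.8.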
Internal Fragmentation

Poor memory utilization is caused by fragmentation
 Comes in two forms: internal and external fragmentation

Internal fragmentation
 For some block, internal fragmentation is the difference between the block size and the payload size.

[Figure: a block containing a payload, with internal fragmentation on either side of the payload.]

 Caused by the overhead of maintaining heap data structures, padding for alignment purposes, or explicit policy decisions (e.g., not to split the block).
 Depends only on the pattern of previous requests, and thus is easy to measure.
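
For example (with assumed numbers): with a one-word (4-byte) header and 8-byte alignment, a request for a 13-byte payload occupies a 24-byte block (4-byte header + 13-byte payload + 7 bytes of padding), so the internal fragmentation is 24 - 13 = 11 bytes.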
External Fragmentation

Occurs when there is enough aggregate heap memory, but no single free block is large enough

 p1 = malloc(4)
 p2 = malloc(5)
 p3 = malloc(6)
 free(p2)
 p4 = malloc(6)    <-- oops! no single free block is large enough

External fragmentation depends on the pattern of future requests, and thus is difficult to measure.
Implementation Issues

 How do we know how much memory to free given just a pointer?
 How do we keep track of the free blocks?
 What do we do with the extra space when allocating a structure that is smaller than the free block it is placed in?
 How do we pick a block to use for allocation -- many might fit?
 How do we reinsert a freed block?

[Figure: a block p0 is freed, then p1 = malloc(1) must be placed somewhere in the heap.]
Knowing How Much to Free

Standard method
 Keep the length of a block in the word preceding the block.
  This word is often called the header field or header
 Requires an extra word for every allocated block

[Figure: p0 = malloc(4) returns pointer p0; the word just before the data holds the block size (5, counting the header itself), so free(p0) can find it there.]
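
A small illustrative sketch (not the lecture's allocator) of this header trick, built on top of the system malloc; the names my_alloc, block_size, and my_free are hypothetical:

   #include <stdlib.h>

   /* Store the total block size in the word just before the payload. */
   void *my_alloc(size_t payload) {
       size_t *block = malloc(sizeof(size_t) + payload);  /* header + payload */
       if (block == NULL)
           return NULL;
       block[0] = sizeof(size_t) + payload;  /* header records the block size */
       return block + 1;                     /* hand out only the payload */
   }

   /* Recover the size from the header sitting at p[-1]. */
   size_t block_size(void *p) {
       return ((size_t *)p)[-1];
   }

   void my_free(void *p) {
       free((size_t *)p - 1);                /* step back over the header */
   }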
Keeping Track of Free Blocks

Method 1: Implicit list using lengths -- links all blocks
 [Figure: blocks of sizes 5, 4, 6, and 2 laid out contiguously; the lengths let you step from each block to the next.]

Method 2: Explicit list among the free blocks, using pointers within the free blocks
 [Figure: the same heap, but only the free blocks are linked to each other by pointers.]

Method 3: Segregated free list
 Different free lists for different size classes

Method 4: Blocks sorted by size
 Can use a balanced tree (e.g., a red-black tree) with pointers within each free block, and the length used as a key
Method 1: Implicit List

Need to identify whether each block is free or allocated
 Can use an extra bit
 The bit can be put in the same word as the size if block sizes are always multiples of two (mask out the low-order bit when reading the size)

Format of allocated and free blocks (each row is 1 word):
 size | a           a = 1: allocated block, a = 0: free block; size: block size
 payload            payload: application data (allocated blocks only)
 optional padding
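
Illustrative helpers in the spirit of this slide (the names pack, get_size, and is_allocated are assumptions) for packing the size and the allocated bit into one header word; this only works if sizes are always even so the low bit is free:

   #include <stdint.h>

   typedef uintptr_t word_t;   /* one header word */

   static inline word_t pack(word_t size, int allocated) {
       return size | (allocated & 1);     /* low bit holds the allocated flag */
   }
   static inline word_t get_size(word_t header) {
       return header & ~(word_t)1;        /* mask out the low-order bit */
   }
   static inline int is_allocated(word_t header) {
       return header & 1;
   }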
Implicit List: Finding a Free Block

First fit
 Search the list from the beginning, choose the first free block that fits:

   p = start;
   while ((p < end) &&        // not passed end
          ((*p & 1) ||        // already allocated
           (*p <= len)))      // too small
      p = p + (*p & -2);      // go to next block

 Can take time linear in the total number of blocks (allocated and free)
 In practice it can cause “splinters” at the beginning of the list

Next fit
 Like first fit, but search the list starting from where the previous search ended (sketched after this slide)
 Research suggests that fragmentation is worse

Best fit
 Search the list, choose the free block with the closest size that fits
 Keeps fragments small --- usually helps fragmentation
 Will typically run slower than first fit
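
The slides give next fit only in prose, so here is a hypothetical sketch in the same word-pointer style as the first-fit loop above; the rover variable and the globals start and end (word pointers bounding the heap) are assumptions:

   #include <stddef.h>

   typedef int *ptr;
   extern ptr start, end;      /* heap bounds, maintained elsewhere */
   static ptr rover;           /* where the previous search stopped */

   ptr next_fit(int len) {
       ptr p = rover ? rover : start;
       ptr origin = p;
       do {
           if (!(*p & 1) && (*p > len)) {   /* free and big enough */
               rover = p;
               return p;
           }
           p = p + (*p & -2);               /* go to next block */
           if (p >= end)
               p = start;                   /* wrap around the heap */
       } while (p != origin);
       return NULL;                         /* no block fits */
   }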
Implicit List: Allocating in a Free Block

Allocating in a free block -- splitting
 Since the allocated space might be smaller than the free space, we might want to split the block

   void addblock(ptr p, int len) {
      int newsize = ((len + 1) >> 1) << 1;   // add 1 and round up
      int oldsize = *p & -2;                 // mask out low bit
      *p = newsize | 1;                      // set new length
      if (newsize < oldsize)
         *(p+newsize) = oldsize - newsize;   // set length in remaining
   }                                         //   part of block

[Figure: heap of blocks 4, 4, 6, 2 with p at the 6-word free block; after addblock(p, 2) that free block has been split into an allocated block and a smaller free block.]
Implicit List: Freeing a Block

Simplest implementation
 Only need to clear the allocated flag

   void free_block(ptr p) { *p = *p & -2; }

 But this can lead to “false fragmentation”

[Figure: after free(p), the freed block and the free block next to it remain two separate free blocks; a later malloc(5) then fails -- oops! There is enough free space, but the allocator won’t be able to find it.]
Implicit List: Coalescing

Join (coalesce) with the next and/or previous block if they are free
 Coalescing with the next block:

   void free_block(ptr p) {
      *p = *p & -2;          // clear allocated flag
      next = p + *p;         // find next block
      if ((*next & 1) == 0)
         *p = *p + *next;    // add to this block if
   }                         //   not allocated

[Figure: blocks 4, 4, 4, 2, 2 with p at the middle 4-word block; after free(p), the freed block and the free 2-word block after it merge into one 6-word block, giving 4, 4, 6, 2.]

But how do we coalesce with the previous block?
Implicit List: Bidirectional Coalescing

Boundary tags [Knuth73]
 Replicate the size/allocated word at the bottom (end) of free blocks
 Allows us to traverse the “list” backwards, but requires extra space
 Important and general technique!

Format of allocated and free blocks (each row is 1 word):
 header:                 size | a
 payload and padding
 boundary tag (footer):  size | a
 (a = 1: allocated block, a = 0: free block; size: total block size; payload: application data, allocated blocks only)

[Figure: a heap of blocks of sizes 4, 4, 6, and 4, each with matching header and footer tags.]
Constant Time Coalescing

Four cases, depending on whether the blocks before and after the block being freed are allocated or free:
 Case 1: previous allocated, next allocated
 Case 2: previous allocated, next free
 Case 3: previous free, next allocated
 Case 4: previous free, next free
Constant Time Coalescing (Case 1)
 Previous and next blocks are both allocated: just clear the allocated bit; the header and footer keep size n.

Constant Time Coalescing (Case 2)
 Next block (size m2) is free: the freed block absorbs it; the new header and footer record size n+m2 with the allocated bit 0.

Constant Time Coalescing (Case 3)
 Previous block (size m1) is free: the freed block is absorbed into it; the merged block’s header and footer record size n+m1 with the allocated bit 0.

Constant Time Coalescing (Case 4)
 Both neighbors are free: all three blocks merge into one block of size n+m1+m2 with the allocated bit 0.
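
A sketch of the four cases in code (an illustration, not the textbook's exact implementation); it assumes a word-addressed heap where sizes are counted in words, headers and footers both hold size|alloc, and a block's header is at p[0] with its footer at p[size-1]:

   #include <stddef.h>

   typedef size_t word_t;

   #define SIZE(w)   ((w) & ~(word_t)1)
   #define ALLOC(w)  ((w) & 1)

   /* p points at the header of the block being freed; returns the header
      of the (possibly larger) coalesced free block. */
   static word_t *coalesce(word_t *p) {
       word_t size = SIZE(*p);
       word_t *next_hdr = p + size;            /* header of the next block  */
       word_t  prev_ftr = *(p - 1);            /* footer of the prev block  */
       int prev_alloc = ALLOC(prev_ftr);
       int next_alloc = ALLOC(*next_hdr);

       if (prev_alloc && next_alloc) {         /* case 1: nothing to merge  */
           /* size stays n */
       } else if (prev_alloc && !next_alloc) { /* case 2: absorb next       */
           size += SIZE(*next_hdr);            /* n + m2                    */
       } else if (!prev_alloc && next_alloc) { /* case 3: absorb into prev  */
           p -= SIZE(prev_ftr);                /* block now starts earlier  */
           size += SIZE(prev_ftr);             /* n + m1                    */
       } else {                                /* case 4: absorb both       */
           size += SIZE(prev_ftr) + SIZE(*next_hdr);   /* n + m1 + m2       */
           p -= SIZE(prev_ftr);
       }
       *p = size;                              /* new header, alloc bit = 0 */
       *(p + size - 1) = size;                 /* matching footer           */
       return p;
   }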
Summary of Key Allocator Policies

Placement policy
 First fit, next fit, best fit, etc.
 Trades off lower throughput for less fragmentation
  Interesting observation: segregated free lists (next lecture) approximate a best-fit placement policy without having to search the entire free list

Splitting policy
 When do we go ahead and split free blocks?
 How much internal fragmentation are we willing to tolerate?

Coalescing policy
 Immediate coalescing: coalesce adjacent blocks each time free is called
 Deferred coalescing: try to improve the performance of free by deferring coalescing until it is needed, e.g.:
  Coalesce as you scan the free list for malloc
  Coalesce when the amount of external fragmentation reaches some threshold
Implicit Lists: Summary

 Implementation: very simple
 Allocate: linear time worst case
 Free: constant time worst case -- even with coalescing
 Memory usage: will depend on placement policy
  First fit, next fit, or best fit

Not used in practice for malloc/free because allocation takes linear time. Used in many special-purpose applications.

However, the concepts of splitting and boundary-tag coalescing are general to all allocators.
Keeping Track of Free Blocks

 Method 1: Implicit list using lengths -- links all blocks
 Method 2: Explicit list among the free blocks, using pointers within the free blocks
 Method 3: Segregated free lists
  Different free lists for different size classes
 Method 4: Blocks sorted by size (not discussed)
  Can use a balanced tree (e.g., a red-black tree) with pointers within each free block, and the length used as a key
Explicit Free Lists

Use data space for link pointers
 Typically doubly linked
 Still need boundary tags for coalescing

[Figure: free blocks A, B, and C connected by forward and back links stored inside the blocks, alongside their boundary tags.]

It is important to realize that the links are not necessarily in the same order as the blocks.
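
An illustrative layout (an assumption, not code from the lecture) of a free block on the doubly linked explicit list; the pred/succ pointers live in the otherwise unused payload area, and boundary tags are still kept for coalescing:

   #include <stddef.h>

   typedef struct free_block {
       size_t header;               /* size | allocated bit                  */
       struct free_block *pred;     /* previous free block (any address)     */
       struct free_block *succ;     /* next free block (any address)         */
       /* ... rest of the free space, then a footer replicating the header   */
   } free_block;

   static free_block *free_list_head;   /* start of the explicit free list */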
Allocating From Explicit Free Lists

[Figure: before allocation, a free block sits on the list between its predecessor (pred) and successor (succ); after allocation with splitting, the allocated portion leaves the list and the leftover free fragment takes its place between pred and succ.]
Freeing With Explicit Free Lists

Insertion policy: where in the free list do you put a newly freed block?

 LIFO (last-in-first-out) policy
  Insert the freed block at the beginning of the free list
  Pro: simple and constant time
  Con: studies suggest fragmentation is worse than address ordered

 Address-ordered policy
  Insert freed blocks so that free list blocks are always in address order
   i.e., addr(pred) < addr(curr) < addr(succ)
  Con: requires search
  Pro: studies suggest fragmentation is better than LIFO
Freeing With a LIFO Policy

Case 1: a-a-a (both neighbors allocated)
 Insert self at the beginning of the free list

Case 2: a-a-f (next block free)
 Splice out next, coalesce self and next, and add the merged block to the beginning of the free list

[Figures: before/after views of the free list for cases 1 and 2, showing the pred (p) and succ (s) pointers of the affected blocks.]

Freeing With a LIFO Policy (cont)

Case 3: f-a-a (previous block free)
 Splice out prev, coalesce with self, and add the merged block to the beginning of the free list

Case 4: f-a-f (both neighbors free)
 Splice out prev and next, coalesce with self, and add the merged block to the beginning of the list

[Figures: before/after views for cases 3 and 4, showing the pred/succ pointers (p1, s1, p2, s2) of the spliced blocks.]
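
The two list operations these cases rely on, sketched with the free_block struct assumed earlier (splice_out and insert_at_head are hypothetical names):

   /* Remove block b from the doubly linked free list, e.g. when coalescing
      splices out a free neighbor. */
   static void splice_out(free_block *b) {
       if (b->pred) b->pred->succ = b->succ;
       else         free_list_head = b->succ;
       if (b->succ) b->succ->pred = b->pred;
   }

   /* LIFO policy: push the (possibly coalesced) block onto the list head. */
   static void insert_at_head(free_block *b) {
       b->pred = NULL;
       b->succ = free_list_head;
       if (free_list_head)
           free_list_head->pred = b;
       free_list_head = b;
   }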
Explicit List Summary

Comparison to implicit list
 Allocate is linear time in the number of free blocks instead of all blocks -- much faster allocation when most of the memory is full
 Slightly more complicated allocate and free, since blocks need to be spliced in and out of the list
 Some extra space for the links (2 extra words needed for each block)

Main use of linked lists is in conjunction with segregated free lists
 Keep multiple linked lists of different size classes, or possibly for different types of objects
Keeping Track of Free Blocks

 Method 1: Implicit list using lengths -- links all blocks
 Method 2: Explicit list among the free blocks, using pointers within the free blocks
 Method 3: Segregated free list
  Different free lists for different size classes
 Method 4: Blocks sorted by size
  Can use a balanced tree (e.g., a red-black tree) with pointers within each free block, and the length used as a key
Segregated Storage

Each size class has its own collection of blocks
 [Figure: separate free lists for size classes 1-2, 3, 4, 5-8, 9-16, ...]
 Often have a separate size class for every small size (2, 3, 4, ...)
 For larger sizes, typically have a size class for each power of 2
Simple Segregated Storage

Separate heap and free list for each size class
No splitting

To allocate a block of size n:
 If the free list for size n is not empty,
  allocate the first block on the list (note: the list can be implicit or explicit)
 If the free list is empty,
  get a new page
  create a new free list from all the blocks in the page (sketched after this slide)
  allocate the first block on the list
 Constant time

To free a block:
 Add to the free list
 If the page is empty, return the page for use by another size class (optional)

Tradeoffs:
 Fast, but can fragment badly
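
A hypothetical sketch of the “get a new page and chop it into blocks” step; PAGE_SIZE, the use of sbrk, the name refill_class, and the intrusive next pointer stored at the start of each free block are all assumptions (block_size must divide PAGE_SIZE and be at least sizeof(void *)):

   #include <unistd.h>
   #include <stddef.h>

   #define PAGE_SIZE 4096

   /* Carve a fresh page into equal blocks and return the head of the
      resulting singly linked free list (NULL if the OS refuses). */
   static void *refill_class(size_t block_size) {
       char *page = sbrk(PAGE_SIZE);
       if (page == (void *) -1)
           return NULL;
       char *last = page + PAGE_SIZE - block_size;
       for (char *b = page; b < last; b += block_size)
           *(void **)b = b + block_size;   /* each block points to the next */
       *(void **)last = NULL;              /* last block ends the list      */
       return page;
   }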
Segregated Fits

Array of free lists, each one for some size class (a size-to-class mapping is sketched after this slide)

To allocate a block of size n:
 Search the appropriate free list for a block of size m > n
 If an appropriate block is found:
  Split the block and place the fragment on the appropriate list (optional)
 If no block is found, try the next larger class
 Repeat until a block is found

To free a block:
 Coalesce and place on the appropriate list (optional)

Tradeoffs
 Faster search than sequential fits (i.e., log time for power-of-two size classes)
 Controls the fragmentation of simple segregated storage
 Coalescing can increase search times
  Deferred coalescing can help
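
One way (an assumption, consistent with “a size class for each power of 2”) to map a request size to a size-class index; class k would hold free blocks of up to 2^k words:

   #include <stddef.h>

   static int size_class(size_t words) {
       int k = 0;
       while (((size_t)1 << k) < words)
           k++;
       return k;    /* e.g. 5 words -> class 3 (blocks of up to 8 words) */
   }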
For More Info on Allocators

D. Knuth, “The Art of Computer Programming, Second Edition”, Addison Wesley, 1973
 The classic reference on dynamic storage allocation

Wilson et al., “Dynamic Storage Allocation: A Survey and Critical Review”, Proc. 1995 Int’l Workshop on Memory Management, Kinross, Scotland, Sept. 1995
 Comprehensive survey
 Available from the CS:APP student site (csapp.cs.cmu.edu)
Implicit Memory Management: Garbage Collection

Garbage collection: automatic reclamation of heap-allocated storage -- the application never has to free

   void foo() {
      int *p = malloc(128);
      return; /* p block is now garbage */
   }

Common in functional languages, scripting languages, and modern object-oriented languages:
 Lisp, ML, Java, Perl, Mathematica, ...

Variants (conservative garbage collectors) exist for C and C++
 Cannot collect all garbage
Garbage Collection

How does the memory manager know when memory can be freed?
 In general we cannot know what is going to be used in the future, since it depends on conditionals
 But we can tell that certain blocks cannot be used if there are no pointers to them

Need to make certain assumptions about pointers
 The memory manager can distinguish pointers from non-pointers
 All pointers point to the start of a block
 Cannot hide pointers (e.g., by coercing them to an int, and then back again)
Classical GC algorithms

Mark-and-sweep collection (McCarthy, 1960)
 Does not move blocks (unless you also “compact”)

Reference counting (Collins, 1960)
 Does not move blocks (not discussed)

Copying collection (Minsky, 1963)
 Moves blocks (not discussed)

For more information, see Jones and Lins, “Garbage Collection: Algorithms for Automatic Dynamic Memory Management”, John Wiley & Sons, 1996.
Memory as a Graph

We view memory as a directed graph
 Each block is a node in the graph
 Each pointer is an edge in the graph
 Locations not in the heap that contain pointers into the heap are called root nodes (e.g., registers, locations on the stack, global variables)

[Figure: root nodes outside the heap point into a graph of heap nodes; some heap nodes are reachable from a root, others are not.]

A node (block) is reachable if there is a path from some root to that node.
Non-reachable nodes are garbage (never needed by the application).
Assumptions For This Lecture

Application
 new(n): returns a pointer to a new block with all locations cleared
 read(b,i): read location i of block b into a register
 write(b,i,v): write v into location i of block b

Each block will have a header word
 addressed as b[-1], for a block b
 used for different purposes in different collectors

Instructions used by the garbage collector
 is_ptr(p): determines whether p is a pointer
 length(b): returns the length of block b, not including the header
 get_roots(): returns all the roots
Mark and Sweep Collecting

Can build on top of the malloc/free package
 Allocate using malloc until you “run out of space”

When out of space:
 Use an extra mark bit in the header of each block
 Mark: start at the roots and set the mark bit on all reachable memory
 Sweep: scan all blocks and free the blocks that are not marked

[Figure: before the mark phase, a root points into a chain of heap blocks; after the mark phase, the reachable blocks have their mark bit set; after the sweep phase, the unmarked blocks have been freed.]
Mark and Sweep (cont.)

Mark using depth-first traversal of the memory graph

   ptr mark(ptr p) {
      if (!is_ptr(p)) return;        // do nothing if not pointer
      if (markBitSet(p)) return;     // check if already marked
      setMarkBit(p);                 // set the mark bit
      for (i=0; i < length(p); i++)  // mark all children
         mark(p[i]);
      return;
   }

Sweep using lengths to find the next block

   ptr sweep(ptr p, ptr end) {
      while (p < end) {
         if (markBitSet(p))
            clearMarkBit(p);
         else if (allocateBitSet(p))
            free(p);
         p += length(p);
      }
   }
Conservative Mark and Sweep in C

A conservative collector for C programs
 is_ptr() determines if a word is a pointer by checking if it points to an allocated block of memory
 But in C, pointers can point into the middle of a block

So how do we find the beginning of the block?
 Can use a balanced tree to keep track of all allocated blocks, where the key is the location
 Balanced-tree pointers can be stored in the header (using two additional words)

[Figure: a pointer ptr lands in the middle of a block; each block’s header holds the size plus left and right tree pointers, so the containing block can be found in the balanced tree.]
Memory-Related Bugs
Dereferencing bad pointers
Reading uninitialized memory
Overwriting memory
Referencing nonexistent variables
Freeing blocks multiple times
Referencing freed blocks
Failing to free blocks
Dereferencing Bad Pointers

The classic scanf bug

   scanf("%d", val);   /* should be scanf("%d", &val) */
Reading Uninitialized Memory

Assuming that heap data is initialized to zero

   /* return y = Ax */
   int *matvec(int **A, int *x) {
      int *y = malloc(N*sizeof(int));   /* bug: malloc does not zero memory */
      int i, j;

      for (i=0; i<N; i++)
         for (j=0; j<N; j++)
            y[i] += A[i][j]*x[j];       /* reads uninitialized y[i] */
      return y;
   }
Overwriting Memory

Allocating the (possibly) wrong sized object

   int **p;

   p = malloc(N*sizeof(int));      /* bug: should be N*sizeof(int *) */

   for (i=0; i<N; i++) {
      p[i] = malloc(M*sizeof(int));
   }
Overwriting Memory

Off-by-one error

   int **p;

   p = malloc(N*sizeof(int *));

   for (i=0; i<=N; i++) {          /* bug: writes p[N], one past the end */
      p[i] = malloc(M*sizeof(int));
   }
Overwriting Memory

Not checking the maximum string size

   char s[8];
   int i;

   gets(s);   /* reads "123456789" from stdin -- overflows s */

Basis for classic buffer overflow attacks
 1988 Internet worm
 Modern attacks on Web servers
 AOL/Microsoft IM war
Overwriting Memory

Referencing a pointer instead of the object it points to

   int *BinheapDelete(int **binheap, int *size) {
      int *packet;
      packet = binheap[0];
      binheap[0] = binheap[*size - 1];
      *size--;                    /* bug: decrements the pointer, not the count; should be (*size)-- */
      Heapify(binheap, *size, 0);
      return(packet);
   }
Overwriting Memory

Misunderstanding pointer arithmetic

   int *search(int *p, int val) {
      while (*p && *p != val)
         p += sizeof(int);        /* bug: advances sizeof(int) ints, not one; should be p++ */
      return p;
   }
Referencing Nonexistent Variables

Forgetting that local variables disappear when a function returns

   int *foo () {
      int val;
      return &val;    /* bug: val's storage is reclaimed on return */
   }
Freeing Blocks Multiple Times

Nasty!

   x = malloc(N*sizeof(int));
   <manipulate x>
   free(x);

   y = malloc(M*sizeof(int));
   <manipulate y>
   free(x);    /* bug: x has already been freed */
Referencing Freed Blocks

Evil!

   x = malloc(N*sizeof(int));
   <manipulate x>
   free(x);
   ...
   y = malloc(M*sizeof(int));
   for (i=0; i<M; i++)
      y[i] = x[i]++;    /* bug: x was freed; its memory may now belong to y */