On the limits of partial compaction

Anna Bendersky & Erez Petrank
Technion
Fragmentation
• When a program allocates and de-allocates, holes appear in the heap.
• These holes are called “fragmentation”, and as a result:
– Large objects cannot be allocated (even after GC),
– The heap gets larger and locality deteriorates,
– Garbage collection work becomes harder,
– Allocation gets complicated.
• The amount of fragmentation is hard to define or
measure because it depends on future allocations.
How Bad can Fragmentation Be?
• Consider a game:
– Program tries to consume as much space as possible, but:
– Program does not keep more than M bytes of live space at
any point in time.
– Allocator tries to satisfy program demands within a given
space
– How much space may the allocator need to satisfy the
requests at worst case?
• [Robson 1971, 1974]: There exists a program that will make any allocator use ½·M·log(n) space, where n is the size of the largest object.
• [Robson 1971, 1974]: There is an allocator that can do with ½·M·log(n) space.
Compaction Kills Fragmentation
• Compaction moves all objects to the beginning of the
heap (and updates the references).
• A memory manager that applies compaction after
each deletion never needs more than M bytes.
• But compaction is costly!
• A common solution: partial compaction.
– “Once in a while” compact “some of the objects”.
– (Typically, during GC, find sparse pages & evacuate objects.)
• Theoretical bounds have never been studied.
• Our focus: the effectiveness of partial compaction.
Setting a Limit on Compaction
• How do we measure the amount of (partial)
compaction?
• Compaction ratio 1/c: after the program has allocated B bytes in total, the memory manager is allowed to move B/c bytes.
• Now we can ask: how much fragmentation still
exists when the partial compaction is limited by a
budget 1/c?
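A small worked example of this budget (the numbers are illustrative, not from the talk): with c = 4, after the program has allocated B = 1 MB in total, the memory manager may have moved at most

\[ \frac{B}{c} = \frac{1\,\mathrm{MB}}{4} = 256\,\mathrm{KB} \]

of live data so far.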
Theorem 1
• There exists a program, such that for all allocators
the heap space required is at least:
1/10 · M · min{ c , log(n)/log(c) }
• Recall: 1/c is the compaction ratio, M is the overall
space alive at any point in time, n is the size of the
largest object.
Theorem 1 Digest
• If a lot of compaction is allowed (a small c), then the required space is at least M·c/10.
• If little compaction is allowed (a large c), then the required space is at least M·log(n) / ( 10·log(c) ).
• Recall: 1/c is the compaction ratio, M is the overall
space alive at any point in time, n is the size of the
largest object.
What Can a Memory Manager Achieve?
• Recall Theorem 1: There exists a program, such that
for all allocators the heap space required is at least:
1/10 · M · min{ c , log(n)/log(c) }
• Theorem 2 (Matching upper bound):
There exists an allocator that can do with
M min{ c+1 , ½log(n) }
• Parameters: 1/c is the compaction ratio, M is the
overall space alive at any point in time, n is the size of
the largest object.
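Both bounds side by side (as stated on these slides), with 1/c the compaction ratio, M the maximal live space, and n the size of the largest object:

\[ \text{Theorem 1 (lower bound):}\quad \text{space} \;\ge\; \tfrac{1}{10}\, M \cdot \min\left\{ c,\ \frac{\log n}{\log c} \right\} \]

\[ \text{Theorem 2 (upper bound):}\quad \text{space} \;\le\; M \cdot \min\left\{ c+1,\ \tfrac{1}{2}\log n \right\} \]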
Upper Bound Proof Idea
• There exists a memory manager that can serve any
allocation and de-allocation sequence in space
M · min( c+1 , ½·log(n) )
• Idea when c is small (a lot of compaction is allowed):
allocate using first-fit until M(c+1) space is used, and
then compact the entire heap.
• Idea when c is large (little compaction is allowed):
use Robson’s allocator without compacting at all.
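To make the small-c strategy concrete, here is a minimal sketch (my own illustrative code with made-up names, not the paper's allocator): allocate with first fit, and fall back to a full compaction whenever the footprint would exceed M·(c+1).

# Illustrative sketch (not the paper's allocator): first-fit allocation with a
# full compaction whenever the heap footprint would exceed M*(c+1).

class Obj:
    def __init__(self, addr, size):
        self.addr, self.size = addr, size

class CompactingFirstFit:
    def __init__(self, M, c):
        self.limit = M * (c + 1)   # footprint threshold from the proof idea
        self.objects = []          # live objects

    def _first_fit(self, size):
        # Lowest address with a free gap of at least `size` bytes.
        prev_end = 0
        for o in sorted(self.objects, key=lambda o: o.addr):
            if o.addr - prev_end >= size:
                return prev_end
            prev_end = o.addr + o.size
        return prev_end            # no gap fits: extend the heap

    def compact(self):
        # Slide every live object to the beginning of the heap.
        addr = 0
        for o in sorted(self.objects, key=lambda o: o.addr):
            o.addr, addr = addr, addr + o.size

    def allocate(self, size):
        addr = self._first_fit(size)
        if addr + size > self.limit:
            self.compact()         # footprint shrinks back to at most M
            addr = self._first_fit(size)
        obj = Obj(addr, size)
        self.objects.append(obj)
        return obj

    def free(self, obj):
        self.objects.remove(obj)

Between two full compactions the program must have allocated at least M·c bytes, so moving at most M live bytes stays within the 1/c budget; that is the point of the M·(c+1) threshold.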
Proving the Lower Bound
• Provide a program that behaves “terribly”
• Show that it consumes a large space overhead against
any allocator.
• Let’s start with Robson: the allocator cannot move
objects at all.
– The bad program is provably bad for any allocator.
– (Even if the allocator is designed specifically to handle this
program only…)
Robson’s “Bad” Program (Simplified)
• Allocate objects in phases.
• Phase i allocates objects of size 2^i.
• Remove objects selectively so that future allocations
cannot reuse space.
• For (i=0; i <= log(n); ++i)
– Request allocations of objects of size 2^i (as many as possible).
– Delete as many objects as possible so that an object of size 2^(i+1) cannot be placed in the freed spaces.
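A runnable sketch of this phase loop (my own simplification: it assumes the aligned-allocation model introduced later in the talk, and a plain first-fit allocator with no compaction):

# Illustrative simulation of the simplified adversary against first fit, with
# aligned allocation and no compaction (not Robson's exact construction).

def first_fit_aligned(live, size):
    # Lowest address that is a multiple of `size` and does not overlap a live object.
    addr = 0
    while any(a < addr + size and a + s > addr for a, s in live.items()):
        addr += size
    return addr

def robson_adversary(M, n):
    live = {}                                 # address -> object size
    size = 1
    while size <= n:                          # phase i: objects of size 2^i
        while sum(live.values()) + size <= M:
            live[first_fit_aligned(live, size)] = size
        # Keep one object in every aligned window of 2*size bytes, so no freed
        # gap can hold an object of the next phase's size.
        survivors = {}
        for addr in sorted(live):
            window = addr // (2 * size)
            if all(a // (2 * size) != window for a in survivors):
                survivors[addr] = live[addr]
        live = survivors
        size *= 2
    return max(a + s for a, s in live.items())   # total heap space consumed

print(robson_adversary(M=48, n=16))               # prints a footprint well above M = 48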
Bad Program Against First Fit
• Assume max live space M = 48.
• Phase 0: allocate 48 1-byte objects.
• Delete so that 2-byte objects cannot be placed (24 one-byte gaps remain as memory available for allocations).
• Phase 1: allocate 12 2-byte objects.
• Delete so that 4-byte objects cannot be placed.
• Phase 2: allocate 6 4-byte objects.
[Figure: heap layout after each step; the ×24 freed one-byte slots stay available for allocation but cannot hold the larger objects of later phases.]
First Fit Example -- Observations
• In each phase (after the first), we allocate ½M bytes,
and space reuse is not possible.
• We have log(n) phases.
• Thus, ½·M·log(n) space must be used.
• To be accurate (because we are being videotaped):
– The proof for a general allocator is more complex.
– This bad program only obtains (4/13)·M·log(n) against a general allocator. The actual bad program is more complex.
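A quick sanity check with the walkthrough's numbers: with M = 48, phase 1 allocates 12 two-byte objects and phase 2 allocates 6 four-byte objects, i.e.

\[ 12 \cdot 2 \;=\; 6 \cdot 4 \;=\; 24 \;=\; \tfrac{1}{2} M \]

bytes per phase into space that cannot be reused, for each of the log(n) phases after the first.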
Is This Program Bad Also When Partial Compaction is Allowed?
• Observation:
– Small objects are surrounded by large gaps.
– The memory manager could move a few of them and make room for future allocations.
• Idea: a bad program in the presence of partial compaction monitors the density of objects in all areas.
[Figure: the heap after the first-fit run — isolated small objects with large gaps between them.]
Guideline for the adversarial program:
remove as much space as possible, but
maintain a minimal “density”!
Simplifications for the Presentation
• Objects are aligned.
• The compaction budget c is a power of 2.
Aligned allocation: an object of size 2^i must be placed at an address k·2^i, for some integer k.
[Figure: heap addresses 0–23, marking the aligned slot boundaries.]
The Adversarial Program
• For (i=0; i <= log(n); ++i)
– Request allocations of as many objects of size 2^i as possible, without exceeding M live bytes.
– Partition the memory into consecutive aligned areas of size 2^(i+1).
– Delete as many objects as possible, so that each area remains at least 2/c full.
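The deletion step can be sketched as follows (my own illustrative code, aligned model; it could replace the one-survivor-per-window pruning in the earlier first-fit simulation):

# Illustrative sketch of the deletion step: delete greedily, but keep every
# aligned area of size 2^(i+1) at least 2/c full.

def prune_keeping_density(live, area_size, c):
    min_live = 2 * area_size // c             # minimal live bytes per area (density 2/c)
    by_area = {}
    for addr in sorted(live):
        by_area.setdefault(addr // area_size, []).append(addr)
    survivors = {}
    for objs in by_area.values():
        kept = 0
        for addr in objs:                     # keep objects until the area quota is met
            if kept < min_live:
                survivors[addr] = live[addr]
                kept += live[addr]
    return survivors

With c = 4 and 2-byte areas in phase 0, min_live is 1 byte per area, which matches the execution example below, where roughly every other 1-byte object is deleted.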
An Execution Example
• i=0: allocate M=48 objects of size 1. (Assume compaction budget c=4.)
[Figure: heap with 48 one-byte objects; compaction quota: 48/4 = 12.]
An Execution Example
• i=0: allocate M=48 objects of size 1. (Compaction budget c=4.)
• i=1: the memory manager does not compact.
• i=1: deletion step.
[Figure: heap after the deletions; ×23 freed slots are memory available for allocations; compaction quota: 12.]
An Execution Example
• i=1: allocate 11 objects of size 2. (Compaction budget c=4.)
[Figure: heap after the allocations; compaction quota: 12 + 22/4 = 17.5.]
An Execution Example
• i=1: allocate 11 objects of size 2. (Compaction budget c=4.)
• The memory manager compacts, moving 8 bytes.
[Figure: heap after compaction; compaction quota: 17.5 − 8 = 9.5.]
An Execution Example
• i=1: allocate 11 objects of size 2. (Compaction budget c=4.)
• The memory manager compacts.
• i=1: deletion, with lowest allowed density 2/c = 1/2.
[Figure: heap after the deletions; ×20 freed slots are memory available for allocations; compaction quota: 9.5.]
An Execution Example
• i=2: allocate 5 objects of size 4. (Compaction budget c=4.)
[Figure: heap after the allocations; ×20 freed slots still available for allocations; compaction quota: 9.5.]
Bad Program Intuition
• The goal is to minimize reuse of space.
• If we leave an area with density only 1/c, then the memory manager can:
– move the allocated space out (losing budget area-size/c),
– allocate into the freed space (winning budget area-size/c),
i.e., it breaks even and can keep reusing such areas.
• Therefore, we maintain density 2/c in each area.
Proof Skeleton
The following holds for all memory managers.
Fact: space used ≥ space allocated – space reused.
Lemma 1: space reused < compaction · c/2
By definition: compaction < ( space allocated ) / c
Conclusion:
space used ≥ space allocated – compaction · c/2
≥ space allocated · ( 1 – ½ )
• Lemma 2: either allocation is sparse (and a lot of
space is thus used) or there is a lot of space allocated
during the run.
• And the lower bound follows.
• The full proof works with non-aligned objects as well.
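The chain of inequalities in the skeleton, written out in one line:

\[ \text{used} \;\ge\; \text{allocated} - \text{reused} \;>\; \text{allocated} - \text{compaction}\cdot\frac{c}{2} \;>\; \text{allocated} - \frac{\text{allocated}}{c}\cdot\frac{c}{2} \;=\; \frac{\text{allocated}}{2}. \]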
The Worst-Case for Specific Systems
• Real memory managers use specific allocation
techniques.
• The space overhead for a specific allocator may be
large.
– Until now, we considered a program that is bad for all
allocators.
• A malicious program can take advantage of knowing
the allocation technique.
• One popular allocation method is segregated free list.
Block-Oriented Segregated Free List
Segregated free list:
• Keep an array of entries for different object sizes (say, powers of 2).
• Each entry has an associated range of sizes and it
points to a free list of chunks with sizes in the range.
Block Oriented:
• Partition the heap into blocks (typically of page size).
• Each block only holds objects of the same size.
Segregated Free List
[Figure: an array of free lists indexed by object size: 1, 2, 4, 8, …, n/2, n.]
• Each object is allocated in a block matching its size.
• The first free chunk in the list is typically used.
• If there is no free chunk, a new block is allocated and split into chunks.
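A minimal sketch of block-oriented segregated-free-list allocation (illustrative only; the block size and the power-of-two size classes are assumptions, not taken from the slides):

# Illustrative sketch of block-oriented segregated free list allocation.
# BLOCK_SIZE and the power-of-two size classes are assumptions for this example.

BLOCK_SIZE = 4096

class SegregatedFreeList:
    def __init__(self):
        self.free_lists = {}     # size class -> list of free chunk addresses
        self.next_block = 0      # address of the next unused block

    @staticmethod
    def size_class(size):
        c = 1
        while c < size:          # round the request up to a power of two
            c *= 2
        return c

    def allocate(self, size):
        c = self.size_class(size)
        chunks = self.free_lists.setdefault(c, [])
        if not chunks:
            # No free chunk: take a fresh block and split it into chunks of size c.
            base = self.next_block
            self.next_block += BLOCK_SIZE
            chunks.extend(range(base, base + BLOCK_SIZE, c))
        return chunks.pop(0)     # use the first free chunk in the list

    def free(self, addr, size):
        self.free_lists[self.size_class(size)].append(addr)

Note how a single live object keeps its whole block dedicated to one size class; this is exactly what the adversarial program below exploits.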
What’s the Worst Space Consumption?
• No specific allocators were investigated in previous
work (except for obvious observations).
• Even with no compaction: what’s the worst space
usage for a segregated free list allocator?
Lower Bounds Differ by the Possible Object Sizes
• For all possible sizes (1, 2, 3, 4, 5, …, n−1, n):
– If no compaction is allowed: space used ≥ (4/5)·M·√n
– With partial compaction: space used ≥ ⅙·M·min(√n, c/2)
• For exponentially increasing sizes (1, 2, 4, 8, …, n/2, n):
– With no compaction: space used ≥ M·log(n)
– With partial compaction: space used ≥ ¼·M·min(log(n), c/2)
• (Should be interpreted as: there exists a bad program that will make the segregated free list allocator use at least this much space.)
The Adversarial Program for SFL
• When no compaction is allowed:
For (i=0; i<k; ++i) // k is the number of different sizes
– Allocate as many objects as possible of size s_i.
– Delete as many objects as possible while leaving a single object in each block.
• In the presence of partial compaction:
For (i=0; i<k; ++i)
– Allocate as many objects as possible of size s_i.
– Delete as many objects as possible while leaving at least 2b/c bytes in each block. // b is the size of the block
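A sketch of the no-compaction adversary, driving the SegregatedFreeList sketch from the earlier slide (illustrative only; the concrete sizes and M are arbitrary choices):

# Illustrative driver (my own code): the no-compaction adversary against the
# SegregatedFreeList sketch above, leaving one live object per block so that
# every block stays pinned to its size class.

def sfl_adversary(allocator, sizes, M):
    live = []                                     # (address, size) pairs
    for s in sizes:                               # phase i: objects of size s_i
        while sum(sz for _, sz in live) + s <= M:
            live.append((allocator.allocate(s), s))
        survivors, seen_blocks = [], set()
        for addr, sz in live:
            if sz != s:
                survivors.append((addr, sz))      # earlier phases are already pruned
            elif addr // BLOCK_SIZE not in seen_blocks:
                survivors.append((addr, sz))      # keep one object per block
                seen_blocks.add(addr // BLOCK_SIZE)
            else:
                allocator.free(addr, sz)
        live = survivors
    return allocator.next_block                   # heap space consumed

alloc = SegregatedFreeList()
print(sfl_adversary(alloc, sizes=[1, 2, 4, 8, 16], M=4096))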
Related Work
• Theoretical Works:
– Robson’s work [1971, 1974]
– Luby-Naor-Orda [1994,1996]
• Various memory managers employ partial compaction.
For example:
– Ben-Yitzhak et al. [2003]
– The Metronome by Bacon et al. [2003]
– The Pauseless collector by Click et al. [2005]
– Concurrent real-time garbage collectors by Pizlo et al. [2007, 2008]
Conclusion
• Partial compaction is used to ameliorate the pauses
imposed by full compaction.
• We studied the efficacy of partial compaction in
reducing fragmentation.
• Compaction is budgeted as a fraction of the allocated
space.
• We have shown a lower bound on fragmentation for
any given compaction ratio.
• We have shown specific lower bounds for segregated free list allocators.