Difference Engine:
Harnessing Memory Redundancy
in Virtual Machines
by Diwaker Gupta et al.
presented by Jonathan Berkhahn
• Virtualization has matured and spread rapidly in recent years
• Servers often run at 5-10% of CPU capacity
o High capacity needed for peak workloads
o Fault isolation for certain services
o Certain services run best on particular configurations
• Solution: Virtual Machines
• CPUs are well suited to multiplexing; main memory is not
• Upgrading not an ideal option
o Expensive
o Limited by slots on the motherboard
o Limited by ability to support higher capacity modules
o Consumes significant power, and therefore produces
significant heat
• Further exacerbated by the current trend toward many-core processors
How do we fix this memory
bottleneck for virtual machines?
Difference Engine
• Implemented as an extension to the Xen VMM
o Sub-page granularity page sharing
o In-memory page compression
• Reduces the memory footprint by up to 90% for
homogeneous workloads and up to 65% for
heterogeneous workloads
Related Work
Difference Engine algorithms
Page Sharing
• Transparent page sharing
o Requires guest OS modification
• Content-based
o VMWare ESX
Delta Encoding
• Manber
o Rabin fingerprints
o Inefficient
• Broder
o Combined Rabin fingerprints and sampling
• Both focused on identifying similar files, but not
encoding the differences
Memory Compression
• Douglis et al.
o Sprite OS
o Double-edged sword
• Wilson et al.
o Showed that earlier poor results were an artifact of slow hardware
o Developed algorithms that exploit virtual memory
Difference Engine algorithms
Page Sharing
• Content-based
• Hash pages and index by hash value
o Hash collisions indicate a potential match
o Compare byte-by-byte to ensure the pages are truly identical
o Reclaim one page, update virtual memory mappings
o Writes cause a page fault trapped by the VMM
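The identical-page sharing flow above might be sketched as follows. This is illustrative only: the hash function and dictionaries stand in for Xen's machine-frame tables, and a hash hit is only a *potential* match, so pages are still compared byte-by-byte before one is reclaimed.

```python
import hashlib

def share_identical_pages(pages: dict[int, bytes]) -> dict[int, int]:
    """Deduplicate identical pages: map each reclaimed frame to a canonical copy.

    Sketch of content-based sharing: index pages by a hash of their full
    contents, then verify byte-by-byte on a hash match (a collision only
    suggests a match). Returns {reclaimed_frame: canonical_frame}.
    """
    index: dict[bytes, int] = {}   # content hash -> first frame seen
    shared: dict[int, int] = {}
    for frame, data in pages.items():
        h = hashlib.sha1(data).digest()
        other = index.get(h)
        if other is not None and pages[other] == data:  # byte-by-byte check
            shared[frame] = other  # reclaim `frame`; mark mapping read-only
        else:
            index[h] = frame
    return shared
```

In the real system the surviving mapping is marked read-only, so a later write faults into the VMM, which then gives the writer a private copy.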
• Patching of similar pages
• Identify similar pages, store the differences as a patch
• Compresses multiple pages down to a single
reference copy and a collection of patches
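The patching idea can be sketched with a naive run-based diff. Difference Engine itself uses an xdelta-style encoder; this toy version only shows how a page can be stored as a small patch against a reference copy and reconstructed on demand.

```python
def make_patch(ref: bytes, page: bytes) -> list:
    """Record (offset, replacement bytes) runs where `page` differs from `ref`.

    Naive stand-in for the xdelta-style encoding used by Difference Engine.
    """
    assert len(ref) == len(page)
    patch, i, n = [], 0, len(ref)
    while i < n:
        if ref[i] != page[i]:
            j = i
            while j < n and ref[j] != page[j]:
                j += 1
            patch.append((i, page[i:j]))  # one run of differing bytes
            i = j
        else:
            i += 1
    return patch

def apply_patch(ref: bytes, patch: list) -> bytes:
    """Rebuild the original page from the reference copy plus the patch."""
    buf = bytearray(ref)
    for off, data in patch:
        buf[off:off + len(data)] = data
    return bytes(buf)
```

If two 4 KB pages differ in only a handful of bytes, the patch is a few dozen bytes, so one reference copy plus patches replaces several full pages.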
Page Compression
• Compression of live pages in main memory
o Useful only for high compression ratios
• VMM traps requests for compressed pages
Paging Machine Memory
• Used as a last resort
• Copy pages out to disk
o An extremely expensive operation
• Policy decisions are left to the end user
Both patching and compression are only
useful for infrequently accessed pages.
So, how do we determine "infrequent"?
• Not-Recently Used policy
• Checks if page has been referenced/modified
o C1 - Recently Modified
o C2 - Recently Referenced
o C3 - Not Recently Accessed
o C4 - Not Accessed for a While
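The four states above can be sketched as a function of a page's Referenced/Modified bits, which the VMM reads and clears on each sweep. The two-sweep idle threshold for C4 is an assumption for illustration; the slide only says "not accessed for a while".

```python
RECENTLY_MODIFIED        = "C1"
RECENTLY_REFERENCED      = "C2"
NOT_RECENTLY_ACCESSED    = "C3"
NOT_ACCESSED_FOR_A_WHILE = "C4"

def classify(referenced: bool, modified: bool, idle_sweeps: int) -> str:
    """Map a page's R/M bits to an NRU state.

    `idle_sweeps` counts consecutive sweeps with neither bit set; the
    threshold of 2 sweeps for C4 is an assumed parameter.
    """
    if modified:
        return RECENTLY_MODIFIED
    if referenced:
        return RECENTLY_REFERENCED
    if idle_sweeps >= 2:
        return NOT_ACCESSED_FOR_A_WHILE
    return NOT_RECENTLY_ACCESSED
```

Pages in C4 are the "infrequently accessed" ones that patching, compression, and paging target.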
Implementation
• Modification to Xen VMM
• Roughly 14,500 lines of code, plus 20,000 for ports of
existing patching and compression algorithms
• Shadow Page Table
o Difference Engine relies on modifying the shadow page
tables and the P2M (physical-to-machine) table
o Ignored pages mapped by Dom-0
• Complications: real mode and I/O support
• x86 hardware boots in real mode with paging disabled
o Difference Engine requires paging to be enabled within the guest OS
• I/O
o Xen hypervisor emulates I/O hardware with a Dom-0
process ioemu, which directly accesses guest pages
o Conflicts with the policy of not acting on Dom-0 pages
o Solution: unmap ioemu's mappings of guest pages every 10 seconds
• NRU policy
• Tracked by the Referenced and Modified bits on each page
• Modified Xen's shadow page tables to set these bits
when creating mappings
• Pages are classified into states C1 - C4 as above
Page Sharing
• Hash table stored in the Xen heap
o Memory limitations - 12 MB
• Hash table only holds entries for 1/5 of memory at a time
o 1.76 MB hash table
• Covers all of memory in 5 passes
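A minimal sketch of the five-pass coverage, assuming frames are partitioned by frame number (the real global clock's ordering may differ); the point is only that the five sweeps together cover every frame despite the small table.

```python
NUM_PASSES = 5  # table holds entries for 1/5 of memory at a time

def frames_for_pass(total_frames: int, pass_index: int) -> range:
    """Frames examined in one sweep; five sweeps cover all of memory."""
    chunk = -(-total_frames // NUM_PASSES)  # ceiling division
    start = pass_index * chunk
    return range(start, min(start + chunk, total_frames))
```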
Detecting Similar Pages
• Hash Similarity Detector (2,1)
o Hash similarity table cleared after all pages have
been considered
• Only building the patch and replacing the page
requires a lock
o May result in a differently sized patch, but will
still be correct
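HashSimilarityDetector(2,1) can be sketched as follows: each page is indexed by 2 keys, each computed from 1 sampled 64-byte block, and each table slot holds only the most recently seen page. The sampled offsets here are arbitrary stand-ins; the real implementation picks its own fixed sampling locations.

```python
import hashlib

BLOCK = 64
# HSD(2,1): 2 keys, each hashing 1 sampled block; offsets are assumed.
KEY_OFFSETS = [(0,), (2048,)]

def candidate_keys(page: bytes) -> list:
    """Compute the page's similarity keys from its sampled blocks."""
    keys = []
    for offsets in KEY_OFFSETS:
        h = hashlib.sha1()
        for off in offsets:
            h.update(page[off:off + BLOCK])
        keys.append(h.digest())
    return keys

def find_similar(pages: list):
    """Yield (page_index, candidate_index) pairs whose sampled blocks match.

    Each slot keeps only the most recently seen page, as in HSD(2,1).
    """
    table: dict = {}
    for i, page in enumerate(pages):
        for k in candidate_keys(page):
            if k in table and table[k] != i:
                yield (i, table[k])  # candidate for patching against
                break
        for k in candidate_keys(page):
            table[k] = i  # this page becomes the slot's new occupant
```

A hit only nominates a *candidate*; the patch is then built against it and kept only if it is small enough to be worthwhile.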
Compression & Disk Paging
• Antagonistic relationship with patching
o Compressed/Disk pages can't be patched
• Delayed until all pages have been checked for similarity and
the page has not been accessed for a while (C4)
• Disk paging done by daemon running in Dom-0
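The ordering across mechanisms (patch before compress, page to disk only as a last resort, and only for C4 pages) can be summarized in one decision function. The predicate names are illustrative, not the paper's API.

```python
def reclaim_action(state: str, has_similar: bool, compresses_well: bool) -> str:
    """Pick a memory-saving mechanism for a page, cheapest first.

    Follows the slides' ordering; `has_similar` and `compresses_well`
    are assumed predicates standing in for the similarity and ratio checks.
    """
    if state != "C4":
        return "leave"         # only act on pages not accessed for a while
    if has_similar:
        return "patch"         # try delta-encoding against a similar page
    if compresses_well:
        return "compress"      # note: compressed pages cannot be patched
    return "page-to-disk"      # last resort, handled by the Dom-0 daemon
```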
Evaluation
• Experiments run on dual-processor, dual-core 2.33
GHz Intel Xeon, 4 KB page size
• Tested each operation individually for overhead
Page Lifetime
Homogeneous VMs
Homogeneous Workload
Heterogeneous Workload
Heterogeneous Workload 2
Utilizing Savings
• Main memory is a primary bottleneck for VMs
• Significant memory savings can be achieved from:
o Sharing identical pages
o Patching similar pages
o In-memory page compression
• Implemented DE and showed memory savings of
as much as 90%
• Saved memory can be used to run more VMs