Fast, Inexpensive Content-Addressed Storage in Foundation
Sean Rhea*
Meraki, Inc.
Russ Cox, Alex Pesterev*
MIT CSAIL
*Work done while at Intel Research, Berkeley.
“Digital Dark Ages?”
• Users increasingly store their most valuable data digitally
– Wedding/baby photographs
– Letters (now called email)
– Diaries, scrapbooks, tax returns
• Yet digital information remains especially vulnerable
• Terry Kuny: “We are living in the midst of digital Dark Ages”
– Hard drives crash
– Removable media evolve (e.g., 5 ¼” floppies)
– File formats become obsolete (e.g., WordStar, Lotus 1-2-3)
– What will the world remember of the late 20th century?
As a community, we’re not bad at storing
important data over the long term.
We’ve only just begun to think about how
we’ll interpret that data 30 years from now.
For Example…
• Viewing an old PowerPoint presentation
– Do we still have PowerPoint at all? And Windows?
– Does the presentation use non-standard fonts/codecs?
– Has some newer application overwritten a shared
library with an incompatible version (“DLL Hell”)?
• Not just a Microsoft problem: consider a web page
– Even current IE/Safari/Firefox don’t agree on formatting
– All kinds of plugins necessary: sound, video, Flash
The Foundation Idea
• Make daily backups of entire software stack
– Archives users’ applications, OS, and configuration state
• Don’t worry about identifying dependencies
– Just save it all: “Every byte, every night”
• To recover an obscure file, boot the relevant stack
in an emulator
– View file with the application that created it
Foundation FAQ
• Why preserve the entire disk?
– Preserve software stack dependencies: preserve the data with the
right application, libraries, and operating system as a single unit
– Works for all applications, not just ones designed for preservation
• Why daily images?
– Want to preserve machine state as close as possible to last write of
user’s data (i.e., preserve image before something changes)
– Also allows recovery from user errors
• Why emulate hardware?
– Much better track record than emulating software
– Software example: OpenOffice emulating Microsoft Word (yikes)
– Hardware emulators available today for Amiga, PDP-11, Nintendo…
I would love to give a talk about why
Foundation is a great solution to the
digital preservation problem.
Really, though, I think it’s just a pretty
good start.
Instead, I’m going to talk about a fun
problem we had to solve to make it work.
Every Byte, Every Night?
Indefinitely? Really?
• Plan 9 did exactly that
– Archive changed blocks every night to optical jukebox
– Found that storage capacity grew faster than usage
• Later with Content-Addressable Storage (Venti)
– Automatically coalesces duplicate data to save space
– Required multiple, high-speed disks for performance
• Challenge for Foundation: provide similar storage
efficiency on consumer hardware
– “Time Machine model”: one external USB drive
Talk Outline
• Introduction
– What is Foundation?
– Review of Content-Addressed Storage (Venti)
• Contributions
– Making Cheap Content-Addressed Storage Fast
– Avoiding Concerns over Hash Collisions
• Related Work
• Conclusions
Venti Review
• Plan 9 file system was two-level
– Spinning storage, mostly a normal file system
– Archival storage, optical write-once jukebox
• Venti replaced optical jukebox
– Still write-once
– Chunks of data named by their SHA-1 hashes
“Content-Addressable Storage (CAS)”
– Automatically coalesces duplicate writes
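The scheme on this slide can be sketched in a few lines of Python (a toy illustration, not Venti's actual implementation; the class and method names are invented): blocks are named by their SHA-1 hash, and a duplicate write coalesces to the existing log entry.

```python
import hashlib

class ContentAddressedStore:
    """Toy CAS in the style of Venti: an append-only data log
    plus an index mapping SHA-1 hash -> offset into the log."""

    def __init__(self):
        self.log = bytearray()   # stand-in for the on-disk data log
        self.index = {}          # hash -> log offset

    def write(self, block: bytes) -> bytes:
        h = hashlib.sha1(block).digest()
        if h not in self.index:  # seen it before? if not, append
            self.index[h] = len(self.log)
            self.log += block
        return h                 # the block's content address

    def read(self, h: bytes, size: int) -> bytes:
        off = self.index[h]
        return bytes(self.log[off:off + size])

store = ContentAddressedStore()
a = store.write(b"hello" * 100)
b = store.write(b"hello" * 100)  # duplicate: no new log space used
assert a == b and len(store.log) == 500
```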
Venti Review
[Figure: archival. The archival process reads blocks in order from the user's hard drive, appends each block's hash to a summary, and checks the hash→offset index in RAM ("seen it before?"). New blocks are appended to the data log on the external USB drive and the index is updated; blocks seen before need no log write.]
Venti Review
[Figure: restore after a crash. The restore process walks the summary's list of hashes, maps each hash to a log offset via the index in RAM, reads the block from the data log on the external USB drive, and restores it to the user's hard drive. Final step (not shown): the summary itself is archived in the data log as well.]
Notes on Venti
• The Good News:
– CAS stores each block with particular contents only once
– Changing any one block and re-archiving uses only one
more block in archive
– Adding a duplicate file from a different source uses no
additional storage
• The Bad News:
– Synchronous, random reads to on-disk index
Venti Review
[Figure: archival again, highlighting the cost of the "seen it before?" check: each lookup has to seek to the right bucket of the on-disk index.]
Venti Review
[Figure: restore again, highlighting the same cost: mapping each summary hash to its log offset requires a seek to the right bucket of the on-disk index.]
Notes on Venti (cont.)
• The Bad News:
– Synchronous, random reads to on-disk index
– Best case, one-disk performance for 512-byte blocks:
one 5 ms seek per 512 bytes archived = 100 kB/s
– That’s 12 days to archive a 100 GB disk!
– Larger blocks give better throughput, less sharing
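The slide's arithmetic checks out:

```python
seek_s = 0.005                    # one 5 ms seek per index lookup
block = 512                       # bytes archived per seek
throughput = block / seek_s       # 102400 B/s, i.e. ~100 kB/s

disk = 100 * 2**30                # a 100 GB (binary) disk
days = disk / throughput / 86400  # ~12.1 days to archive it
```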
Notes on Venti (cont.)
• Venti’s solution: use 8 high-speed disks for index
– Untenable in consumer space
– Wears disks out pretty quickly, too
• The “compare-by-hash” controversy:
– Fear of hash collisions: two different blocks with same
hash breaks Venti
– May be very unlikely, but cost (data corruption) is huge
Does CAS really require a cryptographically strong hash?
Talk Outline
• Introduction
– What is Foundation?
– Review of Content-Addressed Storage (Venti)
• Contributions
– Making Cheap Content-Addressed Storage Fast
– Avoiding Concerns over Hash Collisions
• Related Work
• Conclusions
Making Inexpensive CAS Fast
• The problem: disk seeks
– Secure hash randomizes an otherwise sequential disk-to-disk transfer
– To reduce seeks, must reduce hash table lookups
• When do hash table lookups occur?
1. When writing data, to determine if we’ve seen it before
2. When writing data, to update the index
3. When reading data, to map hashes to disk locations
2. Updating the Index
• After appending a block to the data log,
must update the index
– Pseudorandom hash causes a seek
Updating the Index
[Figure: after appending a block to the data log, the archival process must update the hash→offset index, seeking to the right bucket for each pseudorandomly distributed hash.]
2. Updating the Index
• After appending a block to the data log,
must update the index
– Pseudorandom hash causes a seek
• Easy to fix: use a write-back index cache
– Store index writes in memory
– Flush to disk sequentially in large batches
– On crash, reconstruct index from the data log
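A write-back index cache along these lines might look like the sketch below (illustrative only; `WriteBackIndex` and its batch size are invented, and a dict stands in for the on-disk index a real implementation would flush to sequentially):

```python
class WriteBackIndex:
    """Sketch of a write-back index cache: updates accumulate
    in RAM and are flushed to the on-disk index in large,
    sequential batches instead of one random seek per write."""

    def __init__(self, on_disk_index: dict, batch_size: int = 4):
        self.disk = on_disk_index   # stand-in for the on-disk index
        self.dirty = {}             # buffered index writes
        self.batch_size = batch_size

    def put(self, h: bytes, offset: int) -> None:
        self.dirty[h] = offset      # no seek: stays in memory
        if len(self.dirty) >= self.batch_size:
            self.flush()

    def get(self, h: bytes):
        # check the dirty buffer first, then the on-disk index
        return self.dirty.get(h, self.disk.get(h))

    def flush(self) -> None:
        # one sequential batch write instead of many random seeks;
        # after a crash, this state is rebuilt from the data log
        self.disk.update(sorted(self.dirty.items()))
        self.dirty.clear()
```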
3. Mapping Hashes to Disk Locations During Reads
• To restore disk
– Start with the list of original blocks’ hashes
– Lookup each block in index
– Read block from data log and restore to disk
[Figure: restore with the primary index alone. Each hash lookup from the summary has to seek to the right bucket of the hash→offset index before the block can be read from the data log.]
3. Mapping Hashes to Disk Locations During Reads
• To restore disk
– Start with the list of original blocks’ hashes
– Lookup each block in index
– Read block from data log and restore to disk
• Observation: data log is mostly ordered
– Duplicate blocks often occur as part of duplicate files
Ordering in Data Log
[Figure: the hashes in the summary and the blocks in the data log appear largely in the same order, because duplicate blocks tend to recur in long runs (e.g., as whole duplicate files).]
3. Mapping Hashes to Disk Locations During Reads
• To restore disk
– Start with the list of original blocks’ hashes
– Lookup each block in index
– Read block from data log and restore to disk
• Observation: data log is mostly ordered
– Duplicate blocks often occur as part of duplicate files
– Idea: add another index, ordered by log offset
– Read-ahead in this index to eliminate future lookups
in original index
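The read-ahead idea can be sketched as follows (hypothetical code: in-memory dicts and lists stand in for the on-disk primary and secondary indexes):

```python
def restore(summary, primary, by_offset, prefetch=3):
    """Sketch of restore with a secondary index.

    summary:   original blocks' hashes, in disk order
    primary:   hash -> log offset (one seek per lookup)
    by_offset: list of (offset, hash) sorted by log offset
    Returns (restored offsets, number of primary-index seeks).
    """
    cache = {}      # hash -> offset, filled by read-ahead
    seeks = 0
    out = []
    for h in summary:
        if h not in cache:
            off = primary[h]            # seek into primary index
            seeks += 1
            # read ahead: the next few log-ordered entries will
            # likely be the next few hashes in the summary
            i = next(k for k, (o, _) in enumerate(by_offset) if o == off)
            for o, hh in by_offset[i:i + prefetch]:
                cache[hh] = o
        out.append(cache[h])            # read block at this offset
    return out, seeks
```

If duplicate data really does come in long runs, a run of n blocks costs one primary-index seek instead of n.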
Index by Offset
[Figure: restore with the secondary index. The first hash in a run still costs a seek into the primary hash→offset index, but the restore process then finds the matching entries in a new secondary index (offset→hash, sorted by log offset, stored on the external USB drive) and prefetches the next few. Subsequent summary hashes hit in RAM with no seek, and the corresponding blocks are read from the data log sequentially.]
1. Is a Block New, or Duplicate?
• Optimization for reads also helps duplicate writes
– Index misses on first duplicate block
– Hits on subsequent blocks rewritten in same order
• Doesn’t help for new data
– Every lookup in primary index fails
– Still suffer a seek for every new block
1. Is a Block New, or Duplicate?
• Idea: use a Bloom filter to identify new blocks
– Lossy representation of the primary index
– Uses much less memory than index itself
• For any given block, the Bloom filter tells us:
– It’s definitely new → append to log, update index
– It might be a duplicate → look it up in the index
• If it really is a duplicate, we get the prefetch benefit
• Otherwise, it’s called a “false positive”
• Using enough memory keeps false positives at ~1%
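A minimal Bloom filter illustrating the "definitely new" guarantee (the class name and parameters here are invented; Foundation's actual filter sizing differs). The key property: if any probed bit is 0, the block was never added, so there is no index seek.

```python
import hashlib

class BloomFilter:
    """Lossy set membership: no false negatives, rare false
    positives. Each key sets nhashes bits in a fixed bit array."""

    def __init__(self, nbits: int = 1 << 16, nhashes: int = 4):
        self.bits = bytearray(nbits // 8)
        self.nbits = nbits
        self.nhashes = nhashes

    def _positions(self, h: bytes):
        # derive nhashes bit positions from the key
        for i in range(self.nhashes):
            d = hashlib.sha1(bytes([i]) + h).digest()
            yield int.from_bytes(d[:4], "big") % self.nbits

    def add(self, h: bytes) -> None:
        for p in self._positions(h):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, h: bytes) -> bool:
        # False means "definitely new"; True means "maybe duplicate"
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(h))
```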
Results
• Do these optimizations pay off?
– Buffering index writes is an obvious win
– Bloom filter is, too: removes 99% of seeks when
writing new data
– Both trade RAM for seeks
• Benefit of secondary index less clear
– If duplicate data comes in long sequences, it reduces
index seeks to two per sequence
– If duplicate data comes in little fragments, it doubles
the number of index seeks
– Need traces of real data to answer this question
Results (cont.)
• Research group at MIT has been running Venti
as its backup server for two years
– We looked at 400 nightly snapshots
– Simulated archiving and restoring these in both Venti
and Foundation
                          Venti      Foundation
Average archival speed    < 1 MB/s   20.1 MB/s
  % time spent seeking    96%        10%
Average restore speed     1.2 MB/s   13.6 MB/s
  % time spent seeking    95%        58%
Talk Outline
• Introduction
– What is Foundation?
– Review of Content-Addressed Storage (Venti)
• Contributions
– Making Cheap Content-Addressed Storage Fast
– Avoiding Concerns over Hash Collisions
• Related Work
• Conclusions
Eliminating “Compare by Hash”
• Some worry that the same SHA-1 hash doesn’t imply
the same contents (i.e., hash collisions are possible)
– Even if very rare, consequences (corruption) too great
• Stepping back a bit, CAS as a black box:
– Give it a data block, get back an opaque ID
– Give it an opaque ID, get back the data block
• Do we care that the ID is a SHA-1 hash?
– What if the “opaque” ID was just the block’s location
in the data log?
Using Locations As IDs
• Pros
+ Reads require no index lookups at all
+ System can still find potential duplicates using
hashing (with a weaker, faster hash function)
• Cons
– Need another mechanism to check integrity
– Since hash untrusted, must compare suspected
duplicates byte-by-byte
• Others have claimed these byte-by-byte
comparisons are a non-starter
2nd Disk Arm to the Rescue
• Once we eliminate most index reads (via our
previous optimizations), the backup disk is
otherwise idle while backing up duplicate data
• Can instead put it to work doing byte-by-byte
comparisons of suspected duplicates
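The by-value path can be sketched like this (illustrative only: CRC32 stands in for whatever weak, fast hash a real system would pick, and an in-memory bytearray stands in for the data log the second disk arm would read):

```python
import zlib

def write_by_value(block, log, candidates):
    """Sketch of compare-by-value dedup: a weak hash only
    nominates candidate duplicates; each candidate is then
    verified byte-by-byte before its offset is reused.
    Returns the block's ID: its offset in the data log."""
    key = zlib.crc32(block)
    for off in candidates.get(key, []):
        stored = bytes(log[off:off + len(block)])
        if stored == block:          # byte-by-byte comparison
            return off               # true duplicate: reuse offset
    off = len(log)                   # new block (or hash collision)
    log += block
    candidates.setdefault(key, []).append(off)
    return off
```

Note that a collision in the weak hash is harmless here: the byte-by-byte check rejects it and the block is simply stored again, so correctness never depends on the hash.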
                         Archival    Restore
Venti                    < 1 MB/s    1.2 MB/s
Foundation (by hash)     20.1 MB/s   13.6 MB/s
Foundation (by value)    15.4 MB/s   15.0 MB/s
Talk Outline
• Introduction
– What is Foundation?
– Review of Content-Addressed Storage (Venti)
• Contributions
– Making Cheap Content-Addressed Storage Fast
– Avoiding Concerns over Hash Collisions
• Related Work
• Conclusions
Related Work
• Apple Time Machine
– Duplicates coalesced at file level via hard links
• NetApp WAFL, ZFS
– Copy-on-write coalesces blocks at the FS level
– Misses duplicates that come into system separately
• Data Domain Deduplication FS
– Very similar to Foundation, in enterprise context
– Depends on collision-freeness of hash function
• Lots of other Content-Addressed Storage work
– LBFS, SUNDR, Peabody
Conclusions
• Consumer-grade CAS works now
– A single, external USB drive is enough
– Just have to be crafty about avoiding seeks
• Lots of uses other than preservation
– E.g., inexpensive household backup server that
automatically coalesces duplicate media collections
• Doesn’t require a collision-free hash function
Download