Flash Memory based Storage
(CSE598D)
Thursday April 5, 2007
Youngjae Kim
1
Disk Drive vs. Flash Memory
Disk Drive (Read / Write)
(+) Low cost per bit
(+) Random access
(+) Non-volatile
(-) Mechanical movement (SPM & VCM)
(-) High power consumption (10-15 W)
(-) Heavy weight compared to flash

Flash Memory (Read / Program / Erase)
(+) Low power consumption (~2 W)
(-) Erase before write
(-) Erase operates on a block, not a page
(-) Limited number of erase operations per cell
(-) High cost per bit
2
MOS (Metal-Oxide Semiconductor) Memory Hierarchy
3
History of Flash Memory
4
NOR and NAND Flash Array
(a) NOR
(b) NAND
5
NOR and NAND Flash Array
6
NAND Flash Memory – Program/Erase
• F-N tunneling
– Applying a high voltage causes electrons to tunnel through the oxide and become trapped in the floating gate of the cell transistor.
7
Flash Memory Comparison
• NOR (code executable in place, like memory)
– Fast read, slow write
• NAND (data storage)
– Fast write, lower cost

Flash Type: NOR (Code Storage)
– Vendors: Intel/Sharp, AMD/Fujitsu/Toshiba
– Performance (important): high-speed random access, byte programming
– Performance (acceptable): slow programming, slow erasing
– Application: program storage (cellular phones, DVD players / set-top boxes, BIOS)

Flash Type: NAND (File Storage)
– Vendors: Samsung/Toshiba
– Performance (important): high-speed programming, high-speed erasing, high-speed serial read
– Performance (acceptable): slow random access
– Application: small-form-factor storage (digital still cameras, silicon audio players, PDAs, mass storage as a silicon disk drive)
8
NAND Flash Non-Volatile Flash Cards
• Various Standard Memory Cards
9
Functional Block Diagram for SAMSUNG K9K8G08U0M NAND Flash
10
Array Organization for SAMSUNG K9K8G08U0M NAND Flash
• Block: Erasing unit
• Page: Addressable unit
11
NAND Flash Technology
http://www.samsung.com/Products/Semiconductor/NANDFlash/index.htm
12
Comparison for Different Memory Types
"Design and Evaluation of the Compressed Flash Translation Layer for High-Speed and Large-Scale Flash Memory Storages," Proc. SoC Design Conference, pp. 740-745, September 2003.
13
Outline
• Flash Memory Technology
– NAND vs. NOR
• Block Mapping Schemes
– Emulating Disk with Flash Memory
• Garbage Collection
• Hybrid Hard Drives
– Windows Vista
14
NAND Type Flash Memory
• Operation
– Read / Write
• Page unit (Size of a page = Size of a sector (512B) in hard drive)
– Erase
• Block unit (A set of pages)
• Characteristics
– No in-place update
• Updating a page in place would require erasing the entire erase block.
• Out-of-place update instead (original block + free block), as sketched below:
1. Write the updated page 0 into a free block
2. Copy the remaining pages (1-4) from the original block
3. Mark the original block obsolete
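A minimal C sketch of this out-of-place update, assuming a hypothetical device with 4 pages per block and simple in-memory structures; the names and sizes are illustrative, not from a real driver.

```c
/* Sketch of an out-of-place page update (hypothetical device: 4 pages/block).
 * A NAND page cannot be rewritten in place: the updated page is written into
 * a free (erased) block, the remaining pages are copied over, and the
 * original block is marked obsolete so it can be erased later. */
#include <stdio.h>
#include <string.h>

#define PAGES_PER_BLOCK 4
#define PAGE_SIZE 512

struct block {
    unsigned char data[PAGES_PER_BLOCK][PAGE_SIZE];
    int obsolete;                                  /* 1 = whole block awaits erasure */
};

static void update_page(struct block *orig, struct block *free_blk,
                        int page, const unsigned char *new_data)
{
    memcpy(free_blk->data[page], new_data, PAGE_SIZE);       /* 1. write the update  */
    for (int p = 0; p < PAGES_PER_BLOCK; p++)                /* 2. copy the rest     */
        if (p != page)
            memcpy(free_blk->data[p], orig->data[p], PAGE_SIZE);
    orig->obsolete = 1;                                      /* 3. obsolete original */
}

int main(void)
{
    static struct block original, spare;           /* zero-initialized for the demo */
    unsigned char fresh[PAGE_SIZE] = { 0xAB };

    update_page(&original, &spare, 0, fresh);
    printf("original obsolete: %d, new page 0 first byte: 0x%02X\n",
           original.obsolete, spare.data[0][0]);
    return 0;
}
```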
15
Block-Mapping Technique (1/2)
• Emulate a block device (disk drive) with flash memory
– As with a traditional disk drive,
1. The file system calls a device driver, requesting a block read/write
2. The device driver stores data to and retrieves it from the flash device
• Problems with simple linear mapping
– Flash lifetime is shortened by the limit on write/erase operations
• 100,000 - 1,000,000 per cell
– High risk of data loss due to the size difference between the file system
data block and the flash erase unit
16
Block-Mapping Technique (2/2)
• Maximum number of write operations
– Some data blocks may be written much more often than others
• Not a problem for a hard drive
• Worn flash cells slow down and eventually burn out
• Data-loss risk from the size difference between the data block and the
flash erase unit
– Example
• Copy an entire erase unit (128 KB) into RAM, modify 4 KB, erase the
unit, and write it back.
• If power is lost during this sequence, both the 128 KB unit and the
4 KB update are lost.
17
Block-Mapping Idea (1/2)
• Maintain a mapping table
– Virtual block # → physical flash address (sector)
• Update process
1. Do not overwrite the sector; instead, write to another free sector
2. Update the mapping table
(+) Evenly distributes the wear of erase units
(+) Fast writes (no erase on the write path)
(+) Minimizes data loss on power-off (can revert to the previous state)
• Write process (sketched below)
1. Search for a free/erased sector
2. Initially, all bits of the sector and its header are 1s
3. Clear the free/used bit
4. Write the virtual block # into the header, then write the data into the sector
5. Clear the prevalid/valid bit
6. Clear the valid/obsolete bit of the previous sector
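A minimal C sketch of this write process, assuming hypothetical per-sector header flags (free/used, prevalid/valid, valid/obsolete); as on real flash, a "clear" only flips bits from 1 to 0.

```c
/* Sketch of the sector write process: find an erased sector, clear its
 * free/used bit, write the virtual block number and the data, mark the
 * sector valid, and finally mark the previously mapped copy obsolete.
 * Header bits start at 1 (erased) and can only be cleared to 0. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NSECTORS 8
#define SECTOR_SIZE 512

struct sector {
    uint8_t free_used;        /* 1 = free, 0 = used       */
    uint8_t prevalid_valid;   /* 1 = pre-valid, 0 = valid */
    uint8_t valid_obsolete;   /* 1 = valid, 0 = obsolete  */
    int32_t vblock;           /* virtual block number     */
    uint8_t data[SECTOR_SIZE];
};

static struct sector flash[NSECTORS];

/* Returns the physical sector used, or -1 if none is free (GC needed). */
int write_vblock(int vblock, const uint8_t *buf, int prev_sector)
{
    for (int s = 0; s < NSECTORS; s++) {
        if (!flash[s].free_used)                    /* 1. find a free/erased sector  */
            continue;
        flash[s].free_used = 0;                     /* 3. clear the free/used bit    */
        flash[s].vblock = vblock;                   /* 4. header first, then data    */
        memcpy(flash[s].data, buf, SECTOR_SIZE);
        flash[s].prevalid_valid = 0;                /* 5. the new copy is now valid  */
        if (prev_sector >= 0)
            flash[prev_sector].valid_obsolete = 0;  /* 6. obsolete the previous copy */
        return s;
    }
    return -1;
}

int main(void)
{
    uint8_t buf[SECTOR_SIZE] = { 0 };
    memset(flash, 0xFF, sizeof flash);              /* 2. erased state: all bits 1   */
    int first = write_vblock(7, buf, -1);
    int second = write_vblock(7, buf, first);       /* update: old copy goes obsolete */
    printf("virtual block 7 now at sector %d (was %d)\n", second, first);
    return 0;
}
```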
18
Block-Mapping Idea (2/2)
• Power-off during a write operation
(Case 1) If power is lost before the new sector is marked valid,
the partially written data is ignored.
(Case 2) If the new sector is marked valid but power is lost before
the previous sector is marked obsolete,
then both sectors are valid.
• Select either one according to their version numbers
19
Data Structure for Mapping
20
Flash Translation Layer
• Fully emulate magnetic disks with flash memory
– Support random-access
• Two features of flash memory
– Erase before write
– Erase unit size (block) is not the same as read/write size (page).
<Diagram: HOST (File System → Block Device Driver), connected via IDE/SCSI to the Flash Device (Controller, Flash Memory, ROM, RAM)>
21
Page-Level Mapping
• Logical Sector Number to Physical Sector Number
• Limitation
– The per-sector mapping table requires a large SRAM => high cost (sketch below)
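A minimal C sketch of page-level mapping, assuming illustrative device parameters (1 GB of 512 B sectors); the size of the table is what drives the SRAM cost.

```c
/* Page-level mapping: one table entry per logical sector, each pointing to a
 * physical sector.  Very flexible, but the table itself is large: a 1 GB
 * device with 512 B sectors needs 2M entries of 4 bytes = 8 MB of SRAM. */
#include <stdint.h>
#include <stdio.h>

#define NUM_LOGICAL_SECTORS (1u << 21)         /* 1 GB / 512 B */

static uint32_t page_map[NUM_LOGICAL_SECTORS]; /* logical sector -> physical sector */

static uint32_t lookup(uint32_t logical_sector)
{
    return page_map[logical_sector];
}

static void remap(uint32_t logical_sector, uint32_t new_physical)
{
    /* every write lands on a fresh physical sector, so only this entry changes */
    page_map[logical_sector] = new_physical;
}

int main(void)
{
    remap(42, 1000);                           /* logical sector 42 -> physical 1000 */
    printf("sector 42 -> physical %u\n", lookup(42));
    return 0;
}
```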
22
Block-Level Mapping
• Logical Block Number to Physical Block Number + Offset
• Limitation
– Write requests involve extra flash operations, because the other pages of the block must be copied to keep their offsets (sketch below)
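A minimal C sketch of block-level mapping, assuming illustrative parameters (64 pages per block); only the block number is translated and the page offset is preserved.

```c
/* Block-level mapping: only the logical block number is translated; the page
 * offset inside the block is preserved.  The table is small, but updating a
 * single page forces the rest of the block to be copied to a new block so
 * that the offsets still line up. */
#include <stdint.h>
#include <stdio.h>

#define PAGES_PER_BLOCK 64
#define NUM_LOGICAL_BLOCKS 32768

static uint32_t block_map[NUM_LOGICAL_BLOCKS]; /* logical block -> physical block */

static uint32_t to_physical(uint32_t logical_sector)
{
    uint32_t lblock = logical_sector / PAGES_PER_BLOCK;
    uint32_t offset = logical_sector % PAGES_PER_BLOCK; /* offset is preserved */
    return block_map[lblock] * PAGES_PER_BLOCK + offset;
}

int main(void)
{
    block_map[1] = 500;                        /* logical block 1 -> physical block 500 */
    printf("logical sector 70 -> physical sector %u\n", to_physical(70));
    return 0;
}
```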
23
Hybrid Approach (Page + Block)
<Diagram: a write trace over pages 1 2 3 4 followed by updates to pages 4, 3, 4, handled by (a) page-level mapping, (b) block-level mapping with block replacement, and (c) the hybrid scheme, where the updates go to a log block with its own page map and are merged with the data block later>
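A minimal C sketch of one common hybrid scheme (a log-block layout), assuming hypothetical structures: data blocks keep block-level mapping, while updates append to a log block that carries its own page-level map and is merged back when full.

```c
/* Hybrid (log-block) sketch: data blocks use block-level mapping; updates are
 * appended to a log block that keeps its own page-level map.  Reads check the
 * log block first and fall back to the data block; when the log block fills
 * up it is merged with its data block (merge not shown). */
#include <stdint.h>
#include <stdio.h>

#define PAGES_PER_BLOCK 4

struct log_block {
    uint32_t phys_block;                 /* physical block holding the log          */
    int      map[PAGES_PER_BLOCK];       /* page map: offset -> log slot, -1 = none */
    int      next_slot;                  /* next free slot in the log block         */
};

static uint32_t block_map[1024];         /* block-level map for the data blocks */

uint32_t read_sector(const struct log_block *log, uint32_t lblock, uint32_t offset)
{
    if (log->map[offset] >= 0)           /* latest copy lives in the log block  */
        return log->phys_block * PAGES_PER_BLOCK + (uint32_t)log->map[offset];
    return block_map[lblock] * PAGES_PER_BLOCK + offset;   /* original copy     */
}

int write_sector(struct log_block *log, uint32_t offset)
{
    if (log->next_slot == PAGES_PER_BLOCK)
        return -1;                       /* log full: merge with the data block */
    log->map[offset] = log->next_slot++; /* append and remember where it went   */
    return 0;
}

int main(void)
{
    struct log_block log = { .phys_block = 9, .map = { -1, -1, -1, -1 }, .next_slot = 0 };
    block_map[2] = 5;                    /* logical block 2 lives in physical block 5 */
    printf("before update: physical sector %u\n", read_sector(&log, 2, 3));
    write_sector(&log, 3);               /* update page 3: appended to the log block  */
    printf("after update:  physical sector %u\n", read_sector(&log, 2, 3));
    return 0;
}
```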
24
Garbage Collection
• Make room for new and updated blocks
– Obsolete sectors must be reclaimed.
– Reclamation operates on entire erase units, not individual sectors.
• Reclamation
– In the background (e.g., when the CPU is idle)
– On demand (e.g., when no free sectors remain)
– Goals
• Wear leveling
• Efficient reclamation
• Reclaiming process (sketched below)
1. Select erase units for reuse
2. Copy the valid sectors (within the erase unit) elsewhere
3. Update the mapping table
4. Erase the reclaimed units and add them to the sector reserve
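A minimal C sketch of these four steps over a toy flash model; the victim-selection policy and data structures are illustrative, not taken from any specific FTL.

```c
/* Garbage-collection sketch over a toy flash model: each erase unit tracks
 * which sectors still hold valid data.  Reclaiming a unit copies its valid
 * sectors to a free unit, (in a real FTL) updates the mapping table, then
 * erases the victim and returns it to the reserve of free units. */
#include <stdio.h>

#define UNITS 4
#define SECTORS_PER_UNIT 4

struct unit {
    int valid[SECTORS_PER_UNIT];   /* 1 = valid data, 0 = obsolete or free */
    int is_free;                   /* 1 = erased and held in the reserve   */
};

static struct unit flash[UNITS];

static int pick_victim(void)       /* 1. select the unit with the fewest valid sectors */
{
    int best = -1, best_valid = SECTORS_PER_UNIT + 1;
    for (int u = 0; u < UNITS; u++) {
        if (flash[u].is_free)
            continue;
        int v = 0;
        for (int s = 0; s < SECTORS_PER_UNIT; s++)
            v += flash[u].valid[s];
        if (v < best_valid) { best_valid = v; best = u; }
    }
    return best;
}

static void reclaim(int victim, int free_unit)
{
    for (int s = 0; s < SECTORS_PER_UNIT; s++)      /* 2. copy the valid sectors out   */
        flash[free_unit].valid[s] = flash[victim].valid[s];
    /* 3. a real FTL would update its mapping table here */
    for (int s = 0; s < SECTORS_PER_UNIT; s++)      /* 4. erase the victim unit...     */
        flash[victim].valid[s] = 0;
    flash[victim].is_free = 1;                      /*    ...and add it to the reserve */
    flash[free_unit].is_free = 0;
}

int main(void)
{
    flash[3].is_free = 1;                           /* one erased unit in the reserve */
    flash[0].valid[0] = flash[0].valid[1] = 1;      /* unit 0 still holds valid data  */
    int victim = pick_victim();                     /* picks a mostly obsolete unit   */
    reclaim(victim, 3);
    printf("reclaimed unit %d; it is now free: %d\n", victim, flash[victim].is_free);
    return 0;
}
```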
25
Wear Leveling
• Limitation
– Maximum number of erases/writes per cell (10K - 1M)
• Cell reliability decreases with wear (e.g., bad blocks).
• Wear Leveling
– Distribute erasures evenly over all cells
– Wear leveling versus efficiency
• The two goals conflict.
• For example, an erase unit containing static data:
– For efficiency, it should not be reclaimed,
because reclaiming it frees no storage.
– For wear leveling, it should be reclaimed,
because doing so reduces the wear on other units.
26
Wear-Centric Reclamation (1/3) [Lofgren et al. 2000, 2003]
• Using an erase counter stored with each erase unit
1. When the most worn-out unit is reclaimed, its counter is
compared to that of the least worn-out unit.
2. If the difference exceeds a threshold (e.g., 15,000),
the contents of the least worn-out unit (usually static data)
are moved to the spare unit, the contents of the most worn-out
unit are copied to the least worn-out unit, and the most
worn-out unit becomes the new spare (sketch below).
3. Otherwise, reclamation just proceeds as usual.
• Wear leveling
– Moves static blocks onto worn-out units
– The unit holding static data is usually the least worn-out unit
<Diagram: least worn-out unit, most worn-out unit (reclaimed), and spare unit, each with an erase counter and a mix of free/valid/invalid sectors>
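A minimal C sketch of the counter check, assuming an illustrative 128 KB erase unit; the 15,000 threshold is the example from the slide, and the helper names are hypothetical.

```c
/* Wear-leveling check in the style of Lofgren et al.: each erase unit keeps
 * an erase counter and one unit is held as a spare.  When the most worn-out
 * unit is reclaimed and its counter exceeds the least worn-out unit's by more
 * than a threshold, the static data of the least worn-out unit moves to the
 * spare, the reclaimed data moves onto the least worn-out unit, and the most
 * worn-out unit rests as the new spare. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define THRESHOLD 15000
#define UNIT_BYTES (128 * 1024)          /* illustrative erase-unit size */

struct unit {
    uint32_t erase_count;
    uint8_t  data[UNIT_BYTES];
};

/* Returns the new spare unit (either the old spare or the reclaimed unit). */
struct unit *reclaim_with_wear_leveling(struct unit *most_worn,
                                        struct unit *least_worn,
                                        struct unit *spare)
{
    if (most_worn->erase_count - least_worn->erase_count <= THRESHOLD)
        return spare;                                        /* normal reclamation      */

    memcpy(spare->data, least_worn->data, UNIT_BYTES);       /* static data -> spare    */
    memcpy(least_worn->data, most_worn->data, UNIT_BYTES);   /* reclaimed -> least worn */
    return most_worn;                                        /* most worn becomes spare */
}

int main(void)
{
    static struct unit a = { .erase_count = 20000 },         /* most worn-out unit  */
                       b = { .erase_count = 100 },           /* least worn-out unit */
                       s = { .erase_count = 15000 };         /* current spare       */
    struct unit *new_spare = reclaim_with_wear_leveling(&a, &b, &s);
    printf("wear-leveling swap happened: %s\n", new_spare == &a ? "yes" : "no");
    return 0;
}
```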
27
Wear-Centric Reclamation (2/3) [Jou and Jeppesen III 1996]
• Using the wear (number of erasures) with deferred erasure
– The valid contents of the erase unit being reclaimed are copied to another unit.
– The reclaimed unit is not erased immediately.
– It is marked as an erasure candidate and added to a queue of erase candidates kept in RAM.
– The queue is sorted by wear.
– Whenever the system needs a free unit, the candidate with the least wear is erased (sketch below).
<Diagram: reclaimed units U0 and U1 are placed on a RAM priority queue sorted by wear; free units are taken from the least-worn end>
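A minimal C sketch of the deferred-erase queue, assuming a simple sorted array in RAM stands in for the priority queue; names and sizes are illustrative.

```c
/* Deferred-erase sketch in the style of Jou and Jeppesen: a reclaimed unit is
 * not erased right away.  It is pushed onto a RAM queue ordered by wear, and
 * units are erased, least-worn first, only when a free unit is needed. */
#include <stdio.h>
#include <stdlib.h>

struct candidate {
    int unit_id;
    unsigned wear;                 /* number of erasures so far */
};

static struct candidate queue[64]; /* simple array stands in for a priority queue */
static int queue_len;

static int by_wear(const void *a, const void *b)
{
    const struct candidate *x = a, *y = b;
    return (int)x->wear - (int)y->wear;
}

void add_candidate(int unit_id, unsigned wear)   /* called after valid data is copied out */
{
    queue[queue_len].unit_id = unit_id;
    queue[queue_len].wear = wear;
    queue_len++;
    qsort(queue, queue_len, sizeof queue[0], by_wear);   /* keep the queue sorted by wear */
}

int get_free_unit(void)                          /* erase and hand out the least worn candidate */
{
    if (queue_len == 0)
        return -1;
    int id = queue[0].unit_id;
    queue_len--;
    for (int i = 0; i < queue_len; i++)          /* pop the front of the queue */
        queue[i] = queue[i + 1];
    /* erase_unit(id) would be issued to the flash device here */
    return id;
}

int main(void)
{
    add_candidate(7, 120);                       /* unit 7 reclaimed after 120 erasures */
    add_candidate(3, 40);                        /* unit 3 reclaimed after 40 erasures  */
    printf("next free unit (least worn): %d\n", get_free_unit());
    return 0;
}
```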
28
Other Wear-Centric Reclamations (3/3)
• Using erase latencies [Han 2000]
– Erase latency increases with wear.
– Erase times are therefore used to rank erase units by wear.
– This avoids storing erase counters.
• Randomized wear leveling [Woodhouse 2001]
– Every 1000th reclamation, a unit containing only valid data is
selected.
– Pros and cons
• (+) Moves static data from units with little wear to units with more wear
• (-) Extreme wear imbalance can still occur (e.g., a little-worn unit holding
invalid data may never be reclaimed).
29
Wear-Leveling with Efficient Reclamation (1/2)
• Using a weighted benefit/cost [Kawaguchi et al. 1995]
– Benefit: the amount of invalid space in the unit
– Cost: the need to read the valid data and to write back
elsewhere
– Weight: the age of the block, time since last invalidation
• Large weight → the remaining valid data are relatively static.
– Reclaim the unit with the largest score (sketch below):
score = (benefit × weight) / cost
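A minimal C sketch of the score, assuming the weight (age) multiplies the benefit as in the usual cost-benefit form, and that copying valid data costs roughly one read plus one write; function and variable names are illustrative.

```c
/* Weighted benefit/cost score for choosing a reclamation victim:
 *   benefit = invalid space that erasing the unit would free,
 *   cost    = reading the remaining valid data and writing it back elsewhere,
 *   weight  = age, i.e. time since the last invalidation in the unit.
 * A high score means lots of reclaimable space whose remaining valid data
 * looks static, so the unit is a good candidate. */
#include <stdint.h>
#include <stdio.h>

double reclamation_score(uint32_t invalid_sectors,
                         uint32_t valid_sectors,
                         double   age_seconds)
{
    double benefit = (double)invalid_sectors;           /* space freed                   */
    double cost    = 2.0 * (double)valid_sectors;       /* read + write back             */
    if (cost == 0.0)
        return 1e9;                                     /* nothing to copy: ideal victim */
    return (benefit * age_seconds) / cost;              /* weight multiplies benefit     */
}

int main(void)
{
    /* a unit with much invalid space and old, static valid data scores high */
    printf("score: %.1f\n", reclamation_score(56, 8, 3600.0));
    return 0;
}
```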
30
Wear-Leveling with Efficient Reclamation (2/2)
• Using Hot block and Cold block [Kawaguchi et al. 1995]
– Cold block: Block allocated with low wear level
– Hot block: Block with high wear level
– Observation
• Units with dynamic data tend to be almost empty upon reclamation.
• Static units do not need to be reclaimed at all.
<Diagram: flash divided into cold blocks, hot blocks, and free blocks>
31
Hybrid Hard Disk (HDD)
• Seagate’s ReadyDrive
– Hybrid hard-drive prototypes from Samsung and Seagate for laptops
(WinHEC 2006)
– 128 MB of NAND flash memory inside the hard disk
• Stores frequently accessed sectors for quick reads
(e.g., the FAT table)
– The flash lets the disk stay spun down longer, so it powers down and up less often.
– Advantages
• Reliability, power efficiency, and improved performance
• Experiment
– Ran office applications
– The disk spun up only every three to four minutes
– About 10% power saving
http://www.extremetech.com/article2/0,1697,1966806,00.asp (May 24, 2006)
32
Seagate’s 5400 RPM Hybrid Hard Drive
• 160 GB of regular perpendicular magnetic recording (PMR) capacity
• 256 MB of NAND flash memory
• Targeted at Windows Vista, shipping in Q1 2007
33
NAND Flash: Possible to replace the existing hard disks?
• Flash is a good fit for mobile devices.
– Mobile devices (e.g., portable media players, cell phones)
• Most applications are media playback (audio, video, etc.)
• Reads dominate over writes.
• How about server disks?
– NAND flash has a high cost per gigabyte and limited capacity compared to
traditional disks
• 64 GB flash disk (Samsung, 2006) vs. 300 GB Seagate Cheetah 15K.5
– Lower data reliability because of wear-out
• Write-heavy workloads can wear out the cells.
34
FLASHCACHE [HCSS’94]
• DRAM management
– LRU block replacement
• Flash management
– Segment = a set of blocks / an erase unit
– Segment lists (free / clean / dirty)
– Segment replacement (FIFO or LRU)
• Disk management
– Power management by spinning the disk up and down
35
eNVy [ASPLOS’94]
<Diagram of eNVy in a Host>
<Diagram of eNVy Architecture>
36
eNVy [ASPLOS’94]
<Copy on Write: Atomic operation>
• Write operation
– Flash memory uses a copy-on-write operation.
– SRAM serves as a write buffer for fast writes to flash.
– Page replacement in the SRAM buffer: FIFO
• The page map (logical to physical addresses) is kept in SRAM.
37
NVCache [MASCOTS’06]
• Goal: reduce disk power consumption
• NVCache
– Reduces disk power consumption by combining a non-volatile flash cache
with an adaptive disk spin-down algorithm
– Extends spin-down periods by servicing I/O from the NVCache while the disk
is spun down
38
Non-volatile Memory File Systems
• JFFS2 (Journalling Flash File System, version 2)
– Built into the Linux kernel since 2.4.x
– JFFS1 (1999) → JFFS2 (2001) → JFFS3 (ongoing)
39
Non-volatile Memory File Systems
• Using NVRAM at the file system level
– Conquest file system [USENIX’02]
• Persistent RAM (a form of NVRAM)
• The NVRAM stores metadata, small files, executables, and shared libraries.
– HeRMES file system [HotOS’01]
• Magnetic RAM (a form of NVRAM)
• The NVRAM stores metadata and small data (the first few blocks of a file).
– (+) Reduces metadata overhead for reads and writes, improving performance
• The NVRAM is also used as a write cache.
– (+) Improves write performance (by buffering and reordering writes)
40