Lecture 1: Course Introduction and Overview

Motivation: Who Cares About I/O?
• CPU Performance: ~60% per year
• I/O system performance limited by mechanical
delays (disk I/O): < 10% per year (I/Os per sec)
• Amdahl's Law: system speed-up limited by the
slowest part!
10% I/O & 10x CPU => 5x Performance (lose 50%)
10% I/O & 100x CPU => 10x Performance (lose 90%)
• I/O bottleneck:
Diminishing fraction of time in CPU
Diminishing value of faster CPUs
Big Picture: Who cares about CPUs?
• Why still important to keep CPUs busy vs. IO
devices ("CPU time"), as CPUs not costly?
– Moore's Law leads both to large, fast CPUs and to very
small, cheap CPUs
– 2001 Hypothesis: 600 MHz PC is fast enough for Office
Tools?
– PC slowdown since fast enough unless games, new apps?
• People care more about storing information
and communicating information than calculating
– "Information Technology" vs. "Computer Science"
– 1960s and 1980s: Computing Revolution
– 1990s and 2000s: Information Age
I/O Systems
[Diagram: Processor with cache, connected via interrupts to a Memory - I/O Bus; Main Memory and several I/O Controllers sit on the bus, with disks, graphics, and a network attached behind the controllers]
Storage Technology Drivers
• Driven by the prevailing computing paradigm
– 1950s: migration from batch to on-line processing
– 1990s: migration to ubiquitous computing
» computers in phones, books, cars, video cameras, …
» nationwide fiber optical network with wireless tails
• Effects on storage industry:
– Embedded storage
» smaller, cheaper, more reliable, lower power
– Data utilities
» high capacity, hierarchically managed storage
Disk Device Terminology
[Diagram of a disk: platter with inner and outer tracks divided into sectors; arm, head, and actuator labeled]
• Several platters, with information recorded magnetically on both
surfaces (usually)
• Bits recorded in tracks, which in turn divided into sectors (e.g.,
512 Bytes)
• Actuator moves head (at the end of the arm, 1 per surface) over track ("seek"),
select surface, wait for sector to rotate under head, then read or
write
– "Cylinder": all tracks under the heads
Photo of Disk Head, Arm,
Actuator
[Photo: spindle, arm, head, actuator, and a stack of 12 platters]
Disk Device Performance
[Diagram: platter with inner and outer tracks and sectors; head, arm, actuator, spindle, and controller labeled]
• Disk Latency = Seek Time + Rotation Time + Transfer
Time + Controller Overhead
• Seek Time? Depends on the number of tracks the arm must move and the seek speed of the disk
• Rotation Time? Depends on how fast the disk rotates and how far the sector is from the head
• Transfer Time? Depends on the data rate (bandwidth) of the disk (bit density) and the size of the request
Disk Device Performance
• Average distance of sector from head?
• 1/2 time of a rotation
– 10,000 Revolutions Per Minute => 166.67 Rev/sec
– 1 revolution = 1/166.67 sec => 6.00 milliseconds
– 1/2 rotation (revolution) => 3.00 ms
• Average no. tracks to move arm?
– Sum all possible seek distances
from all possible tracks / # possible
» Assumes average seek distance is random
– Disk industry standard benchmark
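A quick sketch of the rotational-latency arithmetic above in Python (10,000 RPM is the slide's example figure, not a particular product):

    def avg_rotational_latency_ms(rpm):
        # Average rotational latency = time for half a revolution, in ms
        rev_per_sec = rpm / 60.0            # 10,000 RPM -> 166.67 rev/sec
        ms_per_rev = 1000.0 / rev_per_sec   # -> 6.00 ms per revolution
        return ms_per_rev / 2.0             # half a rotation on average -> 3.00 ms

    print(avg_rotational_latency_ms(10000))   # 3.0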
Data Rate: Inner vs. Outer Tracks
• To keep things simple, originally kept same number of
sectors per track
– Since outer track longer, lower bits per inch
• Competition => decided to keep BPI the same for all
tracks ("constant bit density")
=> More capacity per disk
=> More sectors per track towards the edge
=> Since disk spins at constant angular speed,
outer tracks have a faster data rate
• Bandwidth of outer track 1.7X inner track!
– Inner track has the highest BPI, outer track the lowest, so density is not really
constant
– 2.1X length of track outer / inner, 1.7X bits outer / inner
Devices: Magnetic Disks
• Purpose:
– Long-term, nonvolatile storage
– Large, inexpensive, slow level in
the storage hierarchy
• Characteristics:
– Seek Time (~8 ms avg)
» positional latency
» rotational latency
– Transfer rate: 10-40 MByte/sec, in blocks
» 7200 RPM = 120 RPS => 8 ms per rev
» ave rot. latency = 4 ms
» 128 sectors per track => 0.25 ms per sector
» 1 KB per sector => 16 MB/s
• Capacity
– Gigabytes
– Quadruples every 2 years (aerodynamics)
[Diagram: platter, track, sector, cylinder, head]
Response time = Queue + Controller + Seek + Rot + Xfer
(Controller + Seek + Rot + Xfer = Service time)
Disk Performance Model / Trends
• Capacity: +100%/year (2X / 1.0 yrs)
• Transfer rate (BW): +40%/year (2X / 2.0 yrs)
• Rotation + Seek time: improves only ~8%/year (1/2 in 10 yrs)
• MB/$: >100%/year (2X / 1.0 yrs)
Fewer chips + areal density
State of the Art: Barracuda 180
[Diagram: platter, track, sector, cylinder, arm, head, buffer]
Latency = Queuing Time + Controller time
+ Seek Time + Rotation Time (per access)
+ Size / Bandwidth (per byte)
– 181.6 GB, 3.5 inch disk
– 12 platters, 24 surfaces
– 24,247 cylinders
– 7,200 RPM (4.2 ms avg. rotational latency)
– 7.4/8.2 ms avg. seek (read/write)
– 64 to 35 MB/s (internal)
– 0.1 ms controller time
– 10.3 watts (idle)
source: www.seagate.com
Disk Performance Example (will fix later)
• Calculate time to read 64 KB (128 sectors) for
Barracuda 180 X using advertised performance;
sector is on outer track
Disk latency = average seek time + average
rotational delay + transfer time + controller
overhead
= 7.4 ms + 0.5 * 1/(7200 RPM)
+ 64 KB / (65 MB/s) + 0.1 ms
= 7.4 ms + 0.5 /(7200 RPM/(60,000 ms/minute))
+ 64 KB / (65 KB/ms) + 0.1 ms
= 7.4 + 4.2 + 1.0 + 0.1 ms = 12.7 ms
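The same arithmetic as a small Python sketch, plugging in the advertised Barracuda 180 numbers from the slide (7.4 ms seek, 7,200 RPM, 65 MB/s outer-track rate, 0.1 ms controller time); a back-of-the-envelope helper, not vendor code:

    def disk_latency_ms(seek_ms, rpm, transfer_mb_per_s, request_kb, controller_ms):
        rot_ms = 0.5 * 60000.0 / rpm               # average rotational delay: half a revolution
        xfer_ms = request_kb / transfer_mb_per_s   # 1 MB/s is roughly 1 KB/ms
        return seek_ms + rot_ms + xfer_ms + controller_ms

    # 64 KB read on the outer track, advertised performance
    print(disk_latency_ms(7.4, 7200, 65, 64, 0.1))   # ~12.7 ms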
Areal Density
• Bits recorded along a track
– Metric is Bits Per Inch (BPI)
• Number of tracks per surface
– Metric is Tracks Per Inch (TPI)
• Disk Designs Brag about bit density per unit area
– Metric is Bits Per Square Inch
– Called Areal Density
– Areal Density = BPI x TPI
Areal Density
Year      Areal Density (Mbit/sq. in.)
1973            1.7
1979            7.7
1989           63
1997         3090
2000        17100
– Areal Density = BPI x TPI
– Slope changed from ~30%/yr to ~60%/yr about 1991
MBits per square inch:
DRAM as % of Disk over time
[Chart, 1974-2000: DRAM areal density as a percentage of disk areal density; sample points are 0.2 vs. 1.7 Mb/sq. in., 9 vs. 22 Mb/sq. in., and 470 vs. 3000 Mb/sq. in.]
source: New York Times, 2/23/98, page C3,
"Makers of disk drives crowd even more data into even smaller spaces"
Historical Perspective
• 1956 IBM Ramac — early 1970s Winchester
– Developed for mainframe computers, proprietary interfaces
– Steady shrink in form factor: 27 in. to 14 in
• Form factor and capacity drives market, more than
performance
• 1970s: Mainframes  14 inch diameter disks
• 1980s: Minicomputers,Servers  8”,5 1/4” diameter
• Late 1980s/Early 1990s: PCs, workstations
– Mass market disk drives become a reality
» industry standards: SCSI, IPI, IDE
– Pizzabox PCs  3.5 inch diameter disks
– Laptops, notebooks  2.5 inch disks
– Palmtops didn’t use disks,
so 1.8 inch diameter disks didn’t make it
• 2000s:
– 1 inch for cameras, cell phones?
Disk History
[Graph: data density (Mbit/sq. in.) and capacity of the unit shown (MBytes)]
1973: 1.7 Mbit/sq. in., 140 MBytes
1979: 7.7 Mbit/sq. in., 2,300 MBytes
source: New York Times, 2/23/98, page C3,
“Makers of disk drives crowd even more data into even smaller spaces”
Disk History
1989: 63 Mbit/sq. in., 60,000 MBytes
1997: 1450 Mbit/sq. in., 2,300 MBytes
1997: 3090 Mbit/sq. in., 8,100 MBytes
source: New York Times, 2/23/98, page C3,
“Makers of disk drives crowd even more data into even smaller spaces”
1 inch disk drive!
• 2000 IBM MicroDrive:
– 1.7” x 1.4” x 0.2”
– 1 GB, 3600 RPM,
5 MB/s, 15 ms seek
– Digital camera, PalmPC?
• 2006 MicroDrive?
• 9 GB, 50 MB/s!
– Assuming it finds a niche
in a successful product
– Assuming past trends continue
Disk Characteristics in 2000
                                 Seagate Cheetah    IBM Travelstar     IBM 1GB Microdrive
                                 ST173404LC         32GH DJSA-232      DSCM-11000
                                 Ultra160 SCSI      ATA-4
Disk diameter (inches)           3.5                2.5                1.0
Formatted data capacity (GB)     73.4               32.0               1.0
Cylinders                        14,100             21,664             7,167
Disks                            12                 4                  1
Recording Surfaces (Heads)       24                 8                  2
Bytes per sector                 512 to 4096        512                512
Avg sectors per track (512 B)    ~ 424              ~ 360              ~ 140
Max. areal density (Gbit/sq.in.) 6.0                14.0               15.2
Price                            $828               $447               $435
Disk Characteristics in 2000
                                 Seagate Cheetah    IBM Travelstar     IBM 1GB Microdrive
                                 ST173404LC         32GH DJSA-232      DSCM-11000
                                 Ultra160 SCSI      ATA-4
Rotation speed (RPM)             10,033             5,411              3,600
Avg. seek ms (read/write)        5.6/6.2            12.0               12.0
Minimum seek ms (read/write)     0.6/0.9            2.5                1.0
Max. seek ms                     14.0/15.0          23.0               19.0
Data transfer rate (MB/second)   27 to 40           11 to 21           2.6 to 4.2
Link speed to buffer (MB/s)      160                67                 13
Power idle/operating (Watts)     16.4 / 23.5        2.0 / 2.6          0.5 / 0.8
Disk Characteristics in 2000
                                 Seagate Cheetah    IBM Travelstar     IBM 1GB Microdrive
                                 ST173404LC         32GH DJSA-232      DSCM-11000
                                 Ultra160 SCSI      ATA-4
Buffer size in MB                4.0                2.0                0.125
Size: height x width x depth     1.6 x 4.0 x 5.8    0.5 x 2.7 x 3.9    0.2 x 1.4 x 1.7
(inches)
Weight (pounds)                  2.00               0.34               0.035
Rated MTTF in powered-on hours   1,200,000          (300,000?)         (20K/5 yr life?)
% of POH per month               100%               45%                20%
% of POH seeking, reading,       90%                20%                20%
writing
Disk Characteristics in 2000
                                 Seagate Cheetah    IBM Travelstar     IBM 1GB Microdrive
                                 ST173404LC         32GH DJSA-232      DSCM-11000
                                 Ultra160 SCSI      ATA-4
Load/Unload cycles               250 per year       300,000            300,000
(disk powered on/off)
Nonrecoverable read errors       < 1 per 10^15      < 1 per 10^13      < 1 per 10^13
per bits read
Seek errors                      < 1 per 10^7       not available      not available
Shock tolerance:                 10 G, 175 G        150 G, 700 G       175 G, 1500 G
Operating, Not operating
Vibration tolerance:             5-400 Hz @ 0.5G,   5-500 Hz @ 1.0G,   5-500 Hz @ 1G,
Operating, Not operating         22-400 Hz @ 2.0G   2.5-500 Hz @ 5.0G  1-500 Hz @ 5G
(sine swept, 0 to peak)
Fallacy: Use Data Sheet “Average Seek” Time
• Manufacturers needed standard for fair comparison
(“benchmark”)
– Calculate all seeks from all tracks, divide by number of seeks =>
“average”
• Real average would be based on how data laid out on
disk, where seek in real applications, then measure
performance
– Usually, tend to seek to tracks nearby, not to random track
• Rule of Thumb: observed average seek time is
typically about 1/4 to 1/3 of quoted seek time
(i.e., 3X-4X faster)
– Barracuda 180 X avg. seek: 7.4 ms => ~2.5 ms observed
Fallacy: Use Data Sheet Transfer Rate
• Manufacturers quote the data rate off
the surface of the disk
• Sectors contain an error detection and correction
field (can be 20% of sector size) plus a sector number
as well as data
• There are gaps between sectors on a track
• Rule of Thumb: disks deliver about 3/4 of the
internal media rate (1.3X slower) for data
• For example, the Barracuda 180X quotes
64 to 35 MB/sec internal media rate
=> 47 to 26 MB/sec external data rate (74%)
Disk Performance Example
• Calculate time to read 64 KB for the Barracuda 180 X
again, this time using 1/3 of the quoted seek time and 3/4 of the
internal outer-track bandwidth; (12.7 ms before)
Disk latency = average seek time + average
rotational delay + transfer time + controller
overhead
= (0.33 * 7.4 ms) + 0.5 * 1/(7200 RPM)
+ 64 KB / (0.75 * 65 MB/s) + 0.1 ms
= 2.5 ms + 0.5 /(7200 RPM/(60,000 ms/minute))
+ 64 KB / (47 KB/ms) + 0.1 ms
= 2.5 + 4.2 + 1.4 + 0.1 ms = 8.2 ms (64% of 12.7 ms)
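Applying the two rules of thumb (observed seek ~1/3 of quoted, delivered bandwidth ~3/4 of internal media rate) with the same helper sketched earlier:

    print(disk_latency_ms(seek_ms=0.33 * 7.4, rpm=7200,
                          transfer_mb_per_s=0.75 * 65,
                          request_kb=64, controller_ms=0.1))   # ~8 ms (the slide rounds to 8.2 ms)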
Future Disk Size and Performance
• Continued advance in capacity (60%/yr) and
bandwidth (40%/yr)
• Slow improvement in seek, rotation (8%/yr)
• Time to read whole disk
Year      Sequentially     Randomly (1 sector/seek)
1990      4 minutes        6 hours
2000      12 minutes       1 week (!)
• Does the 3.5" form factor make sense in 5 yrs?
– What is capacity, bandwidth, seek time, RPM?
– Assume today 80 GB, 30 MB/sec, 6 ms, 10,000 RPM
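A sketch of the "time to read the whole disk" comparison for the assumed drive above (80 GB, 30 MB/sec, 6 ms seek, 10,000 RPM); the 512-byte sector size is an added assumption:

    def whole_disk_read_times(capacity_gb, bw_mb_s, seek_ms, rpm, sector_bytes=512):
        sequential_s = capacity_gb * 1e9 / (bw_mb_s * 1e6)            # one long streaming read
        per_sector_s = (seek_ms + 0.5 * 60000.0 / rpm) / 1000.0 + \
                       sector_bytes / (bw_mb_s * 1e6)                 # seek + rotate + tiny transfer
        random_s = (capacity_gb * 1e9 / sector_bytes) * per_sector_s  # 1 sector per seek
        return sequential_s / 60.0, random_s / 86400.0                # minutes, days

    print(whole_disk_read_times(80, 30, 6, 10000))   # roughly (45 minutes, 16 days)

For this assumed drive the sequential/random gap is about 500:1; the 1990 and 2000 rows in the table use different assumed drives, so their absolute numbers differ.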
Tape vs. Disk
• Longitudinal tape uses same technology as
hard disk; tracks its density improvements
• Disk head flies above surface, tape head lies on surface
• Disk fixed, tape removable
• Inherent cost-performance based on geometries:
fixed rotating platters with gaps
(random access, limited area, 1 media / reader)
vs.
removable long strips wound on spool
(sequential access, "unlimited" length, multiple / reader)
• Helical Scan (VCR, Camcorder, DAT)
Spins head at an angle to the tape to improve density
Current Drawbacks to Tape
• Tape wear out:
– Helical 100s of passes to 1000s for longitudinal
• Head wear out:
– 2000 hours for helical
• Both must be accounted for in economic /
reliability model
• Bits stretch
• Readers must be compatible with multiple
generations of media
• Long rewind, eject, load, spin-up times;
not inherent, just no need in marketplace
• Designed for archival
Automated Cartridge System:
StorageTek Powderhorn 9310
7.7 feet x 10.7 feet, 8200 pounds, 1.1 kilowatts
• 6000 x 50 GB 9830 tapes = 300 TBytes in
2000 (uncompressed)
– Library of Congress: all information in the world; in 1992,
ASCII of all books = 30 TB
– Exchange up to 450 tapes per hour (8 secs/tape)
• 1.7 to 7.7 Mbyte/sec per reader, up to 10
readers
Library vs. Storage
• Getting books today as quaint as the way I
learned to program
– punch cards, batch processing
– wander thru shelves, anticipatory purchasing
• Cost $1 per book to check out
• $30 for a catalogue entry
• 30% of all books never checked out
• Write-only journals?
• Digital library can transform campuses
Whither tape?
• Investment in research:
– 90% of disks shipped in PCs; 100% of PCs have disks
– ~0% of tape readers shipped in PCs; ~0% of PCs have tape readers
• Before, N disks / tape; today, N tapes / disk
– 40 GB/DLT tape (uncompressed)
– 80 to 192 GB/3.5" disk (uncompressed)
• Cost per GB:
– In past, 10X to 100X tape cartridge vs. disk
– Jan 2001: 40 GB for $53 (DLT cartridge), $2800 for reader
– $1.33/GB cartridge, $2.03/GB for 100 cartridges + 1 reader
– ($10,995 for 1 reader + 15-tape autoloader, $10.50/GB)
– Jan 2001: 80 GB for $244 (IDE, 5400 RPM), $3.05/GB
– Will $/GB of tape v. disk cross in 2001? 2002? 2003?
• Storage field is based on tape backup; what should we do?
Long-Time Archiving
• Need to preserve
– Bit values
– Interpretation of bit values
• Two approaches
– Migration
– Emulation: Long-lived format/specification
• Digital microfilm project
– Bitmap as the unified format
– Decompression with source code
– Continuous disk migration
Use Arrays of Small Disks?
•Katz and Patterson asked in 1987:
•Can smaller disks be used to close gap in
performance between disks and CPUs?
Conventional: 4 disk designs (3.5", 5.25", 10", 14"), spanning low end to high end
Disk Array: 1 disk design (3.5")
Advantages of Small Form-factor
Disk Drives
Low cost/MB
High MB/volume
High MB/watt
Low cost/Actuator
Cost and Environmental Efficiencies
Replace Small Number of Large Disks with
Large Number of Small Disks! (1988 Disks)
                  IBM 3390K      IBM 3.5" 0061    x70
Capacity          20 GBytes      320 MBytes       23 GBytes
Volume            97 cu. ft.     0.1 cu. ft.      11 cu. ft.     9X
Power             3 KW           11 W             1 KW           3X
Data Rate         15 MB/s        1.5 MB/s         120 MB/s       8X
I/O Rate          600 I/Os/s     55 I/Os/s        3900 I/Os/s    6X
MTTF              250 KHrs       50 KHrs          ??? Hrs
Cost              $250K          $2K              $150K
Disk Arrays have potential for large data and
I/O rates, high MB per cu. ft., high MB per KW,
but what about reliability?
Array Reliability
• Reliability of N disks = Reliability of 1 Disk ÷ N
50,000 Hours ÷ 70 disks = 700 hours
Disk system MTTF: Drops from 6 years to 1 month!
• Arrays (without redundancy) too unreliable to be useful!
Hot spares support reconstruction in parallel with
access: very high media availability can be achieved
Redundant Arrays of (Inexpensive) Disks
• Files are "striped" across multiple disks
• Redundancy yields high data availability
– Availability: service still provided to user, even if some components
failed
• Disks will still fail
• Contents reconstructed from data redundantly
stored in the array
=> Capacity penalty to store redundant info
=> Bandwidth penalty to update redundant info
Redundant Arrays of Inexpensive Disks
RAID 1: Disk Mirroring/Shadowing
[Diagram: each disk paired with its mirror; a pair forms a recovery group]
• Each disk is fully duplicated onto its "mirror"
=> Very high availability can be achieved
• Bandwidth sacrifice on write:
Logical write = two physical writes
• Reads may be optimized
• Most expensive solution: 100% capacity overhead
• (RAID 2 not interesting, so skip)
Redundant Array of Inexpensive Disks
RAID 3: Parity Disk
[Diagram: a logical record (10010011 11001101 10010011 ...) is striped bit-by-bit across the data disks, with a parity disk P alongside]
• Striped physical records
• P contains the sum of the other disks per stripe,
mod 2 ("parity")
• If a disk fails, subtract P from the sum of the other
disks to find the missing information
RAID 3
• Sum computed across recovery group to protect against hard disk
failures, stored in P disk
• Logically, a single high capacity, high transfer rate disk: good
for large transfers
• Wider arrays reduce capacity costs, but decrease availability
• 33% capacity cost for parity in this configuration
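A minimal byte-level sketch of the parity idea (XOR is addition mod 2): P is the XOR of the data disks, and XOR-ing P with the surviving disks regenerates a failed disk. Illustration only, not a real RAID implementation:

    from functools import reduce

    def parity(blocks):
        # P = XOR of corresponding bytes across all blocks
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    data = [b'\x93', b'\xcd', b'\x93']   # the slide's 10010011, 11001101, 10010011
    p = parity(data)

    # Disk 1 fails: rebuild its contents from P plus the surviving disks
    rebuilt = parity([p, data[0], data[2]])
    assert rebuilt == data[1]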
Inspiration for RAID 4
• RAID 3 relies on parity disk to discover errors
on Read
• But every sector has an error detection field
• Rely on error detection field to catch errors on read, not on the
parity disk
• Allows independent reads to different disks simultaneously
Redundant Arrays of Inexpensive Disks
RAID 4: High I/O Rate Parity
[Diagram: insides of 5 disks; each stripe holds four data blocks (D0-D3, D4-D7, ..., D20-D23) on the first four disks plus a parity block P on the fifth; logical disk addresses increase along a stripe and then down the disk columns]
Example: small read of D0 & D5, large write of D12-D15
Inspiration for RAID 5
• RAID 4 works well for small reads
• Small writes (write to one disk):
– Option 1: read other data disks, create new sum and write to Parity
Disk
– Option 2: since P has old sum, compare old data to new data, add
the difference to P
• Small writes are limited by Parity Disk: Write to D0,
D5 both also write to P disk
[Diagram: a stripe D0-D3 with its parity P and a stripe D4-D7 with its parity P; writes to D0 and D5 both must also update the parity disk]
Redundant Arrays of Inexpensive Disks
RAID 5: High I/O Rate Interleaved
Parity
• Independent writes possible because of interleaved parity
[Diagram: 5 disk columns; data blocks D0-D23 and parity blocks P are interleaved so that the parity block of successive stripes rotates across the disks; logical disk addresses increase across each stripe]
Example: write to D0, D5 uses disks 0, 1, 3, 4
Problems of Disk Arrays:
Small Writes
RAID-5: Small Write Algorithm
1 Logical Write = 2 Physical Reads + 2 Physical Writes
[Diagram: new data D0' arrives for a stripe holding D0 D1 D2 D3 and parity P
(1. Read) old data D0, (2. Read) old parity P
new parity P' = D0' XOR D0 XOR P
(3. Write) new data D0', (4. Write) new parity P']
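A sketch of the small-write shortcut: the new parity follows from the old data, the new data, and the old parity alone (2 reads + 2 writes), without touching the other data disks. Hypothetical byte-level Python, not any array's firmware:

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def raid5_small_write(old_data, new_data, old_parity):
        # (1. Read old data and 2. Read old parity happen before this call)
        new_parity = xor_bytes(old_parity, xor_bytes(old_data, new_data))  # P' = P xor D xor D'
        # (3. Write new data, 4. Write new parity)
        return new_data, new_parity

    # Sanity check against recomputing parity over the whole stripe
    stripe = [b'\x12', b'\x34', b'\x56', b'\x78']
    p = b'\x00'
    for blk in stripe:
        p = xor_bytes(p, blk)
    new_d0, new_p = raid5_small_write(stripe[0], b'\xff', p)
    full_p = b'\x00'
    for blk in [new_d0, stripe[1], stripe[2], stripe[3]]:
        full_p = xor_bytes(full_p, blk)
    assert new_p == full_p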
Log-Structured File System (LFS)
• Organize the file system as a log
• Aggregate small writes into a big write
• Write-optimized
• Garbage collection could be a problem
• Read performance may suffer (FFS-style
clustering is not possible)
• Applied only to file system metadata =>
journaling file system
Track-based Logging (Trail)
• A normal disk and a log disk
• Write to where the disk head of the log
disk happens to be
– Track-by-track logging: one batched write per track
– Trade space for time
• Write to the normal disk asynchronously
• Very low write latency ( < 1 msec)
• Extensions
– For continuous data protection (CDP)
– Multiple writes per track
– Multiple log disks
System Availability: Orthogonal RAIDs
[Diagram: an Array Controller connects to several String Controllers, each driving a string of disks; a data recovery group takes one disk from each string, orthogonal to the strings]
Data Recovery Group: unit of data redundancy
Redundant Support Components: fans, power supplies, controller, cables
End to End Data Integrity: internal parity protected data paths
System-Level Availability
[Diagram: two hosts, each attached to fully dual-redundant I/O Controllers and Array Controllers, leading down to shared recovery groups of disks]
Goal: No Single Points of Failure
With duplicated paths, higher performance can be
obtained when there are no failures
Berkeley History:
RAID-I
• RAID-I (1989)
– Consisted of a Sun 4/280
workstation with 128 MB of
DRAM, four dual-string SCSI
controllers, 28 5.25-inch SCSI
disks and specialized disk striping
software
• Today RAID is a $19 billion
industry; 80% of non-PC
disks are sold in RAIDs
Summary: RAID Techniques: Goal
was performance, popularity due to
reliability of storage
• Disk Mirroring, Shadowing (RAID 1)
Each disk is fully duplicated onto its "shadow"
Logical write = two physical writes
100% capacity overhead
• Parity Data Bandwidth Array (RAID 3)
Parity computed horizontally
Logically a single high data bandwidth disk
• High I/O Rate Parity Array (RAID 5)
Interleaved parity blocks
Independent reads and writes
Logical write = 2 reads + 2 writes
Summary Storage
• Disks:
– Extraordinary advance in capacity/drive, $/GB
– Currently 17 Gbit/sq. in.; can continue past 100
Gbit/sq. in.?
– Bandwidth, seek time not keeping up: 3.5 inch form
factor makes sense? 2.5 inch form factor in near
future? 1.0 inch form factor in long term?
• Tapes
– No investment, must be backwards compatible
– Are they already dead?
– What is a tapeless backup system?
Review: RAID Techniques: Goal was
performance, popularity due to
reliability of storage
• Disk Mirroring, Shadowing (RAID 1)
Each disk is fully duplicated onto its "shadow"
Logical write = two physical writes
100% capacity overhead
• Parity Data Bandwidth Array (RAID 3)
Parity computed horizontally
Logically a single high data bandwidth disk
• High I/O Rate Parity Array (RAID 5)
Interleaved parity blocks
Independent reads and writes
Logical write = 2 reads + 2 writes
Outline
• Reliability Terminology
• Examples
• Discuss Jim Gray’s Turing paper
Definitions
• Examples on why precise definitions so important
for reliability
• Is a programming mistake a fault, error, or failure?
– Are we talking about the time it was designed
or the time the program is run?
– If the running program doesn’t exercise the mistake,
is it still a fault/error/failure?
• If an alpha particle hits a DRAM memory cell, is it a
fault/error/failure if it doesn’t change the value?
– Is it a fault/error/failure if the memory doesn’t access the
changed bit?
– Did a fault/error/failure still occur if the memory had error
correction and delivered the corrected value to the CPU?
IFIP Standard terminology
• Computer system dependability: quality of delivered service such
that reliance can be placed on service
• Service is observed actual behavior as perceived by other system(s)
interacting with this system’s users
• Each module has ideal specified behavior, where service
specification is agreed description of expected behavior
• A system failure occurs when the actual behavior deviates from the
specified behavior
• The failure occurred because of an error, a defect in a module
• The cause of an error is a fault
• When a fault occurs it creates a latent error, which becomes
effective when it is activated
• When error actually affects the delivered service, a failure occurs
(time from error to failure is error latency)
Fault v. (Latent) Error v. Failure
• A fault creates one or more latent errors
• Properties of errors are
– a latent error becomes effective once activated
– an error may cycle between its latent and effective states
– an effective error often propagates from one component to another,
thereby creating new errors
• Effective error is either a formerly-latent error in that
component or it propagated from another error
• A component failure occurs when the error affects the
delivered service
• These properties are recursive, and apply to any component in
the system
• An error is manifestation in the system of a fault,
a failure is manifestation on the service of an error
Fault v. (Latent) Error v. Failure
• An error is manifestation in the system of a fault,
a failure is manifestation on the service of an error
• Is a programming mistake a fault, error, or failure?
– Are we talking about the time it was designed
or the time the program is run?
– If the running program doesn’t exercise the mistake,
is it still a fault/error/failure?
• A programming mistake is a fault
• The consequence is an error (or latent error) in the software
• Upon activation, the error becomes effective
• When this effective error produces erroneous data which
affect the delivered service, a failure occurs
Fault v. (Latent) Error v. Failure
• An error is manifestation in the system of a fault,
a failure is manifestation on the service of an error
• If an alpha particle hits a DRAM memory cell, is it a
fault/error/failure if it doesn't change the value?
– Is it a fault/error/failure if the memory doesn't access the changed bit?
– Did a fault/error/failure still occur if the memory had error correction
and delivered the corrected value to the CPU?
• An alpha particle hitting a DRAM can be a fault
• If it changes the memory, it creates an error
• The error remains latent until the affected memory word is read
• If the affected word's error affects the delivered service, a
failure occurs
Fault v. (Latent) Error v. Failure
• An error is manifestation in the system of a fault,
a failure is manifestation on the service of an error
• What if a person makes a mistake, data is altered,
and service is affected?
• fault:
• error:
• latent:
• failure:
Fault Tolerance vs Disaster Tolerance
• Fault-Tolerance (or more properly, Error-Tolerance): mask local faults
(prevent errors from becoming failures)
– RAID disks
– Uninterruptible Power Supplies
– Cluster Failover
• Disaster Tolerance: masks site errors
(prevent site errors from causing service
failures)
– Protects against fire, flood, sabotage,..
– Redundant system and service at remote site.
– Use design diversity
From Jim Gray’s “Talk at UC Berkeley on Fault Tolerance " 11/9/00
Defining reliability and availability
quantitatively
• Users perceive a system alternating between 2 states of service
with respect to service specification:
1. service accomplishment, where service is delivered as specified,
2. service interruption, where the delivered service is different from the
specified service, measured as Mean Time To Repair (MTTR)
Transitions between these 2 states are caused by
failures (from state 1 to state 2) or restorations (2 to 1)
• module reliability: a measure of continuous service accomplishment
(or of time to failure) from a reference point, e.g., Mean Time To
Failure (MTTF)
– The reciprocal of MTTF is failure rate
• module availability: measure of service accomplishment with
respect to alternation between the 2 states of accomplishment
and interruption
= MTTF / (MTTF+MTTR)
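A one-line sketch of the module availability definition above (the MTTF/MTTR values are made up for illustration):

    def availability(mttf_hours, mttr_hours):
        return mttf_hours / (mttf_hours + mttr_hours)

    print(f"{availability(10000, 1):.4%}")   # a 10,000 hr MTTF, 1 hr MTTR module -> ~99.99%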
Fail-Fast is Good, Repair is Needed
[Diagram: lifecycle of a module; fail-fast gives short fault latency]
High Availability is low UN-Availability
Unavailability = MTTR / (MTTF + MTTR)
As MTTF >> MTTR, improving either MTTR or MTTF gives
benefit
Note: Mean Time Between Failures (MTBF)= MTTF+MTTR
From Jim Gray’s “Talk at UC Berkeley on Fault Tolerance " 11/9/00
Dependability: The 3 ITIES
• Reliability / Integrity: does the right thing.
(Also large MTTF)
• Availability: does it now.
(Also small MTTR); Availability = MTTF / (MTTF + MTTR)
• System Availability: if 90% of terminals up & 99% of DB up?
(=> 89% of transactions are serviced on time).
[Diagram: dependability spans Integrity, Security, Reliability, and Availability]
From Jim Gray’s “Talk at UC Berkeley on Fault Tolerance " 11/9/00
Reliability Example
• If we assume a collection of modules with exponentially distributed
lifetimes (age of component doesn't matter in failure probability)
and the modules fail independently, the overall failure rate of the collection is the
sum of the failure rates of the modules
• Calculate MTTF of a disk subsystem with
– 10 disks, each rated at 1,000,000 hour MTTF
– 1 SCSI controller, 500,000 hour MTTF
– 1 power supply, 200,000 hour MTTF
– 1 fan, 200,000 hour MTTF
– 1 SCSI cable, 1,000,000 hour MTTF
• Failure Rate = 10*1/1,000,000 + 1/500,000
+ 1/200,000 + 1/200,000 + 1/1,000,000
= (10 + 2 + 5 + 5 + 1)/1,000,000 = 23/1,000,000
• MTTF = 1/Failure Rate = 1,000,000/23 = 43,500 hrs
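The same subsystem MTTF calculation as a short Python sketch (independent, exponentially distributed failures, so the failure rates simply add):

    component_mttf_hours = (
        [1_000_000] * 10 +   # 10 disks
        [500_000] +          # SCSI controller
        [200_000] +          # power supply
        [200_000] +          # fan
        [1_000_000]          # SCSI cable
    )

    failure_rate = sum(1.0 / mttf for mttf in component_mttf_hours)   # failures per hour
    print(f"system MTTF = {1.0 / failure_rate:,.0f} hours")           # ~43,478 hours (~43,500)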
What's wrong with MTTF?
• 1,000,000 MTTF > 100 years; ~ infinity?
• How calculated?
– Put, say, 2000 disks in a room, calculate failures in 60 days,
and then calculate the rate
– As long as <= 3 failures => 1,000,000 hr MTTF
• Suppose we did this with people?
• 1998 deaths per year in US ("Failure Rate")
• Deaths, 5 to 14 year olds = 20/100,000
MTTF(human) = 100,000/20 = 5,000 years
• Deaths, > 85 year olds = 20,000/100,000
MTTF(human) = 100,000/20,000 = 5 years
source: "Deaths: Final Data for 1998," www.cdc.gov/nchs/data/nvs48_11.pdf
What's wrong with MTTF?
• 1,000,000 MTTF > 100 years; ~ infinity?
• But disk lifetime is 5 years!
• => if you replace a disk every 5 years, on average it
wouldn't fail until the 21st replacement
• A better unit: % that fail
• Fail over lifetime if we had 1000 disks for 5 years
= (1000 disks * 5 * 365 * 24 hrs) / 1,000,000 hrs/failure
= 43,800,000 / 1,000,000 = 44 failures
= 4.4% fail with 1,000,000 hr MTTF
• Detailed disk specs list failures/million/month
• Typically about 800 failures per month per million disks at
1,000,000 hr MTTF,
or about 1% per year for a 5 year disk lifetime
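A sketch of the "percent that fail over the lifetime" arithmetic (1,000 disks, 5-year service life, 1,000,000-hour rated MTTF):

    def expected_failures(n_disks, lifetime_years, mttf_hours):
        disk_hours = n_disks * lifetime_years * 365 * 24
        return disk_hours / mttf_hours

    fails = expected_failures(1000, 5, 1_000_000)
    print(fails, f"failures => {fails / 1000:.1%} of the disks")   # 43.8 failures, ~4.4%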
Dependability Big Idea: No Single
Point of Failure
• Since Hardware MTTF is often 100,000 to
1,000,000 hours and MTTR is often 1 to 10
hours, there is a good chance that if one
component fails it will be repaired before a
second component fails
• Hence design systems with sufficient
redundancy that there is No Single Point of
Failure
HW Failures in Real Systems: Tertiary Disks
•A cluster of 20 PCs in seven 7-foot high, 19-inch wide
racks with 368 8.4 GB, 7200 RPM, 3.5-inch IBM disks.
The PCs are P6-200MHz with 96 MB of DRAM each.
They run FreeBSD 3.0 and the hosts are connected via
switched 100 Mbit/second Ethernet
Component                         Total in System   Total Failed   % Failed
SCSI Controller                         44                1          2.3%
SCSI Cable                              39                1          2.6%
SCSI Disk                              368                7          1.9%
IDE Disk                                24                6         25.0%
Disk Enclosure - Backplane              46               13         28.3%
Disk Enclosure - Power Supply           92                3          3.3%
Ethernet Controller                     20                1          5.0%
Ethernet Switch                          2                1         50.0%
Ethernet Cable                          42                1          2.3%
CPU/Motherboard                         20                0          0%
When To Repair?
Chances Of Tolerating A Fault are 1000:1 (class 3)
A 1995 study: Processor & Disc Rated At ~ 10khr MTTF
                     Computed Single Failures    Observed Double Fails    Ratio
                     10k Processor Fails         14 Double                ~ 1000 : 1
                     40k Disc Fails              26 Double                ~ 1000 : 1
Hardware Maintenance:
On-Line Maintenance "Works" 999 Times Out Of 1000.
The chance a duplexed disc will fail during maintenance? 1:1000
Risk Is 30x Higher During Maintenance
=> Do It Off Peak Hour
Software Maintenance:
Repair Only Virulent Bugs
Wait For Next Release To Fix Benign Bugs
From Jim Gray’s “Talk at UC Berkeley on Fault Tolerance " 11/9/00
Sources of Failures
                        MTTF              MTTR
Power Failure:          2000 hr           1 hr
Phone Lines
  Soft                  > 0.1 hr          0.1 hr
  Hard                  4000 hr           10 hr
Hardware Modules:       100,000 hr        10 hr (many are transient)
Software:
1 Bug/1000 Lines Of Code (after vendor-user testing)
=> Thousands of bugs in System!
Most software failures are transient: dump & restart system.
Useful fact: 8,760 hrs/year ~ 10k hr/year
From Jim Gray’s “Talk at UC Berkeley on Fault Tolerance " 11/9/00
Case Study - Japan
"Survey on Computer Security", Japan Info Dev Corp., March 1986. (trans: Eiichi Watanabe).
[Pie chart of outage causes: Vendor 42%, Application Software 25%, Tele Comm lines 12%, Operations 11.2%, Environment 9.3%]
MTTF by cause:
  Vendor (hardware and software)    5 Months
  Application software              9 Months
  Communications lines              1.5 Years
  Operations                        2 Years
  Environment                       2 Years
  Overall                           10 Weeks
1,383 institutions reported (6/84 - 7/85)
7,517 outages, MTTF ~ 10 weeks,
avg duration ~ 90 MINUTES
To Get 10 Year MTTF, Must Attack All These Areas
From Jim Gray’s “Talk at UC Berkeley on Fault Tolerance " 11/9/00
Case Studies - Tandem Trends
Reported MTTF by Component
Mean Time to System Failure (years) by Cause
                  1985    1987    1989
SOFTWARE             2      53      33
HARDWARE            29      91     310
MAINTENANCE         45     162     409
OPERATIONS          99     171     136
ENVIRONMENT        142     214     346
SYSTEM (total)       8      20      21
Problem: Systematic Under-reporting
From Jim Gray's "Talk at UC Berkeley on Fault Tolerance" 11/9/00
Is Maintenance the Key?
• Rule of Thumb: Maintenance 10X HW
– so over 5 year product life, ~ 95% of cost is maintenance
• VAX crashes ‘85, ‘93 [Murp95]; extrap. to ‘01
• Sys. Man.: N crashes/problem, SysAdmin action
– Actions: set params bad, bad config, bad app install
• HW/OS 70% in ‘85 to 28% in ‘93. In ‘01, 10%?
OK: So Far
Hardware fail-fast is easy
Redundancy plus Repair is great (Class 7 availability)
Hardware redundancy & repair is via modules.
How can we get instant software repair?
We Know How To Get Reliable Storage
RAID Or Dumps And Transaction Logs.
We Know How To Get Available Storage
Fail Soft Duplexed Discs (RAID 1...N).
? How do we get reliable execution?
? How do we get available execution?
From Jim Gray’s “Talk at UC Berkeley on Fault Tolerance " 11/9/00
Does Hardware Fail Fast? 4 of 384
Disks that failed in Tertiary Disk
Messages in system log for failed disk                    No. log msgs   Duration (hours)
Hardware Failure (Peripheral device write fault
  [for] Field Replaceable Unit)                               1763             186
Not Ready (Diagnostic failure: ASCQ =
  Component ID [of] Field Replaceable Unit)                   1460              90
Recovered Error (Failure Prediction Threshold
  Exceeded [for] Field Replaceable Unit)                      1313               5
Recovered Error (Failure Prediction Threshold
  Exceeded [for] Field Replaceable Unit)                       431              17
High Availability System Classes
Goal: Build Class 6 Systems
System Type               Unavailable (min/year)   Availability   Availability Class
Unmanaged                        50,000               90.%               1
Managed                           5,000               99.%               2
Well Managed                        500               99.9%              3
Fault Tolerant                       50               99.99%             4
High-Availability                     5               99.999%            5
Very-High-Availability                .5              99.9999%           6
Ultra-Availability                    .05             99.99999%          7
UnAvailability = MTTR/MTBF
can cut it in ½ by cutting MTTR or MTBF
From Jim Gray’s “Talk at UC Berkeley on Fault Tolerance " 11/9/00
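A sketch relating an availability figure to unavailable minutes per year and to the availability class ("number of nines") in the table above:

    import math

    def downtime_min_per_year(avail):
        return (1.0 - avail) * 365 * 24 * 60

    def availability_class(avail):
        return round(-math.log10(1.0 - avail))   # e.g. 99.999% -> class 5

    for a in (0.90, 0.99, 0.999, 0.9999, 0.99999):
        print(f"{a:.5%}  {downtime_min_per_year(a):10,.1f} min/yr  class {availability_class(a)}")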
How Realistic is "5 Nines"?
• HP claims HP-9000 server HW and HP-UX OS can deliver
99.999% availability guarantee "in certain pre-defined, pre-tested customer environments"
– Application faults?
– Operator faults?
– Environmental faults?
• Collocation sites (lots of computers in 1 building on
Internet) have
– 1 network outage per year (~1 day)
– 1 power failure per year (~1 day)
• Microsoft Network unavailable recently for a day due to
problem in Domain Name Server: if only outage per year,
99.7% or 2 Nines
Demo: looking at some nodes
• Look at http://uptime.netcraft.com/
• Internet Node availability:
92% mean,
97% median
Darrell Long (UCSC)
ftp://ftp.cse.ucsc.edu/pub/tr/
– ucsc-crl-90-46.ps.Z "A Study of the Reliability of Internet Sites"
– ucsc-crl-91-06.ps.Z "Estimating the Reliability of Hosts Using the Internet"
– ucsc-crl-93-40.ps.Z "A Study of the Reliability of Hosts on the Internet"
– ucsc-crl-95-16.ps.Z "A Longitudinal Survey of Internet Host Reliability"
From Jim Gray’s “Talk at UC Berkeley on Fault Tolerance " 11/9/00
Discuss Gray's Paper
• "What Next? A dozen remaining IT
problems," June 1999, MS-TR-99-50
• http://research.microsoft.com/~gray/papers
/MS_TR_99_50_TuringTalk.pdf
ops/s/$ Had Three Growth Curves, 1890-1990
(Combination of Hans Moravec + Larry Roberts + Gordon Bell)
[Chart: WordSize * ops/s / system price, 1880-2000, log scale]
  1890-1945: Mechanical, Relay (ops per second/$ doubles every ~7.5 years)
  1945-1985: Tube, transistor, ... (doubles every 2.3 years)
  1985-2000: Microprocessor (doubles every 1.0 years)
The List
(Red is AI Complete)
• Devise an architecture that scales up by 10^6.
• The Turing test: win the impersonation game 30% of the time.
  Read and understand as well as a human.
  Think and write as well as a human.
• Hear as well as a person (native speaker): speech to text.
• Speak as well as a person (native speaker): text to speech.
• See as well as a person (recognize).
• Illustrate as well as a person (done!) but virtual reality is still a major challenge.
• Remember what is seen and heard and quickly return it on request.
• Build a system that, given a text corpus, can answer questions about the text and
summarize it as quickly and precisely as a human expert. Then add sounds:
conversations, music. Then add images, pictures, art, movies.
• Simulate being some other place as an observer (Tele-Past) and a participant
(Tele-Present).
• Build a system used by millions of people each day but administered by a ½ time
person.
• Do 9 and prove it only services authorized users.
• Do 9 and prove it is almost always available: (out less than 1 second per 100
years).
• Automatic Programming: Given a specification, build a system that implements the
spec. Prove that the implementation matches the spec. Do it better than a team
of programmers.
Trouble-Free Systems
• Manager
– Sets goals
– Sets policy
– Sets budget
– System does the rest.
• Everyone is a CIO (Chief Information Officer)
• Build a system
– used by millions of people each day
– Administered and managed by a ½ time person.
» On hardware fault, order replacement part
» On overload, order additional equipment
» Upgrade hardware and software automatically.
Trustworthy Systems
• Build a system used by millions of people that
– Only services authorized users
» Service cannot be denied (can't destroy data or power).
» Information cannot be stolen.
– Is always available: (out less than 1 second per 100 years = 8 9's of availability)
» 1950's: 90% availability; Today: 99% uptime for web sites,
99.99% for well managed sites (50 minutes/year)
=> 3 extra 9s in 45 years.
» Goal: 5 more 9s: 1 second per century.
– And prove it.
Summary: Dependability
• Fault => Latent errors in system => Failure in service
• Reliability: quantitative measure of time to failure (MTTF)
– Assuming exponentially distributed independent failures, can calculate
system MTTF from the MTTF of its components
• Availability: quantitative measure % of time delivering desired
service
• Can improve Availability via greater MTTF or smaller MTTR
(such as using standby spares)
• No single point of failure a good hardware guideline, as
everything can fail
• Components often fail slowly
• Real systems: problems in maintenance, operation as well as
hardware, software
I/O Benchmarks
• For better or worse, benchmarks shape a field
– Processor benchmarks classically aimed at response time for
fixed sized problem
– I/O benchmarks typically measure throughput, possibly with
upper limit on response times (or 90% of response times)
• What if fix problem size, given 60%/year increase in
DRAM capacity?
Benchmark     Size of Data    % Time I/O    Year
I/OStones        1 MB            26%        1990
Andrew          4.5 MB            4%        1988
– Not much time in I/O
– Not measuring disk (or even main memory)
I/O Benchmarks: Transaction
Processing
• Transaction Processing (TP) (or On-line TP=OLTP)
– Changes to a large body of shared information from many terminals,
with the TP system guaranteeing proper behavior on a failure
– If a bank’s computer fails when a customer withdraws money, the TP
system would guarantee that the account is debited if the customer
received the money and that the account is unchanged if the money
was not received
– Airline reservation systems & banks use TP
• Atomic transactions makes this work
• Each transaction => 2 to 10 disk I/Os & 5,000 to
20,000 CPU instructions per disk I/O
– Efficiency of TP SW & avoiding disks accesses by keeping information
in main memory
• Classic metric is Transactions Per Second (TPS)
– Under what workload? how machine configured?
I/O Benchmarks: Transaction
Processing
• Early 1980s great interest in OLTP
– Expecting demand for high TPS (e.g., ATM machines, credit
cards)
– Tandem’s success implied medium range OLTP expands
– Each vendor picked own conditions for TPS claims, report only CPU
times with widely different I/O
– Conflicting claims led to disbelief of all benchmarks=> chaos
• 1984 Jim Gray of Tandem distributed paper to
Tandem employees and 19 in other industries to
propose standard benchmark
• Published "A measure of transaction processing
power," Datamation, 1985 by Anonymous et al.
– To indicate that this was effort of large group
– To avoid delays of legal department of each author’s firm
– Still get mail at Tandem to author
I/O Benchmarks: TP1 by Anon et. al
• DebitCredit Scalability: size of account, branch,
teller, history function of throughput
TPS        Number of ATMs    Account-file size
10              1,000             0.1 GB
100            10,000             1.0 GB
1,000         100,000            10.0 GB
10,000      1,000,000           100.0 GB
– Each input TPS =>100,000 account records, 10 branches, 100 ATMs
– Accounts must grow since a person is not likely to use the bank more
frequently just because the bank has a faster computer!
• Response time: 95% of transactions take <= 1 second
• Configuration control: just report price (initial
purchase price + 5 year maintenance = cost of
ownership)
• By publishing, in public domain
I/O Benchmarks: TP1 by Anon et. al
• Problems
– Often ignored the user network to terminals
– Used transaction generator with no think time; made sense for database
vendors, but not what customer would see
• Solution: Hire auditor to certify results
– Auditors soon saw many variations of ways to trick system
• Proposed minimum compliance list (13 pages); still,
DEC tried IBM test on different machine with poorer results than
claimed by auditor
• Created Transaction Processing Performance Council in 1988:
founders were CDC, DEC, ICL, Pyramid, Stratus, Sybase,
Tandem, and Wang; ~40 companies today
• Led to TPC standard benchmarks in 1990,www.tpc.org
Unusual Characteristics of TPC
• Price is included in the benchmarks
– cost of HW, SW, and 5-year maintenance agreements
included => price-performance as well as performance
• The data set generally must scale in size as the
throughput increases
– trying to model real systems, demand on system and size of
the data stored in it increase together
• The benchmark results are audited
– Must be approved by certified TPC auditor, who enforces TPC
rules => only fair results are submitted
• Throughput is the performance metric but response
times are limited
– eg, TPC-C: 90% transaction response times < 5 seconds
• An independent organization maintains the
benchmarks
– COO ballots on changes, meetings, to settle disputes...
TPC Benchmark History/Status
Benchmark                        Data Size (GB)      Performance Metric       1st Results
A: Debit Credit (retired)        0.1 to 10           transactions/second      Jul-90
B: Batch Debit Credit (retired)  0.1 to 10           transactions per second  Jul-91
C: Complex Query OLTP            100 to 3000         new order trans/min.     Sep-92
                                 (min. 0.07 * tpm)
D: Decision Support (retired)    100, 300, 1000      queries/hour             Dec-95
H: Ad hoc decision support       100, 300, 1000      queries/hour             Oct-99
R: Business reporting            1000                queries/hour             Aug-99
   decision support
W: Transactional web benchmark   ~ 50, 500           web interactions/sec.    Jul-00
I/O Benchmarks: TPC-C Complex OLTP
• Models a wholesale supplier managing orders
• Order-entry conceptual model for benchmark
• Workload = 5 transaction types
• Users and database scale linearly with
throughput
• Defines full-screen end-user interface
• Metrics: new-order rate (tpmC)
and price/performance ($/tpmC)
• Approved July 1992
I/O Benchmarks: TPC-W
Transactional Web Benchmark
• Represent any business (retail store, software distribution, airline
reservation, ...) that markets and sells over the Internet/
Intranet
• Measure systems supporting users browsing, ordering, and
conducting transaction oriented business activities.
• Security (including user authentication and data encryption) and
dynamic page generation are important
• Before: processing of customer order by terminal operator working
on LAN connected to database system
• Today: customer accesses company site over Internet connection,
browses both static and dynamically generated Web pages, and
searches the database for product or customer information.
Customers also initiate, finalize & check on product orders &
deliveries
• Started 1/97; hoped to release Fall 1998; actually released Jul 2000!
1998 TPC-C Performance tpm(c)
Rank  Company   Config                              tpmC        $/tpmC       Database
 1    IBM       RS/6000 SP (12 node x 8-way)        57,053.80   $147.40      Oracle8 8.0.4
 2    HP        HP 9000 V2250 (16-way)              52,117.80   $81.17       Sybase ASE
 3    Sun       Ultra E6000 c/s (2 node x 22-way)   51,871.62   $134.46      Oracle8 8.0.3
 4    HP        HP 9000 V2200 (16-way)              39,469.47   $94.18       Sybase ASE
 5    Fujitsu   GRANPOWER 7000 Model 800            34,116.93   $57,883.00   Oracle8
 6    Sun       Ultra E6000 c/s (24-way)            31,147.04   $108.90      Oracle8 8.0.3
 7    Digital   AlphaS8400 (4 node x 8-way)         30,390.00   $305.00      Oracle7 V7.3
 8    SGI       Origin2000 Server c/s (28-way)      25,309.20   $139.04      INFORMIX
 9    IBM       AS/400e Server (12-way)             25,149.75   $128.00      DB2
10    Digital   AlphaS8400 5/625 (10-way)           24,537.00   $110.48      Sybase SQL
• Notes: 7 SMPs, 3 clusters of SMPs
• avg 30 CPUs/system
1998 TPC-C Price/Performance $/tpm(c)
Rank  Company   Config                          $/tpmC   tpmC        Database
 1    Acer      AcerAltos 19000Pro4             $27.25   11,072.07   M/S SQL 6.5
 2    Dell      PowerEdge 6100 c/s              $29.55   10,984.07   M/S SQL 6.5
 3    Compaq    ProLiant 5500 c/s               $33.37   10,526.90   M/S SQL 6.5
 4    ALR       Revolution 6x6 c/s              $35.44   13,089.30   M/S SQL 6.5
 5    HP        NetServer LX Pro                $35.82   10,505.97   M/S SQL 6.5
 6    Fujitsu   teamserver M796i                $37.62   13,391.13   M/S SQL 6.5
 7    Fujitsu   GRANPOWER 5000 Model 670        $37.62   13,391.13   M/S SQL 6.5
 8    Unisys    Aquanta HS/6 c/s                $37.96   13,089.30   M/S SQL 6.5
 9    Compaq    ProLiant 7000 c/s               $39.25   11,055.70   M/S SQL 6.5
10    Unisys    Aquanta HS/6 c/s                $39.39   12,026.07   M/S SQL 6.5
• Notes: all use the Microsoft SQL Server database
• All uniprocessors?
2001 TPC-C Performance Results
Rank  Company   System                                  tpmC      $/tpmC    CPUs   Database Software
 1    Compaq    ProLiant 8500-700-192P                  505,303   $ 19.80   192    SQL Server 2000
 2    IBM       Netfinity 8500R c/s                     440,880   $ 32.28   128    DB2 UDB 7.1
 3    Compaq    ProLiant 8500-X700-96P                  262,244   $ 20.24    96    SQL Server 2000
 4    Compaq    ProLiant 8500-X550-96P                  229,914   $ 23.08    96    SQL Server 2000
 5    Bull      Escala EPC2450                          220,807   $ 43.31    24    Oracle 8i Enterprise
 6    IBM       eServer pSeries 680 Model 7017-S85      220,807   $ 43.30    24    Oracle 8i Enterprise
 7    HP        HP 9000 Superdome Enterprise Server     197,024   $ 66.27    48    Oracle8 Enterprise
 8    Fujitsu   PRIMEPOWER 2000 c/s w/32 Front-Ends     183,771   $ 56.16    64    SymfoWARE Server
 9    Compaq    ProLiant 8500-X700-64P                  179,658   $ 19.75    64    SQL Server 2000
10    IBM       eServer iSeries 840-2420-001            163,776   $ 58.88    24    DB2 for AS/400
• Notes: 4 SMPs, 6 clusters of SMPs: 76 CPUs/system
• 3 years => Peak Performance 8.9X, 2X/yr
2001 TPC-C Price/Performance Results
Rank  Company   System                          tpmC     $/tpmC    CPUs   Database Software
 1    Compaq    ProLiant ML-570-6/700-3P        20,207   $  9.51    3     SQL Server 2000
 2    Dell      PowerEdge 6450/3P               24,925   $  9.90    3     SQL Server 2000
 3    Dell      PowerEdge 6400/3P               24,925   $  9.91    3     SQL Server 2000
 4    Dell      PowerEdge 6400                  30,231   $ 11.07    4     SQL Server 2000
 5    Dell      PowerEdge 6450                  30,231   $ 11.08    4     SQL Server 2000
 6    HP        NetServer LH 6000               33,136   $ 11.85    6     SQL Server Enterprise
 7    Compaq    ProLiant ML-570-6/700           32,328   $ 12.49    4     SQL Server 2000
 8    HP        HP NetServer LXr 8500           43,047   $ 12.76    8     SQL Server 2000
 9    Compaq    ProLiant 8500-6/700-4           34,600   $ 12.89    4     SQL Server 2000
10    Compaq    ProLiant 8500-550-6P            33,617   $ 12.91    6     SQL Server Enterprise
• Notes: All small SMPs, all running M/S SQL Server
• 3 years => Cost Performance 2.9X, 1.4X/yr
SPEC SFS/LADDIS
• 1993 Attempt by NFS companies to agree on
standard benchmark: Legato, Auspex, Data General,
DEC, Interphase, Sun. Like NFSstones but
– Run on multiple clients & networks (to prevent bottlenecks)
– Same caching policy in all clients
– Reads: 85% full block & 15% partial blocks
– Writes: 50% full block & 50% partial blocks
– Average response time: 50 ms
– Scaling: for every 100 NFS ops/sec, increase capacity 1 GB
– Results: plot of server load (throughput) vs. response time & number
of users
» Assumes: 1 user => 10 NFS ops/sec
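A tiny sketch of the LADDIS scaling rules listed above (1 GB of capacity per 100 NFS ops/sec; each simulated user generates about 10 NFS ops/sec):

    def sfs_scaling(target_nfs_ops_per_sec):
        capacity_gb = target_nfs_ops_per_sec / 100.0
        users = target_nfs_ops_per_sec / 10.0
        return capacity_gb, users

    print(sfs_scaling(2000))   # (20.0, 200.0): 20 GB of data, ~200 simulated users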
1998 Example SPEC SFS Result: DEC Alpha
• 200 MHz 21064: 8KI + 8KD + 2MB L2; 512 MB; 1
Gigaswitch
• DEC OSF1 v2.0
• 4 FDDI networks; 32 NFS Daemons, 24 GB file size
• 88 Disks, 16 controllers, 84 file systems
[Graph: average NFS response time (ms) vs. NFS throughput (ops/sec); response time stays low until the server saturates at about 4817 NFS ops/sec]
SPEC sfs97 for EMC Celerra NFS servers:
2, 4, 8, 14 CPUs; 67, 133, 265, 433 disks
15,700, 32,000, 61,800, 104,600 ops/sec
[Graph: overall response time (ms) vs. SPEC sfs97.v3 ops/sec for the 2-, 4-, 8-, and 14-CPU configurations; throughput scales with CPUs and disks, and response time rises as each configuration approaches saturation]
SPEC WEB99
• Simulates accesses to web service provider, supports home pages for
several organizations. File sizes:
– less than 1 KB, representing a small icon: 35% of activity
– 1 to 10 KB: 50% of activity
– 10 to 100 KB: 14% of activity
– 100 KB to 1 MB: a large document and image, 1% of activity
• Workload simulates dynamic operations: rotating advertisements on a
web page, customized web page creation, and user registration.
• workload gradually increased until server software is saturated with hits
and response time degrades significantly.
SPEC WEB99 for Dells in 2000
System Name           Result   HTTP Version/OS              CPUs   CPU type          DRAM
PowerEdge 2400/667      732    IIS 5.0/Windows 2000          1     667 MHz Pentium   2 GB
PowerEdge 2400/667     1270    TUX 1.0/Red Hat Linux 6.2     1     667 MHz Pentium   2 GB
PowerEdge 4400/800     1060    IIS 5.0/Windows 2000          2     800 MHz Pentium   4 GB
PowerEdge 4400/800     2200    TUX 1.0/Red Hat Linux 6.2     2     800 MHz Pentium   4 GB
PowerEdge 6400/700     1598    IIS 5.0/Windows 2000          4     700 MHz Pentium   8 GB
PowerEdge 6400/700     4200    TUX 1.0/Red Hat Linux 6.2     4     700 MHz Pentium   8 GB
• Each uses 5 9GB, 10,000 RPM disks except the 5th system, which
had 7 disks, and the first 4 have 0.25 MB of L2 cache while the
last 2 have 2 MB of L2 cache
• Appears that the large amount of DRAM is used as a large file
cache to reduce disk I/O, so not really an I/O benchmark
Availability benchmark methodology
• Goal: quantify variation in QoS metrics as events
occur that affect system availability
• Leverage existing performance benchmarks
– to generate fair workloads
– to measure & trace quality of service metrics
• Use fault injection to compromise system
– hardware faults (disk, memory, network, power)
– software faults (corrupt input, driver error returns)
– maintenance events (repairs, SW/HW upgrades)
• Examine single-fault and multi-fault workloads
– the availability analogues of performance micro- and macrobenchmarks
Benchmark Availability?
Methodology for reporting results
• Results are most accessible graphically
– plot change in QoS metrics over time
– compare to “normal” behavior
» 99% confidence intervals calculated from no-fault runs
[Graph: QoS metric vs. time; a band shows normal behavior (99% confidence interval from no-fault runs), a fault is injected at time 0, and the metric deviates while the system handles the fault]
Case study
• Availability of software RAID-5 & web server
– Linux/Apache, Solaris/Apache, Windows 2000/IIS
• Why software RAID?
– well-defined availability guarantees
» RAID-5 volume should tolerate a single disk failure
» reduced performance (degraded mode) after failure
» may automatically rebuild redundancy onto spare disk
– simple system
– easy to inject storage faults
• Why web server?
– an application with measurable QoS metrics that depend on
RAID availability and performance
Benchmark environment: faults
• Focus on faults in the storage system
(disks)
• Emulated disk provides reproducible faults
– a PC that appears as a disk on the SCSI bus
– I/O requests intercepted and reflected to local disk
– fault injection performed by altering SCSI command
processing in the emulation software
• Fault set chosen to match faults observed in
a long-term study of a large storage array
– media errors, hardware errors, parity errors, power
failures, disk hangs/timeouts
– both transient and “sticky” faults
Single-fault experiments
• “Micro-benchmarks”
• Selected 15 fault types
– 8 benign (retry required)
– 2 serious (permanently unrecoverable)
– 5 pathological (power failures and complete hangs)
• An experiment for each type of fault
– only one fault injected per experiment
– no human intervention
– system allowed to continue until stabilized or crashed
Multiple-fault experiments
• “Macro-benchmarks” that require human intervention
• Scenario 1: reconstruction
(1) disk fails
(2) data is reconstructed onto spare
(3) spare fails
(4) administrator replaces both failed disks
(5) data is reconstructed onto new disks
• Scenario 2: double failure
(1) disk fails
(2) reconstruction starts
(3) administrator accidentally removes active disk
(4) administrator tries to repair damage
Comparison of systems
• Benchmarks revealed significant variation in
failure-handling policy across the 3 systems
– transient error handling
– reconstruction policy
– double-fault handling
• Most of these policies were undocumented
– yet they are critical to understanding the systems’
availability
Transient error handling
• Transient errors are common in large arrays
– example: Berkeley 368-disk Tertiary Disk array, 11mo.
» 368 disks reported transient SCSI errors (100%)
» 13 disks reported transient hardware errors (3.5%)
» 2 disk failures (0.5%)
– isolated transients do not imply disk failures
– but streams of transients indicate failing disks
» both Tertiary Disk failures showed this behavior
• Transient error handling policy is critical in
long-term availability of array
Transient error handling (2)
• Linux is paranoid with respect to transients
– stops using affected disk (and reconstructs) on any
error, transient or not
» fragile: system is more vulnerable to multiple faults
» disk-inefficient: wastes two disks per transient
» but no chance of slowly-failing disk impacting perf.
• Solaris and Windows are more forgiving
– both ignore most benign/transient faults
» robust: less likely to lose data, more disk-efficient
» less likely to catch slowly-failing disks and remove
them
• Neither policy is ideal!
– need a hybrid that detects streams of transients
Reconstruction policy
• Reconstruction policy involves an availability
tradeoff between performance & redundancy
– until reconstruction completes, array is vulnerable to
second fault
– disk and CPU bandwidth dedicated to reconstruction is
not available to application
» but reconstruction bandwidth determines
reconstruction speed
– policy must trade off performance availability and
potential data availability
Example single-fault result
[Graphs: web-server hits per second (left axis) and number of failures tolerated (right axis) vs. time in minutes for Solaris and Linux; a disk fault is injected, the number of failures tolerated drops to 0, reconstruction runs, and redundancy is then restored]
• Compares Linux and Solaris reconstruction
– Linux: minimal performance impact but longer window of vulnerability
to second fault
– Solaris: large perf. impact but restores redundancy fast
Reconstruction policy (2)
• Linux: favors performance over data availability
– automatically-initiated reconstruction, idle bandwidth
– virtually no performance impact on application
– very long window of vulnerability (>1hr for 3GB RAID)
• Solaris: favors data availability over app. perf.
– automatically-initiated reconstruction at high BW
– as much as 34% drop in application performance
– short window of vulnerability (10 minutes for 3GB)
• Windows: favors neither!
– manually-initiated reconstruction at moderate BW
– as much as 18% app. performance drop
– somewhat short window of vulnerability (23 min/3GB)
Double-fault handling
• A double fault results in unrecoverable loss of some
data on the RAID volume
• Linux: blocked access to volume
• Windows: blocked access to volume
• Solaris: silently continued using volume, delivering
fabricated data to application!
– clear violation of RAID availability semantics
– resulted in corrupted file system and garbage data at the
application level
– this undocumented policy has serious availability implications for
applications
Availability Conclusions: Case study
• RAID vendors should expose and document policies
affecting availability
– ideally should be user-adjustable
• Availability benchmarks can provide valuable insight
into availability behavior of systems
– reveal undocumented availability policies
– illustrate impact of specific faults on system behavior
• We believe our approach can be generalized well
beyond RAID and storage systems
– the RAID case study is based on a general methodology
Conclusions: Availability benchmarks
• Our methodology is best for understanding the
availability behavior of a system
– extensions are needed to distill results for automated system
comparison
• A good fault-injection environment is critical
– need realistic, reproducible, controlled faults
– system designers should consider building in hooks for fault-injection
and availability testing
• Measuring and understanding availability will be crucial
in building systems that meet the needs of modern
server applications
– our benchmarking methodology is just the first step towards this
important goal
Summary: I/O Benchmarks
• Scaling to track technological change
• TPC: price performance as normalizing
configuration feature
• Auditing to ensure no foul play
• Throughput with restricted response time is
normal measure
• Benchmarks to measure Availability,
Maintainability?
Review: Disk I/O Performance
Metrics: Response Time, Throughput
[Graph: response time (ms) vs. throughput (% of total bandwidth); response time rises sharply as throughput approaches 100%]
[Diagram: requests queue between the processor, I/O controller, and device]
Response time = Queue + Device Service time