Lecture 26:
I/O Continued
Prof. John Kubiatowicz
Computer Science 252
Fall 1998
Review: Disk Device Terminology
Disk Latency = Queuing Time + Seek Time + Rotation Time + Xfer Time + Ctrl Time

Order-of-magnitude times for 4 KB transfers:
  Seek:   12 ms or less
  Rotate: 4.2 ms @ 7200 RPM (half a revolution: 0.5 rev / (7200 RPM / 60 s/min)); 8.3 ms @ 3600 RPM
  Xfer:   1 ms @ 7200 RPM (2 ms @ 3600 RPM)
  Ctrl:   2 ms (big variation)

Disk Latency = Queuing Time + (12 + 4.2 + 1 + 2) ms = QT + 19.2 ms
Average Service Time = 19.2 ms
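A quick back-of-the-envelope check of these numbers in Python (a sketch; the values are the slide's order-of-magnitude figures, not measurements):

    # Average service time for a 4 KB transfer on a 7200 RPM disk,
    # using the slide's component times.
    seek_ms = 12.0                          # advertised worst-ish case seek
    rotate_ms = 0.5 / (7200 / 60) * 1000    # half a revolution at 120 rev/s = 4.17 ms
    xfer_ms = 1.0
    ctrl_ms = 2.0
    print(seek_ms + rotate_ms + xfer_ms + ctrl_ms)   # ~19.2 ms, plus queuing time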
But: What about queue time?
Or: why nonlinear response

Metrics: Response Time, Throughput

[Figure: response time (ms, 0-300) vs. throughput (0-100% of total BW); response time grows nonlinearly as throughput approaches 100%]

Proc -> Queue -> IOC -> Device

Response time = Queue + Device Service time
Departure to discuss queueing theory
(On board)
Introduction to Queueing Theory

[Diagram: Arrivals flow into a black box (the system); Departures flow out]

• More interested in long-term, steady-state behavior than in startup => Arrivals = Departures
• Little's Law: Mean number of tasks in system = arrival rate x mean response time
  – Observed by many; Little was first to prove it
• Applies to any system in equilibrium, as long as nothing in the black box is creating or destroying tasks
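Little's Law is easy to check empirically. Below is a minimal single-server simulation sketch (not from the lecture; the arrival rate and service time are illustrative values used again later). It measures the mean response time, then applies Little's Law to get the mean number of tasks in the system:

    import random

    # Minimal FIFO single-server (M/M/1) simulation: exponential
    # arrivals at 10/sec, exponential service with mean 20 ms.
    random.seed(1)
    rate, t_ser = 10.0, 0.02
    t = 0.0                 # current arrival time
    free_at = 0.0           # when the server next becomes free
    total_resp = 0.0
    n = 200_000
    for _ in range(n):
        t += random.expovariate(rate)            # next arrival
        start = max(t, free_at)                  # queue if the server is busy
        free_at = start + random.expovariate(1 / t_ser)
        total_resp += free_at - t                # response = queue + service
    mean_resp = total_resp / n
    print(mean_resp)            # ~0.025 s mean response time
    print(rate * mean_resp)     # Little's Law: ~0.25 tasks in the system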
A Little Queuing Theory: Notation

[Diagram: Proc -> Queue -> server (IOC + Device); the queue and server together form the System]

• Queuing models assume a state of equilibrium: input rate = output rate
• Notation:
    r     average number of arriving customers/second
    Tser  average time to service a customer (traditionally µ = 1/Tser)
    u     server utilization (0..1): u = r x Tser (equivalently, u = r/µ)
    Tq    average time/customer in queue
    Tsys  average time/customer in system: Tsys = Tq + Tser
    Lq    average length of queue: Lq = r x Tq
    Lsys  average length of system: Lsys = r x Tsys
• Little's Law: Length_system = rate x Time_system
  (Mean number of customers = arrival rate x mean time in system)
A Little Queuing Theory

• Service time completions vs. waiting time for a busy server: a randomly arriving event joins a queue of arbitrary length when the server is busy; otherwise it is serviced immediately
  – Unlimited-length queues are the key simplification
• A single-server queue: combination of a servicing facility that accommodates 1 customer at a time (server) + a waiting area (queue); together called a system
• Server spends a variable amount of time with customers; how do you characterize the variability?
  – Distribution of a random variable: histogram? curve?
A Little Queuing Theory

• Server spends a variable amount of time with customers
  – Weighted mean m1 = (f1 x T1 + f2 x T2 + ... + fn x Tn)/F, where F = f1 + f2 + ...
  – Variance = (f1 x T1² + f2 x T2² + ... + fn x Tn²)/F – m1²
    » Must keep track of unit of measure (100 ms² vs. 0.1 s²)
  – Squared coefficient of variance: C = variance/m1²
    » Unitless measure (same value whether times are in ms or s)
• Exponential distribution, C = 1: most are short relative to the average, a few are long; 90% < 2.3 x average, 63% < average
• Hypoexponential distribution, C < 1: most are close to the average; C = 0.5 => 90% < 2.0 x average, only 57% < average
• Hyperexponential distribution, C > 1: further from the average; C = 2.0 => 90% < 2.8 x average, 69% < average
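These moments are simple to compute from a histogram of observed service times. A short sketch (the frequencies and times are invented for illustration):

    # Weighted mean, variance, and squared coefficient of variance C
    # for a histogram: f[i] occurrences of service time T[i] (seconds).
    f = [30, 50, 20]
    T = [0.005, 0.010, 0.050]
    F = sum(f)
    m1 = sum(fi * Ti for fi, Ti in zip(f, T)) / F
    var = sum(fi * Ti ** 2 for fi, Ti in zip(f, T)) / F - m1 ** 2
    C = var / m1 ** 2          # unitless, whether times are in ms or s
    print(m1, var, C)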
A Little Queuing Theory: Variable Service Time

• Server spends a variable amount of time with customers
  – Weighted mean m1 = (f1 x T1 + f2 x T2 + ... + fn x Tn)/F, with F = f1 + f2 + ...
  – Squared coefficient of variance C
• Disk response times have C ≈ 1.5 (the majority of seeks are shorter than the average)
• Yet we usually pick C = 1.0 for simplicity
• Another useful value is the average time a newcomer must wait for the server to complete the task in progress: m1(z)
  – Not just 1/2 x m1, because that doesn't capture the variance
  – Can derive m1(z) = 1/2 x m1 x (1 + C)
  – No variance => C = 0 => m1(z) = 1/2 x m1
A Little Queuing Theory: Average Wait Time

• Calculating the average wait time in queue, Tq:
  – If something is at the server, it takes m1(z) on average to complete
  – Chance the server is busy = u; average delay is u x m1(z)
  – All customers already in line must complete; each takes Tser on average

    Tq = u x m1(z) + Lq x Tser = 1/2 x u x Tser x (1 + C) + Lq x Tser
    Tq = 1/2 x u x Tser x (1 + C) + r x Tq x Tser
    Tq = 1/2 x u x Tser x (1 + C) + u x Tq
    Tq x (1 – u) = Tser x u x (1 + C) / 2
    Tq = Tser x u x (1 + C) / (2 x (1 – u))

• Notation:
    r     average number of arriving customers/second
    Tser  average time to service a customer
    u     server utilization (0..1): u = r x Tser
    Tq    average time/customer in queue
    Lq    average length of queue: Lq = r x Tq
A Little Queuing Theory: M/G/1 and M/M/1

• Assumptions so far:
  – System in equilibrium
  – Times between two successive arrivals in line are random
  – Server can start on the next customer immediately after the prior one finishes
  – No limit to the queue: works First-In-First-Out
  – Afterward, all customers in line must complete; each takes Tser on average
• Describes "memoryless" or Markovian request arrival (M for C = 1, exponentially random), General service distribution (no restrictions), 1 server: the M/G/1 queue
• When service times also have C = 1, it is an M/M/1 queue:

    Tq = Tser x u x (1 + C) / (2 x (1 – u)) = Tser x u / (1 – u)

    Tser  average time to service a customer
    u     server utilization (0..1): u = r x Tser
    Tq    average time/customer in queue
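The two formulas collapse into one small helper. A sketch in Python (the function name and structure are mine, not part of the lecture), used for the worked examples on the following slides:

    def queue_stats(r, t_ser, c=1.0):
        """M/G/1 queue metrics; c is the squared coefficient of
        variance of the service time (c = 1 gives M/M/1).

        r     -- arrival rate, customers/second
        t_ser -- average service time, seconds
        """
        u = r * t_ser                                # server utilization
        assert u < 1, "queue grows without bound when u >= 1"
        t_q = t_ser * u * (1 + c) / (2 * (1 - u))    # avg time in queue
        t_sys = t_q + t_ser                          # avg time in system
        return u, t_q, t_sys, r * t_q, r * t_sys     # ..., Lq, Lsys (Little's Law)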
A Little Queuing Theory: An Example

• Processor sends 10 x 8 KB disk I/Os per second; requests & service are exponentially distributed; avg. disk service = 20 ms
• On average, how utilized is the disk?
  – What is the number of requests in the queue?
  – What is the average time spent in the queue?
  – What is the average response time for a disk request?
• Solution, in the notation above:
    r     = 10 arriving customers/second
    Tser  = 20 ms (0.02 s)
    u     = r x Tser = 10/s x 0.02 s = 0.2
    Tq    = Tser x u / (1 – u) = 20 x 0.2/(1 – 0.2) = 20 x 0.25 = 5 ms (0.005 s)
    Tsys  = Tq + Tser = 25 ms
    Lq    = r x Tq = 10/s x 0.005 s = 0.05 requests in queue
    Lsys  = r x Tsys = 10/s x 0.025 s = 0.25 tasks in system
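Using the queue_stats sketch from the previous slide:

    u, t_q, t_sys, l_q, l_sys = queue_stats(r=10, t_ser=0.020)
    # u = 0.2, t_q = 0.005 s, t_sys = 0.025 s, l_q = 0.05, l_sys = 0.25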
A Little Queuing Theory: Another Example

• Processor sends 20 x 8 KB disk I/Os per second; requests & service are exponentially distributed; avg. disk service = 12 ms
• On average, how utilized is the disk?
  – What is the number of requests in the queue?
  – What is the average time spent in the queue?
  – What is the average response time for a disk request?
• Solution:
    r     = 20 arriving customers/second
    Tser  = 12 ms
    u     = r x Tser = 20/s x 0.012 s = 0.24
    Tq    = Tser x u / (1 – u) = 12 x 0.24/(1 – 0.24) = 12 x 0.32 = 3.8 ms
    Tsys  = Tq + Tser = 15.8 ms
    Lq    = r x Tq = 20/s x 0.0038 s = 0.076 requests in queue
    Lsys  = r x Tsys = 20/s x 0.0158 s ≈ 0.32 tasks in system
A Little Queuing Theory: Yet Another Example

• Suppose the processor sends 10 x 8 KB disk I/Os per second, squared coef. of variance C = 1.5, and avg. disk service time = 20 ms
• On average, how utilized is the disk?
  – What is the number of requests in the queue?
  – What is the average time spent in the queue?
  – What is the average response time for a disk request?
• Solution:
    r     = 10 arriving customers/second
    Tser  = 20 ms
    u     = r x Tser = 10/s x 0.02 s = 0.2
    Tq    = Tser x u x (1 + C) / (2 x (1 – u)) = 20 x 0.2 x 2.5/(2 x 0.8) = 20 x 0.3125 = 6.25 ms
    Tsys  = Tq + Tser = 26.25 ms
    Lq    = r x Tq = 10/s x 0.00625 s = 0.0625 requests in queue
    Lsys  = r x Tsys = 10/s x 0.02625 s ≈ 0.26 tasks in system
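The same helper with c = 1.5 reproduces this example; note how the extra service-time variance raises the wait from 5 ms to 6.25 ms at identical utilization:

    u, t_q, t_sys, l_q, l_sys = queue_stats(r=10, t_ser=0.020, c=1.5)
    # u = 0.2, t_q = 0.00625 s, t_sys = 0.02625 s, l_q = 0.0625, l_sys = 0.2625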
Pitfall of Not Using Queuing Theory

• 1st 32-bit minicomputer (VAX-11/780)
• How big should the write buffer be?
  – Stores 10% of instructions, 1 MIPS
• Buffer = 1
• => Avg. queue length = 1: a buffer that is always full means utilization near 1, the opposite of what low response time requires
Network Attached Storage

• Decreasing disk diameters: 14" » 10" » 8" » 5.25" » 3.5" » 2.5" » 1.8" » 1.3" » . . .
  – high-bandwidth disk systems based on arrays of disks
• Increasing network bandwidth: 3 Mb/s » 10 Mb/s » 50 Mb/s » 100 Mb/s » 1 Gb/s » 10 Gb/s
  – networks capable of sustaining high-bandwidth transfers
• Network provides well-defined physical and logical interfaces: separate CPU and storage system!
• High Performance Storage Service on a High Speed Network
• Network File Services: OS structures supporting remote file access
Manufacturing Advantages of Disk Arrays

• Conventional disk product families: 4 disk designs (3.5", 5.25", 10", 14"), spanning low end to high end
• Disk array: 1 disk design (3.5")
Replace Small # of Large Disks with Large # of Small Disks! (1988 Disks)

                   IBM 3390 (K)    IBM 3.5" 0061    x70 (array)
  Data Capacity    20 GBytes       320 MBytes       23 GBytes
  Volume           97 cu. ft.      0.1 cu. ft.      11 cu. ft.
  Power            3 KW            11 W             1 KW
  Data Rate        15 MB/s         1.5 MB/s         120 MB/s
  I/O Rate         600 I/Os/s      55 I/Os/s        3900 I/Os/s
  MTTF             250 KHrs        50 KHrs          ??? Hrs
  Cost             $250K           $2K              $150K

Disk arrays have the potential for large data and I/O rates, high MB per cu. ft., and high MB per KW. But what about reliability?
Array Reliability

• Reliability of N disks = Reliability of 1 disk ÷ N
    50,000 hours ÷ 70 disks ≈ 700 hours
  Disk system MTTF drops from 6 years to 1 month!
• Arrays (without redundancy) are too unreliable to be useful!
• Hot spares support reconstruction in parallel with access: very high media availability can be achieved
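The arithmetic, spelled out with the slide's values:

    disk_mttf_hours = 50_000
    n_disks = 70
    print(disk_mttf_hours / n_disks)   # ~714 hours: roughly one month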
Redundant Arrays of Disks

• Files are "striped" across multiple spindles
• Redundancy yields high data availability
  – Disks will fail
  – Contents are reconstructed from data redundantly stored in the array
    » Capacity penalty to store it, bandwidth penalty to update it
• Techniques:
  – Mirroring/Shadowing (high capacity cost)
  – Horizontal Hamming Codes (overkill)
  – Parity & Reed-Solomon Codes
  – Failure Prediction (no capacity overhead!): VaxSimPlus; the technique is controversial
Redundant Arrays of Disks: RAID 1, Disk Mirroring/Shadowing

• Each disk is fully duplicated onto its "shadow"; each pair forms a recovery group
  – Very high availability can be achieved
• Bandwidth sacrifice on write: logical write = two physical writes
• Reads may be optimized
• Most expensive solution: 100% capacity overhead
• Targeted for high-I/O-rate, high-availability environments
Redundant Arrays of Disks: RAID 3, Parity Disk

[Figure: a logical record (10010011 11001101 10010011 ...) is striped bit-by-bit across the data disks as physical records; the parity disk P stores the XOR of the striped records]

• Parity computed across the recovery group to protect against hard disk failures
  – 33% capacity cost for parity in this configuration
  – Wider arrays reduce capacity costs, but decrease expected availability and increase reconstruction time
• Arms logically synchronized, spindles rotationally synchronized
  – Logically a single high-capacity, high-transfer-rate disk
• Targeted for high-bandwidth applications: scientific computing, image processing
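A minimal sketch of the parity idea (the helper is mine; the block contents echo the slide's striped records): parity is the bytewise XOR of the data blocks, so any single lost block is the XOR of the survivors and the parity:

    def parity(blocks):
        """Bytewise XOR across equal-length blocks."""
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    data = [b"\x93", b"\xcd", b"\x93"]   # 10010011, 11001101, 10010011
    p = parity(data)                     # contents of the parity disk

    # Disk 1 fails: rebuild its block from the survivors plus parity.
    assert parity([data[0], data[2], p]) == data[1]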
Redundant Arrays of Disks: RAID 5+, High I/O Rate Parity

• A logical write becomes four physical I/Os
• Independent writes are possible because of interleaved parity
• Reed-Solomon codes ("Q") for protection during reconstruction
• Targeted for mixed applications

[Figure: data blocks D0-D23 and parity blocks P laid out across the disk columns; logical disk addresses increase along each stripe (one stripe unit per disk), and the parity block rotates among the disks from stripe to stripe]
Problems of Disk Arrays: Small Writes

RAID-5 small-write algorithm: 1 logical write = 2 physical reads + 2 physical writes

[Figure: writing new data D0' into a parity group (D0 D1 D2 D3, P):
  1. Read old data D0
  2. Read old parity P
  3. Write new data D0'
  4. Write new parity P' = D0 XOR D0' XOR P (two XORs fold the old data out and the new data in)]
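A sketch of the parity update in step 4 (function names are mine): because XOR is its own inverse, the new parity needs only the old data, the old parity, and the new data; the other disks (D1, D2, D3) are never read:

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def raid5_small_write(old_data, new_data, old_parity):
        """New parity for a one-block update: P' = P xor D0 xor D0'."""
        return xor_bytes(xor_bytes(old_parity, old_data), new_data)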
Subsystem Organization

• Host -> host adapter -> array controller -> several single-board disk controllers -> disks
• Array controller manages the interface to the host, DMA control, buffering, parity logic, and physical device control
• Striping software is off-loaded from the host to the array controller
  – no application modifications
  – no reduction of host performance
• Single-board disk controllers are often piggy-backed in small-format devices
System Availability: Orthogonal RAIDs

[Diagram: an array controller feeding several string controllers, each driving a string of disks; redundancy groups are laid out orthogonally to the strings]

• Data Recovery Group: unit of data redundancy
• Redundant Support Components: fans, power supplies, controller, cables
• End-to-End Data Integrity: internal parity-protected data paths
System-Level Availability

[Diagram: fully dual-redundant system; each of two hosts has a path through its own I/O controller and array controller, fanning out to shared recovery groups]

• Goal: no single points of failure
• With duplicated paths, higher performance can be obtained when there are no failures
Review: Storage System Issues

• Historical Context of Storage I/O
• Secondary and Tertiary Storage Devices
• Storage I/O Performance Measures
• Processor Interface Issues
• A Little Queuing Theory
• Redundant Arrays of Inexpensive Disks (RAID)
• I/O Buses
• ABCs of UNIX File Systems
• I/O Benchmarks
• Comparing UNIX File System Performance
CS 252 Administrivia
• Upcoming schedule of project events in CS 252
– Wednesday Dec 2: finish I/O.
– Friday Dec 4: Esoteric computation. Quantum/DNA computing
– Mon/Tue Dec 7/8 for oral reports
– Friday Dec 11: project reports due.
Get moving!!!
Processor Interface Issues

• Processor interface
  – Interrupts
  – Memory-mapped I/O
• I/O control structures
  – Polling
  – Interrupts
  – DMA
  – I/O controllers
  – I/O processors
• Capacity, access time, bandwidth
• Interconnections
  – Busses
I/O Interface

• Independent I/O bus: CPU and memory sit on the memory bus; interfaces to the peripherals sit on a separate I/O bus
  – Separate I/O instructions (in, out)
• Common memory & I/O bus: CPU, memory, and peripheral interfaces share one bus
  – Lines distinguish between I/O and memory transfers
  – Examples: VME bus, Multibus-II, NuBus
  – 40 MBytes/sec optimistically; a 10 MIPS processor completely saturates the bus!
Memory Mapped I/O

• Single memory & I/O bus, no separate I/O instructions: device interfaces occupy a region of the address space alongside ROM and RAM
• [Diagram: CPU with $ and L2 $ on a memory bus to memory; a bus adaptor bridges the memory bus to the I/O bus with its interfaces and peripherals]
Programmed I/O (Polling)

[Flowchart: CPU asks the IOC/device "Is the data ready?"; if no, loop (busy wait); if yes, read data, then store data to memory; if not done, loop back]

• A busy-wait loop is not an efficient way to use the CPU unless the device is very fast!
• But checks for I/O completion can be dispersed among computationally intensive code
Interrupt Driven Data Transfer

[Diagram: user program (add, sub, and, or, nop) runs until (1) an I/O interrupt arrives from the device; (2) the PC is saved; (3) control jumps to the interrupt service address; (4) the service routine does read, store, ..., rti and the user program resumes. User program progress is only halted during the actual transfer.]

Time to do 1000 transfers at 1 msec each:
  1000 interrupts @ 2 µsec per interrupt
  1000 interrupt service @ 98 µsec each = 0.1 CPU seconds

Device xfer rate = 10 MBytes/sec => 0.1 x 10^-6 sec/byte => 0.1 µsec/byte
  => 1000 bytes = 100 µsec
  1000 transfers x 100 µsec = 100 ms = 0.1 CPU seconds

Still far from the device transfer rate! Half the time is spent in interrupt overhead.
Direct Memory Access

• CPU sends a starting address, direction, and length count to the DMAC, then issues "start"
• DMAC provides handshake signals for the peripheral controller, and memory addresses and handshake signals for memory

[Diagram: CPU, memory, DMAC, and IOC/device on the bus; ROM, RAM, and the peripherals are memory-mapped into the address space from 0 to n]

Time to do 1000 xfers at 1 msec each:
  1 DMA set-up sequence @ 50 µsec
  1 interrupt @ 2 µsec
  1 interrupt service sequence @ 48 µsec
  => .0001 second of CPU time
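Putting this slide next to the interrupt-driven numbers from the previous one (both sets of figures are the slides' assumptions, not measurements):

    n = 1000
    interrupt_driven = n * (2e-6 + 98e-6)   # per-transfer interrupt + service
    dma = 50e-6 + 2e-6 + 48e-6              # one set-up + one completion interrupt
    print(interrupt_driven, dma)            # 0.1 s vs. 0.0001 s of CPU time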
Input/Output Processors

[Diagram: CPU and IOP share the main memory bus with Mem; the IOP drives devices D1, D2, ..., Dn over an I/O bus]

(1) CPU issues instruction to the IOP: OP, Device (target device), Address (where the commands are)
(2) IOP looks in memory for commands
(3) Device-to/from-memory transfers are controlled by the IOP directly; the IOP steals memory cycles
(4) IOP interrupts the CPU when done

Memory command block: OP (what to do), Addr (where to put data), Cnt (how much), Other (special requests)
Relationship to Processor Architecture

• I/O instructions have largely disappeared
• Interrupt vectors have been replaced by jump tables:
    PC <- M[IVA + interrupt number]
    PC <- IVA + interrupt number
• Interrupts:
  – Stack replaced by shadow registers
  – Handler saves registers and re-enables higher-priority interrupts
  – Interrupt types reduced in number; handler must query the interrupt controller
Relationship to Processor Architecture

• Caches required for processor performance cause problems for I/O
  – Flushing is expensive; I/O pollutes the cache
  – Solution is borrowed from shared-memory multiprocessors: "snooping"
• Virtual memory frustrates DMA
• Load/store architecture at odds with atomic operations
  – load-locked, store-conditional
• Stateful processors are hard to context switch
Summary

• Disk industry growing rapidly; improves:
  – bandwidth 40%/yr
  – areal density 60%/yr; $/MB faster?
• Disk latency = queue + controller + seek + rotate + transfer
• Advertised average seek time benchmark is much greater than the average seek time in practice
• Response time vs. bandwidth tradeoffs
• Queueing theory: Tq = Tser x u x (1 + C) / (2 x (1 – u)), or Tq = Tser x u / (1 – u) when C = 1 (M/M/1)
• Value of faster response time:
  – 0.7 sec off response time saves 4.9 sec and 2.0 sec (70%) of total time per transaction => greater productivity
  – everyone gets more done with faster response, but a novice with fast response = an expert with slow
Summary: Relationship to Processor Architecture
• I/O instructions have disappeared
• Interrupt vectors have been replaced by jump
tables
• Interrupt stack replaced by shadow registers
• Interrupt types reduced in number
• Caches required for processor performance cause
problems for I/O
• Virtual memory frustrates DMA
• Load/store architecture at odds with atomic
operations
• Stateful processors hard to context switch