CS 61C:
Great Ideas in Computer Architecture
Virtual Machines
Instructor:
David A. Patterson
http://inst.eecs.Berkeley.edu/~cs61c/sp12
Review
• RAID
  – Goal was performance with more disk arms per $
  – Reality: availability was what people cared about
  – Adds an availability option for a small number of extra disks
  – Today RAID is a multibillion-dollar industry, started at UC Berkeley
• C has three pools of data memory (+ code memory)
  – Static storage: global variable storage, ~permanent, entire program run
  – The Stack: local variable storage, parameters, return address
  – The Heap (dynamic storage): malloc() gets space from here, free() returns it
• Common (Dynamic) Memory Problems (see the C sketch after this slide)
  – Using uninitialized values
  – Accessing memory beyond your allocated region
  – Improper use of free by changing the pointer handle returned by malloc
  – Memory leaks: mismatched malloc/free pairs
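The four problems above, written out as a deliberately buggy C sketch (variable names a and b are illustrative only; each comment marks the defect):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int *a = malloc(4 * sizeof(int));
        printf("%d\n", a[0]);   /* 1: reading an uninitialized value              */
        a[4] = 7;               /* 2: writing beyond the allocated region          */
        a = a + 1;              /* 3: pointer handle returned by malloc changed... */
        free(a);                /*    ...so this free() is improper                */
        int *b = malloc(8);     /* 4: never freed => memory leak                   */
        (void)b;
        return 0;
    }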
Redundant Array of Inexpensive Disks
RAID 3: Parity Disk
[Figure: a logical record (e.g., 10010011 11001101 10010011 ...) is striped across several physical disks, with one extra parity disk P per stripe]
• P contains the sum of the other disks per stripe, mod 2 ("parity")
• If a disk fails, subtract P from the sum of the other disks to find the missing information (see the sketch below)
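A small C sketch of the parity idea: the per-bit sum mod 2 is just XOR, and the same XOR recovers a lost disk. The stripe contents and the failed-disk index below are made up for illustration:

    #include <stdint.h>
    #include <stdio.h>

    #define NDISKS 4                                /* data disks; disk P holds parity */

    int main(void) {
        uint8_t stripe[NDISKS] = {0x93, 0xCD, 0x93, 0x2A};  /* one byte per data disk */
        uint8_t p = 0;
        for (int i = 0; i < NDISKS; i++)
            p ^= stripe[i];                         /* per-bit sum mod 2 = XOR */

        int failed = 2;                             /* pretend disk 2 is lost */
        uint8_t rebuilt = p;
        for (int i = 0; i < NDISKS; i++)
            if (i != failed)
                rebuilt ^= stripe[i];               /* "subtract" survivors from P */

        printf("rebuilt 0x%02X, original 0x%02X\n", rebuilt, stripe[failed]);
        return 0;
    }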
2011 Disk Arrays (RAID) Sales
• 2011 disk array sales: $31.0 billion, up 8.2%
“Disk Array Sales Keep Expanding Despite Drive Shortages,”
March 19, 2012, by Timothy Prickett Morgan
http://www.itjungle.com/tfh/tfh031912-story10.htm
“Server Transitions Give Internal Storage Arrays Sales A
Breather,” March 26, 2012, by Timothy Prickett Morgan
http://www.itjungle.com/tfh/tfh032612-story05.htmll
You Are Here!
Software / Hardware: harness parallelism & achieve high performance
• Parallel Requests: assigned to computer, e.g., search "Katz" (Warehouse Scale Computer)
• Parallel Threads: assigned to core, e.g., lookup, ads
• Parallel Instructions: >1 instruction @ one time, e.g., 5 pipelined instructions
• Parallel Data: >1 data item @ one time, e.g., add of 4 pairs of words (A0+B0 A1+B1 A2+B2 A3+B3)
• Hardware descriptions: all gates @ one time
• Programming Languages
[Figure: hierarchy from Smart Phone and Warehouse Scale Computer down through Computer, Core, Memory (Cache), Input/Output, Instruction Unit(s), Functional Unit(s), Main Memory, Logic Gates; Today's Lecture: Virtual Machines]
Agenda
• Virtual Machines
• Administrivia
• Virtual Memory
• Demand Paging
• Virtual Machines Revisited
• Patterson's Projects (if time is available)
• Summary
Safely Sharing a Machine
• Amazon Web Services allows independent tasks to run on the same computer
  – Can sell each "instance"
• Can a "small" operating system (~10,000 LOC) simulate the hardware of some machine, so that
  – Another operating system can run in that simulated hardware?
  – More than one instance of that operating system can run on the same hardware at the same time?
  – More than one different operating system can share the same hardware at the same time?
  – And none can access each other's data?
• Answer: Yes
Solution – Virtual Machine
• A virtual machine provides an interface identical to the underlying bare hardware
  – I.e., all devices, interrupts, memory, etc.
• Examples
  – IBM VM/370 (1970s technology!)
  – VMWare (founded by Mendel & Diane Rosenblum)
  – Xen (used by AWS)
  – Microsoft Virtual PC
• Called "System Virtual Machines" vs. language interpreters (e.g., Java Virtual Machine)
• Virtualization has some performance impact
  – Feasible with modern high-performance computers
Virtual Machines
• Host Operating System:
  – OS actually running on the hardware
  – Together with the virtualization layer, it simulates the environment for …
• Guest Operating System:
  – OS running in the simulated environment
  – Runs identically as if on native hardware (except performance)
  – Cannot change access to real system resources
  – Guest OS code runs in the native machine ISA
• The resources of the physical computer are shared to create the virtual machines
• Processor scheduling by the OS can create the appearance that each user has their own processor
Virtual Machine Monitor
(a.k.a. Hypervisor)
• Maps virtual resources to physical resources
– Memory, I/O devices, processors
• VMM handles real I/O devices
– Emulates generic virtual I/O devices for guest OS
• Host OS must intercept attempts by Guest OS
to access real I/O devices, allocate resources
Why Virtual Machines Popular
(Again)?
• Increased importance of isolation and security
• Failures in security and reliability of modern
OS’s
• Sharing of single computer between many
unrelated users
– E.g., Cloud computing
• Dramatic increase in performance of
processors makes VM overhead acceptable
5 Reasons Amazon Web Services Uses Virtual Machines
• (Uses x86 ISA, Linux Host OS, and Xen VMM)
1. Allows AWS to protect users from each other
2. Simplified SW distribution within WSC
   – Customers install an image; AWS distributes it to all instances
3. Can reliably "kill" a VM => control resource usage
4. VMs hide identity of HW => can keep selling old HW AND can introduce new, more efficient HW
   – VM performance need not be an integer multiple of real HW
5. VMs limit rate of processing, network, and disk space => AWS offers many price points
Peer Instruction: True or False
Which statements are true about Virtual Machines?
I. Multiple Virtual Machines can run on one computer
II. Multiple Virtual Machine Monitors can run on one computer
III. The Guest OS must be the same as the Host OS
A) I only
B) II only
C) III only
Administrivia
• Lab 13 this week – Malloc/Free in C
• Extra Credit: Fastest Version of Project 3
• Lectures: Virtual Machines/Memory, End-of-Course Overview/Cal Culture/HKN Evaluation
• Will send final survey end of this week
• All grades finalized: 4/27
• Final Review: Sunday April 29, 2-5PM, 2050 VLSB
• Extra office hours: Thu-Fri May 3 and May 4
• Final: Wed May 9 11:30-2:30, 1 PIMENTEL
Getting to Know Your Prof
• Lift weights with son in RSF
• Building up for 65th bday
• Bench pressed 270 lbs on Tuesday 4/17/12
• Son bench pressed 365 pounds in 1997, 320 pounds recently
• Metric to adjust for age?
• Metric to adjust for body weight?
• Bench pressed 325 pounds on 50th birthday
Bench Press vs. Age
[Figure: California Bench Press Records plotted against age, 40–70]
USA POWERLIFTING CALIFORNIA STATE RECORDS RAW BENCH PRESS (SINGLE-LIFT) 2011
Getting to Know Your Prof
• Metric to adjust for age? "Pound Years" = Weight lifted * Age
  – A 20-year-old lifting 150 pounds: "I benched 3000 pd-yrs!"
• Metric to adjust for body weight? "Benchstones" = (Bench Press / Body weight) * Age
  – If that lifter weighed 150 pounds: "I did 20 Benchstones"

Person  Age  Bench  Pd-Years
Son      28   365   ~10,000
Son      42   320   ~13,000
Me       50   325   ~16,000
Me       64   270   ~17,000

Person  Age  Bench  Pd-Years  Body  Benchstones
Son      28   365   ~10,000   230   45
Son      42   320   ~13,000   240   55
Me       50   325   ~16,000   210   75
Me       64   270   ~17,000   180   95
Apply Metrics
[Figure: California Bench Press Records by age (40–70) replotted with the two metrics – left: Pound Years (0–25,000); right: Bench Stones (0–140)]
USA POWERLIFTING CALIFORNIA STATE RECORDS RAW BENCH PRESS (SINGLE-LIFT) 2011
Agenda
• Virtual Machines
• Administrivia
• Virtual Memory
• Demand Paging
• Virtual Machines Revisited
• Patterson's Projects (if time is available)
• Summary
2012 AWS Instances & Prices
(Revised from Lecture 1: $0.085 => $0.080)

Instance                           Per Hour  Ratio to Small  Virtual Cores  Memory (GB)  Disk (GB)  Address
Standard Small                     $0.080    1.0             1              1.7          160        32 bit
Standard Large                     $0.320    4.0             2              7.5          850        64 bit
Standard Extra Large               $0.640    8.0             4              15.0         1690       64 bit
High-Memory Extra Large            $0.450    5.6             2              17.1         420        64 bit
High-Memory Double Extra Large     $0.900    11.3            4              34.2         850        64 bit
High-Memory Quadruple Extra Large  $1.800    22.5            8              68.4         1690       64 bit
High-processor Medium              $0.165    2.1             2              1.7          350        32 bit
High-processor Extra Large         $0.660    8.3             8              7.0          1690       64 bit
Cluster Quadruple Extra Large      $1.300    16.3            16             23.0         1690       64 bit
Eight Extra Large                  $2.400    30.0            32             60.5         1690       64 bit

• Note AWS offers different amounts of virtual cores, memory, and disk at different price points
How Hard to Divide Computer?
• Disk – relatively easy
  – Lots of space per disk + typically can have several disks per server
  – Easy for Host OS to divide disk space into chunks, allocate each to a virtual machine
• Virtual Cores
  – With multiple cores per chip, Host OS could map virtual cores to physical cores, allocate cores to tasks
• Memory is harder
Dividing Memory?
• Option 1: Could give all memory to whichever core is running; save it to disk before another core runs, then restore memory for that core
• Reload memory image for core when it runs
• How fast can we swap memory between cores?
• 2 × 15 GB / (0.2 GB/sec) = 150 seconds to swap!
  – Disk can read or write at ~200 MB/sec
• Too slow – cores idle too long
Dividing Memory?
• Option 2: Provide isolation protection for different variable-length segments of memory
• Hardware mechanism + Virtual Machine Monitor + Host OS limit the guest program and guest OS to that segment
  – Other cores' memory is idle, but protected
• Need special registers to check all addresses to make sure they stay within the restricted segment (see the sketch below)
• Need a mode in the processor that says whether it is OK to change the address limit registers
  – Processors typically have 2 modes to support VMs
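A minimal sketch of the base-and-bound check described above, assuming the guest issues addresses that are offsets into its segment; the register names and values are invented, not from any real ISA:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Invented stand-ins for the special base/limit registers of one VM's segment. */
    static uint32_t seg_base  = 0x00040000;   /* where this VM's segment starts       */
    static uint32_t seg_limit = 0x00010000;   /* how much memory it is allowed to use */

    /* Hardware would perform this check on every address the guest issues;
       a failing check raises an exception instead of touching memory. */
    bool address_ok(uint32_t guest_addr) {
        return guest_addr < seg_limit;
    }

    uint32_t to_physical(uint32_t guest_addr) {
        return seg_base + guest_addr;          /* relocate into the VM's own segment */
    }

    int main(void) {
        uint32_t a = 0x1234;
        if (address_ok(a))
            printf("guest 0x%05X -> physical 0x%05X\n", a, to_physical(a));
        return 0;
    }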
Problem with Segmentation?
• Wanted to support different-sized segments for different programs/VMs
• When one program/VM stops, the next might want more or less memory than it had
• Results in fragmentation of real memory
• Enough memory to run another program, but not contiguous, so it can't run!
Solution?
• "All problems in computer science can be solved by another level of indirection."
  – David Wheeler, British computer scientist
  – Worked on EDSAC, 1st general-purpose stored-program computer at Cambridge University
Paging
• Divide memory into fixed, equal-sized blocks called "pages"
  – Typically 4 KB or 8 KB
• Have a table (the "page table") that maps programs/guest OSes from "virtual address space" to "physical address space"
• Host OS allocates pages to the page table to control who accesses what (a translation sketch follows below)
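A minimal C sketch of the translation a page table performs, assuming 4 KB pages and a flat one-level table; the table contents and sizes are toy values, not from the lecture:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE   4096u                 /* 4 KB pages                     */
    #define OFFSET_BITS 12u                   /* log2(PAGE_SIZE)                */
    #define NUM_PAGES   8u                    /* tiny toy virtual address space */

    /* page_table[VPN] = physical page number (PPN); toy values */
    static uint32_t page_table[NUM_PAGES] = {5, 2, 7, 0, 1, 3, 6, 4};

    uint32_t translate(uint32_t va) {
        uint32_t vpn    = va >> OFFSET_BITS;      /* virtual page number      */
        uint32_t offset = va & (PAGE_SIZE - 1);   /* offset is unchanged      */
        uint32_t ppn    = page_table[vpn];        /* the level of indirection */
        return (ppn << OFFSET_BITS) | offset;     /* physical address         */
    }

    int main(void) {
        uint32_t va = 2 * PAGE_SIZE + 0x123;      /* VPN 2, offset 0x123      */
        printf("VA 0x%05X -> PA 0x%05X\n", va, translate(va));
        return 0;
    }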
Protection + Indirection = Virtual Address Space
[Figure: Application 1 and Application 2 each see a virtual memory layout from ~0hex (code, static data, heap) up to ~FFFF FFFFhex (stack); each application has its own page table (entries 7..0) that maps its virtual pages into physical memory]
Protection + Indirection = Virtual Address Space
[Figure: same layout as above; Application 1's page table now maps its pages onto physical memory frames labeled Stack 1, Heap 1, Static 1, Code 1]
Protection + Indirection = Virtual Address Space
[Figure: both page tables are now populated; physical memory holds Stack 2, Heap 2, Static 2, Code 2 and Stack 1, Heap 1, Static 1, Code 1, each frame owned by exactly one application]
Protection + Indirection = Dynamic Memory Allocation
[Figure: Application 1 calls malloc(4097); its page table grows to map an additional physical frame, Heap' 1, alongside the existing Stack/Heap/Static/Code frames of both applications]
Protection + Indirection = Dynamic Memory Allocation
[Figure: a recursive function call in Application 2 grows its stack; its page table maps an additional physical frame, Stack' 2]
Protection within Program/Guest OS
• As long as we have a table that maps pages in virtual address space to physical address space, we could add protection bits checked on each memory access to catch bugs (see the sketch below)
• Read only – cause an exception on any attempt to write
  – Why? (Student Roulette)
• Execute only – cause an exception on any access other than an instruction fetch
  – Why?
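A hedged sketch of what such a per-page protection check might look like; the bit layout, type names, and helper are invented for illustration:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Invented protection bits for a toy page table entry. */
    #define PTE_R 0x1   /* readable   */
    #define PTE_W 0x2   /* writable   */
    #define PTE_X 0x4   /* executable */

    typedef enum { ACC_READ, ACC_WRITE, ACC_FETCH } access_t;

    typedef struct {
        uint32_t ppn;   /* physical page number          */
        uint8_t  prot;  /* some combination of PTE_R/W/X */
    } pte_t;

    /* True if the access is allowed; otherwise the hardware would raise
       an exception on this access and trap to the OS. */
    bool access_ok(const pte_t *pte, access_t kind) {
        switch (kind) {
        case ACC_READ:  return pte->prot & PTE_R;
        case ACC_WRITE: return pte->prot & PTE_W;  /* read-only pages fail here          */
        case ACC_FETCH: return pte->prot & PTE_X;  /* execute-only pages allow only this */
        }
        return false;
    }

    int main(void) {
        pte_t code_page = { .ppn = 7, .prot = PTE_R | PTE_X };  /* no write bit */
        printf("write to code page allowed? %d\n", access_ok(&code_page, ACC_WRITE));
        return 0;
    }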
Protection + Indirection = Controlled Sharing
[Figure: both applications' page tables map their code page to the same physical frame (Shared Code Page, "X" protection bit); each still has its own Stack, Heap, and Static frames]
Protection + Indirection = Controlled Sharing
[Figure: in addition to the shared code page ("X" protection bit), both page tables map a common static-data frame (Shared Globals, "RW" protection bits); stacks and heaps remain private]
Where Keep Page Tables?
• How big are page tables?
  – 4 GB / 4 KB = 1 M entries, each 4 to 8 bytes = 4 MB to 8 MB
• Too big for a separate memory
• Keep page tables in main memory!
• Host OS must know where the page tables are, and protect them
Day in the Life of an (Instruction) Address
No Cache, No Virtual Memory:
PC → Physical Address (PA) → MEM → Instruction
Day in the Life of an (Instruction) Address
No Cache, Virtual Memory, Page Table Access:
PC → Virtual Address (VA): Virtual Page Number (VPN), Offset
PTBR (Page Table Base Register) + VPN → address of Page Table Entry → MEM → PA (PPN, Offset) → MEM → Instruction
Day in the Life of an (Instruction) Address
Physical Data Cache, Virtual Memory, Page Table Access:
PC → VA (VPN, Offset); PTBR + VPN → address of Page Table Entry → D cache hit → PA (PPN, Offset) → MEM → Instruction
Day in the Life of an (Instruction) Address
Physical Data Cache, Virtual Memory, Page Table Access:
PC → VA (VPN, Offset); PTBR + VPN → address of Page Table Entry → D cache miss → MEM (Page Table Entry) → PA (PPN, Offset) → MEM → Instruction
Day in the Life of an (Instruction) Address
Physical Data & Instruction Cache, Virtual Memory, Page Table Access:
PC → VA (VPN, Offset); PTBR + VPN → address of Page Table Entry → D cache miss → MEM → PA (PPN, Offset) → I$ hit → Instruction
Day in the Life of an (Instruction) Address
Physical Data & Instruction Cache, Virtual Memory, Page Table Access:
PC → VA (VPN, Offset); PTBR + VPN → address of Page Table Entry → D cache miss → MEM → PA (PPN, Offset) → I$ miss → MEM → Instruction
Day in the life of a data access is not too different (a code sketch of this fetch path follows below)
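A self-contained toy C sketch of the fetch path just traced: look up the page table entry (trying the D cache first), form the physical address, then fetch the instruction (trying the I cache first). The always-miss caches, tiny page table, and word-granularity memory array are stand-ins for illustration only:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define OFFSET_BITS 12u
    #define NPAGES      4u

    static uint32_t page_table[NPAGES] = {3, 2, 1, 0};    /* toy VPN -> PPN mapping        */
    static uint32_t memory[NPAGES << OFFSET_BITS];        /* toy memory, one word per addr */

    /* Stand-in caches that always miss, so every access falls through to memory;
       a real D$/I$ would hit most of the time. */
    static bool d_cache_lookup(uint32_t pte_index, uint32_t *v) { (void)pte_index; (void)v; return false; }
    static bool i_cache_lookup(uint32_t pa, uint32_t *v)        { (void)pa; (void)v; return false; }

    static uint32_t fetch(uint32_t pc) {
        uint32_t vpn = pc >> OFFSET_BITS;                 /* split VA into VPN and offset   */
        uint32_t off = pc & ((1u << OFFSET_BITS) - 1);

        uint32_t pte;
        if (!d_cache_lookup(vpn, &pte))                   /* PTE lookup: try D cache...     */
            pte = page_table[vpn];                        /* ...miss: read page table (MEM) */

        uint32_t pa = (pte << OFFSET_BITS) | off;         /* physical address (PPN, offset) */

        uint32_t inst;
        if (!i_cache_lookup(pa, &inst))                   /* instruction: try I cache...    */
            inst = memory[pa];                            /* ...miss: read main memory      */
        return inst;
    }

    int main(void) {
        memory[(page_table[1] << OFFSET_BITS) | 0x10] = 0xDEADBEEF;   /* plant an instruction     */
        printf("0x%08X\n", fetch((1u << OFFSET_BITS) | 0x10));        /* fetch VPN 1, offset 0x10 */
        return 0;
    }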
Historical Retrospective:
Virtual Memory 1962 versus 2012
• Memory used to be very expensive, and amount available
to the processor was highly limited
– Now memory is cheap: approx $10 per GByte in April 2012
• Many apps’ data could not fit in main memory, e.g.,
payroll
– Paged memory system reduced fragmentation but still
required whole program to be resident in the main memory
– For good performance, buy enough memory to hold your apps
• Programmers moved the data back and forth from the disk
store by overlaying it repeatedly on the primary store
– Programmers no longer need to worry about this level of
detail anymore
Demand Paging in Atlas (1962)
"A page from secondary storage is brought into the primary storage whenever it is (implicitly) demanded by the processor." – Tom Kilburn
• Primary memory as a cache for secondary memory
• User sees 32 x 6 x 512 words of storage
[Figure: Central Memory – primary: 32 pages, 512 words/page; secondary (~disk): 32x6 pages]
Demand Paging Scheme
• On a page fault (a toy sketch follows below):
  – Input transfer into a free page is initiated
  – If no free page is available, a page is selected to be replaced (based on usage)
  – The replaced page is written to the disk
    • To minimize disk latency effect, the first empty page on the disk was selected
  – The page table is updated to point to the new location of the page on the disk
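A self-contained toy C sketch of that page-fault sequence; the frame count, the evict-frame-0 policy, and the disk-block numbering are invented (Atlas used a usage-based policy, as the slide says):

    #include <stdio.h>

    #define NFRAMES 2
    #define NPAGES  4

    typedef struct { int present; int frame; long disk_block; } pte_t;

    static pte_t page_table[NPAGES];
    static int   frame_owner[NFRAMES] = {-1, -1};   /* which vpn occupies each frame   */
    static long  next_empty_disk_block = 100;       /* "first empty page on the disk"  */

    /* Toy stand-ins for the disk transfers the slide describes. */
    static void disk_read (long block, int frame) { printf("read  block %ld -> frame %d\n", block, frame); }
    static void disk_write(int frame, long block) { printf("write frame %d -> block %ld\n", frame, block); }

    static void page_fault(int vpn) {
        int frame = -1;
        for (int f = 0; f < NFRAMES; f++)            /* look for a free frame */
            if (frame_owner[f] < 0) { frame = f; break; }

        if (frame < 0) {                             /* none free: pick a victim         */
            frame = 0;                               /* toy policy: always evict frame 0 */
            int victim = frame_owner[frame];
            long block = next_empty_disk_block++;
            disk_write(frame, block);                /* replaced page written to disk    */
            page_table[victim].present    = 0;
            page_table[victim].disk_block = block;   /* page table points at disk copy   */
        }

        disk_read(page_table[vpn].disk_block, frame);  /* transfer into the free frame */
        page_table[vpn].present = 1;
        page_table[vpn].frame   = frame;
        frame_owner[frame]      = vpn;
    }

    int main(void) {
        for (int v = 0; v < NPAGES; v++) page_table[v].disk_block = v;  /* pages start on disk */
        page_fault(0); page_fault(1); page_fault(2);   /* third fault forces a replacement */
        return 0;
    }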
Impact of Paging on AMAT
• Memory Parameters:
  – L1 cache hit = 1 clock cycle, hits 95% of accesses
  – L2 cache hit = 10 clock cycles, hits 60% of L1 misses
  – DRAM = 200 clock cycles (~100 nanoseconds)
  – Disk = 20,000,000 clock cycles (~10 milliseconds)
• Average Memory Access Time (with no paging):
  – 1 + 5%*10 + 5%*40%*200 = 5.5 clock cycles
• Average Memory Access Time (with paging) =
  – AMAT (with no paging) + ?
  – 5.5 + ? (Student Roulette?)
Impact of Paging on AMAT
• Memory Parameters:
  – L1 cache hit = 1 clock cycle, hits 95% of accesses
  – L2 cache hit = 10 clock cycles, hits 60% of L1 misses
  – DRAM = 200 clock cycles (~100 nanoseconds)
  – Disk = 20,000,000 clock cycles (~10 milliseconds)
• Average Memory Access Time (with paging) =
  – 5.5 + 5%*40%*(1 − HitMemory)*20,000,000
• AMAT if HitMemory = 99.9%?
  – 5.5 + 0.02 * 0.001 * 20,000,000 = 405.5
• AMAT if HitMemory = 99.9999%?
  – 5.5 + 0.02 * 0.000001 * 20,000,000 = 5.9
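The same arithmetic as a small C check, using the parameter values straight from the slide:

    #include <stdio.h>

    int main(void) {
        double l1_hit = 1, l2_hit = 10, dram = 200, disk = 20e6;   /* clock cycles   */
        double l1_miss = 0.05, l2_miss = 0.40;                     /* miss fractions */

        /* 1 + 5%*10 + 5%*40%*200 = 5.5 cycles */
        double amat_no_paging = l1_hit + l1_miss * l2_hit + l1_miss * l2_miss * dram;

        double hit_memory[] = {0.999, 0.999999};
        for (int i = 0; i < 2; i++) {
            double amat = amat_no_paging + l1_miss * l2_miss * (1 - hit_memory[i]) * disk;
            printf("HitMemory = %.4f%% -> AMAT = %.1f cycles\n", hit_memory[i] * 100, amat);
        }
        return 0;
    }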
Peer Instruction: Match the Phrase
Match the memory hierarchy element on the left
with the closest phrase on the right:
1. L1 cache
a. A cache for a main memory
2. L2 cache
b. A cache for disks
3. Main memory c. A cache for a cache
A) 1 a, 2 b, 3 c
B) 1 b, 2 c, 3 a
C) 1 c, 2 b, 3 a
Virtual Machine/Memory Instruction Set Support
• 2 modes in hardware: User and System modes
  – Some instructions only run in System mode
• Page Table Base Register (PTBR): Page Table addr
• Privileged instructions only available in System mode
  – Trap to system (and VMM) if executed in User mode
• All physical resources only accessible using privileged instructions
  – Including interrupt controls, I/O registers
• Renaissance of virtualization support in ISAs
  – Current ISAs (e.g., x86) adapting, following IBM's path
Example: Timer Virtualization
• In a native machine (no VMM), on a timer interrupt
  – OS suspends the current process, handles the interrupt, selects and resumes the next process
• With a Virtual Machine Monitor
  – VMM suspends the current VM, handles the interrupt, selects and resumes the next VM
• If a VM requires timer interrupts
  – VMM emulates a virtual timer
  – Emulates an interrupt for the VM when a physical timer interrupt occurs (see the sketch below)
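A hedged, self-contained C sketch of that flow; vm_t, suspend/resume, and the round-robin choice of the next VM are invented stand-ins, not any real VMM's API:

    #include <stdio.h>

    typedef struct { const char *name; int wants_timer; } vm_t;

    static vm_t vms[2] = { {"guest-A", 1}, {"guest-B", 0} };
    static int current = 0;

    static void suspend(vm_t *vm) { printf("suspend %s\n", vm->name); }   /* save guest state   */
    static void resume (vm_t *vm) { printf("resume  %s\n", vm->name); }   /* restore, re-enter  */
    static void inject_virtual_timer_irq(vm_t *vm) { printf("virtual timer irq -> %s\n", vm->name); }

    /* Called by the VMM when the physical timer interrupt fires. */
    static void vmm_timer_interrupt(void) {
        suspend(&vms[current]);                       /* VMM suspends the current VM          */
        current = (current + 1) % 2;                  /* handle interrupt, select the next VM */
        if (vms[current].wants_timer)
            inject_virtual_timer_irq(&vms[current]);  /* emulate the guest's timer            */
        resume(&vms[current]);
    }

    int main(void) { vmm_timer_interrupt(); vmm_timer_interrupt(); return 0; }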
Virtual Machine Monitor
(a.k.a. Hypervisor)
• Maps virtual resources to physical resources
– Memory, I/O devices, processors
• Guest OS code runs on native machine ISA in
user mode
– Traps to VMM on privileged instructions and
access to protected resources
• Guest OS may be different from host OS
• VMM handles real I/O devices
– Emulates generic virtual I/O devices for guest
Performance Impact of Virtual Machines?
• No impact on computation-bound programs, since they spend ≈0 time in the OS
  – E.g., matrix multiply
• Big impact on I/O-intensive programs, since they spend lots of time in the OS and execute many system calls and many privileged instructions
  – Although if I/O-bound => spend most time waiting for the I/O device, so can hide VM overhead
Virtual Machines and Cores
• Host OS can also limit amount of time a virtual
machine uses a processor (core)
• Hence, at cost of swapping registers, it can run
multiple virtual machines on a single core
• AWS cheapest VM was originally 2 VMs per
core
• Now, with faster processors, can install more
VMs per core and deliver same performance
or have multiple speeds of cores
2012 AWS Instances & Prices
(Revised from Lecture 1: $0.085 => $0.080)

Instance                           Per Hour  Ratio to Small  Compute Units  Virtual Cores  Compute Unit/Core  Memory (GB)  Disk (GB)  Address
Standard Small                     $0.080    1.0             1.0            1              1.00               1.7          160        32 bit
Standard Large                     $0.320    4.0             4.0            2              2.00               7.5          850        64 bit
Standard Extra Large               $0.640    8.0             8.0            4              2.00               15.0         1690       64 bit
High-Memory Extra Large            $0.450    5.6             6.5            2              3.25               17.1         420        64 bit
High-Memory Double Extra Large     $0.900    11.3            13.0           4              3.25               34.2         850        64 bit
High-Memory Quadruple Extra Large  $1.800    22.5            26.0           8              3.25               68.4         1690       64 bit
High-processor Medium              $0.165    2.1             5.0            2              2.50               1.7          350        32 bit
High-processor Extra Large         $0.660    8.3             20.0           8              2.50               7.0          1690       64 bit
Cluster Quadruple Extra Large      $1.300    16.3            33.5           16             2.09               23.0         1690       64 bit
Eight Extra Large                  $2.400    30.0            88.0           32             2.75               60.5         1690       64 bit
Agenda
• Virtual Machines
• Administrivia
• Virtual Memory
• Demand Paging
• Virtual Machines Revisited
• Patterson's Projects (if time is available)
• Summary
PATTERSON'S PROJECTS

Years    Title                                                       Profs  Students
1977-81  X-Tree: A Tree-Structured Multiprocessor                      3      12
1980-84  RISC: Reduced Instruction Set Computer                        3      17
1983-86  SOAR: Smalltalk On A RISC                                     2      12
1985-89  SPUR: Symbolic Processing Using RISCs                         6      21
1988-92  RAID: Redundant Array of Inexpensive Disks                    3      16
1993-98  NOW: Network of Workstations                                  4      25
1997-02  IRAM: Intelligent RAM                                         3      12
2001-05  ROC: Recovery Oriented Computing                              2      11
2005-10  RAD Lab: Reliable Adaptive Distributed Systems Laboratory     7      30
2007-12  Par Lab: Parallel Computing Laboratory                        9      45
2011-16  AMP Lab: Algorithms, Machines, People                         6      30

* Each helped lead to a multibillion dollar industry
* Parallel Processor Projects
10th ANNIVERSARY REUNIONS
Network of Workstations (NOW): 1993-98
NOW Team 2008: L-R, front row: Prof. Tom Anderson†‡ (Washington), Prof. Rich Martin‡ (Rutgers),
Prof. David Culler*†‡ (Berkeley), Prof. David Patterson*† (Berkeley).
Middle row: Eric Anderson (HP Labs), Prof. Mike Dahlin†‡ (Texas), Prof. Armando Fox‡ (Berkeley),
Drew Roselli (Microsoft), Prof. Andrea Arpaci-Dusseau‡ (Wisconsin), Lok Liu, Joe Hsu.
Last row: Prof. Matt Welsh‡ (Harvard/Google), Eric Fraser, Chad Yoshikawa, Prof. Eric Brewer*†‡
(Berkeley/Inktomi founder), Prof. Jeanna Neefe Matthews (Clarkson), Prof. Amin Vahdat‡ (UCSD),
Prof. Remzi Arpaci-Dusseau (Wisconsin), Prof. Steve Lumetta (Illinois).
* 3 NAE members   † 4 ACM fellows   ‡ 9 NSF CAREER Awards
EECS Faculty on Projects now in Nat'l Academy of Engineering
1. Eric Brewer – NOW
2. David Culler – NOW
3. James Demmel* – Par Lab
4. David Hodges – SPUR
5. Michael Jordan* – RAD Lab, AMP Lab
6. Randy Katz – SPUR, RAID, RAD Lab, AMP
7. John Ousterhout – RISC, SOAR, SPUR, RAID
8. David Patterson* – all
9. Michael Stonebraker – RAID
10. Scott Shenker – RAD Lab, AMP Lab
* Also in National Academy of Sciences
And in Conclusion, …
• Virtual Memory, Paging for Isolation and
Protection, help Virtual Machines share memory
– Can think of as another level of memory hierarchy,
but not really used like caches are today
– Not really routinely paging to disk today
• Virtual Machines as even greater level of
protection to allow greater level of sharing
– Enables fine control, allocation, software distribution,
multiple price points for Cloud Computing