Introduction to Computer Systems 15-213/18

Seoul National University
Virtual Memory: Systems
Virtual Memory: Systems
Virtual memory questions and answers
Simple memory system example
Case study: Core i7/Linux memory system
Virtual memory reminder/review
Programmer’s view of virtual memory
 Each process has its own private linear address space
 Cannot be corrupted by other processes
System view of virtual memory
 Uses memory efficiently by caching virtual memory pages
  Efficient only because of locality
 Simplifies memory management and programming
 Simplifies protection by providing a convenient interpositioning point to check permissions

Recall: Address Translation With a Page Table
[Figure: The page table base register (PTBR) holds the page table address for the process. The n-bit virtual address splits into a virtual page number (VPN, bits n-1..p) and a virtual page offset (VPO, bits p-1..0). The VPN indexes the page table; each entry holds a valid bit and a physical page number (PPN). Valid bit = 0 means the page is not in memory (page fault). The m-bit physical address is the PPN (bits m-1..p) concatenated with the physical page offset (PPO = VPO).]
Recall: Address Translation: Page Hit
[Figure: on a page hit, the VA, PTEA, PTE, PA, and data flow between the CPU, the MMU on the CPU chip, and cache/memory in the five numbered steps below]
1) Processor sends virtual address to MMU
2-3) MMU fetches PTE from page table in memory
4) MMU sends physical address to cache/memory
5) Cache/memory sends data word to processor
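To make steps 1-5 concrete, here is a minimal C sketch of single-level translation with a flat page table. The PAGE_SHIFT value, the PTE layout, and the ptbr/page_fault names are illustrative assumptions, not the actual hardware interface.

```c
#include <stdint.h>

#define PAGE_SHIFT 12                 /* assume 4 KB pages (2^12 bytes) */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

typedef struct {
    uint64_t valid : 1;               /* valid bit = 0 means page fault */
    uint64_t ppn   : 40;              /* physical page number           */
} pte_t;

/* Hypothetical stand-ins for hardware/OS state. */
extern pte_t *ptbr;                   /* page table base register       */
extern void page_fault(uint64_t va);  /* OS page-fault handler          */

/* Translate a virtual address the way the MMU does on a page hit:
 * split the VA into VPN and VPO, index the page table with the VPN,
 * and concatenate the PPN from the PTE with the VPO (= PPO). */
uint64_t translate(uint64_t va)
{
    uint64_t vpn = va >> PAGE_SHIFT;  /* virtual page number  */
    uint64_t vpo = va & PAGE_MASK;    /* virtual page offset  */

    pte_t pte = ptbr[vpn];            /* steps 2-3: fetch PTE (PTEA = PTBR + VPN) */
    if (!pte.valid)
        page_fault(va);               /* not resident: trap to the OS   */

    return ((uint64_t)pte.ppn << PAGE_SHIFT) | vpo;   /* step 4: PA = PPN : PPO */
}
```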
Question #1
Are the PTEs cached like other memory accesses?
Yes (and no: see next question)
Page tables in memory, like other data
[Figure: page table entries live in memory like other data, so a PTE address (PTEA) issued by the MMU may hit in the L1 cache or miss and go to memory, and the resulting physical address (PA) likewise may hit in L1 or miss]
VA: virtual address, PA: physical address, PTE: page table entry, PTEA: PTE address
Question #2
Isn’t it slow to have to go to memory twice every time?
Yes, it would be… so, real MMUs don’t
Speeding up Translation with a TLB
Page table entries (PTEs) are cached in L1 like any other memory word
 PTEs may be evicted by other data references
 PTE hit still requires a small L1 delay
Solution: Translation Lookaside Buffer (TLB)
 Small, dedicated, super-fast hardware cache of PTEs in the MMU
 Contains complete page table entries for a small number of pages
TLB Hit
[Figure: the MMU sends the VPN to the TLB on the CPU chip and gets the PTE back directly, then sends the PA to cache/memory, which returns the data; the in-memory page table is never touched]
A TLB hit eliminates a memory access
TLB Miss
[Figure: the TLB lookup for the VPN misses, so the MMU sends a PTEA to cache/memory, reads the PTE back, then forms the PA and fetches the data]
A TLB miss incurs an additional memory access (the PTE)
Fortunately, TLB misses are rare. Why?
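They are rare because of locality: a single page covers thousands of consecutive addresses, so the same translation is reused many times. As a rough sketch of the lookup the MMU performs before touching the page table, assuming a 16-set, 4-way TLB with invented structure and function names:

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_SETS  16                      /* assumed: 16 sets            */
#define TLB_WAYS   4                      /* assumed: 4-way associative  */

typedef struct {
    bool     valid;
    uint64_t tag;                         /* TLBT: high bits of the VPN  */
    uint64_t ppn;                         /* cached translation          */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_SETS][TLB_WAYS];

/* Look up a VPN: the low bits of the VPN select the set (TLBI), the
 * remaining bits are compared against each way's tag (TLBT).
 * Returns true on a hit and writes the PPN; on a miss the MMU must
 * fetch the PTE from the in-memory page table (the extra access). */
bool tlb_lookup(uint64_t vpn, uint64_t *ppn)
{
    uint64_t tlbi = vpn % TLB_SETS;       /* set index */
    uint64_t tlbt = vpn / TLB_SETS;       /* tag       */

    for (int way = 0; way < TLB_WAYS; way++) {
        tlb_entry_t *e = &tlb[tlbi][way];
        if (e->valid && e->tag == tlbt) { /* TLB hit   */
            *ppn = e->ppn;
            return true;
        }
    }
    return false;                         /* TLB miss  */
}
```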
Question #3
Isn’t the page table huge? How can it be stored in RAM?
Yes, it would be… so, real page tables aren’t simple arrays
Multi-Level Page Tables
Suppose:
 4KB (2^12) page size, 64-bit address space, 8-byte PTE
Problem:
 Would need a 32,000 TB page table!
  2^64 * 2^-12 * 2^3 = 2^55 bytes (worked out in the sketch below)
Common solution:
 Multi-level page tables
 Example: 2-level page table
  Level 1 table: each PTE points to a Level 2 page table (always memory resident)
  Level 2 table: each PTE points to a page (paged in and out like any other data)
[Figure: a Level 1 table whose entries point to Level 2 tables]
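To check the arithmetic in the problem statement, here is a short program using the same constants (64-bit address space, 4 KB pages, 8-byte PTEs):

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* One PTE per virtual page: 2^64 addresses / 2^12 bytes per page = 2^52 PTEs. */
    uint64_t num_ptes   = 1ULL << (64 - 12);
    /* 2^52 PTEs * 2^3 bytes per PTE = 2^55 bytes. */
    uint64_t table_size = num_ptes << 3;

    printf("PTEs: 2^52 = %llu\n", (unsigned long long)num_ptes);
    printf("Flat page table: 2^55 bytes = %llu TB\n",
           (unsigned long long)(table_size >> 40));  /* ~32,768 TB (TB = 2^40 bytes) */
    return 0;
}
```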
A Two-Level Page Table Hierarchy
[Figure: 32 bit addresses, 4KB pages, 4-byte PTEs. Level 1 PTEs 0 and 1 point to level 2 page tables (PTE 0 ... PTE 1023 each) that map the 2K allocated VM pages for code and data (VP 0 - VP 2047). Level 1 PTEs 2-7 are null, covering the gap of 6K unallocated VM pages. Level 1 PTE 8 points to a level 2 table with 1023 null PTEs (1023 unallocated pages) and one PTE mapping the single allocated VM page for the stack (VP 9215). The remaining (1K - 9) level 1 PTEs are null.]
Translating with a k-level Page Table
[Figure: the n-bit virtual address is divided into k VPN fields (VPN 1 ... VPN k) plus the p-bit VPO. VPN 1 indexes the level 1 page table, whose PTE points to a level 2 page table, and so on; VPN k indexes the level k page table, whose PTE supplies the PPN. The m-bit physical address is the PPN concatenated with the PPO (= VPO).]
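A C sketch of the walk the figure describes; the 4-level depth, the 9-bit indices, the PTE layout, and the phys_to_virt/page_fault helpers are assumptions for illustration (a real walk also checks permission bits at every level).

```c
#include <stdint.h>

#define LEVELS      4                     /* assumed: k = 4 levels        */
#define INDEX_BITS  9                     /* assumed: 9-bit VPN per level */
#define PAGE_SHIFT 12                     /* assumed: 4 KB pages          */

typedef struct {
    uint64_t valid : 1;
    uint64_t ppn   : 40;                  /* next-level table, or the page */
} pte_t;

extern pte_t *phys_to_virt(uint64_t pa);  /* hypothetical: access a physical address */
extern void   page_fault(uint64_t va);

/* Walk the k-level page table: VPN i indexes the level-i table; every
 * PTE but the last points to the next level's table, and the level-k
 * PTE supplies the PPN of the target page. */
uint64_t walk(uint64_t table_base_pa, uint64_t va)
{
    uint64_t vpo = va & ((1ull << PAGE_SHIFT) - 1);

    for (int level = 1; level <= LEVELS; level++) {
        /* Extract this level's VPN field, most significant field first. */
        int shift = PAGE_SHIFT + (LEVELS - level) * INDEX_BITS;
        uint64_t index = (va >> shift) & ((1ull << INDEX_BITS) - 1);

        pte_t pte = phys_to_virt(table_base_pa)[index];
        if (!pte.valid)
            page_fault(va);               /* missing table or page */

        table_base_pa = (uint64_t)pte.ppn << PAGE_SHIFT;
    }
    return table_base_pa | vpo;           /* PA = PPN : PPO (PPO = VPO) */
}
```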
Virtual Memory: Systems
Virtual memory questions and answers
Simple memory system example
Case study: Core i7/Linux memory system
Review of Symbols
Basic Parameters
 N = 2^n : Number of addresses in virtual address space
 M = 2^m : Number of addresses in physical address space
 P = 2^p : Page size (bytes)
Components of the virtual address (VA)
 VPO: Virtual page offset
 VPN: Virtual page number
 TLBI: TLB index
 TLBT: TLB tag
Components of the physical address (PA)
 PPO: Physical page offset (same as VPO)
 PPN: Physical page number
 CO: Byte offset within cache line
 CI: Cache index
 CT: Cache tag
Simple Memory System Example
Addressing
 14-bit virtual addresses
 12-bit physical addresses
 Page size = 64 bytes
[Figure: virtual address bits 13-6 form the VPN and bits 5-0 the VPO; physical address bits 11-6 form the PPN and bits 5-0 the PPO]
Simple Memory System Page Table
Only the first 16 entries (out of 256) are shown
VPN  PPN  Valid     VPN  PPN  Valid
00   28   1         08   13   1
01   –    0         09   17   1
02   33   1         0A   09   1
03   02   1         0B   –    0
04   –    0         0C   –    0
05   16   1         0D   2D   1
06   –    0         0E   11   1
07   –    0         0F   0D   1
Simple Memory System TLB
16 entries
4-way associative
[Figure: the TLB tag (TLBT) is VPN bits 13-8 and the TLB index (TLBI) is VPN bits 7-6; bits 5-0 remain the VPO]
Set  Tag  PPN  Valid   Tag  PPN  Valid   Tag  PPN  Valid   Tag  PPN  Valid
0    03   –    0       09   0D   1       00   –    0       07   02   1
1    03   2D   1       02   –    0       04   –    0       0A   –    0
2    02   –    0       08   –    0       06   –    0       03   –    0
3    07   –    0       03   0D   1       0A   34   1       02   –    0
Simple Memory System Cache
16 lines, 4-byte block size
Physically addressed
Direct mapped
[Figure: the cache tag (CT) is PA bits 11-6, the cache index (CI) is PA bits 5-2, and the cache offset (CO) is PA bits 1-0]
Idx  Tag  Valid  B0  B1  B2  B3     Idx  Tag  Valid  B0  B1  B2  B3
0    19   1      99  11  23  11     8    24   1      3A  00  51  89
1    15   0      –   –   –   –      9    2D   0      –   –   –   –
2    1B   1      00  02  04  08     A    2D   1      93  15  DA  3B
3    36   0      –   –   –   –      B    0B   0      –   –   –   –
4    32   1      43  6D  8F  09     C    12   0      –   –   –   –
5    0D   1      36  72  F0  1D     D    16   1      04  96  34  15
6    31   0      –   –   –   –      E    13   1      83  77  1B  D3
7    16   1      11  C2  DF  03     F    14   0      –   –   –   –
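The field boundaries of this toy system can be captured as C macros; the macro names are ours, but the bit positions follow directly from the three figures above (64-byte pages, a 4-set TLB, and a 16-line direct-mapped cache with 4-byte blocks).

```c
/* Virtual address (14 bits): VPN = bits 13-6, VPO = bits 5-0 */
#define VPN(va)   (((va) >> 6) & 0xFF)
#define VPO(va)   ((va) & 0x3F)

/* TLB (16 entries, 4-way => 4 sets): TLBI = VPN bits 1-0, TLBT = VPN bits 7-2 */
#define TLBI(va)  (VPN(va) & 0x3)
#define TLBT(va)  (VPN(va) >> 2)

/* Physical address (12 bits): PPN = bits 11-6, PPO = bits 5-0 */
#define PPN(pa)   (((pa) >> 6) & 0x3F)
#define PPO(pa)   ((pa) & 0x3F)

/* Cache (16 lines, direct mapped, 4-byte blocks):
 * CO = bits 1-0, CI = bits 5-2, CT = bits 11-6 */
#define CO(pa)    ((pa) & 0x3)
#define CI(pa)    (((pa) >> 2) & 0xF)
#define CT(pa)    (((pa) >> 6) & 0x3F)

/* Forming a physical address from a translation: PA = PPN : PPO */
#define MAKE_PA(ppn, vpo)  (((ppn) << 6) | (vpo))
```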
Address Translation Example #1
Virtual Address: 0x03D4
 VA bits 13-0: 0 0 0 0 1 1 1 1 0 1 0 1 0 0
 VPN: 0x0F   TLBI: 0x3   TLBT: 0x03   TLB Hit? Y   Page Fault? N   PPN: 0x0D
Physical Address
 PA bits 11-0: 0 0 1 1 0 1 0 1 0 1 0 0
 CO: 0x0   CI: 0x5   CT: 0x0D   Cache Hit? Y   Byte: 0x36
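For completeness, a small standalone program that reproduces the bit arithmetic of this example; the PPN is simply taken as 0x0D from the TLB hit shown above.

```c
#include <stdio.h>

int main(void)
{
    unsigned va   = 0x03D4;            /* virtual address from the example */
    unsigned vpn  = (va >> 6) & 0xFF;  /* 0x0F                             */
    unsigned vpo  = va & 0x3F;         /* 0x14                             */
    unsigned tlbi = vpn & 0x3;         /* 0x3 -> TLB set 3                 */
    unsigned tlbt = vpn >> 2;          /* 0x03 -> matches a valid entry    */

    unsigned ppn  = 0x0D;              /* from the TLB hit                 */
    unsigned pa   = (ppn << 6) | vpo;  /* 0x354                            */
    unsigned co   = pa & 0x3;          /* 0x0                              */
    unsigned ci   = (pa >> 2) & 0xF;   /* 0x5 -> cache line 5, tag 0x0D    */
    unsigned ct   = (pa >> 6) & 0x3F;  /* 0x0D -> cache hit, B0 = 0x36     */

    printf("VPN=0x%02X VPO=0x%02X TLBI=0x%X TLBT=0x%02X\n", vpn, vpo, tlbi, tlbt);
    printf("PA=0x%03X CO=0x%X CI=0x%X CT=0x%02X\n", pa, co, ci, ct);
    return 0;
}
```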
Address Translation Example #2
Virtual Address: 0x0B8F
 VA bits 13-0: 0 0 1 0 1 1 1 0 0 0 1 1 1 1
 VPN: 0x2E   TLBI: 0x2   TLBT: 0x0B   TLB Hit? N   Page Fault? Y   PPN: TBD
Physical Address
 CO: ___   CI: ___   CT: ___   Cache Hit? ___   Byte: ___
Address Translation Example #3
Virtual Address: 0x0020
 VA bits 13-0: 0 0 0 0 0 0 0 0 1 0 0 0 0 0
 VPN: 0x00   TLBI: 0x0   TLBT: 0x00   TLB Hit? N   Page Fault? N   PPN: 0x28
Physical Address
 PA bits 11-0: 1 0 1 0 0 0 1 0 0 0 0 0
 CO: 0x0   CI: 0x8   CT: 0x28   Cache Hit? N   Byte: Mem
Virtual Memory: Systems
Virtual memory questions and answers
Simple memory system example
Case study: Core i7/Linux memory system
Intel Core i7 Memory System
[Figure: processor package with 4 cores. Each core has registers, instruction fetch, an MMU (address translation), an L1 d-cache (32 KB, 8-way), an L1 i-cache (32 KB, 8-way), an L1 d-TLB (64 entries, 4-way), an L1 i-TLB (128 entries, 4-way), an L2 unified cache (256 KB, 8-way), and an L2 unified TLB (512 entries, 4-way). The cores share an L3 unified cache (8 MB, 16-way), a QuickPath interconnect (4 links @ 25.6 GB/s each) to the other cores and the I/O bridge, and a DDR3 memory controller (3 x 64 bit @ 10.66 GB/s, 32 GB/s total, shared by all cores) to main memory.]
Review of Symbols
Basic Parameters
 N = 2^n : Number of addresses in virtual address space
 M = 2^m : Number of addresses in physical address space
 P = 2^p : Page size (bytes)
Components of the virtual address (VA)
 TLBI: TLB index
 TLBT: TLB tag
 VPO: Virtual page offset
 VPN: Virtual page number
Components of the physical address (PA)
 PPO: Physical page offset (same as VPO)
 PPN: Physical page number
 CO: Byte offset within cache line
 CI: Cache index
 CT: Cache tag
End-to-end Core i7 Address Translation
[Figure: the CPU issues a 32/64-bit virtual address; the 48 bits used for translation split into a 36-bit VPN and a 12-bit VPO. The low 4 bits of the VPN form the TLBI and the remaining 32 bits the TLBT for the L1 TLB (16 sets, 4 entries/set). On a TLB miss, the MMU walks the page tables, starting from CR3 and using the four 9-bit fields VPN1-VPN4, to fetch the PTE holding the 40-bit PPN. The physical address (PA) is the 40-bit PPN plus the 12-bit PPO; the PPN serves as the CT, and the PPO supplies the 6-bit CI and 6-bit CO for the L1 d-cache (64 sets, 8 lines/set). An L1 hit returns the result to the CPU; an L1 miss goes to L2, L3, and main memory.]
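The field widths in the figure can be written down as constants and helpers; this is a sketch assuming the widths shown (48-bit VA, 52-bit PA), and the identifier names are ours, not Intel's.

```c
#include <stdint.h>

/* Field widths from the figure: 48-bit VA = 36-bit VPN + 12-bit VPO,
 * 52-bit PA = 40-bit PPN (= CT) + 12-bit PPO (= 6-bit CI + 6-bit CO). */
#define VPO_BITS   12
#define TLBI_BITS   4        /* L1 TLB: 16 sets      */
#define CO_BITS     6        /* 64-byte cache blocks */
#define CI_BITS     6        /* L1 d-cache: 64 sets  */

static inline uint64_t vpn (uint64_t va) { return va >> VPO_BITS; }
static inline uint64_t vpo (uint64_t va) { return va & ((1ull << VPO_BITS) - 1); }
static inline uint64_t tlbi(uint64_t va) { return vpn(va) & ((1ull << TLBI_BITS) - 1); }
static inline uint64_t tlbt(uint64_t va) { return vpn(va) >> TLBI_BITS; }

static inline uint64_t co(uint64_t pa) { return pa & ((1ull << CO_BITS) - 1); }
static inline uint64_t ci(uint64_t pa) { return (pa >> CO_BITS) & ((1ull << CI_BITS) - 1); }
static inline uint64_t ct(uint64_t pa) { return pa >> (CO_BITS + CI_BITS); }

/* The four 9-bit page-table indices VPN1..VPN4, most significant first. */
static inline uint64_t vpn_i(uint64_t va, int i)   /* i = 1..4 */
{
    int shift = VPO_BITS + (4 - i) * 9;
    return (va >> shift) & 0x1FF;
}
```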
Core i7 Page Table Translation
[Figure: CR3 holds the 40-bit physical address of the L1 page table (page global directory). The 9-bit VPN 1 selects an L1 PTE, each covering a 512 GB region; it gives the 40-bit physical address of the L2 page table (page upper directory), indexed by the 9-bit VPN 2, with each L2 PTE covering a 1 GB region. The L2 PTE points to the L3 page table (page middle directory), indexed by VPN 3, each L3 PTE covering a 2 MB region, and the L3 PTE points to the L4 page table (page table), indexed by VPN 4, each L4 PTE covering a 4 KB region. The L4 PTE supplies the 40-bit physical address of the page (the PPN), and the 12-bit VPO is the offset into both the virtual and physical page, so the physical address is PPN : PPO.]
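The "region per entry" figures follow from the field widths: a level-i entry covers the 12 offset bits plus 9 bits for every level below it. A quick numeric check:

```c
#include <stdio.h>

int main(void)
{
    /* A level-i PTE covers 2^(12 + 9*(4-i)) bytes of virtual address space:
     * the 12-bit page offset plus 9 bits for each level below it. */
    for (int level = 1; level <= 4; level++) {
        unsigned long long region = 1ull << (12 + 9 * (4 - level));
        printf("L%d PTE covers %llu bytes\n", level, region);
    }
    /* Prints 512 GB (2^39), 1 GB (2^30), 2 MB (2^21), 4 KB (2^12). */
    return 0;
}
```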
Cute Trick for Speeding Up L1 Access
[Figure: address translation maps the 36-bit VPN to the 36-bit PPN, which becomes the CT, while the 12-bit VPO passes through unchanged as the PPO, supplying the 6-bit CI and 6-bit CO; the L1 cache is indexed with the CI/CO bits while the tag check waits for the translated CT]
Observation
 Bits that determine CI are identical in the virtual and physical address
 Can index into the cache while address translation is taking place
 Generally we hit in the TLB, so PPN bits (CT bits) are available next
 “Virtually indexed, physically tagged”
 Cache carefully sized to make this possible (see the sketch below)
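The sizing constraint behind the last bullet can be checked numerically: the CI and CO bits must fit entirely inside the page offset, so the number of sets times the block size can be at most the page size. A sketch using the L1 d-cache parameters from the case study (64 sets, 8 ways, 64-byte blocks):

```c
#include <assert.h>
#include <stdio.h>

int main(void)
{
    int page_size  = 4096;             /* 12-bit VPO/PPO */
    int block_size = 64;               /* 6 CO bits      */
    int sets       = 64;               /* 6 CI bits      */
    int ways       = 8;

    /* CI and CO come only from the page offset, which translation leaves
     * unchanged, so the cache can be indexed before the PPN (CT) arrives. */
    assert(sets * block_size <= page_size);

    printf("L1 size = %d bytes (sets * ways * block)\n", sets * ways * block_size);
    /* 64 * 8 * 64 = 32768 bytes = 32 KB: growing capacity means adding ways,
     * not sets, or the index would need PPN bits. */
    return 0;
}
```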
Summary
Memory hierarchies are an optimization resulting from a perfect match between memory technology and two types of program locality
 Temporal locality
 Spatial locality
The goal is to provide a “virtual” memory technology (an illusion) that has the access time of the highest-level memory with the size and cost of the lowest-level memory
Cache memory and virtual memory are instances of a memory hierarchy
A Common Framework for Memory Hierarchies
Question 1: Where can a Block be Placed? One place (direct-mapped), a few places (set associative), or any place (fully associative)
Question 2: How is a Block Found? Indexing (direct-mapped), limited search (set associative), full search (fully associative)
Question 3: Which Block is Replaced on a Miss? Typically LRU or random
Question 4: How are Writes Handled? Write-through or write-back