Page Replacement Algorithms

Virtual Memory Management
Fundamental issues: a recap

Key concept: Demand paging
- Load pages into memory only when a page fault occurs

Issues:
- Placement strategies: place pages anywhere – no placement policy required
- Replacement strategies: what to do when there exist more jobs than can fit in memory
- Load control strategies: determining how many jobs can be in memory at one time

[Figure: physical memory shared by the Operating System and User Programs 1 through n]
Page Replacement Algorithms
Concept

Typically Σi VASi >> physical memory (the sum of all virtual address spaces far exceeds physical memory)
- With demand paging, physical memory fills quickly
- When a process faults and memory is full, some page must be swapped out
  - Handling a page fault now requires 2 disk accesses, not 1!
  - Though writes are more efficient than reads (why?)

Which page should be replaced?
- Local replacement — replace a page of the faulting process
- Global replacement — possibly replace the page of another process
Page Replacement Algorithms
Evaluation methodology

- Record a trace of the pages accessed by a process
  - Example: the (virtual) address trace
    (3,0), (1,9), (4,1), (2,1), (5,3), (2,0), (1,9), (2,4), (3,1), (4,8)
  - generates the page trace
    3, 1, 4, 2, 5, 2, 1, 2, 3, 4   (represented as c, a, d, b, e, b, a, b, c, d)
- Hardware can tell the OS when a new page is loaded into the TLB
  - Set a used bit in the page table entry
  - Increment or shift a register
- Simulate the behavior of a page replacement algorithm on the trace and record the number of page faults generated
  - Fewer faults → better performance
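To make the methodology concrete, here is a minimal trace-driven simulator in C. It replays the page trace above against a fixed set of frames and counts faults under the FIFO policy introduced later in the deck. The frame count, the preloaded frame contents, and all identifiers are illustrative assumptions, not part of the slides.

/* Minimal sketch: replay a page trace and count page faults under FIFO.
 * Assumes 4 frames preloaded with a, b, c, d, as in the deck's examples. */
#include <stdio.h>

#define NFRAMES 4

int main(void) {
    const char trace[] = "cadbebabcd";              /* c,a,d,b,e,b,a,b,c,d */
    char frames[NFRAMES] = { 'a', 'b', 'c', 'd' };  /* assumed already resident */
    int next_victim = 0;                            /* FIFO pointer */
    int faults = 0;

    for (int t = 0; trace[t] != '\0'; t++) {
        int hit = 0;
        for (int f = 0; f < NFRAMES; f++)
            if (frames[f] == trace[t]) { hit = 1; break; }
        if (!hit) {
            faults++;
            frames[next_victim] = trace[t];          /* evict the oldest page */
            next_victim = (next_victim + 1) % NFRAMES;
        }
    }
    printf("page faults: %d\n", faults);             /* prints 5 on this trace */
    return 0;
}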
Optimal Page Replacement
Clairvoyant replacement

Replace the page that won't be needed for the longest time in the future

Example (4 page frames, initially holding a, b, c, d):

Time       0  1  2  3  4  5  6  7  8  9  10
Requests      c  a  d  b  e  b  a  b  c  d
Frame 0    a  a  a  a  a  a  a  a  a  a  d
Frame 1    b  b  b  b  b  b  b  b  b  b  b
Frame 2    c  c  c  c  c  c  c  c  c  c  c
Frame 3    d  d  d  d  d  e  e  e  e  e  e
Faults                    •              •

Time each resident page is needed next, at the two faults:
- At time 5 (request e): a = 7, b = 6, c = 9, d = 10, so d, the page needed farthest in the future, is replaced
- At time 10 (request d): a's next use (time 15, beyond the trace shown) is the farthest of the resident pages, so a is replaced

2 faults in the window shown.
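A sketch of how the clairvoyant choice could be coded: scan forward in the remaining trace and evict the resident page whose next use is farthest away (or absent). Function and parameter names are illustrative, not from the slides.

/* Belady's optimal victim choice: needs the full future reference string. */
#include <limits.h>

/* Returns the index in frames[] of the page to evict.
 * trace[pos..len-1] is the remaining reference string. */
int opt_victim(const char frames[], int nframes,
               const char trace[], int len, int pos)
{
    int victim = 0, farthest = -1;
    for (int f = 0; f < nframes; f++) {
        int next = INT_MAX;                      /* page never used again */
        for (int t = pos; t < len; t++)
            if (trace[t] == frames[f]) { next = t; break; }
        if (next > farthest) { farthest = next; victim = f; }
    }
    return victim;
}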
Local Page Replacement
FIFO replacement

- Simple to implement
  - Keep the resident pages in a frame list ordered by load time; a single pointer to the oldest frame suffices
- Performance with 4 page frames:

Time       0  1  2  3  4  5  6  7  8  9  10
Requests      c  a  d  b  e  b  a  b  c  d
Frame 0    a  a  a  a  a  e  e  e  e  e  d
Frame 1    b  b  b  b  b  b  b  a  a  a  a
Frame 2    c  c  c  c  c  c  c  c  b  b  b
Frame 3    d  d  d  d  d  d  d  d  d  c  c
Faults                    •     •  •  •  •

5 faults in the window shown: each fault evicts the page that has been resident the longest, regardless of how recently it was used.
Least Recently Used Page Replacement

- Use the recent past as a predictor of the near future
- Replace the page that hasn't been referenced for the longest time
- Performance with 4 page frames:

Time       0  1  2  3  4  5  6  7  8  9  10
Requests      c  a  d  b  e  b  a  b  c  d
Frame 0    a  a  a  a  a  a  a  a  a  a  a
Frame 1    b  b  b  b  b  b  b  b  b  b  b
Frame 2    c  c  c  c  c  e  e  e  e  e  d
Frame 3    d  d  d  d  d  d  d  d  d  c  c
Faults                    •              •  •

Time each resident page was last used, at the three faults:
- At time 5 (request e): a = 2, b = 4, c = 1, d = 3, so c, the least recently used page, is replaced
- At time 9 (request c): a = 7, b = 8, e = 5, d = 3, so d is replaced
- At time 10 (request d): a = 7, b = 8, e = 5, c = 9, so e is replaced

3 faults in the window shown.
Least Recently Used Page Replacement
Implementation

Maintain a "stack" of recently used pages: on every reference, pull the referenced page to the top of the stack. The page at the bottom of the stack is then the least recently used page, i.e., the one to replace on a fault.

The frame contents for this trace are the same as on the previous slide (faults at times 5, 9, and 10). The stack evolves as follows (top row = most recently used; only pages referenced since time 1 are shown):

Time               1  2  3  4  5  6  7  8  9  10
Request            c  a  d  b  e  b  a  b  c  d
Stack, top         c  a  d  b  e  b  a  b  c  d
                      c  a  d  b  e  b  a  b  c
                         c  a  d  d  e  e  a  b
Stack, bottom               c  a  a  d  d  e  a
Page to replace                c              d  e
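A compact sketch of this stack bookkeeping in C. The array-based move-to-front shown here is illustrative; a real kernel would use a doubly linked list so the move is O(1). All names and sizes are assumptions, not from the slides.

/* LRU with an explicit "stack": index 0 holds the MRU page, the last
 * occupied slot holds the LRU victim. */
#include <stdio.h>
#include <string.h>

#define NFRAMES 4

static char lru_stack[NFRAMES];   /* lru_stack[0] = MRU, lru_stack[depth-1] = LRU */
static int depth = 0;
static int faults = 0;

static void reference(char page)
{
    int i;
    for (i = 0; i < depth && lru_stack[i] != page; i++)
        ;
    if (i == depth) {                       /* miss */
        faults++;
        if (depth < NFRAMES) depth++;       /* a free frame is still available */
        i = depth - 1;                      /* otherwise overwrite the LRU slot */
    }
    memmove(&lru_stack[1], &lru_stack[0], i);  /* shift the pages above it down */
    lru_stack[0] = page;                    /* move the referenced page to the top */
}

int main(void)
{
    const char *trace = "cadbebabcd";
    for (int t = 0; trace[t]; t++)
        reference(trace[t]);
    printf("page faults: %d\n", faults);    /* prints 7: four cold-start misses plus
                                               the three faults in the slide's example
                                               (which pre-loads a, b, c, d) */
    return 0;
}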
What is the goal of a page replacement algorithm?
- A. Make life easier for the OS implementer
- B. Reduce the number of page faults
- C. Reduce the penalty for page faults when they occur
- D. Minimize the CPU time of the algorithm
Approximate LRU Page Replacement
The Clock algorithm

- Maintain a circular list of pages resident in memory
  - Use a clock (or used/referenced) bit to track how often a page is accessed
  - The bit is set whenever a page is referenced
- Clock hand sweeps over pages looking for one with used bit = 0
  - Replace pages that haven't been referenced for one complete revolution of the clock

[Figure: the clock — a circular list of the resident pages (e.g., pages 7, 1, 3, 4, 0), each shown with its resident bit, used bit, and frame number from the page table]

func Clock_Replacement
begin
  while (victim page not found) do
    if (used bit for current page = 0) then
      replace current page
    else
      reset used bit
    end if
    advance clock pointer
  end while
end Clock_Replacement
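A C rendering of the Clock_Replacement pseudocode above. The data layout (arrays indexed by frame number, a global hand) is an illustrative assumption.

/* Clock replacement: frames[] is the circular list, used[] mirrors the used
 * bit in each page table entry, and `hand` is the clock pointer. */
#define NFRAMES 4

static char frames[NFRAMES] = { 'a', 'b', 'c', 'd' };
static int  used[NFRAMES]   = { 1, 1, 1, 1 };
static int  hand = 0;                     /* clock pointer */

/* Bring `page` into memory, evicting the first page the hand finds with
 * used bit = 0; returns the frame index that was replaced. */
static int clock_replace(char page)
{
    for (;;) {
        if (used[hand] == 0) {            /* victim found: replace it */
            frames[hand] = page;
            used[hand] = 1;               /* the new page counts as referenced */
            int victim_frame = hand;
            hand = (hand + 1) % NFRAMES;  /* advance clock pointer */
            return victim_frame;
        }
        used[hand] = 0;                   /* clear the bit: one more revolution */
        hand = (hand + 1) % NFRAMES;      /* advance clock pointer */
    }
}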
Clock Page Replacement
Example

Time       0  1  2  3  4  5  6  7  8  9  10
Requests      c  a  d  b  e  b  a  b  c  d
Frame 0    a  a  a  a  a  e  e  e  e  e  d
Frame 1    b  b  b  b  b  b  b  b  b  b  b
Frame 2    c  c  c  c  c  c  c  a  a  a  a
Frame 3    d  d  d  d  d  d  d  d  d  c  c
Faults                    •     •     •  •

Used bits of the page table entries for the resident pages:

Time 0-4:  a=1  b=1  c=1  d=1
Time 5:    e=1  b=0  c=0  d=0   (fault: the hand clears a, b, c, d in turn, comes back around, and replaces a with e)
Time 6:    e=1  b=1  c=0  d=0   (b referenced)
Time 7:    e=1  b=0  a=1  d=0   (fault: b's bit is cleared, then c, with used bit 0, is replaced by a)
Time 8:    e=1  b=1  a=1  d=0   (b referenced)
Time 9:    e=1  b=1  a=1  c=1   (fault: d, with used bit 0, is replaced by c)
Time 10:   d=1  b=0  a=0  c=0   (fault: the hand clears e, b, a, c in turn, comes back around, and replaces e with d)

4 faults in the window shown.
Optimizing Approximate LRU Replacement
The Second Chance algorithm

- There is a significant cost to replacing "dirty" pages
- Modify the Clock algorithm to allow dirty pages to always survive one sweep of the clock hand
  - Use both the dirty bit and the used bit to drive replacement

What the clock sweep does to a page's (used, dirty) bits:

  Before sweep        After sweep
  used  dirty         used  dirty
   0     0            replace the page
   0     1             0     0   (schedule the page for write-back)
   1     0             0     0
   1     1             0     1

[Figure: clock of resident pages, each page table entry showing resident bit, used bit, dirty bit, frame number — e.g., Page 7: 1 1 0, frame 0; Page 1: 1 0 0, frame 5; Page 4: 1 0 0, frame 3; Page 3: 1 1 1, frame 9; Page 0: 1 1 1, frame 4]
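A sketch of the second-chance sweep in C, following the transition table above. The frame structure, the write-back hook, and all names are illustrative assumptions.

/* Second chance: like the clock algorithm, but a dirty page survives one
 * extra pass while its write-back is scheduled. */
#define NFRAMES 4

struct frame {
    char page;
    int used;
    int dirty;
};

static struct frame frames[NFRAMES];
static int hand = 0;

static void schedule_writeback(struct frame *f) { (void)f; /* queue an async disk write */ }

static int second_chance_replace(char page)
{
    for (;;) {
        struct frame *f = &frames[hand];
        if (f->used == 0 && f->dirty == 0) {      /* (0,0): replace this page */
            f->page = page;
            f->used = 1;
            f->dirty = 0;
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        if (f->used == 0 && f->dirty == 1) {      /* (0,1) -> (0,0): start write-back */
            schedule_writeback(f);
            f->dirty = 0;
        } else {                                  /* (1,0) -> (0,0), (1,1) -> (0,1) */
            f->used = 0;
        }
        hand = (hand + 1) % NFRAMES;              /* advance the hand and keep sweeping */
    }
}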
The Second Chance Algorithm
Example

A "w" marks a write reference (e.g., aw is a write to page a); an asterisk marks a dirty page whose write-back to disk has been scheduled by the sweep.

Time       0  1  2  3  4  5  6  7  8  9  10
Requests      c  aw d  bw e  b  aw b  c  d
Frame 0    a  a  a  a  a  a  a  a  a  a  a
Frame 1    b  b  b  b  b  b  b  b  b  b  d
Frame 2    c  c  c  c  c  e  e  e  e  e  e
Frame 3    d  d  d  d  d  d  d  d  d  c  c
Faults                    •              •  •

Page table entries (used dirty, page) for the resident pages:

Time 0:   10 a   10 b   10 c   10 d
Time 2:   a is written → 11 a   10 b   10 c   10 d
Time 4:   b is written → 11 a   11 b   10 c   10 d
Time 5:   fault on e; the hand sweeps a (11→01), b (11→01), c (10→00), d (10→00), then a (01→00, write-back scheduled: a*), b (01→00, b*), and replaces c:
          00 a*  00 b*  10 e   00 d
Time 6:   b referenced → 00 a*  10 b*  10 e   00 d
Time 7:   a written again → 11 a   10 b*  10 e   00 d
Time 8:   b referenced (bits unchanged)
Time 9:   fault on c; d still has (used, dirty) = (0, 0), so it is replaced at once:
          11 a   10 b   10 e   10 c
Time 10:  fault on d; the hand sweeps a (11→01), b (10→00), e (10→00), c (10→00), then a (01→00, a*), and replaces b:
          00 a*  10 d   00 e   00 c

3 faults in the window shown.
The Problem With Local Page Replacement
How much memory do we allocate to a process?

Request trace: a, b, c, d repeated (times 1-12), FIFO replacement (LRU behaves identically on this trace).

With 3 page frames (initially a, b, c):

Time       0  1  2  3  4  5  6  7  8  9  10 11 12
Requests      a  b  c  d  a  b  c  d  a  b  c  d
Frame 0    a  a  a  a  d  d  d  c  c  c  b  b  b
Frame 1    b  b  b  b  b  a  a  a  d  d  d  c  c
Frame 2    c  c  c  c  c  c  b  b  b  a  a  a  d
Faults                 •  •  •  •  •  •  •  •  •

Every reference from time 4 on faults: 9 faults.

With 4 page frames (initially a, b, c and one free frame):

Time       0  1  2  3  4  5  6  7  8  9  10 11 12
Requests      a  b  c  d  a  b  c  d  a  b  c  d
Frame 0    a  a  a  a  a  a  a  a  a  a  a  a  a
Frame 1    b  b  b  b  b  b  b  b  b  b  b  b  b
Frame 2    c  c  c  c  c  c  c  c  c  c  c  c  c
Frame 3    -  -  -  -  d  d  d  d  d  d  d  d  d
Faults                 •

One additional frame: only 1 fault.
Page Replacement Algorithms
Performance

Local page replacement
- LRU — Ages pages based on when they were last used
- FIFO — Ages pages based on when they're brought into memory

Towards global page replacement ... with a variable number of page frames allocated to each process

The principle of locality
- 90% of the execution of a program is sequential
- Most iterative constructs consist of a relatively small number of instructions
- When processing large data structures, the dominant cost is sequential processing of individual structure elements
- Temporal vs. physical locality
Optimal Page Replacement
For processes with a variable number of frames

VMIN — Replace a page that is not referenced in the next τ accesses

Example: τ = 4

Time       0  1  2  3  4  5  6  7  8  9  10
Requests      c  c  d  b  c  e  c  e  a  d

A page is kept in memory only while its next reference lies within the next τ = 4 accesses; the moment that stops being true, the page is dropped, so the number of resident pages varies over time. On this trace the faulting references are c (time 1), b (4), e (6), a (9), and d (10): 5 faults. Page d is already resident at time 0 (it was referenced just before the trace begins, within τ of its use at time 3); c stays resident from time 1 through time 7; e from time 6 through time 8; b and a are dropped immediately after their single use.
Explicitly Using Locality
The working set model of page replacement

- Assume recently referenced pages are likely to be referenced again soon...
- ... and only keep those pages recently referenced in memory (called the working set)
  - Thus pages may be removed even when no page fault occurs
  - The number of frames allocated to a process will vary over time
- A process is allowed to execute only if its working set fits into memory
  - The working set model performs implicit load control
Working Set Page Replacement
Implementation

- Keep track of the last τ references
  - The pages referenced during the last τ memory accesses are the working set
  - τ is called the window size
- What if τ is too small? Too large?

Example: working set computation, τ = 4 references:

Time       0  1  2  3  4  5  6  7  8  9  10
Requests      c  c  d  b  c  e  c  e  a  d

At every point the working set is exactly the set of pages referenced in the last τ = 4 accesses (the trace shown, plus references just before it begins at t = 0, -1, and -2, one of them to page d). Pages drop out of memory as soon as they leave the window, even if no fault occurs, so the number of resident pages varies (here between two and four). The faulting references are c (time 1), b (4), e (6), a (9), and d (10): 5 faults. Residency over the trace:
- c: resident from its fault at time 1 through time 10
- d: resident from the start, drops out at time 7 (its last reference inside the window was at time 3), faults back in at time 10
- b: resident from time 4 through time 7
- e: resident from time 6 through time 10
- a: resident from time 9 onward
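A sketch of the working-set bookkeeping in C: record the virtual time of each page's last reference and, on a periodic scan (or at a fault), reclaim any resident page that has fallen out of the last τ references. Sizes and names are illustrative assumptions.

/* Working-set scan over per-page last-reference times. */
#include <stdbool.h>

#define NPAGES 64
#define TAU    4                      /* window size, in memory references */

static long last_ref[NPAGES];         /* virtual time of each page's last use */
static bool resident[NPAGES];
static long vtime;                    /* virtual time = number of references so far */

static void on_reference(int page)
{
    vtime++;
    last_ref[page] = vtime;
    resident[page] = true;            /* fault handling omitted in this sketch */
}

/* Called periodically or on a fault: evict pages outside the window. */
static void trim_working_set(void)
{
    for (int p = 0; p < NPAGES; p++)
        if (resident[p] && last_ref[p] <= vtime - TAU)
            resident[p] = false;      /* page left the working set: reclaim its frame */
}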
Page-Fault-Frequency Page Replacement
An alternate working set computation

- Explicitly attempt to minimize page faults
  - When page fault frequency is high — increase working set
  - When page fault frequency is low — decrease working set
- Algorithm:
  - Keep track of the rate at which faults occur
  - When a fault occurs, compute the time since the last page fault
  - Record the time, tlast, of the last page fault
  - If the time between page faults is "large" then reduce the working set
    If tcurrent – tlast > τ, then remove from memory all pages not referenced in [tlast, tcurrent]
  - If the time between page faults is "small" then increase the working set
    If tcurrent – tlast ≤ τ, then add the faulting page to the working set
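A sketch of this fault-time policy in C: on each fault, compare the gap since the previous fault against the threshold and either simply admit the faulting page or shrink the resident set to the pages touched since the last fault. All names and sizes are illustrative assumptions.

/* Page-fault-frequency adjustment, driven from the fault handler. */
#define NPAGES 64
#define TAU    2                       /* threshold between "frequent" and "rare" faults */

static long last_ref[NPAGES];          /* virtual time of each page's last reference */
static int  resident[NPAGES];
static long t_last;                    /* time of the previous page fault */

static void on_page_fault(int page, long t_current)
{
    if (t_current - t_last > TAU) {
        /* Faults are rare: shrink the working set to the pages
         * referenced in [t_last, t_current]. */
        for (int p = 0; p < NPAGES; p++)
            if (resident[p] && last_ref[p] < t_last)
                resident[p] = 0;
    }
    /* In both cases the faulting page joins the working set. */
    resident[page] = 1;
    last_ref[page] = t_current;
    t_last = t_current;
}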
Page-Fault-Frequency Page Replacement
Example, window size = 2

- If tcurrent – tlast > 2, remove pages not referenced in [tlast, tcurrent] from the working set
- If tcurrent – tlast ≤ 2, just add the faulting page to the working set

Time          0  1  2  3  4  5  6  7  8  9  10
Requests         c  c  d  b  c  e  c  e  a  d
Faults           •        •     •        •  •
tcur – tlast     1        3     2        3  1

Faults occur on references c (time 1), b (4), e (6), a (9), and d (10). At time 4 the gap since the previous fault is 3 > 2, so every page not referenced in [1, 4] is dropped and the resident set becomes {c, d, b}. At time 6 the gap is 2, so e is simply added. At time 9 the gap is 3, so the resident set shrinks to the pages referenced in [6, 9], i.e. {e, c, a}. At time 10 the gap is 1, so d is simply added.
Load Control
Fundamental tradeoff

- High multiprogramming level
  - MPLmax = (number of page frames) / (minimum number of frames required for a process to execute)
- Low paging overhead
  - MPLmin = 1 process
- Issues
  - What criterion should be used to determine when to increase or decrease the MPL?
  - Which task should be swapped out if the MPL must be reduced?
Load Control
How not to do it: Base load control on CPU utilization

- Assume memory is nearly full
- A chain of page faults occurs
  - A queue of processes forms at the paging device
- CPU utilization falls
- The operating system increases the MPL
  - New processes fault, taking memory away from existing processes
- CPU utilization goes to 0, so the OS increases the MPL further...

System is thrashing — spending all of its time paging

[Figure: processes queued at the paging device while the CPU and other I/O devices sit idle]
Load Control
Thrashing

- Thrashing can be ameliorated by local page replacement
- Better criteria for load control: adjust the MPL so that:
  - mean time between page faults (MTBF) = page fault service time (PFST)
  - Σ WSi = size of memory

[Figure: CPU utilization (and the MTBF/PFST ratio) plotted against the multiprogramming level; utilization rises toward 1.0, peaks around Nmax / NI/O-BALANCE, and collapses beyond that as thrashing sets in]
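A rough sketch of the MTBF/PFST criterion above as a load-control decision; the function and its inputs are illustrative assumptions, not from the slides.

/* Nudge the multiprogramming level toward MTBF = PFST. */
static int adjust_mpl(int mpl, double mtbf, double pfst)
{
    if (mtbf < pfst && mpl > 1)
        return mpl - 1;   /* faulting faster than the paging device can serve: swap a process out */
    if (mtbf > pfst)
        return mpl + 1;   /* paging device has slack: admit another process */
    return mpl;           /* balanced: leave the MPL alone */
}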
Load Control
Thrashing

When the multiprogramming level should be decreased, which process should be swapped out?
- Lowest priority process?
- Smallest process?
- Largest process?
- Oldest process?
- Faulting process?

[Figure: process state diagram (ready queue, running, waiting on semaphore/condition queues, suspended queue) with physical memory and the paging disk]