Introduction To Operating Systems: Class 10-1: Swapping - Policy (Ch. 22)
Resource-efficient Image Computing Systems Lab (SKKU IRIS Lab)
Jong Hwan Ko
Assistant Professor
School of Information and Communication Engineering
SungKyunKwan Univ.
2019. 11. 5
Swapping Out Pages
Page Replacement Policies
Simple policies
• OPT (Optimal)
• FIFO
• Random
History-based policies
• Based on recency
◦ LRU
◦ Clock (approximated LRU)
• Based on frequency
◦ LFU
OPT: Optimal Replacement Policy
Leads to the fewest number of misses overall
• Replaces the page that will be accessed furthest in the future
• Resulting in the fewest possible cache misses
Serves only as a comparison point, to know how close we are to “perfect”

Reference: 1  2  3  4  1  2  5  1  2  3  4  5
Frame 1:   1  1  1  1  1  1  1  1  1  3  3  3
Frame 2:      2  2  2  2  2  2  2  2  2  4  4
Frame 3:         3  4  4  4  5  5  5  5  5  5
Result: Miss Miss Miss Miss Hit Hit Miss Hit Hit Miss Miss Hit
PF rate = 7 / 12
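The trace above can be reproduced with a small simulator. This is a sketch (not from the slides) in Python; the function name is my own:

```python
def opt_faults(refs, frames):
    """Count page faults under optimal (Belady's) replacement."""
    cache, faults = set(), 0
    for i, page in enumerate(refs):
        if page in cache:
            continue
        faults += 1
        if len(cache) < frames:
            cache.add(page)
        else:
            # Evict the cached page whose next use is furthest in the
            # future (or that is never used again).
            future = refs[i + 1:]
            victim = max(cache,
                         key=lambda p: future.index(p) if p in future
                         else len(future))
            cache.remove(victim)
            cache.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(opt_faults(refs, 3))  # 7 faults -> PF rate 7/12
```

Note that OPT needs the entire future reference string, which is exactly why it is unimplementable in practice and serves only as a yardstick.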
FIFO
Pages are placed in a queue when they enter the system
When a replacement occurs, the page at the tail of the queue (the “first-in” page) is evicted
Obvious and simple to implement
Fair: all pages receive equal residency, but FIFO cannot judge the importance of pages
Why might this be good?
• Maybe the page brought in longest ago is no longer being used
Why might this be bad?
• Some pages may always be needed
FIFO: Belady’s Anomaly
Reference: 1 2 3 4 1 2 5 1 2 3 4 5
[Figure: page fault count (y-axis, 0 to 14) vs. page frame count (x-axis, 1 to 7) for the reference string above under FIFO; the fault count rises when going from 3 to 4 frames.]
With 3 page frames:
Reference: 1  2  3  4  1  2  5  1  2  3  4  5
Last in:   1  2  3  4  1  2  5  5  5  3  4  4
              1  2  3  4  1  2  2  2  5  3  3
First in:        1  2  3  4  1  1  1  2  5  5
Result: Miss Miss Miss Miss Miss Miss Miss Hit Hit Miss Miss Hit
PF rate = 9 / 12

With 4 page frames:
Reference: 1  2  3  4  1  2  5  1  2  3  4  5
Last in:   1  2  3  4  4  4  5  1  2  3  4  5
              1  2  3  3  3  4  5  1  2  3  4
                 1  2  2  2  3  4  5  1  2  3
First in:           1  1  1  2  3  4  5  1  2
Result: Miss Miss Miss Miss Hit Hit Miss Miss Miss Miss Miss Miss
PF rate = 10 / 12

Adding a page frame increases the number of page faults: this is Belady’s anomaly.
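Both traces can be checked with a short simulator. A sketch in Python (the function name is my own), showing the anomaly directly:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    queue, resident, faults = deque(), set(), 0
    for page in refs:
        if page in resident:
            continue
        faults += 1
        if len(queue) == frames:
            resident.discard(queue.popleft())  # evict the first-in page
        queue.append(page)
        resident.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10 -> more frames, more faults
```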
Random
Picks a random page to replace when memory pressure arises
• Simple to implement, with no state to maintain
• No pathological worst case: how well it does depends on the luck of the draw
Using History

Historical Information | Meaning                                                  | Algorithm
Recency                | Pages referenced recently are likely to be referenced again   | LRU
Frequency              | Pages referenced frequently are likely to be referenced again | LFU
LRU (Least Recently Used)
Stack property
• Does not suffer from Belady’s anomaly
• Policies with the stack property guarantee that increasing the memory size does not increase the number of page faults (e.g., OPT, LRU)
• Any page in memory with m frames is also in memory with m+1 frames

Reference: 1  2  3  4  1  2  5  1  2  3  4  5
MRU:       1  2  3  4  1  2  5  1  2  3  4  5
              1  2  3  4  1  2  5  1  2  3  4
LRU:             1  2  3  4  1  2  5  1  2  3
Result: Miss Miss Miss Miss Miss Miss Miss Hit Hit Miss Miss Miss
PF rate = 10 / 12
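A minimal LRU simulator (a sketch, not from the slides; the function name is my own) reproduces this trace, and the stack property can be spot-checked by adding a frame:

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults under LRU replacement."""
    cache = OrderedDict()  # oldest entry first = least recently used
    faults = 0
    for page in refs:
        if page in cache:
            cache.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(cache) == frames:
                cache.popitem(last=False)  # evict the LRU page
            cache[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))  # 10 faults, matching the trace above
print(lru_faults(refs, 4))  # never more faults than with 3 frames
```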
Performance Comparison: No-Locality Workload
Workload assumptions
• 10,000 random accesses to 100 pages
Analysis
• OPT performs noticeably better than the realistic policies
• It doesn’t matter much which realistic policy you are using
• When the cache is large enough to fit the entire workload, it also doesn’t matter which policy you use: they all converge to the same hit rate
Performance Comparison: 80-20 Workload
Workload assumptions
• 80% of the references are made to 20% of the pages (the “hot” pages)
• The remaining 20% of the references are made to the other 80% of the pages
Analysis
• LRU does better, as it is more likely to hold onto the hot pages
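A workload like this is easy to generate and measure. The sketch below (my own construction; page counts, seed, and cache sizes are arbitrary choices, and the helper names are not from the slides) compares LRU and FIFO hit rates:

```python
import random
from collections import OrderedDict, deque

def lru_faults(refs, frames):
    cache, faults = OrderedDict(), 0
    for p in refs:
        if p in cache:
            cache.move_to_end(p)
        else:
            faults += 1
            if len(cache) == frames:
                cache.popitem(last=False)
            cache[p] = True
    return faults

def fifo_faults(refs, frames):
    q, resident, faults = deque(), set(), 0
    for p in refs:
        if p not in resident:
            faults += 1
            if len(q) == frames:
                resident.discard(q.popleft())
            q.append(p)
            resident.add(p)
    return faults

# 80-20 workload: 80% of 10,000 references go to a hot set of 20 pages
# out of 100; the rest go to the remaining 80 pages.
rng = random.Random(42)
refs = [rng.randrange(20) if rng.random() < 0.8 else 20 + rng.randrange(80)
        for _ in range(10_000)]

for frames in (20, 50, 80):
    lru = 1 - lru_faults(refs, frames) / len(refs)
    fifo = 1 - fifo_faults(refs, frames) / len(refs)
    print(f"{frames:2d} frames: LRU hit rate {lru:.2f}, FIFO hit rate {fifo:.2f}")
```

With locality in the workload, keeping recently used pages (LRU) tends to keep the hot set resident, which FIFO cannot do deliberately.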
Performance Comparison: Looping Sequential
Workload assumptions
• References start at page 0, then 1, … up to page 49, and then repeat
Analysis
• Worst case for both LRU and FIFO
◦ They kick out older pages that are going to be accessed sooner than the pages the policies prefer to keep
◦ 0% hit rate even with a cache of size 49
• Random fares notably better
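The 0% claim is easy to verify: with 49 frames, LRU always evicts exactly the page the loop is about to reference. A sketch (helper name is my own):

```python
from collections import OrderedDict

def lru_hits(refs, frames):
    """Count hits under LRU replacement."""
    cache, hits = OrderedDict(), 0
    for p in refs:
        if p in cache:
            hits += 1
            cache.move_to_end(p)           # most recently used
        else:
            if len(cache) == frames:
                cache.popitem(last=False)  # evict the LRU page
            cache[p] = True
    return hits

# Loop over pages 0..49 ten times, with a 49-frame cache:
refs = list(range(50)) * 10
print(lru_hits(refs, 49))  # 0 -- every single access misses
```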
Implementing LRU
Perfect LRU must record every memory reference (e.g., update a time field or move the page in a list on each access), which is too expensive to do on every reference; real systems therefore approximate LRU.
Approximating LRU: Clock Algorithm
Use bit (reference bit)
• Whenever a page is referenced, the use bit is set to 1
Clock Algorithm
• All pages of the system are arranged in a circular list
• A clock hand points to some particular page to begin with
• When a page fault occurs, the page the hand is pointing to (clock pointer) is inspected
◦ The action taken depends on the use bit
• The algorithm continues until it finds a use bit that is set to 0

Use bit | Meaning
0       | Evict the page
1       | Clear the use bit and advance the hand

[Figure: pages A through H arranged in a circle, with the clock hand pointing at one of them.]
Clock Algorithm
Algorithm Clock_Replacement
    while (victim page not found)
        if (use bit for current page == 0)
            replace current page
        else
            reset use bit
        advance clock pointer

The clock pointer advances only in the page replacement algorithm; an ordinary reference only sets the use bit.

Example (use bit shown above each frame’s page)
Reference:          1  2  3  4  1  2  5  1  2  3  4  5
Frame 0 (use bit):  1  1  1  1  1  1  1  1  1  0  0  1
Frame 0 (page):     1  1  1  4  4  4  5  5  5  5  5  5
Frame 1 (use bit):     1  1  0  1  1  0  1  1  1  1  1
Frame 1 (page):        2  2  2  1  1  1  1  1  3  3  3
Frame 2 (use bit):        1  0  0  1  0  0  1  0  1  1
Frame 2 (page):           3  3  3  2  2  2  2  2  4  4
Result: Miss Miss Miss Miss Miss Miss Miss Hit Hit Miss Miss Hit
PF rate = 9 / 12
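The pseudocode above translates almost directly into a runnable sketch (my own Python rendering, with the hand starting at frame 0):

```python
def clock_faults(refs, frames):
    """Clock (second-chance) replacement: one use bit per resident page."""
    slots = []   # list of [page, use_bit] pairs, in frame order
    hand = 0
    faults = 0
    for page in refs:
        for entry in slots:
            if entry[0] == page:
                entry[1] = 1                 # hit: set the use bit
                break
        else:
            faults += 1
            if len(slots) < frames:          # free frame: no eviction needed
                slots.append([page, 1])
            else:
                while slots[hand][1] == 1:   # give used pages a second chance
                    slots[hand][1] = 0
                    hand = (hand + 1) % frames
                slots[hand] = [page, 1]      # evict the page with use bit 0
                hand = (hand + 1) % frames
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(clock_faults(refs, 3))  # 9 faults, matching the trace above
```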
Performance of Clock Algorithm
The clock algorithm doesn’t do as well as perfect LRU, but it does better than approaches that do not consider history at all.

[Figure: hit rate (20% to 100%) vs. cache size in blocks (20 to 100) for the 80-20 workload, comparing OPT, LRU, Clock, FIFO, and RAND.]
Considering Dirty Pages
If a page has been modified (is dirty), it must be written back to disk before its frame can be reused; evicting a clean page is cheaper, because its copy on disk is already up to date. The hardware supports this with a modified (dirty) bit in the page table entry, and replacement policies can prefer to evict clean pages over dirty ones.
Clustering (grouping)
• In disk drives, a single large write is more efficient than many small ones
• Collect a number of pending writes together in memory and write them to disk in one write
[Figure: pending dirty pages in physical memory are grouped together and written to secondary storage in a single write.]
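The clustering idea can be sketched as grouping pending dirty block numbers into contiguous runs, so each run becomes one large write instead of many small ones (a toy model, not a real disk API; the function name and block numbers are my own):

```python
def cluster_writes(dirty_blocks):
    """Group pending dirty block numbers into contiguous runs;
    each run can then be issued as a single large write."""
    runs = []
    for block in sorted(set(dirty_blocks)):
        if runs and block == runs[-1][1] + 1:
            runs[-1][1] = block          # extend the current run
        else:
            runs.append([block, block])  # start a new run
    # Return (start_block, length) per write to issue.
    return [(start, end - start + 1) for start, end in runs]

# Pending writes for blocks 7, 3, 4, 5, 9 become three writes instead of five:
print(cluster_writes([7, 3, 4, 5, 9]))  # [(3, 3), (7, 1), (9, 1)]
```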
Thrashing
When the combined memory demands of the running processes exceed the available physical memory, the system pages constantly (“thrashes”) and makes little forward progress. Responses include admission control (run fewer processes so that their working sets fit in memory) and, in some systems such as Linux, an out-of-memory killer that terminates a memory-intensive process.
Summary
VM mechanisms
• Physical and virtual addressing
• Partitioning, segmentation, paging
• Page table management, TLBs, etc.
VM policies
• Page replacement policy, page allocation policy, etc.
VM optimizations
• Demand paging (space)
• Multi-level page tables (space)
• Efficient translation using TLBs (time)
• Page replacement policy (time, space)