Introduction To Operating Systems: Class 10-1: Swapping - Policy (Ch. 22)


SKKU IRIS Lab (Intelligent & Resource-efficient Image computing Systems Lab)

Introduction to Operating Systems


Class 10-1: Swapping - Policy (Ch. 22)

Jong Hwan Ko
Assistant Professor
School of Information and Communication Engineering
SungKyunKwan Univ.
2019. 11. 5
Swapping Out Pages

The OS should swap out pages to make room for the new pages it is about to bring in

Once the page to swap out (the victim page) is determined:
• If the victim page has been modified (dirty bit set), write it out to disk
• If the victim page is clean, just discard it
  ◦ The original copy already resides on the disk drive

2
Page Replacement Policies

The policy for picking a page to kick out is known as the page-replacement policy

Goal: minimize the page fault rate (miss rate)
• The miss penalty (the cost of a disk access) is enormous: more than 100,000× the cost of a memory access
• Even a tiny miss rate quickly dominates the overall AMAT (Average Memory Access Time)

3
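A back-of-the-envelope check (the latencies below are illustrative assumptions, not figures from the slides): with a 100 ns memory access time T_M and a 10 ms disk access time T_D, a miss probability P_miss of just 0.1% gives

    AMAT = T_M + (P_miss × T_D) = 100 ns + (0.001 × 10 ms) ≈ 10.1 µs

so even a 0.1% miss rate makes the average memory access roughly 100× slower than a pure DRAM hit.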
Page Replacement Policies

Simple policies
• OPT (Optimal)
• FIFO
• Random

History-based policies
• Based on recency
◦ LRU
◦ Clock (approximates LRU)
• Based on frequency
◦ LFU

4
OPT: Optimal Replacement Policy

Leads to the fewest number of misses overall
• Replaces the page that will be accessed furthest in the future
• Results in the fewest-possible cache misses
Serves only as a comparison point, to know how close we are to "perfect"

Reference:  1    2    3    4    1    2    5    1    2    3    4    5
Frame 1:    1    1    1    1    1    1    1    1    1    3    3    3
Frame 2:    .    2    2    2    2    2    2    2    2    2    4    4
Frame 3:    .    .    3    4    4    4    5    5    5    5    5    5
Result:     Miss Miss Miss Miss Hit  Hit  Miss Hit  Hit  Miss Miss Hit

PF rate = 7/12

5
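A minimal Python sketch of OPT (an illustration under the slides' model, not production code): on a miss with a full cache, evict the resident page whose next use lies furthest in the future, or that is never used again.

    def opt_faults(refs, nframes):
        cache, faults = set(), 0
        for i, page in enumerate(refs):
            if page in cache:
                continue                        # hit
            faults += 1
            if len(cache) == nframes:
                # Index of each resident page's next use; "never again" sorts last.
                def next_use(p):
                    for j in range(i + 1, len(refs)):
                        if refs[j] == p:
                            return j
                    return float("inf")
                cache.remove(max(cache, key=next_use))
            cache.add(page)
        return faults

    opt_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3)   # -> 7, the 7/12 PF rate above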
FIFO
Pages are placed in a queue when they enter the system
When a replacement occurs, the page on the tail of the queue (the "first-in" page) is evicted
Obvious and simple to implement
Fair: all pages receive equal residency, but FIFO cannot tell which pages are important
Why might this be good?
• Maybe the page brought in longest ago is no longer being used
Why might this be bad?
• Some pages may always be needed

6
FIFO: Belady’s Anomaly

We would expect the cache hit rate to increase when the cache gets larger
But in some cases, with FIFO, it gets worse

[Chart: page fault count vs. page frame count (1-7) for the reference row 1 2 3 4 1 2 5 1 2 3 4 5; the fault count rises when moving from 3 to 4 frames]
7
FIFO: Belady’s Anomaly
Reference:  1    2    3    4    1    2    5    1    2    3    4    5

With 3 frames (PF rate = 9/12):
Last in:    1    2    3    4    1    2    5    5    5    3    4    4
            .    1    2    3    4    1    2    2    2    5    3    3
First in:   .    .    1    2    3    4    1    1    1    2    5    5
Result:     Miss Miss Miss Miss Miss Miss Miss Hit  Hit  Miss Miss Hit

With 4 frames (PF rate = 10/12):
Last in:    1    2    3    4    4    4    5    1    2    3    4    5
            .    1    2    3    3    3    4    5    1    2    3    4
            .    .    1    2    2    2    3    4    5    1    2    3
First in:   .    .    .    1    1    1    2    3    4    5    1    2
Result:     Miss Miss Miss Miss Hit  Hit  Miss Miss Miss Miss Miss Miss
8
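The anomaly is easy to reproduce. A minimal Python FIFO simulator (a sketch for illustration, not from the slides):

    from collections import deque

    def fifo_faults(refs, nframes):
        queue, resident, faults = deque(), set(), 0
        for page in refs:
            if page in resident:
                continue                               # hit: FIFO ignores recency
            faults += 1
            if len(queue) == nframes:
                resident.discard(queue.popleft())      # evict the first-in page
            queue.append(page)
            resident.add(page)
        return faults

    refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
    fifo_faults(refs, 3)   # -> 9   (PF rate 9/12)
    fifo_faults(refs, 4)   # -> 10  (more frames, more faults: Belady's anomaly)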
Random

Simply picks a random page to replace
Simple to implement: no bookkeeping needed
Performance depends on the luck of the draw
Outperforms FIFO and LRU for certain workloads

Reference:  1    2    3    4    1    2    5    1    2    3    4    5
Frame 1:    1    1    1    1    1    1    1    1    1    1    1    5
Frame 2:    .    2    2    2    2    2    2    2    2    3    3    3
Frame 3:    .    .    3    4    4    4    5    5    5    5    4    4
Result:     Miss Miss Miss Miss Hit  Hit  Miss Hit  Hit  Miss Miss Miss

PF rate = 8/12 (one possible outcome)

9
Using History

Lean on the past and use history
• Two types of historical information:

Historical Information | Meaning                                                                            | Algorithm
Recency                | If a page has been accessed more recently, it is more likely to be accessed again | LRU
Frequency              | If a page has been accessed many times, it is more likely to be accessed again    | LFU

10
LRU (Least Recently Used)

Replace the page that has not been used for the longest time in the past
Use the past to predict the future
• cf. OPT looks at the future
With locality, LRU approximates OPT
Harder to implement: must track which pages have been accessed
Does not consider the frequency of page accesses

11
LRU (Least Recently Used)

Stack property
• Does not suffer from Belady's anomaly
• Policies with the stack property guarantee that increasing the memory size does not increase the number of page faults (e.g., OPT, LRU)
• Any page in memory with m frames is also in memory with m+1 frames

Reference:  1    2    3    4    1    2    5    1    2    3    4    5
MRU:        1    2    3    4    1    2    5    1    2    3    4    5
            .    1    2    3    4    1    2    5    1    2    3    4
LRU:        .    .    1    2    3    4    1    2    5    1    2    3
Result:     Miss Miss Miss Miss Miss Miss Miss Hit  Hit  Miss Miss Miss

PF rate = 10/12

12
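The trace above is easy to reproduce with a minimal Python LRU sketch (illustrative only; a real kernel cannot afford this kind of per-access bookkeeping, as a later slide discusses):

    from collections import OrderedDict

    def lru_faults(refs, nframes):
        cache = OrderedDict()                    # ordered least- to most-recently used
        faults = 0
        for page in refs:
            if page in cache:
                cache.move_to_end(page)          # hit: refresh recency
            else:
                faults += 1
                if len(cache) == nframes:
                    cache.popitem(last=False)    # evict the least recently used
                cache[page] = True
        return faults

    lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3)   # -> 10, the 10/12 PF rate above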
Performance Comparison: No-Locality Workload

Workload assumptions
• 10,000 random accesses to 100 pages

Analysis
• OPT performs noticeably better than the realistic policies
• It doesn't matter much which realistic policy you are using
• When the cache is large enough to fit the entire workload, it also doesn't matter which policy you use: every policy converges to a 100% hit rate
13
Performance Comparison: 80-20 Workload

Workload assumptions
• 80% of the references are made to 20% of the pages (the "hot" pages)
• The remaining 20% of the references are made to the remaining 80% of the pages
Analysis
• LRU does better, as it is
more likely to hold onto
the hot pages

14
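For simulation, such a workload can be generated with a short Python sketch (the function name and structure are illustrative assumptions matching the slide's numbers):

    import random

    def workload_80_20(nrefs=10_000, npages=100):
        hot = npages // 5                            # pages 0..19 are the hot 20%
        return [random.randrange(hot) if random.random() < 0.8
                else random.randrange(hot, npages)   # cold 80% of the pages
                for _ in range(nrefs)]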
Performance Comparison: Looping Sequential

Workload assumptions
• References start at page 0, then 1, ... up to page 49, and then repeat

Analysis
• Worst case for both LRU and FIFO
  ◦ They kick out older pages that are going to be accessed sooner than the pages the policies prefer to keep
  ◦ 0% hit rate even with a cache of size 49
• Random fares notably better
15
Implementing LRU

LRU can generally do a better job than simpler policies like FIFO or Random
Then how should we implement it?
• Upon each page access, update a time field of the corresponding PTE in the page table
• When replacing a page, the OS could scan all the time fields to find the least recently used page
  -> prohibitively expensive

Some approximation is required!

16
Approximating LRU: Clock Algorithm

Use bit (reference bit)
• Whenever a page is referenced, the use bit is set to 1

Clock Algorithm
• All pages of the system are arranged in a circular list
• A clock hand points to some particular page to begin with
• When a page fault occurs, the page the hand is pointing to is inspected
  ◦ The action taken depends on the use bit
• The algorithm continues until it finds a page whose use bit is set to 0

[Diagram: pages A-H arranged in a circular list, with the clock hand pointing at one of them]

Use bit | Meaning
0       | Evict the page
1       | Clear the use bit and advance the hand
17
Clock Algorithm

Algorithm

    Algorithm Clock_Replacement
        while (victim page not found)
            if (use bit of current page == 0)
                replace current page          // victim found
            else
                reset use bit to 0
            advance clock pointer

Note: the clock pointer advances only inside the page replacement algorithm (on a hit, the use bit is set but the hand does not move).

Example

Reference:          1    2    3    4    1    2    5    1    2    3    4    5
Frame 1 (page):     1    1    1    4    4    4    5    5    5    5    5    5
Frame 1 (use bit):  1    1    1    1    1    1    1    1    1    0    0    1
Frame 2 (page):     .    2    2    2    1    1    1    1    1    3    3    3
Frame 2 (use bit):  .    1    1    0    1    1    0    1    1    1    1    1
Frame 3 (page):     .    .    3    3    3    2    2    2    2    2    4    4
Frame 3 (use bit):  .    .    1    0    0    1    0    0    1    0    1    1
Result:             Miss Miss Miss Miss Miss Miss Miss Hit  Hit  Miss Miss Hit

PF rate = 9/12
18
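The same algorithm as runnable Python (a sketch; in a real system the use bits live in page-table entries and are set by the hardware):

    def clock_faults(refs, nframes):
        frames = [None] * nframes                 # resident page per frame
        use = [0] * nframes                       # use (reference) bit per frame
        hand, faults = 0, 0
        for page in refs:
            if page in frames:
                use[frames.index(page)] = 1       # hit: set use bit; hand stays put
                continue
            faults += 1
            while use[hand] == 1:                 # second chance: clear and advance
                use[hand] = 0
                hand = (hand + 1) % nframes
            frames[hand], use[hand] = page, 1     # replace the page at the hand
            hand = (hand + 1) % nframes           # advance past the new page
        return faults

    clock_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3)   # -> 9, the 9/12 trace above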
Performance of Clock Algorithm

The clock algorithm doesn't do as well as perfect LRU
It does better than approaches that do not consider history at all

[Chart: hit rate (20%-100%) vs. cache size in blocks (20-100) on the 80-20 workload, comparing OPT, LRU, Clock, FIFO, and RAND]
19
Considering Dirty Pages

If a page has been modified (a dirty page)
• It must be written back to disk to evict it, which is expensive

If a page has not been modified (a clean page)
• Eviction is free
• The frame can simply be reused without additional I/O

Thus, some VM systems prefer to evict clean pages over dirty pages
• The clock algorithm could be changed to scan for pages that are both unused and clean to evict first

20
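One way that modified scan could look (a hypothetical sketch of the idea, not the book's exact algorithm): sweep once preferring frames that are both unused and clean, then fall back to the plain clock scan.

    def pick_victim(use, dirty, hand):
        nframes = len(use)
        # First sweep: prefer a frame that is both unused and clean (free to evict).
        for i in range(nframes):
            f = (hand + i) % nframes
            if use[f] == 0 and dirty[f] == 0:
                return f
        # Fall back to the plain clock scan: clear use bits until one is 0.
        while use[hand] == 1:
            use[hand] = 0
            hand = (hand + 1) % nframes
        return hand            # this victim may be dirty and need a write-back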
Page Selection Policy

The OS has to decide when to bring a page into memory
• Demand paging
  ◦ The OS brings the page into memory when it is accessed, "on demand"
• Prefetching
  ◦ The OS guesses that a page is about to be used, and thus brings it in ahead of time
  ◦ Example: when page 1 is brought into physical memory from secondary storage, page 2 is brought in too, as it is likely to be accessed soon

[Diagram: physical memory and secondary storage; pages 1 and 2 are brought into physical memory together]
21
Page Write Policy

Clustering (grouping)
• In disk drives, a single large write is more efficient than many small ones
• Collect a number of pending writes together in memory and write them to disk in one write

[Diagram: pending writes (pages 1-4) collected in physical memory are written to secondary storage in one write]

22
Thrashing

Thrashing occurs when memory is oversubscribed: the memory demands of the set of running processes exceed the available physical memory
• One response: decide not to run a subset of the processes
• The reduced set of processes' working sets can then fit in memory

23
Summary
VM mechanisms
• Physical and virtual addressing
• Partitioning, segmentation, paging
• Page table management, TLBs, etc.

VM policies
• Page replacement policy, page allocation policy, etc.

VM optimizations
• Demand paging (space)
• Multi-level page tables (space)
• Efficient translation using TLBs (time)
• Page replacement policy (time, space)

24
