Chapter 05 - Memory Management


Memory Management

 Problems that arise when managing memory
 Memory allocation models
 Virtual memory
Problems that arise when managing memory

 Convert the virtual addresses in the program to physical addresses in the main memory.
 Manage the memory that has been allocated and the memory not yet allocated.
 Provide memory allocation techniques that:
 Prevent a process from invading the memory allocated to other processes.
 Allow multiple processes to share each other's memory.
 Extend memory so that multiple processes can be stored simultaneously.
Convert virtual to physical addresses

 The addresses in an executable program (exe format) are relative (virtual) addresses -> they must be converted to absolute (physical) addresses in the main memory. The conversion may occur at one of the following times:
 Compile time
 Load time
 Execution time
Virtual address space and physical address space

 Virtual address (logical address): the address generated by the CPU.
 Physical address: the actual address in the main memory.
 Virtual address space of a process: the set of all virtual addresses of the process.
 Physical address space of a process: the set of all physical addresses corresponding to those virtual addresses.
Manage the memory which has been allocated and not yet allocated

 Methods for tracking memory allocation:
 Bit sequence (bitmap): the i-th bit is 1 if the i-th block has been allocated, 0 if it is still free.
 Linked list: each node of the list stores information about a memory segment that either contains a process (P) or is a hole between two processes (H).
 Algorithms for choosing a free segment:
 First-fit: select the first hole that is large enough.
 Best-fit: select the smallest hole that is still large enough to satisfy the request.
 Worst-fit: select the largest hole.
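As a minimal sketch (assuming a hypothetical `holes` list of `(start, size)` pairs describing the free segments), the three fit strategies can be written as:

```python
def first_fit(holes, request):
    """Return the first hole large enough for `request`, or None."""
    for start, size in holes:
        if size >= request:
            return (start, size)
    return None

def best_fit(holes, request):
    """Return the smallest hole that still satisfies `request`."""
    candidates = [h for h in holes if h[1] >= request]
    return min(candidates, key=lambda h: h[1]) if candidates else None

def worst_fit(holes, request):
    """Return the largest hole that satisfies `request`."""
    candidates = [h for h in holes if h[1] >= request]
    return max(candidates, key=lambda h: h[1]) if candidates else None

holes = [(0, 100), (200, 50), (400, 300), (800, 120)]
print(first_fit(holes, 110))   # (400, 300)
print(best_fit(holes, 110))    # (800, 120)
print(worst_fit(holes, 110))   # (400, 300)
```

Note how best-fit and worst-fit must scan all holes, while first-fit can stop at the first match.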
Manage the memory allocation by bit sequence or
linked list
Manage the memory allocation

Cases that can occur before and after process X finishes: when X terminates, its segment becomes a hole, which is merged with any adjacent holes.
Memory allocation models
 Contiguous allocation: the process is loaded into a contiguous memory area
 Linker-Loader
 Base & Limit
 Non-contiguous allocation: the process is loaded into non-contiguous memory areas
 Segmentation model
 Paging model
 Combined model: segmentation with paging
Linker-Loader Model
 absolute address = address at which the process starts loading + relative address

 The process cannot be moved in memory

 It is not possible to protect a process's memory from access by another process (there is no address-checking mechanism)
Base & Limit Model

 Base register: holds the start address of the memory allocated to the process
 Limit register: holds the size of the process
MMU Mechanism in Base & Limit Model

 Programs can be moved in memory
 External fragmentation may occur
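A minimal sketch of the MMU's check-and-relocate step in the Base & Limit model (the register values here are illustrative):

```python
def translate(base, limit, virtual_addr):
    """Relocate a virtual address; trap if it exceeds the limit register."""
    if virtual_addr >= limit:
        # Protection check failed: in a real system this traps to the OS.
        raise MemoryError("addressing error: trap to the OS")
    return base + virtual_addr

print(translate(3000, 1200, 100))   # 3100
```

Because every address is relocated through the base register at run time, the OS can move the whole process and only update base/limit.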
Base & Limit Model – When the size of the process grows during execution

 Move the process
 Allocate extra memory to the process in advance
 Swapping
Segmentation Model
MMU Mechanism in segmentation

Physical address = base + d

Setting up the segment table

 STBR (Segment Table Base Register): holds the start address of the segment table.
 STLR (Segment Table Limit Register): holds the number of segments (s < STLR).
Share the segment

 The MMU sets the corresponding entries in the segment tables of the two processes to the same value
Segment protection
 Attributes: R (read), X (execute), W (write), …

Comments on the segmentation technique:
• External fragmentation still occurs.
• Advantage: program code and data are separated -> the program code is easy to protect and data or functions are easy to share.
Paging Model

 Physical memory is divided into fixed-size blocks: page frames.
 Virtual address space is divided into blocks of the same size: pages.
Virtual address structure

 The size of a page is 2^n bytes.
 The size of the virtual address space is 2^m bytes (the CPU uses virtual addresses of m bits).
 The high (m-n) bits of a virtual address represent the page number, and the low n bits represent the relative address (offset) within the page.
MMU mechanism in paging model

Physical address = starting address of page frame f + d
                 = f x (page frame size) + d

 Example: suppose the process accesses virtual address (p,d) = (3,500), and page 3 is mapped to the page frame starting at address 2048 -> physical address: 2048 + 500 = 2548.
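The translation can be sketched with bit operations. The page size here is an assumption (512 bytes, so that frame 4 starts at 2048 and the result matches the example), and `page_table` is a hypothetical mapping:

```python
n = 9                  # assumed: 512-byte pages, so the offset d has 9 bits
page_table = {3: 4}    # assumed: page 3 -> frame 4 (frame 4 starts at 4*512 = 2048)

def translate(vaddr):
    p = vaddr >> n               # high bits: page number
    d = vaddr & ((1 << n) - 1)   # low n bits: offset within the page
    f = page_table[p]
    return (f << n) | d          # = f * page_size + d

# Virtual address (p, d) = (3, 500) is 3*512 + 500 = 2036:
print(translate(3 * 512 + 500))  # 2548
```

The shift-and-mask form is exactly the "copy d to the low bits, f to the high bits" rule described on the next slide.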
Address conversion mechanism of MMU

 Calculation of the physical address by the MMU:
 Copy d into the low n bits of the physical address.
 Copy f into the high (m-n) bits of the physical address.

Example: a system with 16-bit virtual addresses of the form (p, d), where p has 4 bits and d has 12 bits (the system has 16 pages of 4 KB each).
The present/absent bit = 1 means the page is currently in main memory; = 0 means the page is in secondary memory.
Setting up the page table
 PTBR (Page Table Base Register): holds the start address of the page table.
 PTLR (Page Table Limit Register): holds the number of entries in the page table (p < PTLR)
Associative memory (Translation Lookaside Buffers - TLBs)
Paging Model - For example
 A 32-bit computer system:
 The page frame size is 4 KB.
 What is the maximum process size that the system can manage?

 32-bit computer => a virtual address (p,d) has 32 bits => bits of p + bits of d = 32; since 1 page = 4 KB = 2^12 bytes => d has 12 bits => p has 20 bits => the page table has 2^20 entries.
 => The system can manage a process of up to 2^20 pages => the maximum process size is 2^20 x 2^12 bytes = 2^32 bytes = 4 GB.

 Comment: an n-bit computer can manage a process with a maximum size of 2^n bytes.
Organize a page table

 Multi-level paging.
 Hash Page Table.
 Inverted Page Table.

These structures solve the memory overhead problem that arises when a process needs a large page table.
Multi-level paging

(Figure: the level-1 page table pointing to the level-2 page tables)


Multi-level paging
page number      page offset
  p1      p2         d
  10      10         12
 p1: index into the level-1 page table. p2: index into a level-2 page table.

(Figure: address translation through the level-1 and level-2 page tables)
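The 10/10/12 split of a 32-bit virtual address can be sketched with shifts and masks (the masks follow directly from the bit widths above):

```python
def split(vaddr):
    """Split a 32-bit virtual address into (p1, p2, d) with 10/10/12 bits."""
    d  = vaddr & 0xFFF           # low 12 bits: page offset
    p2 = (vaddr >> 12) & 0x3FF   # next 10 bits: index into a level-2 table
    p1 = (vaddr >> 22) & 0x3FF   # high 10 bits: index into the level-1 table
    return p1, p2, d

print(split(0xFFFFFFFF))  # (1023, 1023, 4095)
```

Only the level-2 tables that are actually referenced need to exist, which is what saves memory compared to one flat 2^20-entry table.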


Hash Page Table
 Used when the virtual address space is > 32 bits.

 Example: a 64-bit computer with 256 MB RAM and a page frame size of 4 KB.
 A normal page table would need 2^52 entries; using a hash page table, the table can have as many entries as there are physical page frames, 2^16 (<< 2^52), with the hash function: hashfunc(p) = p mod 2^16
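A sketch of a hashed page table with chaining, using the hash function above; the bucket structure and names are illustrative:

```python
TABLE_SIZE = 2 ** 16
table = [[] for _ in range(TABLE_SIZE)]   # each bucket: a chain of (p, f) pairs

def insert(p, f):
    table[p % TABLE_SIZE].append((p, f))

def lookup(p):
    for page, frame in table[p % TABLE_SIZE]:
        if page == p:
            return frame
    return None  # not resident: page fault

insert(3, 7)
insert(3 + TABLE_SIZE, 8)   # hashes to the same bucket as page 3, so it chains
print(lookup(3), lookup(3 + TABLE_SIZE))  # 7 8
```

Collisions are expected because many pages share one bucket, so each chain entry must store the full page number p, not just the hash.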
Address conversion mechanism when
using hash page table
Inverted Page Table
 A single page table manages the memory of all processes.
 Each element of the inverted page table is a pair (pid, p):
 pid is the identifier of the process
 p is the page number.
 Each virtual address is a triplet (pid, p, d).
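A sketch of inverted-page-table lookup: one entry per physical frame, and translation searches the table for the frame holding (pid, p). The table contents and page size here are illustrative:

```python
# frame index -> (pid, p), or None for a free frame (illustrative values)
ipt = [None, ("P1", 0), ("P2", 0), ("P1", 1)]

def translate(pid, p, d, page_size=4096):
    for frame, entry in enumerate(ipt):
        if entry == (pid, p):
            return frame * page_size + d
    raise LookupError("page fault")

print(translate("P1", 1, 100))  # frame 3: 3*4096 + 100 = 12388
```

The table is small (one entry per frame), but the linear search is slow, which is why real systems pair the inverted table with hashing or a TLB.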
Page protection
Shared memory

 Processes share some page frames
 The same page frame number is written into the page table of each sharing process
Paging model - Comments

 Eliminates external fragmentation.
 There is still internal fragmentation.

 Combining both paging and segmentation techniques:
 Paged segmentation
Paged segmentation

 A process consists of multiple segments.

 Each segment is divided into several pages, stored in page frames that may be non-contiguous.
MMU mechanism in paging combination
segmentation model
VIRTUAL MEMORY

 Secondary memory is used to store the process; parts of the process are transferred in and out between the main memory and the secondary memory.
 Demand paging.
 Demand segmentation.
Demand paging


Field containing the "check" (valid) bit:

• 1 (valid): the page is in the main memory
• 0 (invalid): the page is stored in the secondary memory, or the page does not belong to the process
Convert virtual addresses (p, d) to physical
addresses
Replace page

 "Update" bit (dirty bit):
 1: the page content has been modified.
 0: the page content has not been changed.
Time to perform a memory access request

 p: the probability that a page fault occurs (0 <= p <= 1).
 Memory access time (ma): the time for one memory access.
 Effective Access Time (EAT): the time to perform one memory access request.
 Page fault overhead (pfo): the time to handle a page fault.
 Swap page in (spi): the time to transfer a page from disk into memory.
 Swap page out (spo): the time to transfer a page out to disk (spo may be 0).
 Restart overhead (ro): the time to restart the memory access.
 EAT = (1 - p) x ma + p x (pfo + [spo] + spi + ro)

 Example: the time for one memory access is 1 microsecond; suppose 40% of the selected victim pages have modified content, and the time to swap a page in or out is 10 milliseconds. Compute EAT.

 EAT = (1 - p) x 1 + p x (pfo + 10000 x 0.4 + 10000 + ro) microseconds
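The example can be turned into a small sketch. Since the slide does not fix pfo and ro, they are left as parameters (assumed 0 in the sample call); all times are in microseconds:

```python
def eat(p, pfo=0, ro=0, ma=1, swap=10_000, dirty_ratio=0.4):
    """EAT = (1-p)*ma + p*(pfo + dirty_ratio*spo + spi + ro), spi = spo = swap."""
    return (1 - p) * ma + p * (pfo + dirty_ratio * swap + swap + ro)

# With pfo = ro = 0, each page fault costs 0.4*10000 + 10000 = 14000 microseconds:
print(eat(0.01))   # about 141 microseconds
```

The sketch makes the point of the slide concrete: even a 1% fault rate inflates the effective access time by a factor of over a hundred.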


The algorithm selects the victim page

 The "victim" page: the page whose replacement will cause the fewest page faults afterwards.

 FIFO (First In First Out) Algorithm
 OPT (Optimal Page Replacement) Algorithm
 LRU (Least Recently Used) Algorithm
 LRU approximation algorithms
 Algorithm with history bits
 Second chance algorithm
 Enhanced second chance algorithm (Not Recently Used Page Replacement Algorithm: NRU)
 Statistical algorithms
 LFU (Least Frequently Used) Algorithm
 MFU (Most Frequently Used) Algorithm
FIFO Algorithm (First In First Out)

 The page that has been in memory the longest is selected as the victim page.

 Example: a process is granted 3 page frames, initially all empty. The process accesses pages in the following order: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1. Calculate the number of page faults when the FIFO algorithm is applied to select the victim page.
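The example can be checked with a short sketch that uses a queue of resident pages:

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults for reference string `refs` under FIFO replacement."""
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()      # evict the page loaded longest ago
            frames.append(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(refs, 3))  # 15
```

Note that FIFO evicts by load time, not by recency: a hit does not move the page within the queue.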
Belady's Anomaly
 Consider a process that accesses pages in the following order: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

 With 3 page frames there are 9 page faults; with 4 page frames there are 10 page faults.
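The anomaly is easy to reproduce with the same FIFO sketch: for this reference string, adding a fourth frame increases the fault count.

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults for reference string `refs` under FIFO replacement."""
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()
            frames.append(page)
    return faults

refs = [1,2,3,4,1,2,5,1,2,3,4,5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10
```

Stack algorithms such as LRU and OPT do not exhibit this anomaly; it is specific to FIFO-like policies.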
OPT Algorithm (Optimal Page Replacement Algorithm)

 Choose the page that will not be used for the longest time in the future.
 The number of page faults is the lowest possible.
 Does not suffer from Belady's anomaly.
 Difficult to implement, because the page access sequence of a process is usually impossible to know in advance.
 Suitable for operating systems of home appliances, where the access sequence is known.
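Since OPT needs the whole reference string, it is easy to simulate offline. A sketch, evicting the resident page whose next use lies farthest in the future (or that is never used again):

```python
def opt_faults(refs, n_frames):
    """Count page faults for `refs` under optimal (farthest-next-use) replacement."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == n_frames:
            def next_use(q):
                future = refs[i + 1:]
                # Pages never used again are the best victims (infinite distance).
                return future.index(q) if q in future else float("inf")
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(opt_faults(refs, 3))  # 9
```

With 3 frames, the FIFO example's reference string costs 9 faults under OPT versus 15 under FIFO, which is why OPT serves as the lower-bound benchmark.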
LRU Algorithm (Least Recently Used)

 Based on the time each page was last accessed.
 The page selected for replacement is the one that has not been accessed for the longest time.
LRU algorithm implementation
 Using a counter:
 Element structure in the page table: add a field to record the "most recent access time".
 CPU structure: add a counter register.

 Using a linked list:
 The page at the bottom of the list is the most recently accessed page.
 The page at the top of the list is the page that has gone unused the longest.
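The linked-list idea can be sketched with `collections.OrderedDict`: moving a page to the end on every access keeps the least recently used page at the front.

```python
from collections import OrderedDict

def lru_faults(refs, n_frames):
    """Count page faults for `refs` under LRU replacement."""
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)          # most recently used goes to the end
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)    # evict the least recently used page
            frames[page] = True
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(refs, 3))  # 12
```

On the FIFO example's reference string, LRU costs 12 faults, between FIFO's 15 and OPT's 9.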
LRU Approximation Algorithms

 Each element in the page table has a reference bit:
 set to 0 by the OS.
 set to 1 by the hardware each time the corresponding page is accessed.

 LRU approximation algorithms:
 Algorithm with history bits
 Second chance algorithm
 Enhanced second chance algorithm (Not Recently Used Page Replacement Algorithm: NRU)
Algorithm with history bits
 Each page uses an additional 8 bits of history.
 Updating the history bits:
 Shift the history bits one position to the right to remove the lowest bit.
 Copy the reference bit of each page into the highest bit of that page's 8 history bits.
 The 8 history bits store the access status of the page over the last 8 cycles.
 The "victim" page is the page with the smallest history value.
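The shift-and-copy update can be sketched as follows (the page names and bit values are illustrative):

```python
def update_history(history, ref_bits):
    """history: page -> 8-bit value; ref_bits: page -> 0/1 for this cycle."""
    for page in history:
        # Shift right, then copy this cycle's reference bit into the top bit.
        history[page] = (history[page] >> 1) | (ref_bits[page] << 7)
    return history

history = {"A": 0b00000000, "B": 0b10000000}
history = update_history(history, {"A": 1, "B": 0})
print(f"{history['A']:08b} {history['B']:08b}")  # 10000000 01000000
victim = min(history, key=history.get)           # smallest value -> "B"
```

Because the newest reference lands in the highest bit, a recently touched page always outranks one touched only in older cycles, which is what makes the 8-bit value a usable recency ordering.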
Second chance algorithm
 Find a page according to the FIFO principle.
 Check the reference bit of that page:
 If the reference bit is 0, select this page.
 If the reference bit is 1, reset it to 0 and move on to the next FIFO page (giving this page a second chance).
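A sketch of victim selection over a FIFO queue of `[page, reference_bit]` pairs (the queue contents are illustrative):

```python
from collections import deque

def second_chance_victim(queue):
    """Return the victim page; pages with ref bit 1 get their bit cleared
    and are moved to the back of the queue (a second chance)."""
    while True:
        page, ref = queue[0]
        if ref == 0:
            queue.popleft()
            return page
        queue[0][1] = 0     # clear the reference bit
        queue.rotate(-1)    # move the page to the back of the queue

q = deque([["A", 1], ["B", 0], ["C", 1]])
print(second_chance_victim(q))  # B  (A's bit is cleared and A moves to the back)
```

If every page has its reference bit set, the loop clears them all and degenerates into plain FIFO, selecting the original front page on the second pass.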
Second chance algorithm

The circular linked list version of the above example (the clock algorithm)


Advanced second chance algorithm (Not Recently Used
Page Replacement Algorithm: NRU)
 Class 1 (0,0): pages with (ref, dirty) = (0,0). (lowest priority)
 not recently accessed and not modified
 the best pages to replace.
 Class 2 (0,1):
 not recently accessed but modified.
 Not as good, because the page must be written back before being replaced.
 Class 3 (1,0):
 recently accessed, but not modified.
 The page may soon be used again.
 Class 4 (1,1): (highest priority)
 recently accessed and modified.
 The page may soon be used again, and it must be written back before replacement.
 The "victim" page: the first page found in the lowest non-empty priority class.
Statistical algorithms

 Count variable: stores the number of accesses to a page.
 LFU algorithm (Least Frequently Used):
 Replace the page with the smallest counter value, i.e. the least used page.
 MFU algorithm (Most Frequently Used):
 Replace the page with the largest counter value, i.e. the most used page.
Allocate the number of page frames

 Equal allocation
 m page frames and n processes.
 Each process is given m/n page frames.
 Allocation according to the size ratio
 si: the size of process pi
 S = Σ si: the total size of all processes
 m: the number of available page frames
 ai: the number of page frames allocated to process pi
 ai = si / S x m
 Example: Process 1 = 10K, Process 2 = 127K, and 62 free page frames. Then we can grant:
 Process 1: 10/137 x 62 ≈ 4 frames
 Process 2: 127/137 x 62 ≈ 57 frames
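The proportional formula can be sketched directly; truncating the quotient reproduces the example's numbers:

```python
def proportional_allocation(sizes, m):
    """a_i = s_i / S * m, truncated to whole page frames."""
    S = sum(sizes)
    return [s * m // S for s in sizes]

# The slide's example: processes of 10K and 127K sharing 62 free frames.
print(proportional_allocation([10, 127], 62))  # [4, 57]
```

Truncation can leave a few frames unassigned (here 4 + 57 = 61 of 62); a real allocator would distribute the remainder by some tie-breaking rule.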

 Allocation according to the priority ratio
Replace the page
 Global replacement
 Select the "victim" page from the set of all page frames in the system.
 More options are available.
 The number of page frames assigned to a process can change.
 Processes cannot control their own page fault rates.
 Local replacement
 Select replacement pages only from the set of page frames allocated to the process that caused the page fault.
 The number of page frames assigned to a process does not change.
Thrashing
 Not enough page frames -> frequent page faults -> much CPU time is spent performing page replacement.
 Solution: the working set model.
Working Set Model
 WSSi(Δ, t): the number of elements in the "working set" of process Pi at time t.
 The working set is the set of pages accessed by the process in the last Δ references up to time t.
 m: the number of free page frames.
 D = Σ WSSi: the total number of page frames required by the whole system.
 At time t: allocate to Pi the number of page frames WSSi(Δ, t-1).
 D > m -> the system thrashes.
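The working set size is simple to compute from a reference string, as a sketch (times are 0-indexed and the window is taken inclusively up to t):

```python
def wss(refs, delta, t):
    """Number of distinct pages among the last `delta` references up to time t."""
    window = refs[max(0, t - delta + 1): t + 1]
    return len(set(window))

refs = [1, 2, 1, 3, 4, 4, 3, 3]
print(wss(refs, 4, 7))  # the last 4 references touch pages {4, 3} -> 2
```

Summing this value over all processes gives D; if D exceeds the available frames m, the OS should suspend a process rather than let the system thrash.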
