Unit 4

Memory Management
1. Introduction – Swapping:

To increase CPU utilization in multiprogramming, a memory management scheme known as swapping can be used. Swapping is the process of bringing a process into main memory and then temporarily copying it out to disk after it has run for a while. Its purpose is to move data between the hard disk and RAM so that application programs can use it; swapping is used only when the needed data is not already in RAM. Although swapping degrades system performance, it allows larger processes, and more of them, to run concurrently, which is why it is sometimes described as a technique for memory compaction. The CPU scheduler determines which processes are swapped in and which are swapped out. Consider a multiprogramming environment that employs a priority-based scheduling algorithm: when a high-priority process enters the input queue, a low-priority process is swapped out so the high-priority process can be loaded and executed. When the high-priority process terminates, the low-priority process is swapped back into memory to continue its execution. The figure below shows the swapping process in an operating system:

Swapping is subdivided into two operations: swap-in and swap-out.
• Swap-out moves a process from RAM to the hard disk.
• Swap-in transfers a process from the hard disk back into main memory (RAM).
Advantages
• When main memory is scarce, some processes would otherwise have to wait a long time; with swapping, processes do not have to wait as long for execution on the CPU.
• It makes better use of main memory.
• Using only a single main memory plus a swap partition, the CPU can run multiple processes.
• The concept of virtual memory builds on swapping and exploits the idea more fully. Swapping is also useful in priority-based scheduling, where it lets high-priority processes run sooner.
Disadvantages
• If main memory is scarce, the user is executing many processes, and the system suddenly loses power, the data of the processes that were taking part in swapping may be lost.
• The number of page faults may increase.
• Processing performance is lowered.
• In a single-tasking operating system, only one process occupies the user program area of memory and remains there until it completes.
In a multitasking operating system, when all of the active processes cannot fit into main memory together, a process is swapped out of main memory so that other processes can enter it.
2. Contiguous Memory Allocation:
Allocating memory space to software processes is referred to as memory allocation. Memory is a large array of bytes. Contiguous and non-contiguous memory allocation are the two basic types of memory allocation. Contiguous memory allocation places a process in a single memory region. Non-contiguous memory allocation, by contrast, distributes the process across many locations in different sections of memory.

Contiguous Memory Allocation: What Is It?


• Contiguous memory allocation is one of the operating system's memory allocation methods. What, however, is memory allocation? A program or process requires memory space in order to run, so a process must be given an amount of memory that corresponds to its needs. This procedure is called memory allocation.
• Contiguous memory allocation is one such strategy. As the name suggests, we allocate a contiguous block of memory to each process: whenever a process requests to be loaded into main memory, we assign it a continuous segment of free memory matching its size.
Techniques for Contiguous Memory Allocation
• A single contiguous block of memory is assigned to the process, sized according to the needs of the memory request.
• One way to do this is to create fixed-size memory partitions and designate a single process to each partition. The degree of multiprogramming is then limited by the number of fixed partitions in memory.
• This form of allocation also produces internal fragmentation. Consider the scenario where a process is given a fixed-size memory block that is slightly larger than it needs: the leftover memory space in the block is internal fragmentation. A partition becomes available for another process once the process inside it has completed.
In the variable partitioning scheme, the OS keeps a table that lists which memory partitions are free
and which are used by processes. Contiguous memory allocation reduces address translation
overheads, expediting process execution.

According to the contiguous memory allocation technique, if a process needs to be given space in the
memory, we must give it a continuous empty block of space to reside in. There are two ways to allocate
this:

o Fixed-size Partitioning Method

o Flexible Partitioning Method

Let's examine both of these strategies in depth, along with their benefits and drawbacks.

Fixed-size Partitioning Method

In this method of contiguous memory allocation, each process is given a fixed-size contiguous block in main memory. The entire memory is partitioned into contiguous blocks of fixed size, and each time a process enters the system it receives one of the available blocks, regardless of its own size: every process gets a block of the same size. Static partitioning is another name for this approach.

In the figure above, three processes in the input queue require memory space. Because we are using the fixed-size partition technique, memory is divided into fixed-size 5 MB blocks. The first process, 3 MB in size, is given a 5 MB block, as is the 4 MB process. The second process, 1 MB in size, is also given a 5 MB block. The size of the process does not matter: each is assigned the same fixed-size memory block.

Under this scheme, the size chosen for each block determines how many contiguous blocks the memory is partitioned into, and this in turn determines how many processes can remain in main memory at once.
The degree of multiprogramming refers to the number of processes that can run concurrently
in memory. Therefore, the number of blocks formed in the RAM determines the system's level
of multiprogramming.

Advantages

A fixed-size partition system has the following benefits:

o This strategy is easy to employ because each block is the same size. Now all that is left to
do is allocate processes to the fixed memory blocks that have been divided up.
o It is simple to keep track of how many memory blocks are still available, which determines
how many further processes can be allocated memory.
o This approach can be used in a system that requires multiprogramming since numerous
processes can be maintained in memory at once.

Disadvantages

Although the fixed-size partitioning strategy offers numerous benefits, there are a few
drawbacks as well:

o We cannot allocate space to a process whose size exceeds the block size, since block sizes are fixed.
o The degree of multiprogramming is determined by the block size: only as many processes can be in memory simultaneously as there are blocks.
o If a block is larger than the process assigned to it, the leftover space in the block is wasted. That free space could otherwise have served another process.
Flexible Partitioning Method

No fixed blocks or memory partitions are created in this style of contiguous memory allocation. Instead, each process is given a variable-sized block according to its needs: whenever a new process requests memory, it is allocated that much space, provided it is available. Each block's size is therefore determined by the requirements of the process that uses it.

In the diagram above there are no fixed-size partitions. Instead, the first process is given just 3 MB of memory because that is all it requires. The remaining three processes are likewise given only the amount of space they need.

This method is also known as dynamic partitioning because the blocks' sizes are flexible and
determined as new processes start.

Advantages

A variable-size partitioning system has the following benefits:

o There is no internal fragmentation because the processes are given blocks of space according to
their needs. Therefore, this technique does not waste RAM.
o The number of processes that can run simultaneously depends on how many processes are in memory and how much space each one occupies. The degree of multiprogramming is therefore dynamic, varying with the situation.
o Even a large process can be given space because there are no blocks with set sizes.

Disadvantages

Despite the variable-size partition scheme's many benefits, there are a few drawbacks as well:

o Because partition sizes change dynamically, a variable-size partition scheme is challenging to implement.

o It is difficult to keep track of the processes and the remaining free memory space.

Techniques for Contiguous Memory Allocation Input Queues

So far we have examined two contiguous memory allocation strategies. But what happens when a fresh process needs to be given a place in main memory? How is the block or segment it receives chosen?

Because processes are assigned continuous blocks of memory, main memory tends to fill up. When a process finishes, however, it leaves behind an empty block, termed a hole, into which a new process can potentially be placed. Main memory therefore contains both processes and holes, and any of these holes may be assigned to a newly arriving process.

First-Fit

This is a fairly straightforward technique where we start at the beginning and assign the first
hole, which is large enough to meet the needs of the process. The first-fit technique can also be
applied so that we can pick up where we left off in our previous search for the first-fit hole.

Best-Fit

This greedy method allocates the smallest hole that meets the needs of the process; its goal is to minimise the memory that would otherwise be lost to fragmentation. To select the best match for the process without wasting memory, we must first sort the holes according to their sizes.

Worst-Fit

This strategy is the opposite of Best-Fit. Once the holes are sorted by size, the largest hole is assigned to the incoming process. The rationale is that the space left over in such a large hole will itself be big enough to house additional processes, rather than becoming an unusably small fragment.
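The three hole-selection strategies are easy to express in code. The following Python sketch (illustrative only; the hole sizes are made up) picks a hole index for a request under each policy, returning None when no hole is large enough:

```python
def first_fit(holes, size):
    # scan from the start; take the first hole big enough
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    # smallest hole that still fits the request
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    # largest hole, so the leftover stays usable
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]      # free hole sizes in KB
print(first_fit(holes, 212))           # 1 -> the 500 KB hole
print(best_fit(holes, 212))            # 3 -> the 300 KB hole
print(worst_fit(holes, 212))           # 4 -> the 600 KB hole
```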
Contiguous Memory Allocation's advantages and disadvantages

Contiguous memory allocation has a range of benefits and drawbacks. The following are a few
benefits and drawbacks:

Advantages

o The number of memory blocks remaining, which affects how many further processes can be given
memory space, is easy to keep track of.
o Contiguous allocation gives good read performance, since an entire file can be read from the disk in a single operation.
o Contiguous allocation works well and is easy to set up.

Disadvantages

o It suffers from fragmentation: as files are created and deleted, the free space is left scattered in holes.
o To choose a hole of the proper size when creating a new file, the file's final size must be known in advance.
o Once the disk is full, the leftover space in the holes must be compacted or reused.

Conclusion
o When a process is brought into the main memory to be executed, contiguous memory allocation
allocates contiguous blocks of memory to that process.
o There are two methods for allocating contiguous memory:
o Fixed-size Partitioning: each process is given a fixed-size contiguous block of main memory.
o Variable-size Partitioning: space is allocated depending on the needs of each process; there are no fixed-size blocks.
o There are three ways to give an entering process a hole:
o First-Fit: assign the process to the first hole that is large enough.
o Best-Fit: assign the smallest hole that fulfils the process's requirements.
o Worst-Fit: give the entering process the biggest hole available.

3. Paging:

In Operating Systems, Paging is a storage mechanism used to retrieve processes from the
secondary storage into the main memory in the form of pages.

The main idea behind paging is to divide each process into pages; main memory is likewise divided into frames.
One page of a process is stored in one frame of memory. The pages can be stored at different locations in memory, but the preference is always to find contiguous frames (holes).

Pages of the process are brought into the main memory only when they are required otherwise
they reside in the secondary storage.

Different operating systems define different frame sizes, but within a system every frame is of equal size. Since pages are mapped one-to-one onto frames in paging, the page size must be the same as the frame size.
Example

Let us consider a main memory of 16 KB and a frame size of 1 KB: main memory is divided into a collection of 16 frames of 1 KB each.

There are 4 processes in the system, P1, P2, P3 and P4, of 4 KB each. Each process is divided into pages of 1 KB so that one page can be stored in one frame.

Initially, all the frames are empty therefore pages of the processes will get stored in the
contiguous way.

Frames, pages and the mapping between the two is shown in the image below.
Let us assume that P2 and P4 move to the waiting state after some time. Eight frames then become empty, and other pages can be loaded in their place. The process P5, of size 8 KB (8 pages), is waiting in the ready queue.

Given that we have 8 non-contiguous frames available in memory, and that paging provides the flexibility of storing a process in different places, we can load the pages of process P5 into the frames vacated by P2 and P4.

When a page is to be accessed by the CPU by using the logical address, the operating system needs to
obtain the physical address to access that page physically.

The logical address has two parts.

1. Page Number
2. Offset

The memory management unit (MMU) converts the page number into a frame number.
Example

Considering the above image, let's say the CPU demands the 10th word of the 4th page of process P3. Since page 4 of process P3 is stored at frame 9, the physical address is formed from frame number 9 and offset 10: the 10th word of the 9th frame is returned.
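As a small illustration of this translation (the page table below is hypothetical, matching the example's page 4 to frame 9 mapping), the physical address is frame number times frame size plus offset:

```python
FRAME_SIZE = 1024  # 1 KB frames, as in the example above

def physical_address(page_table, page_num, offset):
    frame = page_table[page_num]        # page-to-frame lookup done by the MMU
    return frame * FRAME_SIZE + offset

ptable = {4: 9}                         # hypothetical: page 4 resides in frame 9
print(physical_address(ptable, 4, 10))  # 9*1024 + 10 = 9226
```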

4. Segmentation:

In Operating Systems, Segmentation is a memory management technique in which the memory is divided
into the variable size parts. Each part is known as a segment which can be allocated to a process.

The details about each segment are stored in a table called a segment table. Segment table is stored in
one (or many) of the segments.

The segment table mainly contains two pieces of information about each segment:

Base: It is the base address of the segment

Limit: It is the length of the segment.

Why is Segmentation required?

Until now, we have been using paging as our main memory management technique. Paging is closer to the operating system than to the user. It divides all processes into pages regardless of the fact that a process may have related parts, such as functions, that ought to be loaded into the same page.

The operating system doesn't care about the user's view of the process. It may divide the same function across different pages, and those pages may or may not be loaded into memory at the same time, which decreases the efficiency of the system.

It is better to have segmentation which divides the process into the segments. Each segment contains
the same type of functions such as the main function can be included in one segment and the library
functions can be included in the other segment.

With the help of segment map tables and hardware assistance, the operating system can easily translate
a logical address into physical address on execution of a program.
The segment number is used as an index into the segment table. The limit of the respective segment is compared with the offset: if the offset is less than the limit, the address is valid; otherwise an error is raised because the address is invalid.

In the case of valid addresses, the base address of the segment is added to the offset to get the physical
address of the actual word in the main memory.

The above figure shows how address translation is done in case of segmentation.
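A minimal sketch of this check-and-add translation, using a hypothetical segment table of (base, limit) pairs:

```python
def translate(segment_table, seg_num, offset):
    base, limit = segment_table[seg_num]
    if offset >= limit:
        raise MemoryError("trap: invalid address (offset beyond segment limit)")
    return base + offset                # valid: physical address = base + offset

segs = {0: (1400, 1000), 1: (6300, 400)}   # hypothetical (base, limit) entries
print(translate(segs, 1, 53))              # 6300 + 53 = 6353
```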

Advantages of Segmentation
1. No internal fragmentation
2. Average Segment Size is larger than the actual page size.
3. Less overhead
4. It is easier to relocate segments than entire address space.
5. The segment table is of lesser size as compared to the page table in paging.

Disadvantages
1. It can suffer from external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized partitions.
3. It requires costly memory management algorithms.
Difference between Paging and Segmentation

Sr No. | Paging | Segmentation
1 | Non-contiguous memory allocation | Non-contiguous memory allocation
2 | Divides the program into fixed-size pages | Divides the program into variable-size segments
3 | The OS is responsible for the division | The compiler is responsible for the division
4 | Faster than segmentation | Slower than paging
5 | Closer to the operating system | Closer to the user
6 | Suffers from internal fragmentation | Suffers from external fragmentation
7 | No external fragmentation | No internal fragmentation
8 | Logical address is divided into page number and page offset | Logical address is divided into segment number and segment offset
9 | The page table maintains the page information | The segment table maintains the segment information
10 | A page table entry holds the frame number and flag bits describing the page | A segment table entry holds the segment's base address and protection bits

5. Structure of the Page Table:

This section covers some of the most common techniques used for structuring the page table.

The data structure used by the virtual memory system of an operating system to store the mapping between logical and physical addresses is known as the page table.

As noted earlier, the logical address generated by the CPU is translated into a physical address with the help of the page table. The page table thus provides the corresponding frame number (the base address of the frame) where each page is stored in main memory.
The above diagram shows the paging model of Physical and logical memory.

Characteristics of the Page Table

Some of the characteristics of the Page Table are as follows:

It is stored in the main memory.

Generally, the number of entries in the page table equals the number of pages into which the process is divided.

The page-table base register (PTBR) holds the base address of the page table of the current process.

Each process has its own independent page table.

Techniques used for Structuring the Page Table

Some of the common techniques that are used for structuring the Page table are as follows:

1. Hierarchical Paging
2. Hashed Page Tables
3. Inverted Page Tables

Let us cover these techniques one by one.


Hierarchical Paging
Another name for hierarchical paging is multilevel paging.

• There may be cases where the page table is too big to fit in a contiguous space, so we use a hierarchy with several levels.
• In this type of paging, the logical address space is broken up across multiple levels of page tables.
• Hierarchical paging is one of the simplest techniques; two-level and three-level page tables are typical.

Two Level Page Table

Consider a system with a 32-bit logical address space and a page size of 1 KB. The logical address is divided into:

a page number consisting of 22 bits, and

a page offset consisting of 10 bits.

Since the page table itself is paged, the 22-bit page number is further divided into:

a page number (p1) consisting of 12 bits, and

a page offset (p2) consisting of 10 bits.

Thus the logical address is structured as follows:

In the diagram above,

p1 is an index into the outer page table, and

p2 is the displacement within the page of the inner page table.

Because address translation works from the outer page table inward, this scheme is also known as a forward-mapped page table.

The figure below shows the address translation scheme for a two-level page table.
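To make the bit layout concrete, here is a small sketch (assuming the 12/10/10 split above) that extracts p1, p2 and the offset from a 32-bit logical address:

```python
def split_two_level(vaddr):
    offset = vaddr & 0x3FF          # low 10 bits: offset within the 1 KB page
    p2 = (vaddr >> 10) & 0x3FF      # next 10 bits: index into the inner page table
    p1 = (vaddr >> 20) & 0xFFF      # top 12 bits: index into the outer page table
    return p1, p2, offset

print(split_two_level(0x12345678))  # (291, 277, 632)
```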
Three Level Page Table

For a system with a 64-bit logical address space, a two-level paging scheme is no longer appropriate. Suppose the page size in this case is 4 KB: even with a two-level scheme, the outer page table would still be enormous. To avoid such a large table, the outer page table is divided further, resulting in a three-level page table.

Hashed Page Tables


This approach handles address spaces larger than 32 bits.

• The virtual page number is hashed into the page table.
• Each entry of the page table contains a chain of elements that hash to the same location.
• Each element mainly consists of:
• the virtual page number,
• the value of the mapped page frame, and
• a pointer to the next element in the linked list.
• The figure below shows the address translation scheme of the hashed page table.

• The virtual page numbers in the chain are compared while searching for a match; if a match is found, the corresponding physical frame is extracted.
• For 64-bit address spaces, a common variation of this scheme uses clustered page tables.
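A hashed page table is essentially a hash map with chaining. Below is a minimal, hypothetical sketch (the bucket count and method names are invented for illustration):

```python
class HashedPageTable:
    def __init__(self, n_buckets):
        self.n_buckets = n_buckets
        self.buckets = [[] for _ in range(n_buckets)]   # each bucket is a chain

    def map(self, vpn, frame):
        # add a (virtual page number, frame) element to the chain for this hash slot
        self.buckets[hash(vpn) % self.n_buckets].append((vpn, frame))

    def lookup(self, vpn):
        # walk the chain, comparing virtual page numbers until a match is found
        for v, frame in self.buckets[hash(vpn) % self.n_buckets]:
            if v == vpn:
                return frame
        raise KeyError("page fault: page %d not mapped" % vpn)

pt = HashedPageTable(64)
pt.map(0x4D2, 9)
print(pt.lookup(0x4D2))   # 9
```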

Clustered Page Tables

• These are similar to hashed page tables, except that each entry refers to several pages (say, 16) rather than one.
• Mainly used for sparse address spaces where memory references are non-contiguous and
scattered

Inverted Page Tables


• The inverted page table combines a page table and a frame table into a single data structure.
• There is one entry for each real page (frame) of memory.
• Each entry consists of the virtual address of the page stored in that real memory location, together with information about the process that owns the page.
• Although this technique decreases the memory needed to store the page tables, it increases the time needed to search the table whenever a page reference occurs.

Given below figure shows the address translation scheme of the Inverted Page Table:
In this scheme we need to keep track of the process id in each entry, because many processes may use the same logical addresses.

Also, many entries can map to the same index in the page table after going through the hash function, so chaining is used to handle such collisions.

6. Virtual Memory:
A computer can address more memory than the amount physically installed on the system. This
extra memory is actually called virtual memory and it is a section of a hard disk that's set up to
emulate the computer's RAM.

The main visible advantage of this scheme is that programs can be larger than physical
memory. Virtual memory serves two purposes. First, it allows us to extend the use of physical
memory by using disk. Second, it allows us to have memory protection, because each virtual
address is translated to a physical address.

The following are situations in which the entire program does not need to be fully loaded into main memory:

User-written error-handling routines are used only when an error occurs in the data or computation.

Certain options and features of a program may be used only rarely.

Many tables are assigned a fixed amount of address space even though only a small portion of the table is actually used.

The ability to execute a program that is only partially in memory would confer many benefits:

Fewer I/O operations would be needed to load or swap each user program into memory.

A program would no longer be constrained by the amount of physical memory that is available.

Because each user program could take less physical memory, more programs could run at the same time, with a corresponding increase in CPU utilization and throughput.

In modern microprocessors intended for general-purpose use, a memory management unit, or MMU, is built into the hardware. The MMU's job is to translate virtual addresses into physical addresses. A basic example is given below.
Virtual memory is commonly implemented by demand paging. It can also be implemented in a
segmentation system. Demand segmentation can also be used to provide virtual memory.

7. Background Demand Paging


A demand paging system is similar to a paging system with swapping, where processes reside in secondary memory and pages are loaded only on demand, not in advance. When a context switch occurs, the operating system does not copy any of the old program's pages out to disk or any of the new program's pages into main memory. Instead, it begins executing the new program after loading its first page and fetches the program's remaining pages as they are referenced.
While executing a program, if the program references a page that is not available in main memory because it was swapped out a little while ago, the processor treats this invalid memory reference as a page fault and transfers control from the program to the operating system, which demands the page back into memory.

Advantages
Following are the advantages of Demand Paging −
Large virtual memory.
More efficient use of memory.
There is no limit on degree of multiprogramming.

Disadvantages
Number of tables and the amount of processor overhead for handling page interrupts are greater than
in the case of the simple paged management techniques.

Page Replacement Algorithm


Page replacement algorithms are the techniques by which an operating system decides which memory pages to swap out (write to disk) when a page of memory needs to be allocated. Replacement is needed whenever a page fault occurs and no free page can be used for the allocation, either because none is available or because the number of free pages is lower than required.
When a page that was selected for replacement and paged out is referenced again, it has to be read back in from disk, which requires waiting for I/O completion. This determines the quality of a page replacement algorithm: the less time spent waiting for page-ins, the better the algorithm.
A page replacement algorithm looks at the limited information about page accesses provided by the hardware and tries to select which pages to replace so as to minimize the total number of page misses, while balancing this against the cost in primary storage and processor time of the algorithm itself.
There are many different page replacement algorithms. We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults.

Reference String
The string of memory references is called a reference string. Reference strings are generated artificially or by tracing a given system and recording the address of each memory reference. The latter choice produces a large amount of data, about which we note two things.
For a given page size, we need to consider only the page number, not the entire address.
If we have a reference to a page p, then any immediately following references to page p will never cause a page fault: page p will be in memory after the first reference.
For example, consider the following sequence of addresses − 123, 215, 600, 1234, 76, 96.
If the page size is 100, the page numbers are 1, 2, 6, 12, 0, 0; collapsing the immediately repeated reference to page 0 gives the reference string 1, 2, 6, 12, 0.
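The conversion from an address trace to a reference string is mechanical, as this short sketch shows:

```python
def reference_string(addresses, page_size):
    pages = [a // page_size for a in addresses]   # keep only page numbers
    out = []
    for p in pages:
        if not out or out[-1] != p:               # drop immediate repeats: they never fault
            out.append(p)
    return out

print(reference_string([123, 215, 600, 1234, 76, 96], 100))  # [1, 2, 6, 12, 0]
```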

First In First Out (FIFO) algorithm


Oldest page in main memory is the one which will be selected for replacement.
Easy to implement, keep a list, replace pages from the tail and add new pages at the head.

Optimal Page algorithm


An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms. An optimal
page-replacement algorithm exists, and has been called OPT or MIN.
Replace the page that will not be used for the longest period of time. Use the time when a page is to be
used.
Least Recently Used (LRU) algorithm
Page which has not been used for the longest time in main memory is the one which will be selected for
replacement.
Easy to implement, keep a list, replace pages by looking back into time.

Page Buffering algorithm


To let a process start quickly, keep a pool of free frames.
On a page fault, select a page to be replaced.
Write the new page into a frame from the free pool, mark the page table, and restart the process.
Then write the dirty page out to disk and place the frame holding the replaced page back in the free pool.

Least frequently Used(LFU) algorithm


• The page with the smallest count is the one which will be selected for replacement.
• This algorithm suffers from the situation in which a page is used heavily during the initial phase
of a process, but then is never used again.
Most frequently Used(MFU) algorithm
• This algorithm is based on the argument that the page with the smallest count was probably just
brought in and has yet to be used.

Background Demand Paging:


• Demand paging dictates that pages should be brought into memory only if the executing process demands them. This is often referred to as lazy evaluation, as only the pages demanded by the process are swapped from secondary storage into main memory. Contrast this with pure swapping, where all memory for a process is swapped from secondary storage to main memory at process startup.
• Commonly, this is achieved with a page table implementation. The page table maps logical memory to physical memory and uses a valid–invalid bit to mark each page: a valid page currently resides in main memory, while an invalid page currently resides in secondary memory. When a process tries to access a page, the following steps are generally followed (see the sketch after this list):
• Attempt to access the page.
• If the page is valid (in memory), continue processing the instruction as normal.
• If the page is invalid, a page-fault trap occurs.
• Check whether the memory reference is a valid reference to a location in secondary memory. If not, the process is terminated (illegal memory access). Otherwise, the required page must be paged in.
• Schedule a disk operation to read the desired page into main memory.
• Restart the instruction that was interrupted by the operating system trap.
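The following minimal sketch mirrors those steps (the data structures and names are invented for illustration, and it assumes a free frame is always available, so no replacement is needed):

```python
from dataclasses import dataclass

@dataclass
class PTE:
    frame: int = -1
    valid: bool = False        # valid = page is resident in main memory

def access(page_table, free_frames, vpn):
    pte = page_table[vpn]
    if pte.valid:
        return pte.frame       # hit: continue the instruction as normal
    # page-fault trap: "read" the page from disk into a free frame,
    # update the page table, then restart the access
    pte.frame = free_frames.pop()
    pte.valid = True
    return pte.frame

table = {0: PTE(), 1: PTE()}
print(access(table, [3, 7], 0))   # fault: page 0 loaded into frame 7
print(access(table, [3], 0))      # hit: frame 7 again
```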
Advantages
Demand paging, as opposed to loading all pages immediately:

• Only loads pages that are demanded by the executing process.


• As there is more space in main memory, more processes can be loaded, reducing the context
switching time, which utilizes large amounts of resources.
• Less loading latency occurs at program startup, as less information is accessed from secondary
storage and less information is brought into main memory.
• As main memory is expensive compared to secondary memory, this technique helps significantly
reduce the bill of material (BOM) cost in smart phones for example. Symbian OS had this feature.
Disadvantages

Individual programs face extra latency when they access a page for the first time.
Low-cost, low-power embedded systems may not have a memory management unit that supports page replacement.
Memory management with page replacement algorithms becomes slightly more complex.
There are possible security risks, including vulnerability to timing attacks; see Percival, Colin (2005), "Cache Missing for Fun and Profit".
Thrashing may occur due to repeated page faults.
8. Copy on Write:
Copy on Write, or simply COW, is a resource management technique. One of its main uses is in the implementation of the fork system call, where it shares the virtual memory (pages) of the OS.
In UNIX-like operating systems, the fork() system call creates a duplicate of the parent process, called the child process.
The idea behind copy-on-write is that when a parent process creates a child process, both processes initially share the same pages in memory, and these shared pages are marked copy-on-write. If either process tries to modify a shared page, only then is a copy of that page created, and the modification is made on the copy, leaving the other process unaffected.
Suppose a process P creates a new process Q, and then process P modifies page 3.
The figures below show what happens before and after process P modifies page 3.
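On a UNIX-like system this behaviour can be observed with Python's os.fork (a small Unix-only sketch, not a production pattern): after the fork the large list is shared copy-on-write, and the child's write affects only its own private copy.

```python
import os

data = list(range(1_000_000))   # large structure; shared copy-on-write after fork

pid = os.fork()
if pid == 0:
    # Child: this write triggers private copies of only the touched pages.
    data[0] = -1
    print("child sees:", data[0])    # -1
    os._exit(0)
else:
    os.wait()
    print("parent sees:", data[0])   # 0 -- the parent's pages are untouched
```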

9. Page Replacement:

While a user process is executing, a page fault occurs. The operating system determines where the desired page resides on disk, but then finds that there are no free frames on the free-frame list: all memory is in use (see the figure below).
Need for page replacement.

The operating system has several options at this point. It could terminate the user process.
However, demand paging is the operating system's attempt to improve the computer system's
utilization and throughput. Users should not be aware that their processes are running on a paged
system—paging should be logically transparent to the user. So this option is not the best choice.

The operating system could swap out a process, freeing all its frames and reducing the level of
multiprogramming. This option is a good one in certain circumstances; Here, we discuss the most
common solution: page replacement.

Basic Page Replacement


Page replacement takes the following approach. If no frame is free, we find one that is not currently being used and free it. We can free a frame by writing its contents to swap space and changing the page table (and all other tables) to indicate that the page is no longer in memory, as the figure below shows.
We can now use the freed frame to hold the page for which the process faulted. We modify the page-fault service routine to include page replacement:
1. Find the location of the desired page on the disk.
2. Find a free frame:
   a. If there is a free frame, use it.
   b. If there is no free frame, use a page-replacement algorithm to select a victim frame.
   c. Write the victim frame to the disk; change the page and frame tables accordingly.
3. Read the desired page into the (newly) free frame; change the page and frame tables.
4. Restart the user process.

Page-Replacement Algorithms
1. FIFO Page Replacement
The simplest page-replacement algorithm is a first-in, first-out (FIFO) algorithm. A FIFO replacement
algorithm associates with each page the time when that page was brought into memory. When a page
must be replaced, the oldest page is chosen. Notice that it is not strictly necessary to record the time
when a page is brought in. We can create a FIFO queue to hold all pages in memory. We replace the
page at the head of the queue. When a page is brought into memory, we insert it at the tail of the
queue.
For our example reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1, our three frames are initially empty. The first three references (7, 0, 1) cause page faults and are brought into the empty frames. The next reference (2) replaces page 7, because page 7 was brought in first. Since 0 is the next reference and 0 is already in memory, there is no fault for this reference. The first reference to 3 results in the replacement of page 0, since it is now first in line. Because of this replacement, the next reference, to 0, will fault. Page 1 is then replaced by page 0. This process continues as shown in the figure below. Every time a fault occurs, we show which pages are in our three frames. There are fifteen faults altogether.
FIFO page-replacement algorithm.

The FIFO page-replacement algorithm is easy to understand and program. However, its performance
is not always good. On the one hand, the page replaced may be an initialization module that was
used a long time ago and is no longer needed. On the other hand, it could contain a heavily used
variable that was initialized early and is in constant use.
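A short simulation (an illustrative sketch) reproduces the fifteen faults for this reference string with three frames:

```python
from collections import deque

def fifo_faults(refs, n_frames):
    frames = deque()                 # oldest page at the left
    faults = 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()     # evict the page that arrived first
            frames.append(p)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(refs, 3))          # 15
```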

2. Optimal Page Replacement


Replace the page that will not be used for the longest period of time. Use of this page-replacement
algorithm guarantees the lowest possible page fault rate for a fixed number of frames.
For example, on our sample reference string, the optimal page-replacement algorithm would yield
nine page faults, as shown in Figure below.

Optimal page-replacement algorithm.


The first three references cause faults that fill the three empty frames. The reference to page 2 replaces
page 7, because page 7 will not be used until reference 18, whereas page 0 will be used at 5, and page
1 at 14. The reference to page 3 replaces page 1, as page 1 will be the last of the three pages in
memory to be referenced again.
With only nine page faults, optimal replacement is much better than a FIFO algorithm, which results
in fifteen faults. (If we ignore the first three, which all algorithms must suffer, then optimal replacement
is twice as good as FIFO replacement.) In fact, no replacement algorithm can process this reference
string in three frames with fewer than nine faults.
Unfortunately, the optimal page-replacement algorithm is difficult to implement, because it requires
future knowledge of the reference string.
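The same simulation harness extends to OPT by evicting the page whose next use lies farthest in the future (a sketch; real systems cannot do this lookahead, which is exactly the point made above):

```python
def opt_faults(refs, n_frames):
    frames = []
    faults = 0
    for i, p in enumerate(refs):
        if p in frames:
            continue
        faults += 1
        if len(frames) < n_frames:
            frames.append(p)
            continue
        def next_use(q):
            # index of q's next reference, or infinity if never used again
            try:
                return refs.index(q, i + 1)
            except ValueError:
                return float('inf')
        victim = max(frames, key=next_use)       # farthest next use
        frames[frames.index(victim)] = p
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(opt_faults(refs, 3))   # 9
```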

3. LRU Page Replacement

The key distinction between the FIFO and OPT algorithms is that the FIFO algorithm uses the time
when a page was brought into memory, whereas the OPT algorithm uses the time when a page is to
be used. If we use the recent past as an approximation of the near future, then we can replace the
page that has not been used for the longest period of time. This approach is the

least-recently-used (LRU) algorithm.

LRU replacement associates with each page the time of that page’s last use. When a page must be
replaced, LRU chooses the page that has not been used for the longest period of time. We can think
of this strategy as the optimal page-replacement algorithm looking backward in time, rather than
forward. (Strangely, if we let SR be the reverse of a reference string S, then the page-fault
rate for the OPT algorithm on S is the same as the page-fault rate for the OPT algorithm on SR.
Similarly, the page-fault rate for the LRU algorithm on S is the same as the page-fault rate for the LRU
algorithm on SR.) The result of applying LRU replacement to our example reference string is shown in
Figure below:

LRU page-replacement algorithm.

The LRU algorithm produces twelve faults. Notice that the first five faults are the same as those for
optimal replacement. When the reference to page 4 occurs, however, LRU replacement sees that, of
the three frames in memory, page 2 was used least recently. Thus, the LRU algorithm replaces page 2,
not knowing that page 2 is about to be used. When it then faults for page 2, the LRU algorithm
replaces page 3, since it is now the least recently used of the three pages in memory. Despite these
problems, LRU replacement with twelve faults is much better than FIFO replacement with fifteen.
The LRU policy is often used as a page-replacement algorithm and is considered to be good. The
major problem is how to implement LRU replacement. An LRU page-replacement algorithm may
require substantial hardware assistance.
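LRU can likewise be simulated by keeping pages ordered by recency of use (a sketch; as noted above, real implementations need hardware support such as counters or a stack):

```python
def lru_faults(refs, n_frames):
    frames = []                      # least recently used at the front
    faults = 0
    for p in refs:
        if p in frames:
            frames.remove(p)         # refresh p's recency below
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.pop(0)        # evict the least recently used page
        frames.append(p)             # p is now the most recently used
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(refs, 3))           # 12
```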
10. Allocation of Frames:

The main memory of the system is divided into frames.

The OS has to allocate a sufficient number of frames for each process and to do so, the OS uses various
algorithms.

The five major ways to allocate frames are as follows:


1. Proportional frame allocation
The proportional frame allocation algorithm allocates frames in proportion to the size each process needs for execution, out of the total number of frames the memory has (see the sketch after this list).
Its only disadvantage is that it does not allocate frames based on priority; that situation is addressed by priority frame allocation.
• Advantage over equal allocation: In systems with processes of varying sizes, it does not make much sense to give each process an equal number of frames, since allocating a large number of frames to a small process wastes many allocated but unused frames. Proportional allocation avoids this.
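A sketch of the underlying arithmetic, allocating roughly a_i = (s_i / S) * m frames to process i, where s_i is the process size, S the total size, and m the frame count (the sizes and frame count below are hypothetical):

```python
def proportional_allocation(sizes, m):
    # a_i = s_i / S * m, truncated; any leftover frames can be handed out separately
    S = sum(sizes)
    return [s * m // S for s in sizes]

# e.g. two processes of 10 and 127 pages sharing 62 frames:
print(proportional_allocation([10, 127], 62))   # [4, 57]
```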

2. Priority frame allocation


Priority frame allocation allocates frames based on the priority of the processes and the number of
frame allocations.
If a process is of high priority and needs more frames then the process will be allocated that many
frames. The allocation of lower priority processes occurs after it.
• Advantage: All the processes share the available frames according to their needs, rather than
equally.

3. Global replacement allocation


When there is a page fault in the operating system, then the global replacement allocation takes care
of it.
The process with lower priority can give frames to the process with higher priority to avoid page faults.
• Advantage: Does not hinder the performance of processes and hence results in greater system
throughput.
• Disadvantage: The page-fault rate of a process cannot be controlled solely by the process itself: the pages in memory for a process depend on the paging behavior of other processes as well.

4. Local replacement allocation


In local replacement allocation, a process selects replacement frames only from its own allocated set.
A process therefore does not influence the behavior of other processes, as it does under global replacement allocation.
• Advantage: The pages in memory for a particular process and the page fault ratio is affected by
the paging behavior of only that process.

Disadvantage: A low priority process may hinder a high priority process by not making its frames
available to the high priority process.

5. Equal frame allocation


In equal frame allocation, the available frames are distributed equally among the processes in the operating system.
Its disadvantage is that a process may require more frames than its equal share in order to execute, while only a set number of frames is available.
11. Thrashing:
Thrashing is a term used in computer science to describe the poor performance of a virtual memory system when the same pages are loaded repeatedly, owing to a shortage of main memory to hold them.
Thrashing happens when a computer's virtual memory resources are over-utilized, resulting in a persistent state of paging and page faults that inhibits most application-level activity. It causes the computer's performance to decline or collapse, and the scenario can last indefinitely unless the user stops some running applications or active processes to free up virtual memory resources.
What is Thrashing in OS
Thrashing in an operating system is a phenomenon that occurs when virtual memory is employed. It arises when the computer's virtual memory rapidly exchanges pages with the hard drive, to the exclusion of most application-level operations. As main memory is depleted, more and more pages must be swapped into and out of virtual memory.
To know more about thrashing in OS, first, we need to know about page faults and swapping.
• Page fault: A page fault is a type of interrupt that occurs when a program attempts to access a
page of memory that is not currently mapped to physical memory, leading to a disk I/O operation.
• Swapping: Swapping results in a high rate of hard drive access. Thrashing can last for a long
time if the underlying problem is not addressed. Thrashing in operating system has the ability to
cause the computer's hard disk to fail completely.

Thus, thrashing in operating system is sometimes referred to as disk thrashing.


• The graphical representation of CPU utilization against the degree of multiprogramming illustrates the underlying idea:
• If a process is given too few frames, there will be too many, and too frequent, page faults. As a result the CPU performs no useful work, and CPU usage plummets drastically.
• The long-term scheduler then attempts to raise CPU usage by loading additional processes into memory, increasing the degree of multiprogramming. Unfortunately, this lowers CPU utilization even further, creating a chain reaction of more page faults followed by a further rise in the degree of multiprogramming, a phenomenon known as thrashing.

Causes of Thrashing in Operating System


• Thrashing in the operating system has an impact on the performance of the operating system's
execution.
• Thrashing causes serious performance issues in the operating system.
• We can also argue that as soon as the memory is full, the process begins to take a long time to
swap in the needed pages. Because most programs are waiting for pages, CPU utilization drops
once again.
• The high degree of multiprogramming and a shortage of frames are two of the most common
reasons for thrashing in the operating system.
Algorithms during Thrashing
In order to deal with page faults, the operating system uses either the local frames replacement
algorithm or the global frames replacement algorithm to try to bring in enough pages in the main
memory. Let's explore how various replacement strategies affect Thrashing.
Global Page Replacement
The Global Page replacement has the ability to bring any page, and once Thrashing in the operating
system is detected, it attempts to bring more pages. As a result of this, no process can acquire enough
frames, and the thrashing in the operating system will get worse. To conclude, when Thrashing in
operating system occurs, the global page replacement technique is ineffective.
Local Page Replacement
Unlike global page replacement, local page replacement chooses only pages that belong to the faulting process, so there is a chance that thrashing will be reduced. As previously demonstrated, however, local page replacement has several drawbacks of its own; it is merely an alternative to global page replacement, not a cure.
How to Overcome Thrashing?
Thrashing has an adverse effect on hard disk health and system performance. As a result, some steps
must be taken to avoid it. The following strategies can be used to solve the thrashing problem:
Upgrade the RAM size
As inadequate memory might result in disk thrashing, one approach is to upgrade the RAM. With
additional memory, your computer can perform tasks more simply and without having to work as hard.
It is, in general, the best long-term answer.
Replace programs
To overcome thrashing, we can replace heavily memory-intensive programs with less memory-intensive alternatives.
Reduce the number of active applications
If too many applications are running in the background, system resources are quickly depleted, and the resulting scarcity slows the system, which can cause thrashing. When you close an application, its resources are released, helping to prevent thrashing in the operating system to some extent.
Alter the size of the swap file
There is a probability of occurrence of thrashing also when the swap file is not properly configured in
the operating system. We can define swapping as when a page fault occurs. The operating system
attempts to retrieve that page from secondary memory and swap it with one of the pages in RAM.
Techniques to Prevent Thrashing
The Local Page replacement is superior to the Global Page replacement, although it has several
drawbacks and is not always useful. As a result, here are some more strategies for dealing with
Thrashing.
Working-Set Model
A process moves from locality to locality as it runs, where a locality is a set of pages that are actively used together. The working-set window is defined by the parameter Δ (delta) in this model: the working set is the collection of pages referenced in the most recent Δ page references, i.e. the pages currently in use.
If Δ is too small, it will not cover an entire locality; if Δ is too large, it may overlap several localities. The choice of window size is thus the essential feature of the model.
We calculate the working-set size, WSSi, for each process in the system and form the total demand for frames, D = ΣWSSi. Each process actively uses the pages in its working set, so process i requires WSSi frames. If the total demand exceeds the total number of available frames (D > m, where m is the total number of frames), thrashing will occur, because some processes will not get enough frames.
The operating system monitors each process's working set and allocates to it enough frames to match its working-set size. If enough spare frames remain, another process can be started. If the sum of the working-set sizes grows to exceed the total number of available frames, the OS selects a process to suspend.
The series of numbers in the diagram below represents the pages referenced by a process; with the working-set window set to nine references, the diagram shows the working sets at times t1 and t2.
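Computing a working set from a reference trace amounts to collecting the distinct pages in a sliding window. A small sketch (the trace below is hypothetical):

```python
def working_set(refs, t, delta):
    # pages referenced in the window of the last `delta` references ending at time t
    return set(refs[max(0, t - delta + 1): t + 1])

trace = [1, 2, 1, 5, 7, 7, 7, 5, 1, 6, 2, 3, 4, 1, 2, 3]
print(working_set(trace, 9, 9))   # {1, 2, 5, 6, 7}
```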

How do we compute Working Sets?


Peter J. Denning introduced the working set with a parameter T: the working set is made up of all pages referenced in the most recent T seconds. The clock algorithm can be extended to keep an idle time for each page; the working set then contains the pages whose idle times are shorter than T.
Page Fault Frequency

This is a straightforward model: we act depending on the frequency (rate) of page faults and assign frames to each process accordingly. For the page-fault rate we specify an upper bound (UB) and a lower bound (LB), and we compare the page-fault rate R of each process against these bounds.
If R > UB, we can deduce that a process requires additional frames to keep this rate under control. To
avoid thrashing, we'll need to dedicate extra frames to it. If there aren't any frames available, the
process can be paused until a sufficient number of frames becomes available. The allotted frames
should be assigned to another high-paging process once this process has been suspended.
We have more than enough frames for a process if R < LB, and some of them can be given to other
processes. We can maintain a balance between frame needs and frame allocation by using the R, UB,
and LB.
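The control rule reduces to a pair of comparisons. A sketch with assumed bounds (the UB/LB values and step size are illustrative, not prescribed):

```python
def adjust_frames(R, frames, UB=0.10, LB=0.02, step=1):
    # page-fault-frequency control: grow on a high fault rate, shrink on a low one
    if R > UB:
        return frames + step   # process needs more frames to avoid thrashing
    if R < LB:
        return frames - step   # surplus frames can be released to other processes
    return frames

print(adjust_frames(0.15, 8))   # 9  (fault rate above UB)
print(adjust_frames(0.01, 8))   # 7  (fault rate below LB)
```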
Locality Model
The locality model in thrashing is based on the observation that when a system is thrashing, a large
proportion of memory accesses are likely to be concentrated in a few pages of memory that are in high
demand. By identifying and prioritizing these pages, it may be possible to reduce the number of page
faults and alleviate the thrashing condition. This is because if the system can keep these high-demand
pages in physical memory, it will need to page them in and out less frequently, reducing the number of
disk I/O operations and improving system performance. The locality model can be applied using
techniques such as page replacement algorithms that aim to keep the most frequently accessed pages
in physical memory.
Symptoms of Thrashing in OS and How to Detect it?
Thrashing might occur in OS when the system is overloaded with excessive paging, which results in
decreased performance of the OS. Here are the different ways to detect Thrashing in OS.
• CPU usage is at a maximum while little or no useful work is being done.
• Thrashing leads to pages being swapped between main memory and disk, so disk activity increases rapidly.
• Frequent page faults are another indicator of thrashing in the OS.
Effects on System Performance and User Experience
• Thrashing in the OS has various impacts on system performance and user experience.
• It decreases the overall performance of the system through excessive CPU usage.
• Because pages are constantly swapped between memory and disk, response times to user interaction increase, degrading the user experience.
• Thrashing increases application loading time, which reduces the efficiency of the system.
• Disk activity rises rapidly, which reduces system performance. Owing to the slow performance and increased loading times of the system and its applications, the user may become frustrated, a poor user experience.
