
UNIT-3

Memory-Management Strategies:
Introduction
Swapping
Contiguous Memory Allocation
Paging
Segmentation
Virtual Memory Management:
Introduction
Demand Paging
Copy On-Write
Page replacement
Frame allocation
Thrashing
Memory-mapped files
MEMORY-MANAGEMENT STRATEGIES:

INTRODUCTION:
The term Memory can be defined as a collection of data in a specific format. It is used to store
instructions and processed data. The memory comprises a large array or group of words or bytes, each
with its own location. The primary motive of a computer system is to execute programs. These programs,
along with the information they access, should be in the main memory during execution. The CPU fetches
instructions from memory according to the value of the program counter. 
To achieve a degree of multiprogramming and proper utilization of memory, memory management is
important. Many memory management methods exist, reflecting various approaches, and the
effectiveness of each algorithm depends on the situation. The main aim of memory management is to
achieve efficient utilization of memory.  
Why Memory Management is required:
• Allocate and de-allocate memory before and after process execution.
• To keep track of the memory space used by processes.
• To minimize fragmentation issues.
• To ensure proper utilization of main memory.
• To maintain data integrity during process execution.
SWAPPING:

 Swapping is a memory management scheme in which any process can be temporarily swapped from main
memory to secondary memory so that the main memory can be made available for other processes.
 It is used to improve main memory utilization. In secondary memory, the place where the swapped-out
process is stored is called swap space.
 The purpose of swapping in an operating system is to access data present on the hard disk and bring it
into RAM so that application programs can use it.
 The thing to remember is that swapping is used only when data is not present in RAM.
 Although swapping affects the performance of the system, it helps to run larger and more numerous
processes. This is the reason why swapping is also (somewhat loosely) referred to as memory compaction.
 Swapping consists of two operations: swap-out and swap-in.

• Swap-out is a method of removing a process from RAM and adding it to the hard disk.
• Swap-in is a method of removing a program from a hard disk and putting it back into the main memory or
RAM.
Advantages of Swapping
 It helps the CPU to manage multiple processes within a single main memory.
 It helps to create and use virtual memory.
 Swapping allows the CPU to perform multiple tasks simultaneously. Therefore, processes do not have
to wait very long before they are executed.
 It improves main memory utilization.

Disadvantages of Swapping
 If the computer system loses power during substantial swapping activity, the user may lose all
information related to the program.
 If the swapping algorithm is not good, swapping can increase the number of page faults and decrease
the overall processing performance.
CONTIGUOUS MEMORY ALLOCATION
 In the Contiguous Memory Allocation, each process is contained in a single contiguous section of
memory.
 In this memory allocation, all the available memory space remains together in one place, which implies
that the freely available memory partitions are not scattered across the whole memory space.
 In contiguous memory allocation, whenever a user process requests memory, a single section of a
contiguous memory block is given to that process according to its requirement.
 Contiguous memory allocation is achieved by dividing the memory into partitions.
 The memory can be divided either into fixed-sized partitions or variable-sized partitions in
order to allocate contiguous space to user processes.
Fixed-size Partition Scheme
 This technique is also known as Static partitioning. In this scheme, the system divides the memory
into fixed-size partitions. The partitions may or may not be the same size. The size of each partition is
fixed as indicated by the name of the technique and it cannot be changed.
 In this partition scheme, each partition may contain exactly one process. There is a problem that this
technique will limit the degree of multiprogramming because the number of partitions will basically
decide the number of processes.
Example:
Let's take an example of the fixed-size partitioning scheme: we will divide a memory of size 15 KB into
fixed-size partitions:

It is important to note that these partitions are allocated to the processes as they arrive and the partition
that is allocated to the arrived process basically depends on the algorithm followed.
If there is some wastage inside the partition then it is termed Internal Fragmentation.
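The effect of internal fragmentation can be sketched with a small first-fit simulation (all partition and process sizes below are made up for illustration):

```python
# Illustrative sketch of fixed-size partitioning (hypothetical sizes in KB).
partitions = [4, 8, 8, 12]          # fixed partition sizes, set at boot
processes  = [3, 5, 2]              # arriving process sizes

allocation = {}                      # partition index -> process size
for size in processes:
    for i, part in enumerate(partitions):
        if i not in allocation and part >= size:
            allocation[i] = size     # first-fit: first free partition that fits
            break

# Internal fragmentation = unused space inside each allocated partition.
internal_frag = sum(partitions[i] - s for i, s in allocation.items())
print(allocation)                    # {0: 3, 1: 5, 2: 2}
print(internal_frag)                 # (4-3) + (8-5) + (8-2) = 10 KB
```

Each allocated partition wastes the difference between its fixed size and the process size; the sum of that wasted space is the internal fragmentation.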
Variable-size Partition Scheme
 This scheme is also known as Dynamic partitioning and came into existence to overcome the
drawback of Static partitioning, i.e. internal fragmentation. In this partitioning scheme,
allocation is done dynamically.
 The size of the partition is not declared initially. Whenever any process arrives, a partition of size equal
to the size of the process is created and then allocated to the process. Thus the size of each partition is
equal to the size of the process.
 As partition size varies according to the need of the process so in this partition scheme there is
no internal fragmentation.
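A minimal sketch of dynamic partitioning (sizes are hypothetical) shows why internal fragmentation cannot occur: each partition is carved to exactly the process size:

```python
# Sketch of dynamic partitioning: partitions are carved to exactly the
# process size, so there is no internal fragmentation (sizes are made up).
memory_size = 20                     # total memory in KB (hypothetical)
processes = [6, 4, 7]

free_start = 0
partitions = []                      # (start, size) carved per arriving process
for size in processes:
    if free_start + size <= memory_size:
        partitions.append((free_start, size))
        free_start += size           # each partition exactly fits its process

print(partitions)                    # [(0, 6), (6, 4), (10, 7)]
print(memory_size - free_start)      # 3 KB left as one free block
```

Note that when processes later terminate, the holes they leave behind may be too small and scattered to satisfy new requests; that is external fragmentation, the drawback this scheme trades internal fragmentation for.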
PAGING:

 Paging is a storage mechanism that allows the OS to retrieve processes from secondary storage into
main memory in the form of pages. In the paging method, main memory is divided into small
fixed-size blocks of physical memory, which are called frames.
 The size of a frame should be kept the same as that of a page to have maximum utilization of the main
memory and to avoid external fragmentation. Paging is used for faster access to data, and it is a logical
concept.
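The page-to-frame translation that paging performs can be sketched as follows (the page size and page-table contents here are made up):

```python
# Sketch of paging address translation (page size and table are hypothetical).
PAGE_SIZE = 1024                     # bytes per page (= bytes per frame)

# Hypothetical page table: page number -> frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    page = logical_address // PAGE_SIZE   # page number
    offset = logical_address % PAGE_SIZE  # offset within the page
    frame = page_table[page]              # page-table lookup
    return frame * PAGE_SIZE + offset     # physical address

print(translate(2048 + 100))         # page 2 -> frame 7 -> 7*1024 + 100 = 7268
```

Because frames and pages are the same size, any free frame can hold any page, which is why paging avoids external fragmentation.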
Advantages of Paging
 Here are the advantages of the paging method:
• Simple, easy-to-use memory management algorithm
• No external fragmentation
• Swapping is easy between equal-sized pages and page frames.

Disadvantages of Paging
 Here are the drawbacks of paging:
• May cause internal fragmentation
• Page tables consume additional memory.
• Multi-level paging may lead to memory-reference overhead.
SEGMENTATION
 The segmentation method works almost like paging; the only difference between the two is that
segments are of variable length whereas, in the paging method, pages are always of fixed size.
 A program segment includes the program's main function, data structures, utility functions, etc. The
OS maintains a segment map table for every process. It also includes a list of free memory blocks
along with their sizes, segment numbers, and memory locations in main memory or virtual
memory.
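Segment-table translation can be sketched like this (the base/limit values are illustrative, not from any real system):

```python
# Sketch of segmentation address translation using a segment table of
# (base, limit) pairs (all values are hypothetical).
segment_table = {0: (1400, 1000),    # segment number -> (base, limit)
                 1: (6300, 400),
                 2: (4300, 1100)}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:              # offset outside the segment: trap
        raise MemoryError("segmentation fault")
    return base + offset             # physical address

print(translate(2, 53))              # 4300 + 53 = 4353
```

The limit check is what provides the per-segment protection mentioned below: any reference past the end of a segment is trapped by the hardware.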
Advantages of Segmentation
 Here are the benefits of segmentation:
• Offers protection within the segments
• Sharing can be achieved by having multiple processes reference the same segment.
• No internal fragmentation
• Segment tables use less memory than page tables

Disadvantages of Segmentation
 Here are the drawbacks of segmentation:
• In the segmentation method, processes are loaded into and removed from main memory, so the free
memory space becomes separated into small pieces, which may create a problem of external fragmentation.
• Costly memory management algorithm
VIRTUAL MEMORY MANAGEMENT

INTRODUCTION:
 Virtual Memory is a storage allocation scheme in which secondary memory can be addressed as
though it were part of the main memory.
 The addresses a program may use to reference memory are distinguished from the addresses the
memory system uses to identify physical storage sites, and program-generated addresses are translated
automatically to the corresponding machine addresses. 
 The size of virtual storage is limited by the addressing scheme of the computer system and by the
amount of secondary memory available, not by the actual number of main-storage locations.
 It is a technique that is implemented using both hardware and software. It maps memory addresses
used by a program, called virtual addresses, into physical addresses in computer memory. 
1. All memory references within a process are logical addresses that are dynamically translated into
physical addresses at run time. This means that a process can be swapped in and out of the main
memory such that it occupies different places in the main memory at different times during the course
of execution.
2. A process may be broken into a number of pieces, and these pieces need not be contiguously located in
the main memory during execution. The combination of dynamic run-time address translation and the use
of a page or segment table permits this.
DEMAND PAGING : 

 The process of loading the page into memory on demand (whenever page fault occurs) is known as
demand paging. 
The process includes the following steps :
1. If the CPU tries to refer to a page that is currently not available in the main memory, it generates an
interrupt indicating a memory access fault.
2. The OS puts the interrupted process in a blocking state. For the execution to proceed the OS must
bring the required page into the memory.
3. The OS will search for the required page on the backing store (secondary storage).
4. The required page will be brought from secondary storage into physical memory. If no frame is free,
a page replacement algorithm is used to decide which page in physical memory to replace.
5. The page table will be updated accordingly.
6. The signal will be sent to the CPU to continue the program execution and it will place the process back
into the ready state.
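The steps above can be condensed into a small simulation (the frame count and reference string are made up; FIFO is used as the replacement policy purely for illustration):

```python
# Sketch of demand paging: pages are loaded only when a page fault occurs
# (frame count and reference string are hypothetical).
frames = []                          # frames currently holding pages
CAPACITY = 3                         # frames allocated to this process
faults = 0

for page in [1, 2, 3, 1, 4, 2]:     # reference string
    if page not in frames:           # page fault: page not in main memory
        faults += 1
        if len(frames) == CAPACITY:  # no free frame: replace oldest (FIFO)
            frames.pop(0)
        frames.append(page)          # bring the page in, update the table
print(faults)                        # 4 faults: pages 1, 2, 3, then 4
```

The hits (the second references to pages 1 and 2) cost nothing; only the misses trigger the fault-handling sequence described in steps 1-6.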
COPY ON WRITE

 Copy on Write, or simply COW, is a resource-management technique. One of its main uses is in the
implementation of the fork system call, in which the virtual memory (pages) of the parent is shared.
 In UNIX-like operating systems, the fork() system call creates a duplicate of the parent process,
called the child process.
 The idea behind copy-on-write is that when a parent process creates a child process, both processes
initially share the same pages in memory, and these shared pages are marked as copy-on-write. If
either process tries to modify a shared page, only then is a copy of that page created, and the
modification is done on the copy; the other process is therefore unaffected.
 Suppose there is a process P that creates a new process Q, and then process P modifies page 3.
The figures below show what happens before and after process P modifies page 3.
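The share-then-copy behavior can be sketched in code (the page ids, contents, and reference-count bookkeeping are all illustrative, not a real kernel's data structures):

```python
# Sketch of copy-on-write after fork(): both processes map the same page;
# the first write triggers a private copy (all structures are illustrative).
physical = {0: "original data"}      # physical page id -> contents
refcount = {0: 2}                    # page 0 is shared by parent and child
parent = {3: 0}                      # virtual page 3 -> physical page 0
child = dict(parent)                 # fork(): child maps the same pages

def cow_write(table, vpage, data):
    pid = table[vpage]
    if refcount[pid] > 1:            # page is shared: copy before writing
        new_id = max(physical) + 1
        physical[new_id] = physical[pid]   # make a private copy
        refcount[pid] -= 1
        refcount[new_id] = 1
        table[vpage] = new_id              # remap the writer to the copy
        pid = new_id
    physical[pid] = data             # the write goes to the private copy

cow_write(parent, 3, "modified by P")
print(physical[parent[3]])           # modified by P
print(physical[child[3]])            # original data (child is unaffected)
```

Only the written page is duplicated; all other shared pages stay shared, which is what makes fork() cheap.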
PAGE REPLACEMENT
 The page replacement algorithm decides which memory page is to be replaced. The process of replacement is
sometimes called swap out or write to disk. Page replacement is done when the requested page is not found in the
main memory (page fault).

 There are two main aspects of virtual memory, Frame allocation and Page Replacement. It is very important to have
the optimal frame allocation and page replacement algorithm. Frame allocation is all about how many frames are to
be allocated to the process while the page replacement is all about determining the page number which needs to be
replaced in order to make space for the requested page.
Types of Page Replacement Algorithms
 There are various page replacement algorithms. Each algorithm has a different method by which the
pages can be replaced.
1. Optimal Page Replacement algorithm → this algorithm replaces the page that will not be
referenced for the longest time in the future. Although it cannot be implemented in practice, it can
be used as a benchmark; other algorithms are compared to it in terms of optimality.
2. Least Recently Used (LRU) page replacement algorithm → this algorithm replaces the page that has
not been referenced for the longest time. It is the mirror image of the optimal page replacement
algorithm: it looks at the past instead of the future.
3. FIFO → in this algorithm, a queue is maintained. The page that was assigned a frame first will be
replaced first. In other words, the page at the front of the queue (the oldest page) is replaced on
every page fault.
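FIFO and LRU can be compared on a single reference string with one small sketch (the reference string and frame count below are made up):

```python
# Sketch comparing FIFO and LRU page replacement with 3 frames
# (the reference string is hypothetical).
def count_faults(refs, capacity, policy):
    frames, faults = [], 0
    for page in refs:
        if page in frames:
            if policy == "LRU":          # a hit refreshes recency in LRU
                frames.remove(page)
                frames.append(page)
        else:
            faults += 1
            if len(frames) == capacity:
                # frames[0] is the oldest page (FIFO) or the least
                # recently used page (LRU), so evict the front.
                frames.pop(0)
            frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4]
print(count_faults(refs, 3, "FIFO"))     # 7 faults
print(count_faults(refs, 3, "LRU"))      # 6 faults
```

The only difference between the two policies is whether a hit moves the page to the back of the queue; on this string that saves LRU one fault.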
FRAME ALLOCATION

 An important aspect of operating systems, virtual memory is implemented using demand paging.
Demand paging necessitates the development of a page-replacement algorithm and a frame allocation
algorithm. Frame allocation algorithms are used if you have multiple processes; they help decide how
many frames to allocate to each process.
 There are various constraints to the strategies for the allocation of frames:

• You cannot allocate more than the total number of available frames.
• At least a minimum number of frames should be allocated to each process. This constraint is supported
by two reasons. The first reason is that as fewer frames are allocated, the page-fault rate increases,
decreasing the performance of the process's execution. Secondly, there should be
enough frames to hold all the different pages that any single instruction can reference.
 There are mainly five frame allocation algorithms in the OS. These are as follows:
1. Equal Frame Allocation
2. Proportional Frame Allocation
3. Priority Frame Allocation
4. Global Replacement Allocation
5. Local Replacement Allocation
Equal Frame Allocation
 In equal frame allocation, the available frames are divided equally among the processes in the OS. For
example, if the system has 30 frames and 7 processes, each process will get 4 frames. The 2 frames
that are not assigned to any process may be used as a free-frame buffer pool in the system.
Proportional Frame Allocation
 The proportional frame allocation technique assigns frames based on the size needed for execution and
the total number of frames in memory.
 The allocated frames for a process pi of size si are ai = (si/S)*m, in which S represents the total of all
process sizes, and m represents the number of frames in the system.
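The formula can be checked with a quick sketch (the process sizes and frame count are hypothetical):

```python
# Sketch of proportional frame allocation: a_i = (s_i / S) * m
# (process sizes and frame count are made up).
sizes = {"p1": 10, "p2": 40, "p3": 50}    # process sizes s_i
m = 60                                     # total frames in the system
S = sum(sizes.values())                    # S = 100

allocation = {p: (s * m) // S for p, s in sizes.items()}
print(allocation)                          # {'p1': 6, 'p2': 24, 'p3': 30}
```

A larger process receives proportionally more frames; here the three allocations sum exactly to the 60 available frames.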
Priority Frame Allocation
 Priority frame allocation assigns frames based on process priority as well as frame requirements.
A high-priority process that requires more frames will be allocated that many frames first;
lower-priority processes are allocated frames after that.
Global Replacement Allocation
 When a process requires a page that isn't currently in memory, it may bring it in and select a frame
from the set of all frames, even if another process is already using that frame. In other words, one
process may take a frame from another.
Local Replacement Allocation
 When a process requires a page that isn't already in memory, it can bring it in and assign it a frame
from its set of allocated frames.
THRASHING

 In multiprogramming, there can be a scenario when the system spends most of its time shuttling pages
between the main memory and the secondary memory due to frequent page faults. This behavior is
known as thrashing.
 A process is said to be thrashing if the CPU spends more time serving page faults than executing the
pages. This leads to low CPU utilization and the Operating System in return tries to increase the degree
of multiprogramming.
 The above definition might be hard to understand in one go so let’s try to understand it with an
example. We know each process requires a minimum number of pages to be present in the main
memory at any point in time for its execution.
Causes of Thrashing
 Thrashing results in severe performance problems in the operating system, degrading the execution
of all processes.
 When the utilization of the CPU is low, the process-scheduling mechanism tries to load many
processes into memory at the same time, so the degree of multiprogramming increases. In this
situation, there are more processes in memory than available frames, and only a limited number
of frames can be allocated to each process.
 Whenever a high-priority process arrives in memory and no frame is free at that time, the
process currently occupying a frame is moved to secondary storage, and the freed frame is
allocated to the higher-priority process.
 We can also say that as soon as memory fills up, processes start spending a lot of time waiting for
the required pages to be swapped in. CPU utilization drops again because most of the
processes are waiting for pages.
 Thus a high degree of multiprogramming and lack of frames are two main causes of thrashing in the
Operating system.
MEMORY-MAPPED FILES
 Memory mapping refers to a process's ability to access files on disk the same way it accesses dynamic
memory. Accessing RAM is, of course, much faster than accessing disk via read and write
system calls.
 Behind the scenes, the operating system utilizes virtual memory techniques to do the trick. The OS
splits the memory mapped file into pages (similar to process pages) and loads the requested pages into
physical memory on demand.
 If a process references an address (i.e. a location within the file) that does not exist, a page fault occurs
and the operating system brings the missing page into memory.
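A minimal sketch using Python's mmap module shows a file being read and written as if it were memory (the file contents here are made up; the mapping is backed by demand-paged file pages as described above):

```python
# Sketch of a memory-mapped file: the file is accessed like a byte array
# instead of via explicit read()/write() system calls.
import mmap
import os
import tempfile

# Create a small file to map (contents are illustrative).
fd, path = tempfile.mkstemp()
os.write(fd, b"hello memory-mapped world")
os.close(fd)

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:  # map the whole file
        first = mm[0:5].decode()          # read the file like memory
        mm[0:5] = b"HELLO"                # write through the mapping

with open(path, "rb") as f:
    data = f.read().decode()              # the write reached the file

os.remove(path)
print(first)                              # hello
print(data)                               # HELLO memory-mapped world
```

The slice assignment must keep the mapped length unchanged; growing a mapped file is exactly the limitation noted in the disadvantages below.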
Advantages
 Memory mapping is an excellent technique that has various benefits, listed below.

• Efficiency: when dealing with large files, no need to read the entire file into memory first.
• Fast: accessing virtual memory is much faster than accessing disk.
• Sharing: facilitates data sharing and inter process communication.
• Simplicity: dealing with memory as opposed to allocating space, copying data and deallocating space.

Disadvantages
 Just like any other technique, memory mapping has some drawbacks.
• Memory mapping is generally good for binary files, however reading formatted binary file types
with custom headers such as TIFF can be problematic.
• Memory mapping text files is not such an appealing task as it may require proper text handling
and conversion.
• The notion that a memory-mapped file always performs better should not be taken for
granted. Recall that accessing the file in memory may generate many page faults, which hurts performance.
• Memory footprint is larger than that of traditional file IO. In other words, user applications have
no control over memory allocation.
• Expanding the file size is not easy to implement, because a memory-mapped file is assumed to be
of fixed size.
KERNEL MEMORY ALLOCATION

 The Kernel is the heart of an operating system; it is through the kernel that the OS exercises
control over the computer system.
 When a system boots up, the Kernel is the first program that is loaded in memory after the bootloader because
the kernel handles the rest of the functions of the system for the OS. The Kernel remains in the memory until the
OS shuts down.

Functions of a Kernel in OS
1. Scheduling processes:
 The kernel allocates a share of CPU time to each process. When a process has finished executing, the kernel
starts another process and keeps track of each process's state, which can be running, waiting, or ended.
2. Resource Allocation:
 The kernel controls memory, peripheral devices, and CPU processes. It also acts as a link between resources
and processes. It allocates memory to processes; if any process requires access to some hardware component,
the kernel allocates that component to it.
3. Device Management:
 The kernel manages the devices connected to the system, such as I/O devices and storage devices, as well
as the exchange of data through these devices. Information is received from the I/O devices and various
applications and transferred back to them.
4. Interrupt Handling and System Calls:
 When a process runs, there may arise a task of high priority that needs to be executed first. The kernel switches the
control from the currently running process to the new one as per their priorities. The kernel also deals with system
calls, which simply put are software interrupts.
5. Memory Management:
 Once the kernel creates and executes a process, that process occupies space in memory. When the process
ends, the kernel removes it from memory. The kernel both assigns memory to a process and releases it.
6. Process Management:
 The kernel performs the creation, execution, and ending of processes that run in the system. When a system has to
execute any task, the kernel creates and manages the processes.
