OS Lec 9 & 10


Operating Systems

Week 5: Lecture 9 & 10: Memory Management


Main Memory

• The term Memory can be defined as a collection of data in a specific format. It is used to
store instructions and processed data.
• Main Memory is central to the operation of a modern computer. It is a large array of words
or bytes, ranging in size from hundreds of thousands to billions, and serves as a repository
of rapidly available information shared by the CPU and I/O devices. Main memory is where
programs and data are kept while the processor is actively using them. Because main memory
is closely coupled to the processor, moving instructions and data into and out of the
processor is extremely fast.
• Memory Management
• In a multiprogramming computer, the operating system resides in one part of memory and the
rest is used by multiple processes. The task of subdividing memory among different processes
is called memory management. Memory management is the method by which the operating system
manages operations between main memory and disk during process execution. Its main aim is to
achieve efficient utilization of memory.
• The memory manager keeps track of the status of memory locations, whether free or
allocated. It addresses primary memory by providing abstractions so that software perceives
a large memory allocated to it.
• The memory manager permits computers with a small amount of main memory to execute
programs larger than the available memory. It does this by moving information back and forth
between primary memory and secondary memory, using the concept of swapping.
• The memory manager is responsible for protecting the memory allocated to each process
from being corrupted by another process. If this is not ensured, the system may exhibit
unpredictable behavior.
• Memory managers should enable sharing of memory space between processes. Thus, two
programs can reside at the same memory location, although at different times.
Memory Allocation
• Memory protection is a mechanism by which we control memory access rights on a computer. Its main aim is
to prevent a process from accessing memory that has not been allocated to it. This prevents a bug within one
process from affecting other processes or the operating system itself; instead, a segmentation fault or storage
violation exception is sent to the offending process, generally terminating it.
• Logical and Physical Address Space:
• Logical Address space: An address generated by the CPU is known as a “Logical Address”. It is also known as a
virtual address. The logical address space can be defined as the size of the process. A logical address can be changed.
• Physical Address space: An address seen by the memory unit (i.e., the one loaded into the memory address register
of the memory) is commonly known as a “Physical Address”, also called a real address. The set of all physical
addresses corresponding to the logical addresses is known as the physical address space. The run-time mapping from
virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU). The
physical address always remains constant.
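As a rough sketch, the MMU's run-time mapping can be pictured as a relocation (base) register plus a limit check. The register values below are purely illustrative, not taken from any real system:

```python
# Sketch of how an MMU maps a logical address to a physical address
# using a relocation (base) register and a limit register.
# The register values are hypothetical, chosen only for illustration.

RELOCATION_REGISTER = 14000   # base physical address of the process
LIMIT_REGISTER = 3000         # size of the process's logical address space

def translate(logical_address):
    """Return the physical address, or trap on an out-of-range access."""
    if logical_address >= LIMIT_REGISTER:
        raise MemoryError("trap: logical address beyond limit register")
    return RELOCATION_REGISTER + logical_address

print(translate(346))   # physical address 14346
```

Every legal logical address is simply offset by the relocation register, which is why the program never sees physical addresses directly.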
Memory Management
• The Memory management Techniques can be classified into following main categories:
• Contiguous memory management schemes
• Non-Contiguous memory management schemes
Contiguous Memory Allocation
• In a Contiguous memory management scheme, each program occupies a single contiguous block of storage
locations, i.e., a set of memory locations with consecutive addresses.
• Single contiguous memory management schemes:
• The Single contiguous memory management scheme is the simplest memory management scheme used in the earliest
generation of computer systems. In this scheme, the main memory is divided into two contiguous areas or partitions.
The operating systems reside permanently in one partition, generally at the lower memory, and the user process is
loaded into the other partition.
• Advantages of Single contiguous memory management schemes:
• Easy to manage and design.
• In a single contiguous memory management scheme, once a process is loaded, it is given the full processor's time, and no other process will
interrupt it.

• Disadvantages of Single contiguous memory management schemes:


• Wastage of memory space due to unused memory as the process is unlikely to use all the available memory space.
• It does not support multiprogramming, i.e., it cannot handle multiple programs simultaneously.

• Multiple Partitioning:
• The single Contiguous memory management scheme is inefficient as it limits computers to execute only one program
at a time resulting in wastage in memory space and CPU time. The problem of inefficient CPU use can be overcome
using multiprogramming that allows more than one program to run concurrently. To switch between two processes,
the operating systems need to load both processes into the main memory. The operating system needs to divide the
available main memory into multiple parts to load multiple processes into the main memory. Thus multiple processes
can reside in the main memory simultaneously.
Contiguous Memory Allocation
• Fixed partition allocation: In this method, the operating system maintains a table that indicates which parts of
memory are available and which are occupied by processes. Initially, all memory is available for user processes and
is considered one large block of available memory, known as a “hole”. When a process arrives and needs memory,
we search for a hole large enough to store it. If one is found, memory is allocated to the process, with the rest kept
available to satisfy future requests. While allocating memory, the dynamic storage allocation problem can arise:
how to satisfy a request of size n from a list of free holes. There are several solutions to this problem:
• Dynamic Partitioning
• The dynamic partitioning was designed to overcome the problems of a fixed partitioning scheme. In a dynamic
partitioning scheme, each process occupies only as much memory as they require when loaded for processing.
Requested processes are allocated memory until the entire physical memory is exhausted or the remaining space is
insufficient to hold the requesting process. In this scheme the partitions used are of variable size, and the number of
partitions is not defined at the system generation time.
• Advantages of Dynamic Partitioning memory management schemes:
• Simple to implement.
• Easy to manage and design.
• Disadvantages of Dynamic Partitioning memory management schemes:
• This scheme suffers from external fragmentation, since the holes left between variable-sized partitions may be too
small to reuse.
• Allocation and deallocation are more complex, as partitions must be created and merged at run time.
Contiguous Memory Allocation
• Fragmentation:
• Fragmentation occurs when processes are loaded into and removed from memory after execution, leaving behind
small free holes. These holes cannot be assigned to new processes because they are not combined or do not fulfill
the memory requirement of the process. To achieve a higher degree of multiprogramming, we must reduce this
waste of memory. In an operating system, there are two types of fragmentation:
• Internal fragmentation: 
• Internal fragmentation occurs when a memory block allocated to a process is larger than its requested size. The
leftover unused space inside the block creates the internal fragmentation problem.
• External fragmentation:
• In external fragmentation, we have enough total free memory, but we cannot assign it to a process because the free
blocks are not contiguous.
Contiguous Memory Allocation
• First fit:-
• In first fit, the first available free hole that fulfills the requirement
of the process is allocated.
• Here, in this diagram, the 40 KB memory block is the first available
free hole that can store process A (size 25 KB), because the first two
blocks do not have sufficient memory space.
• Best fit:-
• In best fit, we allocate the smallest hole that is big enough for the
process's requirements. For this, we search the entire list, unless the
list is ordered by size.
• Here, in this example, we first traverse the complete list and find that
the last hole, 25 KB, is the best suitable hole for process A (size 25 KB).
• In this method, memory utilization is maximum compared to the other
memory allocation techniques.
• Worst fit:-
• In worst fit, we allocate the largest available hole to the process. This
method produces the largest leftover hole.
• Here, in this example, process A (size 25 KB) is allocated to the
largest available memory block, which is 60 KB. Inefficient memory
utilization is a major issue with worst fit.
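The three placement strategies above can be sketched in Python. The hole sizes below are illustrative, chosen to mirror the diagram's example of placing a 25 KB process A:

```python
# Sketch of first-fit, best-fit, and worst-fit hole selection.
# 'holes' is a list of free-block sizes in KB; the values are
# illustrative, arranged to match the lecture's example.

def first_fit(holes, size):
    for i, h in enumerate(holes):
        if h >= size:
            return i                      # index of the first hole large enough
    return None

def best_fit(holes, size):
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None   # smallest adequate hole

def worst_fit(holes, size):
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(candidates)[1] if candidates else None   # largest hole

holes = [10, 20, 40, 60, 25]   # free holes in KB
print(first_fit(holes, 25))    # 2 -> the 40 KB hole
print(best_fit(holes, 25))     # 4 -> the 25 KB hole
print(worst_fit(holes, 25))    # 3 -> the 60 KB hole
```

Note that best fit scans the whole list, which is why an unsorted free list makes it slower than first fit.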
Non Contiguous Memory Allocation
• Non-Contiguous memory management schemes:
• In a Non-Contiguous memory management scheme, the program is divided into different blocks and loaded at
different portions of the memory that need not necessarily be adjacent to one another. This scheme can be classified
depending upon the size of blocks and whether the blocks reside in the main memory or not.
Paging
• Paging is a storage mechanism that allows the OS to retrieve processes from secondary storage into main
memory in the form of pages. The paging technique divides physical memory (main memory) into fixed-size
blocks known as frames, and divides logical memory (the process's address space) into blocks of the same
size known as pages.
• What is Paging Protection?
• The paging process is protected using an additional bit called the valid/invalid bit. Memory protection in
paging is also achieved by associating protection bits with each page. These bits are stored in each page table
entry and specify the protection on the corresponding page.
• Advantages of Paging
• Here are the advantages of using the paging method:
• Easy-to-use memory management algorithm
• No external fragmentation
• Swapping is easy between equal-sized pages and page frames.
• Disadvantages of Paging
• Here are the drawbacks/cons of paging:
• May cause internal fragmentation
• Page tables consume additional memory.
• Multi-level paging may lead to memory reference overhead.
Paging
• For example, if the main memory size is 16 KB and the frame size is 1 KB, the main memory will be divided into
a collection of 16 frames of 1 KB each.
• There are 4 separate processes in the system, A1, A2, A3, and A4, of 4 KB each. All the processes are divided
into pages of 1 KB each so that the operating system can store one page in one frame.
• At the beginning, all the frames are empty, so the pages of the processes are stored in a
contiguous way.
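The arithmetic in this example is simple enough to check directly:

```python
# The slide's example: 16 KB main memory with 1 KB frames, and
# four 4 KB processes divided into 1 KB pages.

MAIN_MEMORY_KB = 16
FRAME_KB = 1
PROCESS_KB = 4

num_frames = MAIN_MEMORY_KB // FRAME_KB       # 16 frames in main memory
pages_per_process = PROCESS_KB // FRAME_KB    # 4 pages per process
print(num_frames, pages_per_process)          # 16 4
```

Four processes of 4 pages each exactly fill the 16 frames, which is why the example starts with every frame occupied.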
Page Table
• The data structure used by the virtual memory system to store the mapping between logical and physical
addresses is known as the page table.
• The page table provides the corresponding frame number where the page is stored.
• Characteristics of a Page Table:
• The characteristics of a page table are as follows:
• Page table is stored in the main memory.
• The number of entries in the page table is equal to the number of pages into which the process is divided.
• Each process has its own page table.
• Page Table Base Register or PTBR holds the base address for the page table.
• Techniques used for structuring the page table:
• Some of the techniques used for structuring the page table are as follows:
• Hierarchical paging
• Inverted page tables
• Hashed page tables
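A page-table lookup can be sketched in a few lines. The mapping below is hypothetical, invented only to show the page-number/offset split; the page size matches the 1 KB example above:

```python
# Minimal sketch of a page-table lookup. The page_table mapping is
# a made-up example, not from the lecture; page size is 1 KB.

PAGE_SIZE = 1024
page_table = {0: 5, 1: 2, 2: 7, 3: 0}   # page number -> frame number (hypothetical)

def logical_to_physical(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table[page]             # page-fault handling omitted
    return frame * PAGE_SIZE + offset

print(logical_to_physical(1030))   # page 1, offset 6 -> frame 2 -> 2054
```

The offset passes through unchanged; only the page number is translated, which is what keeps the lookup a single table access.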
Page Table
• Hierarchical paging
• Hierarchical paging or multilevel paging is a type of paging where the logical address space is broken up into
multiple page tables.
• It is one of the simplest techniques and for this, a two-level or three-level page table is used.
• Inverted Page Tables
• An inverted page table is a combination of a page table and a frame table.
• There is one entry for each real page (frame) of memory, recording the virtual page number stored in that frame.
• However, it may increase the amount of time needed to search the table for a page reference.
• Hashed Page Tables
• The hashed page table method is used to handle address spaces larger than 32 bits.
• In this table, there is a chain of elements that hash to the same location.
• Each element consists of:
• The virtual page number
• A pointer to the next element in the list
• The value of the mapped page frame
• Clustered page tables are similar to hashed page tables but are used for 64-bit address spaces.
Segmentation
• What is Segmentation?
• The segmentation method works almost like paging; the only difference between the two is that segments are of
variable length whereas, in the paging method, pages are always of fixed size.
• A program segment can hold the program's main function, data structures, utility functions, etc. The OS maintains a
segment map table for every process. It also includes a list of free memory blocks along with their sizes, segment
numbers, and their memory locations in main memory or virtual memory.
• Advantages of Segmentation
• Here are the pros/benefits of segmentation:
• Offers protection within the segments
• You can achieve sharing by having segments referenced by multiple processes.
• No internal fragmentation
• Segment tables use less memory than page tables
• Disadvantages of Segmentation
• Here are the cons/drawbacks of segmentation:
• In the segmentation method, processes are loaded into and removed from main memory. Therefore, the free memory
space is broken into small pieces, which may create a problem of external fragmentation.
• Costly memory management algorithm
Types of Segmentation
• Virtual Memory Segmentation: With this type of segmentation, each process is segmented into n divisions, and,
most importantly, they are not all segmented at once.
• Simple Segmentation: With this type, each process is segmented into n divisions and they are all segmented
together at run time, but the segments can be non-contiguous (that is, they may be scattered in
memory).
• Characteristics of Segmentation
• Some characteristics of the segmentation technique are as follows:
• The segmentation partitioning scheme uses variable-size partitions.
• The partitions are commonly known as segments.
• Partition size mainly depends upon the length of the modules.
• Thus, with this technique, secondary memory and main memory are divided into unequal-sized partitions.
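Segment-table translation can be sketched as a base/limit pair per segment. The table values below are illustrative, not from the lecture:

```python
# Sketch of segment-table translation: each entry holds a (base, limit)
# pair, and an offset is valid only if it is below the segment's limit.
# The segment_table values are hypothetical.

segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(translate(2, 53))    # base 4300 + offset 53 -> 4353
```

Unlike paging, the limit differs per segment, which is how variable-length segments are protected.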
Virtual Memory
• Virtual memory is a storage mechanism which offers the user an illusion of having a very big main memory. It is
done by treating a part of secondary memory as if it were main memory. With virtual memory, the user can run
processes bigger than the available main memory.
• Therefore, instead of loading one long process into main memory, the OS loads various parts of more than one
process into main memory. Virtual memory is mostly implemented with demand paging and demand segmentation.
• Here are the reasons for using virtual memory:
• Whenever your computer doesn't have space in physical memory, it writes what it needs to remember to the hard
disk in a swap file as virtual memory.
• If a computer running Windows needs more memory/RAM than is installed in the system, it uses a small portion
of the hard drive for this purpose.
• Virtual memory has become quite common in the modern world. It is used whenever some pages need to be loaded
into main memory for execution and there is not enough memory available for all of them.
• In that case, instead of preventing pages from entering main memory, the OS searches for the pages in RAM that
have been least recently used or not recently referenced, and moves them to secondary memory to make space for
the new pages in main memory.
Demand Paging
• A demand paging mechanism is very similar to a paging system with swapping, where processes are stored in
secondary memory and pages are loaded only on demand, not in advance.
• So, when a context switch occurs, the OS does not copy any of the old program's pages out to disk or any of the
new program's pages into main memory. Instead, it starts executing the new program after loading its first page
and fetches the program's pages as they are referenced.
• During program execution, if the program references a page that is not available in main memory because it was
swapped out, the processor treats it as an invalid memory reference. This raises a page fault, which transfers
control from the program back to the OS, which then brings the required page back into memory.
• Page Fault – A page fault happens when a running program accesses a memory page that is mapped into the
virtual address space but not loaded in physical memory. 
• Types of Page Replacement Methods
• Here, are some important Page replacement methods
• FIFO
• Optimal Algorithm
• LRU Page Replacement
Page Replacement Algorithms
• Page Replacement Algorithms : 
• 1. First In First Out (FIFO) – 
This is the simplest page replacement algorithm. In
this algorithm, the operating system keeps track of all
pages in memory in a queue, with the oldest page at
the front of the queue. When a page needs to be
replaced, the page at the front of the queue is selected
for removal. 
Example-1: Consider the page reference string 1, 3, 0, 3, 5,
6, 3 with 3 page frames. Find the number of page
faults. 
• Initially, all slots are empty, so when 1, 3, 0 come they
are allocated to the empty slots —> 3 Page Faults. 
When 3 comes, it is already in memory so —> 0 Page
Faults. 
Then 5 comes; it is not available in memory, so it
replaces the oldest page, i.e., 1 —> 1 Page Fault. 
6 comes; it is also not available in memory, so it
replaces the oldest page, i.e., 3 —> 1 Page Fault. 
Finally, when 3 comes, it is not available, so it replaces
0 —> 1 Page Fault. In total, there are 6 page faults.
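The walkthrough above can be checked with a short simulation of FIFO replacement:

```python
# FIFO page replacement applied to the slide's reference string
# 1, 3, 0, 3, 5, 6, 3 with 3 frames.

from collections import deque

def fifo_faults(refs, n_frames):
    frames = deque()                 # oldest page at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()     # evict the oldest page
            frames.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))   # 6 page faults
```

Note that a hit does not reorder the queue; FIFO only cares about load order, which is what distinguishes it from LRU below.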
Page Replacement Algorithms
• 2. Optimal Page Replacement – 
In this algorithm, the page replaced is the one that will not be used for the longest duration of time in the future. 
Example-2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults. 
• Initially, all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots —> 4 Page Faults. 
0 is already there so —> 0 Page Fault. 
When 3 comes, it takes the place of 7 because 7 is not used for the longest duration of time in the future —> 1 Page Fault. 
0 is already there so —> 0 Page Fault. 
4 takes the place of 1 —> 1 Page Fault. 
• For the remaining page references —> 0 Page Faults, because the pages are already available in memory. In total, there are 6 page faults. 
Optimal page replacement is perfect, but not possible in practice, as the operating system cannot know future requests. The use
of optimal page replacement is to set up a benchmark against which other replacement algorithms can be analyzed.
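Because the full reference string is known here, the optimal policy can be simulated directly:

```python
# Optimal page replacement for the slide's reference string
# 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 frames. On a fault with
# full frames, it evicts the resident page whose next use lies farthest
# in the future (or that is never used again).

def optimal_faults(refs, n_frames):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)
        else:
            def next_use(p):
                # distance to p's next reference; infinite if never used again
                future = refs[i + 1:]
                return future.index(p) if p in future else float("inf")
            frames.remove(max(frames, key=next_use))
            frames.append(page)
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))   # 6 page faults
```

The look-ahead in `next_use` is exactly what a real OS cannot do, which is why this algorithm serves only as a benchmark.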
Page Replacement Algorithms
• 3. Least Recently Used (LRU) – 
In this algorithm, the page replaced is the one that is least recently used. 
Example-3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults. 
• Initially, all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots —> 4 Page Faults. 
0 is already there so —> 0 Page Fault. 
When 3 comes, it takes the place of 7 because 7 is least recently used —> 1 Page Fault. 
0 is already in memory so —> 0 Page Fault. 
4 takes the place of 1 —> 1 Page Fault. 
For the remaining page references —> 0 Page Faults, because the pages are already available in memory. In total, there are 6 page faults. 
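LRU can be simulated with an ordered structure that tracks recency of use:

```python
# LRU page replacement for the same reference string, evicting the page
# that has gone unused the longest. An OrderedDict keeps the least
# recently used page at the front.

from collections import OrderedDict

def lru_faults(refs, n_frames):
    frames = OrderedDict()           # least recently used page first
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)            # hit: mark as most recently used
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)      # evict the LRU page
            frames[page] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))   # 6 page faults
```

Unlike FIFO, a hit reorders the structure; on this particular reference string LRU happens to match optimal's 6 faults, but that is not true in general.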
Allocating Kernel Memory
• When a process running in user mode requests additional memory, pages are allocated from the list of free page
frames maintained by the kernel. This list is typically populated using a page-replacement algorithm.
Remember, too, that if a user process requests a single byte of memory, internal fragmentation will result, as the
process will be granted an entire page frame. Kernel memory, however, is often allocated from a free-memory
pool different from the list used to satisfy ordinary user-mode processes. There are two primary reasons for
this:
• 1. The kernel requests memory for data structures of varying sizes, some of which are less than a page in size.
As a result, the kernel must use memory conservatively and attempt to minimize waste due to fragmentation.
This is especially important because many operating systems do not subject kernel code or data to the paging
system.
• 2. Pages allocated to user-mode processes do not necessarily have to be in contiguous physical memory.
However, certain hardware devices interact directly with physical memory, without the benefit of a virtual
memory interface, and consequently may require memory residing in physically contiguous pages. In the
following sections, we examine two strategies for managing free memory that is assigned to kernel processes.
Allocating Kernel Memory
• 1. Buddy system –
• The buddy allocation system is an algorithm in which a
larger memory block is divided into smaller parts to
satisfy a request; the smallest block that can hold the
request is allocated. The two halves produced by each
split are of equal size and are called buddies. In the same
manner, one of the two buddies may be further divided
into smaller parts until the request is fulfilled. The benefit
of this technique is that two buddies can later be combined
to form a larger block to satisfy a bigger memory request. 
• Example – If a request for 25 KB is made, then a block of
size 32 KB is allocated. 
• Four Types of Buddy System – 
• Binary buddy system
• Fibonacci buddy system
• Weighted buddy system
• Tertiary buddy system
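The binary buddy system's splitting can be sketched as repeated halving. The 256 KB starting block below is an illustrative choice; the 25 KB request matches the slide's example:

```python
# Sketch of binary buddy allocation: starting from a power-of-two block,
# halve repeatedly until the smallest block that still fits the request
# remains. The unused half at each split is a "buddy" left on a free list.

def buddy_allocate(total_kb, request_kb):
    """Return (allocated_size, leftover_buddies) for one allocation."""
    block, leftovers = total_kb, []
    while block // 2 >= request_kb:
        block //= 2
        leftovers.append(block)   # the unused buddy created by this split
    return block, leftovers

print(buddy_allocate(256, 25))   # (32, [128, 64, 32]) -> a 25 KB request gets 32 KB
```

The 7 KB gap between the 25 KB request and the 32 KB block is internal fragmentation, the price paid for easy coalescing of buddies.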
Allocating Kernel Memory
• 2. Slab Allocation –
• A second strategy for allocating kernel memory is
known as slab allocation. It eliminates
fragmentation caused by allocations and
deallocations. This method retains allocated
memory that contained a data object of a
certain type for reuse upon subsequent allocations
of objects of the same type. In slab allocation,
memory chunks suitable to fit data objects of a certain
type or size are preallocated. The cache does not free
the space immediately after use; instead it keeps
track of frequently required objects so that
whenever a request is made, it can be satisfied very
quickly. Two terms are required:  
• Slab – A slab is made up of one or more physically
contiguous pages. The slab is the actual container of
data associated with objects of the specific kind of
the containing cache.
• Cache – A cache consists of one or more slabs,
all holding objects of a single kind. There is a
single cache for each unique kernel data
structure.
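The core idea of slab allocation, keeping freed objects of one type on a per-type free list for reuse, can be sketched in miniature. The class and names below are illustrative, not a real kernel API:

```python
# Toy sketch of slab-style caching: freed objects of one type are kept
# on a per-type free list and reused on the next allocation, instead of
# being returned to a general memory pool. Names are hypothetical.

class ObjectCache:
    def __init__(self, factory):
        self.factory = factory        # constructs a new object when the cache is empty
        self.free_list = []           # "free" objects retained for reuse

    def alloc(self):
        return self.free_list.pop() if self.free_list else self.factory()

    def free(self, obj):
        self.free_list.append(obj)    # retained, not deallocated

cache = ObjectCache(dict)   # e.g. one cache per kernel data structure
a = cache.alloc()           # freshly constructed
cache.free(a)
b = cache.alloc()           # the very same object, reused
print(a is b)               # True
```

Because every object in a cache has the same size, reuse produces no fragmentation, which is the property the slide highlights.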
Any Questions?
