OPERATING SYSTEMS

5 MEMORY MANAGEMENT

1. MEMORY MANAGEMENT TECHNIQUES

Memory Partition Techniques are of two types-


i. Contiguous Partition Scheme
ii. Non-Contiguous Partition Scheme

2. Contiguous Partition Scheme

In contiguous memory allocation, when a process arrives from the ready queue to the main
memory for execution, contiguous memory blocks are allocated to the process according
to its requirement. To allocate contiguous space to user processes, the memory can
be divided either into fixed-sized partitions or into variable-sized partitions.
Contiguous memory partitioning is of the following types-
i. Fixed-partition scheme
ii. Variable-partition scheme

3. Fixed-partition scheme
● Memory is divided into a fixed number of fixed-size partitions.
● A process is assigned to a partition when one is free.
● Mechanisms:
● Separate input queue for each partition
● Single input queue for all partitions: better ability to optimize partition usage

Fig. Fixed Partition Scheme


Advantages:
● Simple to implement
● Little OS overhead
Disadvantages:
● Internal fragmentation causes inefficient use of memory: main memory is utilized in an
extremely inefficient manner, because every program occupies an entire partition regardless
of its size. The mechanism by which space is wasted internally to a partition, because the
block of data loaded is smaller than the partition, is known as internal fragmentation.

4. Internal fragmentation-

A larger memory block is assigned to a process, and some portion of that memory block is
left unused, as it cannot be used by any other process.

Fig. Internal Fragmentation
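Internal fragmentation per partition is simply the partition size minus the size of the process loaded into it. A minimal Python sketch (the sizes below are illustrative assumptions, not taken from the text):

```python
# Internal fragmentation in a fixed-partition scheme: every process
# occupies an entire partition, so the leftover space inside each
# partition is wasted. All sizes are in KB (illustrative values).
partitions = [100, 100, 100]   # three fixed 100 KB partitions
processes  = [70, 45, 95]      # one process loaded per partition

waste = [part - proc for part, proc in zip(partitions, processes)]
print(waste)        # per-partition internal fragmentation: [30, 55, 5]
print(sum(waste))   # total wasted memory: 90 KB
```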

5. Variable-partition scheme
● In the variable-partition scheme, memory is initially a single contiguous free block.
● Whenever a request from a process arrives, a partition of the requested size is made in
the memory.
● If smaller processes keep arriving, the larger partitions keep being split into smaller
ones.

Fig. Variable Partition


An example, using 64 MB of main memory, is shown in the figure.
Eventually this leads to a situation in which there are many small holes and partitions
in the memory. Memory becomes more and more fragmented and underutilized with time.
This mechanism is known as external fragmentation: the memory that is external to all
the memory partitions becomes increasingly fragmented.

6. External fragmentation

The external fragmentation issue occurs when free memory is divided into small blocks
scattered between allocated blocks. It is a drawback of storage allocation algorithms
that fail to keep memory ordered and efficiently usable by programs.
Comparison between Internal & External Fragmentation-

Internal Fragmentation:
● The difference between the memory allocated and the memory required is called internal
fragmentation.
● It occurs when main memory is divided into fixed-size blocks regardless of the size of
the process.
● It refers to the unused space within an allocated partition, hence the name.
● Internal fragmentation can be eliminated by allocating memory to processes dynamically.

External Fragmentation:
● The unused spaces formed between non-contiguous memory fragments are too small to serve
a new process request. This is called external fragmentation.
● It occurs when memory is allocated to processes dynamically based on process requests.
● It refers to the unused memory blocks that are too small to handle a request.
● External fragmentation can be eliminated by compaction, segmentation and paging.

7. Compaction

Compaction is the mechanism of collecting free memory chunks together to form larger
chunks that can be used as available space for processes. In memory management, swapping
creates many fragments in memory as processes move in and out. Compaction simply combines
all the empty spaces into one large free space, so that this free space can be used for
processes.
Fig. Compaction
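The combining of holes described above can be sketched as follows; the block layout is a hypothetical example:

```python
# Compaction sketch: memory is modeled as a list of (kind, size) blocks.
# Compaction slides all used blocks together so that the free space
# coalesces into one large block at the end.
memory = [("used", 4), ("free", 2), ("used", 3), ("free", 5), ("used", 1)]

used_blocks = [b for b in memory if b[0] == "used"]
free_total = sum(size for kind, size in memory if kind == "free")
compacted = used_blocks + [("free", free_total)]

print(compacted)   # [('used', 4), ('used', 3), ('used', 1), ('free', 7)]
```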

ALLOCATION
Some popular algorithms used for allocating the memory partitions to the processes are as
follows-
1. First Fit Algorithm
2. Best Fit Algorithm
3. Worst Fit Algorithm
7.1. First Fit Algorithm
● First fit algorithm starts scanning from the starting and scans all the
partitions serially.
● A partition is allocated to a process, if the partition is large enough to hold
the process.
● The size of the partition should be greater than or at least equal to the size of
the process.
7.2. Best Fit Algorithm
● Best fit algorithm scans all the empty partitions.
● Then the algorithm compares all the partitions and allocates the process to
the smallest size partition which is large enough to hold the process at least.
7.3. Worst Fit Algorithm
● Worst fit algorithm also scans all the empty partitions.
● Then the algorithm compares all the partitions and allocates the process to
the largest size partition.
Note that,
For static partitioning,
● Best Fit Algorithm is best.
● Because the space left after the process allocation in the partition is of very
small size.
● Thus, it causes least internal fragmentation.
For dynamic partitioning,
● The Worst Fit Algorithm is best.
● Because the space left after the process allocation in the partition is of large size.
● There is a high probability that this space will fulfill the requirement of
arriving processes.
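The three allocation strategies can be sketched over a list of free-partition (hole) sizes; the hole sizes below are illustrative:

```python
# First/Best/Worst Fit over a list of free-partition (hole) sizes.
# Each function returns the index of the chosen hole, or None if no fit.
def first_fit(holes, size):
    # scan from the start, take the first hole large enough
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    # among holes large enough, take the smallest
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    # among holes large enough, take the largest
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]   # free partition sizes in KB
print(first_fit(holes, 212))        # 1  (500 KB is the first that fits)
print(best_fit(holes, 212))         # 3  (300 KB is the tightest fit)
print(worst_fit(holes, 212))        # 4  (600 KB is the largest hole)
```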

8. NEED OF PAGING

The main disadvantage of variable-size partitioning is that it causes external
fragmentation. Although external fragmentation can be removed by compaction, compaction
itself makes the system inefficient.
That is why the concept of paging was introduced. Paging is a dynamic and flexible
technique that can load processes into memory partitions in a more optimal manner. The
basic idea behind paging is to divide the process into pages, so that these pages can be
stored in different holes (partitions) of memory, using those holes efficiently.
8.1. Page
A page, virtual page, or memory page is a fixed-length contiguous block of memory in a
virtual memory operating system, described by a single entry in the page table. A page is
the smallest unit of data for memory management in a virtual memory system.

9. PAGING: NON-CONTIGUOUS PARTITION SCHEME


9.1. Address Space
An address space is defined as a range of valid addresses available in the memory for a
program or process. It is the memory space accessible to a program or process. The
memory can be physical or virtual and is used for storing data and executing
instructions.
9.2. Physical Address Space
Physical address space is defined as the size of the main memory. It matters when
comparing the process size with the physical address space: the process size should
always be less than the physical address space.
Example-
Physical Address Space = Main Memory Size
If, physical address space = 64 KB = 2^6 KB = 2^6 × 2^10 Bytes = 2^16 Bytes
Let us consider,
word size = 8 Bytes = 2^3 Bytes
Hence,
Physical address space (in words) = 2^16 / 2^3 = 2^13 words
Therefore,
Bits required for Physical Address = 13 bits
In general,
If, Physical Address Space = N words
then, Physical Address = log2 N bits
Logical Address Space
Logical address space is the size of the process. The size of the process should be
small enough that it can reside in the main memory.
Example-
Logical Address Space = 128 MB = 2^7 × 2^20 Bytes = 2^27 Bytes
Word size = 4 Bytes = 2^2 Bytes
Logical Address Space (in words) = 2^27 / 2^2 = 2^25 words
Logical Address = 25 bits
In general,
If, logical address space = L words
Then, Logical Address = log2 L bits
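Both calculations above reduce to taking log2 of the address space measured in words; a small sketch using the same numbers:

```python
import math

# Number of address bits = log2(address space in words), where
# words = address space in bytes / word size in bytes.
def address_bits(space_bytes, word_bytes):
    words = space_bytes // word_bytes
    return int(math.log2(words))

KB, MB = 2**10, 2**20
print(address_bits(64 * KB, 8))    # 13 physical-address bits
print(address_bits(128 * MB, 4))   # 25 logical-address bits
```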
9.3. Paging
The memory management unit (MMU) is a hardware device that maps virtual (logical)
addresses to physical addresses, and paging is one such mapping technique. The basic
concept behind paging is to divide each process into pages, while the main memory is
divided into frames.
Example-
Let us consider, main memory size = 16 KB
Frame size = 1 KB
No. of frames = 16KB/ 1KB = 16
Therefore, the main memory is divided into 16 frames of size 1 KB each.
Let us suppose we have four processes in the system, say P1, P2, P3 and P4 of size 4 KB
each.
We need to divide each process into pages of size 1 KB each so that one frame can store
one page.
We have all the frames empty initially, therefore all the pages of the processes will be
stored in a contiguous manner.
Pages, frames and mapping between them is depicted in the image below.
9.4. Page Table
Page Table may be defined as a data structure used to store the mapping between logical
addresses and physical addresses in the virtual memory system .
Logical addresses are generated by the CPU for the pages of a process; processes
generally work with logical addresses.
Physical addresses are the actual frame addresses present in the main memory. Physical
addresses are generally used by the hardware, or more specifically by the RAM subsystem.

Characteristics-
● Main memory stores the Page table.
● Number of pages of a process is equal to the number of Page table entries.
● Page Table Base Register (PTBR) stores the page table's base address.
● There is an independent page table for each process.
Working-

● The Page Table Base Register (PTBR) stores and provides the page table's base address.
● The page number referenced by the CPU is added to the page table's base address to
locate the entry.
● That page table entry provides the frame number where the referenced page is stored
in the memory.

9.5. Page Table Entry (PTE)


● A page table entry stores several pieces of information about the page.
● The data stored in a page table entry varies from OS to OS.
● The most important information a page table entry stores is the frame number.
Generally, each entry of a page table has the following information-

1. Frame Number
● Frame number describes the frame where the page is actually stored in the main
memory.
● The number of bits used in the frame number completely depends upon the
number of frames present in the primary memory.
2. Present / Absent Bit-
● Also known as valid / invalid bit.
● This bit is used to specify whether that page is present in the primary memory
or not.
● If the page is present in the main memory, then this bit is set to 1 otherwise set
to 0.
3. Protection Bit-

● Also known as “Read / Write bit“.

● This bit is used for protection of pages.

● It specifies the permission to perform r/w operation on a page.

● If it is allowed to perform only read operations and no writing is allowed,

then this bit is set to 0.

● If it is allowed to perform both read and write operation, then this bit will be

set to 1.

4. Reference Bit-

● This bit specifies whether the page has been referenced in the latest clock

cycle or not.

● If that page has been referenced recently, then the reference bit is set to

1 otherwise it will be set to 0.

● It is very useful in page replacement policy.

● If a page is not referenced recently then it is considered as a good candidate

in LRU (Least recently used)page replacement policy.

5. Caching Enabled / Disabled-

● It is used to enable or disable the caching of a page.

● Whenever a fresh / new data is required, then caching of the page is disabled

by using this bit.

● If caching of the page is enabled, then this bit is set to 0 otherwise set to 1.

6. Dirty Bit-

● Also known as “Modified bit“.

● Dirty bit is used to specify whether that page is modified or not.

● If the page is not modified, then this bit is set to 0 otherwise it is set to 1.
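The six fields above are typically packed into a single page-table entry. A hypothetical packed layout (a 14-bit frame number with the status bits placed above it; the bit positions are illustrative, not any real architecture's):

```python
# Hypothetical packed page-table entry: a 14-bit frame number plus the
# six status bits described above. Bit positions are illustrative only.
FRAME_BITS = 14
PRESENT, PROTECT, REFERENCE, CACHE_DIS, DIRTY = (
    1 << FRAME_BITS, 1 << (FRAME_BITS + 1), 1 << (FRAME_BITS + 2),
    1 << (FRAME_BITS + 3), 1 << (FRAME_BITS + 4))

def make_pte(frame, present=1, rw=0, ref=0, cache_dis=0, dirty=0):
    pte = frame & ((1 << FRAME_BITS) - 1)       # low bits: frame number
    for flag, bit in ((present, PRESENT), (rw, PROTECT), (ref, REFERENCE),
                      (cache_dis, CACHE_DIS), (dirty, DIRTY)):
        if flag:
            pte |= bit                          # set the status bit
    return pte

pte = make_pte(frame=0x2A7, present=1, rw=1, dirty=1)
print(pte & ((1 << FRAME_BITS) - 1))            # 679 (== 0x2A7)
print(bool(pte & PRESENT), bool(pte & DIRTY))   # True True
```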

Address generated by the CPU (Logical Address) is divided into the following:
● Page number (p): the number of bits required to represent the pages present in the
Logical Address Space.
● Page offset (d): the number of bits required to represent a particular word within a
page; it is determined by the page size of the Logical Address Space.
Physical Address is divided into:
● Frame number (f): the number of bits required to represent the frames present in the
Physical Address Space.
● Frame offset (d): the number of bits required to represent a particular word within a
frame; it is determined by the frame size of the Physical Address Space.

Fig. Mapping from page table to main memory

10. TYPES OF PAGE TABLE (PT)

● Single Level PT

● Multi level PT
● Inverted PT
10.1. Single Level PT-
This is the simplest and most straightforward approach: a single linear array of
page-table entries (PTEs). Each PTE stores information about the page, such as its
physical page number ("frame" number) as well as status bits, such as whether or not
the page is valid.

Example- Consider a byte-addressable machine with 64 MB of physical memory and a
32-bit virtual address space. If the page size is 4 KB, what is the approximate size
of the page table?
Solution-

Given-
Size of main memory = 64 MB

Number of bits in virtual address space = 32 bits


Page size = 4 KB
Also given memory is byte addressable.

Size of main memory = 64 MB = 2^26 B

Thus, Number of bits in physical address = 26 bits
Number of frames in main memory = Size of main memory / Frame size
= 64 MB / 4 KB
= 2^26 B / 2^12 B
= 2^14
Thus, Number of bits in frame number = 14 bits

We have, Page size = 4 KB = 2^12 B
Thus, Number of bits in page offset = 12 bits
So, the physical address splits into a 14-bit frame number and a 12-bit page offset.

Number of bits in virtual address space = 32 bits
Thus, Process size = 2^32 B = 4 GB
Number of pages the process is divided into = Process size / Page size
= 4 GB / 4 KB
= 2^20 pages
Thus, Number of entries in page table = 2^20 entries
Page table size = Number of entries in PT × PTE size
= Number of entries in PT × Number of bits in frame number
= 2^20 × 14 bits
≈ 2^20 × 16 bits (rounding 14 bits up to 16 bits, i.e. exactly 2 bytes)
= 2^20 × 2 bytes
= 2 MB
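The worked example above can be checked with a short script:

```python
import math

# Reproduces the worked example: 32-bit virtual addresses, 4 KB pages,
# 64 MB of physical memory, PTE size rounded up to whole bytes.
virtual_bits = 32
page_size    = 4 * 2**10            # 4 KB
phys_mem     = 64 * 2**20           # 64 MB

pages      = 2**virtual_bits // page_size   # entries in the page table
frames     = phys_mem // page_size
frame_bits = int(math.log2(frames))         # 14 bits
pte_bytes  = math.ceil(frame_bits / 8)      # rounded up to 2 bytes
table_size = pages * pte_bytes              # bytes

print(pages, frame_bits, table_size // 2**20)   # 1048576 14 2 (i.e. 2 MB)
```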

10.2. Multi- Level PT


Multi-level page tables are tree-like structures to hold page tables. It is a paging
scheme where a hierarchy of page tables exists. The entries of the level-0 page table
are pointers to a level-1 page table, and the entries of the level-1 page table are PTEs
as described above in the single-level page table.
Need for multilevel paging -
● When the page table size is greater than the frame size.
● As a result, we cannot store the page table in a single frame of main memory.

Working-
In the multilevel paging,
● The page table size is greater than the main memory frame size, so the page table
is divided into some parts.
● Now, the size of each divided part is equal to the frame size, possibly except the
last part of the page table.
● Then each small page of the page table is stored in various frames of the
main memory.
● Now, another page table is created to store the information of the frames storing
the pages of the divided page table.
● This results in a hierarchy of page tables.
● Multilevel paging is repeated until the topmost page table can be stored in a single
frame of the main memory.

Example- Consider a system using paging scheme where-


Logical Address Space is 4 GB,
Physical Address Space is 16 TB
Page size is 4 KB
Find the number of page-table levels that will be required.
We know, Size of main memory = Physical Address Space

= 16 TB
= 2^44 B
Thus, Number of bits in physical address = 44 bits
Number of frames = Size of main memory / Size of each frame
= 16 TB / 4 KB
= 2^32 frames
So, Number of bits in frame number = 32 bits
Page size = 4 KB = 2^12 B
Thus, Number of bits in page offset = 12 bits
So, the physical address splits into a 32-bit frame number and a 12-bit page offset.

Number of pages the process is divided into = Process size / Page size
= 4 GB / 4 KB
= 2^20 pages
Inner Page Table- The inner page table keeps track of the frames storing the pages of
the process.
Inner page table size = No. of entries in inner PT × PTE size
= No. of pages the process is divided into × No. of bits in frame number
= 2^20 × 32 bits
= 2^20 × 4 bytes
= 4 MB
The point to note here: the size of the inner page table is greater than the frame size
(4 KB). Thus, we cannot store the inner page table in a single frame, so the inner page
table will itself be divided into pages.
Number of pages the inner PT is divided into = Inner PT size / Page size
= 4 MB / 4 KB
= 2^10 pages
These 2^10 pages of the inner page table will be stored in different main memory frames.
No. of page table entries in one page of the inner page table = Page size / PTE size
= 4 KB / 32 bits
= 4 KB / 4 B
= 2^10
One page of the inner page table has 2^10 entries. So, the number of bits required to
search a particular entry in one page of the inner page table = 10 bits.
Outer Page Table- The outer page table is required to keep track of the frames storing
the pages of the inner page table.
Outer page table size = No. of entries in outer PT × PTE size
= No. of pages the inner PT is divided into × No. of bits in frame number
= 2^10 × 32 bits
= 2^10 × 4 bytes
= 4 KB
Now, the point to note here is that the size of the outer page table equals the frame
size (4 KB). So, the outer page table can be stored in a single frame. Hence, for the
given system, we will have two levels of page table.
The paging system will look as shown below-
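The level-counting logic of this example (keep paging the page table until the top table fits in one frame) can be sketched as:

```python
import math

# Counts page-table levels for the worked example: 4 GB process,
# 4 KB pages/frames, 4-byte (32-bit frame number) page-table entries.
page_size  = 4 * 2**10                  # 4 KB
frame_bits = 32                         # bits in a frame number (from above)
pte_size   = frame_bits // 8            # 4-byte page-table entries
pages      = (4 * 2**30) // page_size   # 2^20 pages in a 4 GB process

levels, entries = 0, pages
while True:
    table_size = entries * pte_size     # size of the table at this level
    levels += 1
    if table_size <= page_size:         # fits in a single frame: done
        break
    entries = math.ceil(table_size / page_size)  # pages of this table

print(levels)   # 2
```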
11. TRANSLATION LOOK ASIDE BUFFER

Drawbacks of the Paging Mechanism

● For a larger process, the page table will be very large and will waste main memory.
● Reading a single word from the main memory takes the CPU more time (a page-table
access plus the actual memory access).
● How to decrease the page table size- The page table size can be reduced by increasing
the page size, but this causes internal fragmentation and page wastage.
● Another method is to use multilevel paging, but this increases the effective access
time, so on its own it is not a practical method.
● How to decrease the effective access time- The CPU could use a register with the page
table stored in it so that page-table access becomes fast, but registers are costly and
very small compared to the size of the page table. Therefore, this is also not a
practical method.
● To overcome these drawbacks of paging, we need a memory that is cheaper than registers
and faster than the main memory, so that the time the CPU spends accessing the page table
again and again is reduced and it can focus on accessing the actual word.
Locality of reference
The concept of locality of reference says that the OS can load into main memory only
those pages that are frequently accessed by the CPU, instead of loading the entire
process, and along with that, the OS can load only the page table entries corresponding
to those pages.
11.1. Translation lookaside buffer (TLB)
● A Translation lookaside buffer(TLB) is defined as a memory cache hardware unit
which is used to reduce the page table access time when the page table is
accessed again and again.
● TLB is a memory cache which is nearer to the CPU and the time taken to access
TLB by CPU is less than that taken by CPU to access main memory.
Or, we can say that the TLB is smaller and faster than the main memory, while at the
same time cheaper and larger than a CPU register.
TLB uses the concept of locality of reference which means that it contains the entries of
only those pages that are frequently accessed by the CPU.
In TLB, there are keys and tags through which the mapping of addresses is done.
● TLB hit is a situation when the desired entry is found in TLB. If a hit happens then
the CPU can simply access the actual address from the main memory.
● If the entry is not in the TLB, it is said to be a TLB miss; in this situation the
CPU has to access the page table from the main memory and then access the actual frame
from the main memory.
So, the effective memory access time will be less in the case of TLB hit as compared to
the case of TLB miss.
If the probability of a TLB hit is x (the TLB hit rate, as a fraction), then the
probability of a TLB miss (the TLB miss rate) is (1 - x).
Therefore, the effective memory access time can be formulated as;
EMAT = x (c + m) + (1 - x) (c + k·m + m)
Where,
x → TLB hit rate,
c → time taken to access the TLB,
m → time taken to access the main memory,
k → number of page table levels (k = 1 for single-level paging).
With the help of the above formula, we come to know that
● if the TLB hit rate is increased then EMAT will be decreased.
● In the case of multilevel paging Effective access time will be increased.
Example-
A paging scheme makes use of a Translation Lookaside Buffer. TLB access takes 10 ns
and main memory access takes 50 ns. What is the effective memory access time (in ns)
if there is no page fault and the TLB hit ratio is 90%?
a) 54
b) 60
c) 65
d) 75
Solution-
Given-
TLB access time = 10 ns
Main memory access time = 50 ns
TLB Hit ratio = 90% = 0.9
TLB Miss ratio = 1 – TLB Hit ratio = 1 – 0.9 = 0.1
Putting values in the formula, we will get-
Effective Access Time = 0.9 x { 10 ns + 50 ns } + 0.1 x { 10 ns + 2 x 50 ns }
= 0.9 x 60 ns + 0.1 x 110 ns
= 54 ns + 11 ns
= 65 ns
Thus, Option (C) is correct.
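The EMAT formula and this example can be captured in a small function:

```python
# EMAT = x(c + m) + (1 - x)(c + k*m + m), as defined above:
# x = TLB hit rate, c = TLB access time, m = memory access time,
# levels = number of page-table levels (k).
def emat(hit_ratio, tlb_time, mem_time, levels=1):
    hit  = tlb_time + mem_time                       # TLB hit: one memory access
    miss = tlb_time + levels * mem_time + mem_time   # miss: walk the page table too
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(emat(0.9, 10, 50))   # ~65 ns, matching option (c)
```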

12. VIRTUAL MEMORY

Virtual memory is a technique in which large programs are stored in the form of pages
during their execution, and only the important pages or portions of a process are loaded
into the main memory. It is very useful because a large virtual memory can be provided
for user programs even when the system has very little physical memory.
Advantages of Virtual Memory
● The degree of Multiprogramming will be increased.
● We can run large programs in the system, as virtual space is huge as compared to physical
memory.
● No need to purchase more RAM.
● More physical memory is available, as programs are stored in virtual memory and occupy
very little space in the actual physical memory.
● Less I/O is required, which leads to faster and easier swapping of processes.
Disadvantages of Virtual Memory
● The system becomes slower as page swapping takes time.
● Switching between different applications requires more time.
● Swap space reduces the hard disk space available to the user.

12.1. Swapping
A process must be in memory to be executed. A process can be temporarily swapped
out of memory to a backing store and then swapped in (brought back) into memory for
continued execution.
Fig. Swapping of two processes using a disk as a backing store.
Swapping out a process means removing all of its pages from memory, or marking them
for removal by the page replacement process. A process is suspended to ensure that it
is not runnable while it is swapped out of the memory. After some time, the system
swaps the process back into main memory from the secondary storage.
12.2. Demand Paging
According to the concept discussed under virtual memory, in order to execute a process
only a part of it needs to be present in the main memory; this means that only a few of
its pages are in the main memory at any time.
However, deciding which pages should be kept in the main memory and which in the
secondary memory is difficult, because we cannot predict which page a process will
require at a particular time.
So, to overcome this problem, a concept called Demand Paging is introduced. It says:
keep all pages of a process in the secondary memory until they are required. In other
words, demand paging says do not load any page into the main memory unless and until
it is required.
Whenever a page is referenced for the first time, it is fetched from the secondary
memory.

13. WHAT IS A PAGE FAULT?

If a referenced page is not found in the main memory, there is a page miss; this event
is known as a page fault (or page miss).
If a page fault occurs, the CPU has to fetch the missed page from the secondary memory.
If the number of page misses is very high, the effective memory access time of the
system also increases.
If the page fault rate is p (as a fraction), the time taken to get a page from the
secondary memory and restart the access is s (the service time), and the memory access
time is m, then the effective access time is:
EAT = p × s + (1 - p) × m
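A direct encoding of this formula, with the fault rate taken as a fraction (the sample service and memory times below are illustrative assumptions, not from the text):

```python
# EAT = p*s + (1 - p)*m, where p is the page-fault rate as a fraction,
# s the fault service time, and m the memory access time.
def effective_access_time(p, service_time, mem_time):
    return p * service_time + (1 - p) * mem_time

# Illustrative values: 0.1% fault rate, 8 ms service time, 100 ns memory
# access (all expressed in ns).
print(effective_access_time(0.001, 8_000_000, 100))
```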
14. PAGE REPLACEMENT

Page replacement is defined as a process of swapping out an existing page from the main
memory frame and replacing it with the desired page.
Page replacement is needed when-
● All the page frames of the main memory are already occupied.
● So, a page has to be replaced to create a space for the desired page.
Page replacement algorithms help to select which page should be swapped out of the main
memory to create a space for the incoming required page.
We have various page replacement algorithms, they are as follows-
● FIFO Page Replacement Algorithm
● LIFO Page Replacement Algorithm
● LRU Page Replacement Algorithm
● Optimal Page Replacement Algorithm
● Random Page Replacement Algorithm
14.1. FIFO Page Replacement Algorithm
● This algorithm works on the concept of “First in First out“.
● This algorithm swaps out the oldest page from the main memory frame,
oldest page means the page which is present in main memory for the longest
time.
● This algorithm is implemented by keeping track of all the pages in a queue.
14.2. LIFO Page Replacement Algorithm
● This algorithm works on the concept of “Last in First out“.
● This algorithm swaps out the newest page from the main memory frame,
newest page means the page which is present in main memory for the shortest
time
● This algorithm is implemented by keeping track of all the pages in a stack.
14.3. LRU Page Replacement Algorithm
● This algorithm works on the concept of “Least Recently Used“.
● This algorithm swaps out the page that has not been referred by the CPU for the
longest time.
14.4. Optimal Page Replacement Algorithm
● Optimal Page Replacement algorithm swaps out the page that will not be
referred by the CPU in future for the longest time span.
● This algorithm cannot be implemented practically, because it is impossible to
predict which pages will be used in the future
● However, it is the best-known algorithm and a benchmark for other page
replacement algorithms and this algorithm gives the least number of page
misses.
14.5. Random Page Replacement Algorithm
● This algorithm randomly swaps out any page.
● So, this algorithm can behave like any other page replacement algorithm like FIFO,
LIFO, LRU, Optimal etc.
Example-
A system uses three page frames for storing pages in the main memory. This system
uses the First in First out (FIFO) page replacement policy. Suppose that initially all the
page frames are empty. Find the total number of page faults that occur while processing the
page reference string below-
4 , 7, 6, 1, 7, 6, 1, 2, 7, 2
Also determine the hit ratio and miss ratio.
Solution-
Total number of page references = 10

Tracing the reference string through the three frames (FIFO evicting the oldest page) gives,
Total number of page faults = 6
Total number of page hits
= Total number of page references – Total number of page faults
= 10 – 6
=4
Hit ratio
= Total number of page hits / Total number of page references
= 4 / 10
= 0.4 or 40%
Total number of page faults = 6
Miss ratio
= Total number of page faults / Total number of page references
= 6 / 10
= 0.6 or 60%
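The FIFO trace above can be reproduced programmatically:

```python
from collections import deque

# FIFO page replacement: on a fault with full memory, evict the page
# that has been resident the longest (front of the queue).
def fifo_faults(refs, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:        # memory full: evict oldest
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [4, 7, 6, 1, 7, 6, 1, 2, 7, 2]
faults = fifo_faults(refs, 3)
print(faults, len(refs) - faults)   # 6 faults, 4 hits
```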
What if we increase the number of main memory page frames?
We would expect the number of page faults to decrease or remain constant on increasing
the number of page frames in the main memory. But sometimes unusual behavior is seen:
on increasing the number of page frames, the number of page faults also increases.
14.6. Belady’s Anomaly
It is a phenomenon in which increasing the number of page frames in the main memory
leads to an increase in page faults.
Belady’s Anomaly occurs in the following page replacement algorithms -
● FIFO Page Replacement Algorithm
● Random Page Replacement Algorithm
● Second Chance Algorithm
NOTE- "An algorithm suffers from Belady's Anomaly" does not mean that increasing the
number of page frames in the main memory always increases the number of page faults.
It is an unusual behaviour observed only sometimes.
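Belady's anomaly can be demonstrated with FIFO on a classic reference string (a different string from the example above):

```python
from collections import deque

# Belady's anomaly with FIFO: on the reference string below, going from
# 3 frames to 4 frames *increases* the fault count.
def fifo_faults(refs, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:        # memory full: evict oldest
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults: more frames, more faults
```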
14.7. Segmentation
● Segmentation is another non-contiguous memory allocation technique similar
to paging.
● In segmentation, the process is not divided into fixed-size pages as in paging.
● Instead, the process is divided into small modules, for better visualization and
implementation.
Characteristics of Segmentation-
● Segmentation is a variable size partitioning scheme.
● Secondary memory and main memory both are divided into partitions of
unequal sizes.
● The size of each partition depends on the length of modules.
● Partitions of secondary memory are known as segments.
Segment Table-
● Segment tables contain the information about each segment (partition) of the process.
● Segment table has two columns.
● First column of the table stores the size or length of the segment.
● Second column of the table stores the base address of the segment in the
main memory.
● It is stored in a separate segment in the main memory.
● The Segment Table Base Register (STBR) contains the base address of the segment table.
For the above demonstration, consider the segment table as follows-
Here,
● The length or size of the segment is indicated by limit.
● The base address of the segment in the main memory is indicated by the Base.
Translating a logical address into a physical address via the segment table- The CPU
generates a logical address which has two parts:
1. Segment Number
2. Segment Offset
The segment number is used as an index into the segment table, and the offset is
compared with the limit of that segment. If the offset is less than the limit, the
address is valid; otherwise the address is invalid and an error is raised.
For a valid address, the base address of the segment is added to the segment offset to
obtain the physical address of the actual word present in the main memory.
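The limit check and base-plus-offset translation can be sketched as follows; the segment-table values are illustrative:

```python
# Segment-table translation as described above: each entry holds
# (limit, base); an offset >= limit is an invalid address.
segment_table = {0: (1000, 1400),   # segment: (limit, base), sample values
                 1: (400,  6300),
                 2: (1100, 4300)}

def translate(segment, offset):
    limit, base = segment_table[segment]
    if offset >= limit:                       # offset beyond segment limit
        raise ValueError("invalid address: offset beyond segment limit")
    return base + offset                      # physical address

print(translate(2, 53))    # 4353 (base 4300 + offset 53)
```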

Advantages of Segmentation-
● Segmentation does not suffer from internal fragmentation.
● The average segment size is larger than the typical page size.
● It has less overhead.
● Relocating segments is far easier than relocating the entire address space.
● The segment table is smaller than the page table in paging.
Disadvantages of Segmentation-
● It can suffer from external fragmentation.
● It is difficult to allocate contiguous memory to variable-sized partitions.
● It requires costly memory management algorithms.

15. PAGING VS SEGMENTATION

Sr No. | Paging | Segmentation
1 | Non-contiguous memory allocation | Non-contiguous memory allocation
2 | Paging divides programs into fixed-size pages. | Segmentation divides programs into variable-size segments.
3 | The OS is responsible. | The compiler is responsible.
4 | Paging is faster than segmentation. | Segmentation is slower than paging.
5 | Paging is closer to the Operating System. | Segmentation is closer to the user.
6 | It suffers from internal fragmentation. | It suffers from external fragmentation.
7 | There is no external fragmentation. | There is no internal fragmentation.
8 | The logical address is divided into page number and page offset. | The logical address is divided into segment number and segment offset.
9 | A page table is used to maintain the page information. | A segment table maintains the segment information.
10 | A page table entry has the frame number and some flag bits representing details about the page. | A segment table entry has the base address of the segment and some protection bits for the segment.

16. THRASHING IN OPERATING SYSTEM (OS)

When a program needs space but RAM is full, or it needs more space than RAM provides,
the operating system allocates space from secondary memory and behaves as if it has
that much main memory for the program. This concept is known as virtual memory.

What is Thrashing in OS?


If page faults and swapping of pages happen very frequently, the OS has to spend most
of its time swapping these pages. This state is known as thrashing. Due to this, CPU
utilization is reduced.
λ = the degree of multiprogramming up to which we can keep putting jobs into main
memory; beyond this point, thrashing begins.
Effect of Thrashing
Whenever thrashing starts, the OS tries to apply either the local or the global page
replacement algorithm.
Global Page Replacement
Since global page replacement can take a frame from any process, it tries to bring in
more pages whenever thrashing is detected. Due to this, no process gets enough frames,
and as a result thrashing increases more and more. Therefore, the global page
replacement algorithm is not suitable when thrashing occurs.
Local Page Replacement
Unlike the global algorithm, local page replacement selects only pages that belong to
the faulting process, so there is a chance of decreasing the thrashing. But it is also
known that the local policy has various disadvantages of its own.
Techniques to Handle Thrashing
● Avoid thrashing as much as possible: do not allow the system to enter thrashing, and
instruct the long-term scheduler not to bring more processes into memory beyond the
point λ.
● If the system is already thrashing, instruct the medium-term scheduler (MTS) to
suspend some of the processes, so that the system can recover from thrashing.

****
