
INTERRUPTS

Interrupts are signals or events that occur during the execution of a program and cause the normal flow of the
program to be temporarily suspended. Interrupts are typically generated by hardware devices, such as timers,
input/output controllers, or external devices, to request attention or notify the processor about a specific event
that requires immediate action.
When an interrupt occurs, the processor temporarily stops executing the current instruction and transfers control
to a specific interrupt handler routine. This routine, also known as an interrupt service routine (ISR) or interrupt
handler, is responsible for handling the interrupt and performing the necessary actions associated with it.
Interrupts serve various purposes, such as:
1. Input/Output (I/O) handling: Interrupts allow devices to asynchronously request attention from the processor.
For example, when a keyboard key is pressed, an interrupt is generated to handle the input.
2. Timer events: Interrupts can be used to handle timer events, such as scheduling tasks or implementing time-
based operations.
3. Exception handling: Interrupts can be triggered by exceptional conditions, such as divide-by-zero errors or
invalid memory access. The interrupt handler can then handle these exceptions appropriately.
4. Interprocess communication: Interrupts can be used to facilitate communication between processes or threads
in a multitasking environment.
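As a rough, self-contained sketch (not tied to any real processor; the names ivt, timer_isr, kbd_isr and dispatch_interrupt are made up for illustration), the hardware conceptually uses the interrupt number to index a table of handler routines and calls the registered ISR:

#include <stdio.h>

#define NUM_VECTORS 256

/* Each entry of the interrupt vector table points to an ISR. */
typedef void (*isr_t)(void);

static isr_t ivt[NUM_VECTORS];           /* interrupt vector table (hypothetical) */

static void timer_isr(void) { printf("timer tick handled\n"); }
static void kbd_isr(void)   { printf("key press handled\n"); }

/* Conceptual dispatcher: when interrupt n arrives, the current state is saved,
   the registered ISR runs, then the interrupted program resumes. */
static void dispatch_interrupt(int n)
{
    if (n >= 0 && n < NUM_VECTORS && ivt[n] != NULL)
        ivt[n]();                        /* run the registered handler */
}

int main(void)
{
    ivt[32] = timer_isr;                 /* register handlers at chosen vector numbers */
    ivt[33] = kbd_isr;

    dispatch_interrupt(32);              /* simulate a timer interrupt */
    dispatch_interrupt(33);              /* simulate a keyboard key press */
    return 0;
}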

LOGICAL ADDRESS VS PHYSICAL ADDRESS

#Logical Address Space :- An address generated by the CPU is known as a “Logical Address”.
-The user can access the logical address of a process.
-The user has indirect access to the physical address through the logical address.
-A logical address does not exist physically; hence it is also known as a virtual address.
-These logical addresses are usually expressed as numeric values.
-The logical address of a process can change.
-The operating system and hardware work together to translate these logical addresses into physical addresses, which
correspond to specific locations in the physical memory chips.
-The set of all logical addresses that are generated by any program is referred to as Logical Address Space.
(To understand logical addresses, consider an analogy. Imagine a large library with thousands of books arranged on
shelves. Each book has a unique identification number assigned to it. Now, if you want to find a specific book,
you don't need to know exactly where it is physically located on a shelf. Instead, you can use its identification
number to search for it in the library's catalog or database. The identification number serves as a logical
address for the book.)

#Physical Address Space :- An address loaded into the memory-address register of the memory unit. A
physical address is also known as a real address because it refers to the actual location of data or instructions in
the physical memory of a computer system. It can be accessed by a user indirectly but never directly. It is typically
expressed as a numeric value, which indicates the memory chip, the specific memory module, and the row, column,
and bit location within that module. It is the actual location in main memory, and it is computed by the Memory
Management Unit (MMU). The set of all physical addresses corresponding to the logical addresses is commonly known
as Physical Address Space.

# The runtime mapping from virtual to physical addresses is done by a hardware device called the memory-management
unit (MMU).
SWAPPING
-Swapping is a mechanism in which a process can be swapped temporarily out of main memory to secondary storage
(disk) and make that memory available to other processes. At some later time, the system swaps back the process from
the secondary storage to main memory and its execution can be continued where it left off.
-Swap-out and swap-in are done by the medium-term scheduler (MTS).

-Also done when a high priority process arrives.

-Swapping is necessary because main memory is limited and thus it has to be freed up for other processes. It is done when
a high-priority process arrives or when an I/O operation has to be performed.

MEMORY MAPPING AND PROTECTION


a. To separate memory spaces, we need the ability to determine the range of legal addresses that the process may
access and must ensure that the process can access only these legal addresses.

b. The relocation register contains the value of the smallest physical address (base address [R]);
the limit register contains the range of logical addresses (e.g., relocation = 100040 and limit = 74600).

c. Each logical address must be less than the value in the limit register.

d. The MMU maps the logical address dynamically by adding the value in the relocation register.
e. When the CPU scheduler selects a process for execution, the dispatcher loads the relocation and limit registers with
the correct values as part of the context switch. Since every address generated by the CPU (logical address) is
checked against these registers, we can protect both the OS and other users' programs and data from being modified
by the running process.
f. Any attempt by a program executing in user mode to access OS memory or other users' memory results in a
trap to the OS, which treats the attempt as a fatal error.
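A minimal sketch of the limit check plus relocation described above, reusing the example register values (the function name mmu_translate and the test addresses are just for illustration):

#include <stdio.h>
#include <stdlib.h>

static const unsigned RELOCATION = 100040;   /* relocation register: smallest physical address */
static const unsigned LIMIT      = 74600;    /* limit register: range of legal logical addresses */

/* Translate a CPU-generated logical address; trap on an illegal one. */
unsigned mmu_translate(unsigned logical)
{
    if (logical >= LIMIT) {                  /* every logical address must be < limit */
        fprintf(stderr, "trap: illegal address %u\n", logical);
        exit(EXIT_FAILURE);                  /* the OS treats this as a fatal error */
    }
    return logical + RELOCATION;             /* dynamic relocation done by the MMU */
}

int main(void)
{
    printf("%u\n", mmu_translate(1234));     /* prints 101274 */
    mmu_translate(80000);                    /* traps: outside the legal range */
    return 0;
}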
MEMORY MANAGEMENT TECHNIQUES

NEED : We need several user processes to reside in memory simultaneously. Therefore, we need to consider
how to allocate available memory to the processes that are in the input queue waiting to be brought into
memory.

CONTIGUOUS MEMORY ALLOCATION:

-In this scheme, each process is contained in a single contiguous block of memory.

A). Fixed Partitioning: The number of partitions is fixed. The partitions can be of different or equal sizes. They are fixed
because once decided they cannot be changed.

- One partition can store only 1 process.

LIMITATIONS:

1) Internal Fragmentation: If the size of the process is less than the size of the block, part of the block is wasted and
remains unused. This wasted memory is called internal fragmentation.

-For example: the block size is 150 KB and the size of the process is 130 KB. Here, 20 KB of memory is wasted. This is
internal fragmentation.

2) External Fragmentation: The total unused space across the partitions cannot be used to load a process, even though
space is available, because it is not in contiguous form.
-For example: say we have a process P4 of size 180 KB and two free blocks of 100 KB and 80 KB
respectively. They are of no use, since they are not contiguous and one partition can hold only one
process. This is external fragmentation: even though we have space, we cannot allocate the memory.

3) Low degree of multi-programming: In fixed partitioning, the degree of multiprogramming is fixed and low
because the size of a partition cannot be varied according to the size of the processes.

B). Dynamic Partitioning: In this technique, the partition size is not declared initially. It is declared at the time of
process loading. (No pre-decided partitions.)

- Here, process size = partition size, as partitions are created dynamically based on the process size; hence there is no
Internal Fragmentation.

ADVANTAGES: 1. No internal fragmentation

2. No limit on size of process

3. Better degree of multi-programming

DISADVANTAGES: 1. External fragmentation (holes are also created dynamically)

FREE SPACE MANAGEMENT IN PARTITIONING

1) Compaction: (tackle external fragmentation.) Through compaction, we can minimize the probability of
external fragmentation.
- All the free partitions are made contiguous, and all the loaded partitions are brought together.
-By applying this technique, we can store the bigger processes in the memory. The free partitions are
merged which can now be allocated to some other process. This technique is also called defragmentation.

Limitation: The efficiency of the system decreases during compaction, since all the free space has to be moved
from several places to a single place (and multiprogramming is reduced while this happens).

2) Various algorithms implemented by the operating system for filling holes:

a) FIRST FIT: Allocate the process to the first hole that is big enough. Searching always starts from the
beginning of the list, so every allocation re-examines the holes it checked before.
PROS: Simple and easy to implement; fast (low time complexity). CONS: Internal fragmentation.

b) NEXT FIT: Same as first fit, but it does not start from the first position; it starts from the place where the last
allocation ended. Hence, it is faster than first fit.
c) BEST FIT: Allocate the process to the smallest hole that is big enough.
Less internal fragmentation but high external fragmentation.
Slow, as it must iterate over the whole list of free holes.

d) WORST FIT: Allocate the process to the largest hole that is big enough.
Slow, as it must iterate over the whole list of free holes.
Leaves larger leftover holes that may accommodate other processes.
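To make the difference concrete, here is a small sketch of first fit and best fit over a list of free holes (the hole sizes and the 212 KB request are made-up numbers); next fit and worst fit differ only in where the search starts and which hole is preferred:

#include <stdio.h>

#define NHOLES 4

/* Return the index of the chosen hole, or -1 if none is big enough. */
int first_fit(const int hole[], int n, int request)
{
    for (int i = 0; i < n; i++)              /* always scan from the beginning */
        if (hole[i] >= request)
            return i;
    return -1;
}

int best_fit(const int hole[], int n, int request)
{
    int best = -1;
    for (int i = 0; i < n; i++)              /* must scan the whole list */
        if (hole[i] >= request && (best == -1 || hole[i] < hole[best]))
            best = i;                        /* smallest hole that still fits */
    return best;
}

int main(void)
{
    int holes[NHOLES] = {100, 500, 200, 300};    /* free hole sizes in KB (hypothetical) */
    int request = 212;                           /* process size in KB */

    printf("first fit -> hole %d\n", first_fit(holes, NHOLES, request));  /* hole 1 (500 KB) */
    printf("best fit  -> hole %d\n", best_fit(holes, NHOLES, request));   /* hole 3 (300 KB) */
    return 0;
}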
NON-CONTIGUOUS MEMORY ALLOCATION (NCM):

The main disadvantage of dynamic partitioning is external fragmentation; therefore we need a more flexible mechanism
to load processes into memory.

In NCM, a process is divided into parts, and these parts are allocated to different holes.

1) PAGING: Paging is a memory-management scheme that permits the physical address space of a process to be
non-contiguous (i.e., a single process can be allocated to different blocks).
-It avoids external fragmentation and the need for compaction.
-The idea is to divide the physical memory into fixed-sized blocks called Frames and to divide logical memory into blocks
of the same size called Pages.
IMPORTANT: The process is divided into pages, and main memory is divided into frames. (Page size = Frame size.)
-Page size is usually determined by the processor architecture.

If Logical Address = 31 bits, then Logical Address Space = 2^31.

The Physical Address is divided into:

- Frame Number (f): the number of bits required to represent a frame of the Physical Address Space; it identifies the
frame.
- Frame Offset (d): the number of bits required to represent a particular word in a frame (the frame size), i.e., the word
number within the frame.

The address generated by the CPU (Logical Address) is divided into:

- Page Number (p): the number of bits required to represent a page in the Logical Address Space; it identifies the page.
- Page Offset (d): the displacement within the page. The page number is used to find the physical frame where the page
resides, and the page offset is then combined with the base address of that frame to calculate the physical address of the
desired data within the frame.

PAGE TABLE: A data structure that stores which page is mapped to which frame. It is maintained by the OS for each
process and is used to map the logical addresses generated by the CPU to physical addresses.
- The page table is stored in main memory at the time of process creation, and its base address is stored in the process
control block (PCB).
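A minimal sketch of this translation, assuming a hypothetical 16-byte page size (so a 4-bit offset) and a tiny made-up page table:

#include <stdio.h>

#define PAGE_SIZE   16u                      /* hypothetical: 16-byte pages and frames */
#define OFFSET_BITS 4u                       /* log2(PAGE_SIZE) */

/* Page table for one process: page number -> frame number (example mapping). */
static const unsigned page_table[4] = {5, 2, 7, 0};

unsigned translate(unsigned logical)
{
    unsigned p = logical >> OFFSET_BITS;     /* page number = high-order bits */
    unsigned d = logical & (PAGE_SIZE - 1);  /* page offset = low-order bits  */
    unsigned f = page_table[p];              /* look up the frame in the page table */
    return (f << OFFSET_BITS) | d;           /* physical = frame base + offset */
}

int main(void)
{
    /* logical address 0x13 = page 1, offset 3 -> frame 2, offset 3 = 0x23 */
    printf("physical = 0x%X\n", translate(0x13));
    return 0;
}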

Advantages of paging:
1) There is no external fragmentation, since non-contiguous allocation of the pages of the process is
allowed in any free frames of physical memory.
2) Only minute internal fragmentation (in the last page of a process).

Disadvantages of Paging:

1) Each process has a separate page table. Thus, there are extra memory references to reach the
desired location in physical memory, and because of this the translation of a memory reference
is slow.
2) Address mapping is hidden from the user and is managed by the OS; therefore, it is OS-centric.
3) The OS must maintain a frame table.
- Translation Look-aside Buffer (TLB)
-A hardware support to speed up the paging process.

-It is a hardware cache: a small, high-speed memory.

-When we retrieve a physical address using the page table, after getting the frame number corresponding to the page
number, we put an entry (page number, frame number) into the TLB. Next time, we can get the frame number from the
TLB directly without referencing the actual page table. Hence, it makes the paging process faster.

TLB (effective access time): EAT = hit ratio × (t1 + t2) + miss ratio × (t1 + t1' + t2),
where t1 = TLB access time, t1' = page-table (memory) access time, and t2 = main-memory access time for the data.
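A worked example with assumed numbers (say t1 = 10 ns for a TLB lookup, t1' = 100 ns for the page-table reference in memory, t2 = 100 ns for the actual memory access, and a hit ratio of 0.9):
EAT = 0.9 × (10 + 100) + 0.1 × (10 + 100 + 100) = 99 + 21 = 120 ns, compared with t1' + t2 = 200 ns if every reference had to go through the in-memory page table.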



Segmentation: An important aspect of memory management that became unavoidable with paging is the separation
of the user's view of memory from the actual physical memory. Paging divides all processes into pages
regardless of the fact that a process can have related parts or functions which need to be loaded in the same
page.

- The operating system does not care about the user's view of the process. It may divide the same function into different
pages, and those pages may or may not be loaded into memory at the same time. This decreases the efficiency of
the system.

-It is better to have segmentation, which divides the process into segments. Each segment
contains the same type of functions; for example, the main function can be included in one
segment and the library functions in another segment. Segmentation is a
memory management technique in which a logical address space is a collection of segments,
and these segments are based on the user's view of logical memory. Segments are of variable
size.

USER'S view of memory :- A programmer thinks of the program as being saved with, say, the whole addition
function kept together, and each unit kept together. This is possible in segmentation, but not in paging.

Advantages: a. No internal fragmentation.
b. A segment has a contiguous allocation, hence efficient working within a segment.
c. The size of the segment table is generally less than the size of the page table.
d. It results in a more efficient system because the compiler keeps the same type of functions in one
segment.
Disadvantages: a. External fragmentation.
b. The varying segment sizes make swapping inconvenient.

$Translation of logical address into physical address by the segment table:
The CPU generates a logical address which contains two parts:

- Segment Number (s): the segment number, which is mapped into the segment table.
- Offset (d).
SEGMENT TABLE: With the help of the segment table, the operating system can easily translate a logical address
into a physical address during execution of a program.

The segment number s is used as an index into the segment table. The limit of the respective segment (from the
segment table) is compared with the offset d. If the offset is less than the limit, the address is valid;
otherwise an error is raised because the address is invalid.

d < limit: valid address

In the case of a valid address, the base address of the segment is added to the offset to get the physical address of the
actual word in main memory.
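A minimal sketch of this check-and-add translation with a made-up segment table (the base and limit values are arbitrary):

#include <stdio.h>
#include <stdlib.h>

struct seg_entry { unsigned base; unsigned limit; };

/* Hypothetical segment table: base and limit for each segment. */
static const struct seg_entry seg_table[3] = {
    {1400, 1000},    /* segment 0 */
    {6300,  400},    /* segment 1 */
    {4300, 1100},    /* segment 2 */
};

unsigned translate(unsigned s, unsigned d)
{
    if (d >= seg_table[s].limit) {           /* offset must be less than the limit */
        fprintf(stderr, "trap: invalid address (s=%u, d=%u)\n", s, d);
        exit(EXIT_FAILURE);
    }
    return seg_table[s].base + d;            /* base + offset = physical address */
}

int main(void)
{
    printf("%u\n", translate(2, 53));        /* 4300 + 53 = 4353 */
    translate(1, 852);                       /* traps: 852 exceeds the limit of 400 */
    return 0;
}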

Segmented Paging

Pure segmentation is not very popular and is not used in many operating systems. However,
segmentation can be combined with paging to get the best features of both techniques.

In Segmented Paging, the process's logical address space is divided into variable-size segments, which are further divided
into fixed-size pages.

1. Pages are smaller than segments.


2. Each Segment has a page table which means every program has multiple page tables.
3. The logical address is represented as:

I) Segment Number: it selects the appropriate entry in the segment table.

II) Page Number: it points to the exact page within the segment.

III) Page Offset: used as an offset within the page frame.

TRANSLATION OF LA TO PA USING TABLES: The CPU generates a logical address which is divided into two parts:
Segment Number and Segment Offset. The Segment Offset must be less than the segment limit. The offset is further
divided into the Page Number and Page Offset. The page number is added to the page table base (obtained from the
segment table) to locate the corresponding entry in that segment's page table.

The frame number obtained there, combined with the page offset, is used to access main memory and get the desired
word in the page of that segment of the process.
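A rough sketch of this two-level lookup (page size, table sizes and contents are all assumptions for illustration): the segment number selects a page table, the page number selects a frame, and the page offset completes the physical address.

#include <stdio.h>

#define PAGE_SIZE 16u                        /* hypothetical 16-byte pages */

/* One small page table per segment (contents are made up). */
static const unsigned pt_seg0[4] = {3, 9, 4, 1};
static const unsigned pt_seg1[4] = {7, 2, 8, 5};
static const unsigned *const seg_table[2] = {pt_seg0, pt_seg1};

unsigned translate(unsigned s, unsigned p, unsigned d)
{
    const unsigned *page_table = seg_table[s];   /* segment table -> this segment's page table */
    unsigned f = page_table[p];                  /* page table -> frame number */
    return f * PAGE_SIZE + d;                    /* frame base + page offset */
}

int main(void)
{
    /* segment 1, page 2, offset 6 -> frame 8 -> physical 8*16 + 6 = 134 */
    printf("%u\n", translate(1, 2, 6));
    return 0;
}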

Advantages of Segmented Paging

1. It reduces memory usage.


2. Page table size is limited by the segment size.
3. Segment table has only one entry corresponding to one actual segment.
4. External Fragmentation is not there.
5. It simplifies memory allocation.

Disadvantages of Segmented Paging

1. Internal Fragmentation will be there.


2. The complexity level is much higher as compared to paging.
3. Page Tables need to be contiguously stored in the memory.

VIRTUAL MEMORY

Virtual memory is a technique that allows the execution of processes that are not completely in MAIN memory.
It provides the user with the illusion of having a very big main memory. This is done by treating a part of secondary
memory as if it were main memory.
-In this scheme, the user can load processes bigger than the available main memory, with the illusion that enough
memory is available to load the process.

VIRTUAL MEMORY = even if process size > main memory size, the process can still execute.

-Instead of loading one big process in the main memory, the Operating System loads the different parts/pages of
more than one process in the main memory that are required by the CPU for its execution.

- It is done by swapping pages in and out, guided by page replacement algorithms.

How Virtual Memory Works?

In this scheme, whenever some pages need to be loaded into main memory for execution and the memory is
not available for that many pages, then instead of stopping the pages from entering main
memory, the OS searches for the areas of RAM that have been least used recently or that are not referenced, and copies
them into secondary memory to make space for the new pages in main memory.

Since all of this happens automatically, it makes the computer feel like it has unlimited
RAM.

Advantage: programs (TO BE EXECUTED) can be larger than physical memory.

-Each user program can take less physical memory (because we do not put the whole program in main memory,
only some parts of it), so more programs can run at the same time, with a
corresponding increase in CPU utilization and throughput.

- A program that is not entirely in memory can still run.

Dis-advantage: 1) The system becomes slower since swapping takes time.

2) It takes more time in switching between applications.


3) The user will have less hard disk space available for their own use.

VIRTUAL MEMORY MANAGEMENT

Demand Paging: approach to implement virtual memory

-In demand paging, the pages of a process which are least used get stored in secondary memory.
- A page is copied into main memory when it is demanded, i.e., when a page fault occurs. There are various page
replacement algorithms which are used to determine the pages which will be replaced.
-A program called the lazy swapper (also known as the pager) is used for this purpose.

How Demand Paging works?


When a process is to be swapped-in, the pager guesses which pages will be used. Instead of swapping in a whole
process, the pager brings only those pages into memory that it believes will come in use.
Hence, it avoids reading into memory pages that will not be used anyway.
-In this way, the OS decreases the swap time and the amount of physical memory needed.
-The valid-invalid bit scheme is used to distinguish between pages that are in memory and pages that are on the
disk (secondary memory).
i) Valid-invalid bit 1 means the associated page is both legal and in memory.
ii) Valid-invalid bit 0 means the page either is not valid (not in the logical address space of the process) or is valid
but is currently on the disk.

PAGE FAULT (non-maskable H/W interrupt)

-A page fault trap occurs when a process tries to access a page which is not currently present in memory; the OS must
bring the page from swap space into a frame.
-If no frame is free, the OS must also perform page replacement.
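A minimal sketch of how the valid-invalid bit and the page-fault handler fit together (the structure, names such as access_page, and the choice of frame 7 are purely illustrative, not an actual OS interface):

#include <stdio.h>
#include <stdbool.h>

struct pte { unsigned frame; bool valid; };      /* one page-table entry */

static struct pte page_table[4] = {
    {2, true}, {0, false}, {5, true}, {0, false} /* pages 1 and 3 are currently on disk */
};

/* Stand-in for the OS page-fault handler: bring the page in and fix the entry. */
static void page_fault_handler(unsigned p)
{
    printf("page fault on page %u: reading it from swap space...\n", p);
    page_table[p].frame = 7;                     /* pretend a free frame 7 was found       */
    page_table[p].valid = true;                  /* (otherwise page replacement is needed) */
}

unsigned access_page(unsigned p)
{
    if (!page_table[p].valid)                    /* valid-invalid bit is 0 */
        page_fault_handler(p);                   /* trap to the OS, then retry */
    return page_table[p].frame;                  /* valid: the page is in memory */
}

int main(void)
{
    printf("page 0 -> frame %u\n", access_page(0));   /* already in memory */
    printf("page 1 -> frame %u\n", access_page(1));   /* causes a page fault first */
    return 0;
}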

PAGE REPLACEMENT

-Page replacement is the process of replacing one page by another in the memory when there is no free frame.

-To load a page into memory when there is no free frame:

1. Save the contents of the page currently occupying a frame to disk.
2. Load the new page.
3. Update the page table.

-The page replacement algorithm decides which memory page is to be replaced. Some allocated page is
swapped out from the frame and new page is swapped into the freed frame.

ALGOS:
1) FIFO:-

- Allocate a frame to the incoming page by replacing the oldest page in memory.
- Easy to implement.
- Performance is not always good (it shows Belady's anomaly).
- Belady's anomaly: in the FIFO page replacement algorithm, the number of page faults can increase when the
number of frames is increased.


2) Optimal Page Replacement:-

-In this algorithm, we replace the page that will not be used for the longest period of time.
- Lowest page fault rate; no Belady's anomaly.
- Difficult to implement, as the OS requires future knowledge of the reference string, which is not available in practice.
-Used as a benchmark for comparing other algorithms.

3) LRU:-

-The page which has not been used for the longest time in main memory is the one selected for replacement.
-It is like the optimal page replacement algorithm looking backward in time.
-Implemented using counters or a stack.
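A short simulation of FIFO page replacement that just counts faults (the reference string and frame count are arbitrary); rerunning it with a larger NFRAMES on a suitable string is how Belady's anomaly can be observed:

#include <stdio.h>

#define NFRAMES 3

int main(void)
{
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4};       /* hypothetical reference string */
    int n = sizeof refs / sizeof refs[0];

    int frames[NFRAMES];
    for (int i = 0; i < NFRAMES; i++) frames[i] = -1;    /* -1 = empty frame */

    int next = 0, faults = 0;                    /* next = index of the oldest page (FIFO pointer) */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < NFRAMES; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }

        if (!hit) {                              /* page fault: replace the oldest page */
            frames[next] = refs[i];
            next = (next + 1) % NFRAMES;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);        /* 7 faults for this string with 3 frames */
    return 0;
}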

THRASHING

-A situation in which the system spends more time servicing page faults than executing.

-Thrashing is directly linked with the degree of multiprogramming.
(The degree of multiprogramming is the number of processes present in RAM.)

Processes go into the blocked state whenever they perform I/O operations. At that time the CPU becomes idle,
throughput decreases and performance starts degrading. But we are supposed to keep the CPU busy all the time to
increase its utilization, and since RAM is limited we cannot bring all the processes into RAM. So we use the concept of
paging (i.e., divide each process into pages and bring some pages of each process into memory to increase the degree of
multiprogramming and keep the utilization high).

Now, since all these processes are active, they will demand the other pages of the same process from
memory and page replacement will be needed again and again. This triggers a chain reaction of page faults. This
situation, when CPU utilization diminishes due to high paging activity, is called thrashing.

//In other words, suppose page 2 of process P1, page 5 of P2 and page 8 of P2 are loaded; we have filled memory with
all the processes and some of their pages. Now the different processes keep demanding different pages, so the CPU
just keeps servicing page faults, which is called thrashing.//

Causes of thrashing:
1. High degree of multiprogramming.
2. Lack of frames.
3. Page replacement policy.

TECHNIQUES TO HANDLE THRASHING

1) Working set model

-This model is based on the concept of the Locality Model.

- The basic principle states that if we allocate enough frames to a process to accommodate its current locality, it will only
fault whenever it moves to some new locality. But if the allocated frames are fewer than the size of the current locality,
the process is bound to thrash.
// A locality is a set of pages that are actively used together. The locality model states that as a process executes,
it moves from one locality to another. A program is generally composed of several different localities, which may
overlap.
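As a small sketch of the idea, the working set at a given time is just the set of distinct pages referenced in the last Δ references (the window size DELTA and the reference string below are assumed values):

#include <stdio.h>

#define DELTA 4                                  /* working-set window size (assumed) */

int main(void)
{
    int refs[] = {1, 2, 1, 3, 4, 4, 3, 4, 2, 2}; /* hypothetical reference string */
    int n = sizeof refs / sizeof refs[0];

    int seen[16] = {0};                          /* page numbers assumed to be < 16 */
    int ws_size = 0;

    /* Working set at the end of the string = distinct pages in the last DELTA references. */
    for (int i = n - DELTA; i < n; i++)
        if (!seen[refs[i]]) { seen[refs[i]] = 1; ws_size++; }

    printf("working-set size = %d\n", ws_size);  /* pages {3, 4, 2} -> size 3 */
    return 0;
}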

2) Page Fault frequency


- The problem associated with Thrashing is the high page fault rate
and thus, the concept here is to control the page fault rate.
-If the page fault rate is too high, it indicates that the process has too
few frames allocated to it. On the contrary, a low page fault rate
indicates that the process has too many frames.
-Upper and lower limits can be established on the desired page fault
rate as shown in the diagram.
-If the page fault rate falls below the lower limit, frames can be
removed from the process. Similarly, if the page fault rate exceeds the
upper limit, more frames can be allocated to the process.
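A rough sketch of that control rule (the threshold values and the function name adjust_frames are assumptions):

#include <stdio.h>

#define PFF_UPPER 0.20                           /* assumed upper limit on the fault rate */
#define PFF_LOWER 0.05                           /* assumed lower limit on the fault rate */

/* Decide how a process's frame allocation should change, given its current fault rate. */
int adjust_frames(double fault_rate, int frames)
{
    if (fault_rate > PFF_UPPER)
        return frames + 1;                       /* too many faults: allocate another frame */
    if (fault_rate < PFF_LOWER && frames > 1)
        return frames - 1;                       /* very few faults: a frame can be removed */
    return frames;                               /* within the limits: leave it alone */
}

int main(void)
{
    printf("%d\n", adjust_frames(0.30, 4));      /* -> 5 */
    printf("%d\n", adjust_frames(0.01, 4));      /* -> 3 */
    printf("%d\n", adjust_frames(0.10, 4));      /* -> 4 */
    return 0;
}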
