
Introduction to memory and memory units

Memories are made up of registers. Each register in the memory is one storage location, also called a memory location. Memory locations are identified by an address, and the total number of bits a memory can store is its capacity.
A storage element is called a cell. Each register is made up of cells, each of which stores one bit of data. Data in a memory is stored and retrieved by the processes called writing and reading, respectively.

A word is the group of bits that a memory unit stores or retrieves in one operation. A word of 8 bits is called a byte.
A memory unit consists of data lines, address selection lines, and control lines that specify the direction of transfer. The block diagram of a memory unit is shown below:

Operating Systems - Unit 4 GNIT, Hyderabad.


Data lines provide the information to be stored in memory. The control inputs specify the direction of transfer. The k address lines select the word to be accessed.
When there are k address lines, 2^k memory words can be accessed.
The internal structure of a memory, whether RAM or ROM, is made up of memory cells, each holding one bit. A group of 8 bits makes a byte. The memory is organized as a two-dimensional array of rows and columns, in which each cell stores a bit and a complete row holds a word.
The capacity of a memory can be expressed as:
2^n = N
where n is the number of address lines and N is the total memory in bytes (assuming one byte per word).
There will be 2^n words.
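As a quick sanity check, the relation 2^n = N can be sketched in a few lines. This is a hypothetical illustration, not from the text; the numbers are chosen only as examples.

```python
import math

def words_addressable(n_address_lines: int) -> int:
    """With n address lines, 2**n distinct words can be selected."""
    return 2 ** n_address_lines

# A memory with 10 address lines can address 2^10 = 1024 words (1 K).
assert words_addressable(10) == 1024

# Conversely, a 4096-word memory needs log2(4096) = 12 address lines.
assert math.ceil(math.log2(4096)) == 12
```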

Random Access Memory (RAM) and Read Only Memory (ROM)

Memory is the most essential element of a computing system because without it a computer cannot perform even simple tasks. Computer memory is of two basic types: primary memory (RAM and ROM) and secondary memory (hard drive, CD, etc.). Random Access Memory (RAM) is primary volatile memory and Read Only Memory (ROM) is primary non-volatile memory.

1. Random Access Memory (RAM) –

• It is also called read-write memory or the main memory or the primary memory.
• The programs and data that the CPU requires during the execution of a program are stored
in this memory.
• It is a volatile memory, as the data is lost when the power is turned off.
• RAM is further classified into two types- SRAM (Static Random Access Memory) and DRAM
(Dynamic Random Access Memory).



2. Read Only Memory (ROM) –

• Stores crucial information essential to operate the system, like the program essential to boot
the computer.
• It is not volatile.
• Always retains its data.
• Used in embedded systems or where the programming needs no change.
• Used in calculators and peripheral devices.
• ROM is further classified into four types: MROM (masked ROM), PROM, EPROM, and EEPROM.

Types of Read Only Memory (ROM) –

1. PROM (Programmable Read-Only Memory) – It can be programmed by the user. Once programmed, the data and instructions in it cannot be changed.

2. EPROM (Erasable Programmable Read-Only Memory) – It can be reprogrammed. To erase data from it, expose it to ultraviolet light; to reprogram it, all the previous data must first be erased.

3. EEPROM (Electrically Erasable Programmable Read-Only Memory) – The data can be erased by applying an electric field, with no need for ultraviolet light. Portions of the chip can be erased selectively.

What is Memory Management?

Memory management is the process of controlling and coordinating computer memory, assigning portions known as blocks to the various running programs to optimize the overall performance of the system.
It is one of the most important functions of an operating system, managing primary memory. It allows processes to move back and forth between main memory and the disk during execution, and it lets the OS keep track of every memory location, whether it is allocated to some process or free.

Why Use Memory Management?


Here are the reasons for using memory management:

• It determines how much memory to allocate to each process, deciding which process should get memory at what time.
• It tracks whenever memory gets freed or unallocated and updates the status accordingly.
• It allocates space to application routines.
• It makes sure that these applications do not interfere with one another.
• It helps protect processes from each other.
• It places programs in memory so that memory is utilized to its fullest extent.

Memory Management Techniques

Here are some of the most important memory management techniques:

Single Contiguous Allocation

It is the simplest memory management technique. In this method, all of the computer's memory, except for a small portion reserved for the OS, is available to a single application. The MS-DOS operating system allocates memory in this way, for example. An embedded system also typically runs a single application.

Partitioned Allocation
It divides primary memory into various memory partitions, which is mostly contiguous areas of
memory. Every partition stores all the information for a specific task or job. This method consists
of allotting a partition to a job when it starts & unallocate when it ends.



Paged Memory Management
This method divides the computer's main memory into fixed-size units known as page frames. The hardware memory management unit maps pages onto frames, and memory is allocated on a page basis.

Segmented Memory Management

Segmented memory is the only memory management method that does not provide the user's program with a linear and contiguous address space.
Segments need hardware support in the form of a segment table. It contains the physical address of the segment in memory, its size, and other data such as access protection bits and status.

What is Swapping?
Swapping is a method in which a process is temporarily moved from main memory to a backing store, and later brought back into memory to continue execution.
The backing store is a hard disk or some other secondary storage device that must be big enough to accommodate copies of all memory images for all users, and it must offer direct access to these memory images.



Benefits of Swapping

Here are the major benefits of swapping:

• It offers a higher degree of multiprogramming.
• It allows dynamic relocation. For example, if address binding is performed at execution time, processes can be swapped into different locations; with compile-time or load-time binding, processes must be moved back to the same location.
• It helps achieve better utilization of memory.
• It wastes little CPU time, so it can easily be combined with a priority-based scheduling method to improve performance.

What is Memory allocation?

Memory allocation is the process by which computer programs are assigned memory or space. Main memory is divided into two types of partitions:

1. Low memory – the operating system resides in this part of memory.
2. High memory – user processes are held in high memory.

Partition Allocation
Memory is divided into different blocks or partitions, and each process is allocated a partition according to its requirements. Partition allocation is an effective method for limiting internal fragmentation.
Below are the various partition allocation schemes:

• First Fit: The allocated partition is the first sufficiently large block found from the beginning of main memory.
• Best Fit: The process is allocated to the smallest sufficiently large partition among the free partitions.
• Worst Fit: The process is allocated to the largest sufficiently large free partition in main memory.
• Next Fit: Similar to First Fit, but the search for a sufficiently large partition resumes from the point of the last allocation.
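The four schemes above can be sketched in a few lines of Python. This is an illustrative sketch, not from the source; the free-partition sizes are invented for the example.

```python
def first_fit(partitions, size):
    """Return the index of the first partition that fits, or None."""
    for i, p in enumerate(partitions):
        if p >= size:
            return i
    return None

def best_fit(partitions, size):
    """Return the index of the smallest partition that fits, or None."""
    candidates = [(p, i) for i, p in enumerate(partitions) if p >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(partitions, size):
    """Return the index of the largest partition that fits, or None."""
    candidates = [(p, i) for i, p in enumerate(partitions) if p >= size]
    return max(candidates)[1] if candidates else None

def next_fit(partitions, size, last=0):
    """Like first fit, but resume searching from the last allocation point."""
    n = len(partitions)
    for k in range(n):
        i = (last + k) % n
        if partitions[i] >= size:
            return i
    return None

free = [100, 500, 200, 300, 600]
# For a request of 212 units: first fit picks 500, best fit 300, worst fit 600.
assert first_fit(free, 212) == 1
assert best_fit(free, 212) == 3
assert worst_fit(free, 212) == 4
assert next_fit(free, 212, last=2) == 3  # search resumes at index 2
```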

What is Paging?
Paging is a storage mechanism that allows the OS to retrieve processes from secondary storage into main memory in the form of pages. In the paging method, main memory is divided into small fixed-size blocks of physical memory called frames. The size of a frame is kept the same as that of a page to achieve maximum utilization of main memory and to avoid external fragmentation. Paging provides faster access to data and is a logical concept.



What is Fragmentation?

As processes are loaded into and removed from memory, the free memory space is broken into pieces that are too small to be used by other processes.
Fragmentation is the condition in which processes cannot be allocated to memory blocks because the blocks, though free, are too small, and so they remain unused. This problem arises in a dynamic memory allocation system when the free blocks are too small to satisfy any request.
The two types of fragmentation are:

1. External fragmentation
2. Internal fragmentation

• External fragmentation can be reduced by compacting memory contents to place all free memory together in a single block.
• Internal fragmentation can be reduced by assigning the smallest partition that is still large enough to hold the entire process.

What is Segmentation?
Segmentation works much like paging. The only difference between the two is that segments are of variable length, whereas in the paging method pages are always of a fixed size.
A program's segments include its main function, data structures, utility functions, and so on. The OS maintains a segment map table for every process, together with a list of free memory blocks along with their sizes, segment numbers, and memory locations in main memory or virtual memory.

What is Dynamic Loading?

With dynamic loading, a routine of a program is not loaded until the program calls it. All routines are kept on disk in a relocatable load format. The main program is loaded into memory and executed, and other routines are loaded on demand. Dynamic loading provides better memory space utilization.

What is Dynamic Linking?

Linking is the method by which the OS collects and merges various modules of code and data into a single executable file, which can then be loaded into memory and executed. The OS can link system-level libraries into a program by combining the libraries at load time. In the dynamic linking method, libraries are linked at execution time, so the program's code size can remain small.

Difference Between Static and Dynamic Loading

Static Loading:
• The program is loaded statically: at compilation time the entire program is linked and compiled without the need for any external module or program dependency.
• At load time, the entire program is loaded into memory and starts executing.

Dynamic Loading:
• References are provided at compile time, and the actual loading is done at execution time.
• Library routines are loaded into memory only when they are required by the program.

Difference Between Static and Dynamic Linking

Here are the main differences between static and dynamic linking:

Static Linking:
• Combines all modules required by a program into a single executable, which helps prevent any runtime dependency.

Dynamic Linking:
• The actual module or library is not linked into the program. Instead, a reference to the dynamic module is provided at compile and link time, and the module is linked at execution time.



Summary:

• Memory management is the process of controlling and coordinating computer memory, assigning portions called blocks to various running programs to optimize the overall performance of the system.
• It determines how much memory to allocate to each process, deciding which process should get memory at what time.
• In single contiguous allocation, all of the computer's memory except a small portion reserved for the OS is available to one application.
• The partitioned allocation method divides primary memory into several memory partitions, which are usually contiguous areas of memory.
• The paged memory management method divides the computer's main memory into fixed-size units known as page frames.
• Segmented memory is the only memory management method that does not provide the user's program with a linear and contiguous address space.
• Swapping is a method in which a process is temporarily moved from main memory to a backing store, and later brought back into memory to continue execution.
• Memory allocation is the process by which computer programs are assigned memory or space.
• Paging is a storage mechanism that allows the OS to retrieve processes from secondary storage into main memory in the form of pages.
• Fragmentation refers to the condition of memory in which free space is divided into small scattered pieces, too small to be used by other processes.
• Segmentation works much like paging; the only difference between the two is that segments are of variable length, whereas in the paging method pages are always of a fixed size.
• With dynamic loading, a routine of a program is not loaded until the program calls it.
• Linking is the method by which the OS collects and merges various modules of code and data into a single executable file.



Logical and Physical Address in Operating System

A logical address is generated by the CPU while a program is running. The logical address is virtual, as it does not exist physically; it is therefore also known as a virtual address. The CPU uses this address as a reference to access the physical memory location. The term logical address space refers to the set of all logical addresses generated from a program's perspective.
A hardware device called the Memory Management Unit (MMU) maps each logical address to its corresponding physical address.
A physical address identifies the physical location of data in memory. The user never deals directly with physical addresses but accesses them via the corresponding logical addresses. The user program generates logical addresses and believes the program is running in this logical address space, but the program needs physical memory for its execution, so the logical addresses must be mapped to physical addresses by the MMU before they are used. The term physical address space refers to all physical addresses corresponding to the logical addresses of a logical address space.

Mapping virtual-address to physical-addresses



Differences Between Logical and Physical Address in Operating System

1. The basic difference between Logical and physical address is that Logical address is
generated by CPU in perspective of a program whereas the physical address is a location
that exists in the memory unit.
2. Logical Address Space is the set of all logical addresses generated by CPU for a program
whereas the set of all physical address mapped to corresponding logical addresses is called
Physical Address Space.
3. The logical address does not exist physically in the memory whereas physical address is a
location in the memory that can be accessed physically.
4. Identical logical and physical addresses are generated by compile-time and load-time address binding, whereas they differ from each other under run-time address binding.
5. The logical address is generated by the CPU while the program is running whereas the
physical address is computed by the Memory Management Unit (MMU).

Comparison Chart:

Parameter       Logical Address                        Physical Address
Basic           Generated by the CPU                   A location in the memory unit
Address Space   The set of all logical addresses       The set of all physical addresses
                generated by the CPU for a program     mapped to the corresponding
                                                       logical addresses
Visibility      The user can view the logical          The user can never view the
                address of a program                   physical address of a program
Generation      Generated by the CPU                   Computed by the MMU
Access          The user uses the logical address      The user can access a physical
                to access the physical address         address only indirectly



Fixed (or static) Partitioning in Operating System

In operating systems, Memory Management is the function responsible for allocating and
managing computer’s main memory. Memory Management function keeps track of the status of
each memory location, either allocated or free to ensure effective and efficient use of Primary
Memory.
There are two Memory Management Techniques: Contiguous, and Non-Contiguous. In
Contiguous Technique, executing process must be loaded entirely in main-memory. Contiguous
Technique can be divided into:
1. Fixed (or static) partitioning
2. Variable (or dynamic) partitioning

Fixed Partitioning:
This is the oldest and simplest technique for placing more than one process in main memory. In this scheme, the number of (non-overlapping) partitions in RAM is fixed, but the sizes of the partitions may differ. As this is contiguous allocation, no spanning is allowed. The partitions are made before execution, at system configuration time.

As illustrated in above figure, first process is only consuming 1MB out of 4MB in the main
memory.
Hence, Internal Fragmentation in first block is (4-1) = 3MB.
Sum of Internal Fragmentation in every block = (4-1)+(8-7)+(8-7)+(16-14)= 3+1+1+2 = 7MB.
Suppose a process P5 of size 7 MB arrives. This process cannot be accommodated, in spite of the available free space, because of contiguous allocation (spanning is not allowed). Hence, the 7 MB becomes part of external fragmentation.
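The fragmentation arithmetic above can be checked in a few lines, assuming the partition and process sizes quoted from the figure (4, 8, 8, 16 MB partitions holding 1, 7, 7, 14 MB processes):

```python
# Partition and process sizes (in MB) as assumed from the figure.
partitions = [4, 8, 8, 16]
processes  = [1, 7, 7, 14]

# Internal fragmentation of each partition = partition size - process size.
internal = [p - q for p, q in zip(partitions, processes)]
assert internal == [3, 1, 1, 2]
assert sum(internal) == 7  # 7 MB of internal fragmentation in total

# A new 7 MB process cannot use these scattered holes (no spanning),
# so that 7 MB counts as external fragmentation instead.
```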



Fixed partitioning has some advantages and disadvantages.
Advantages of Fixed Partitioning –
1. Easy to implement:
The algorithms needed to implement fixed partitioning are simple. Allocation merely requires placing a process into a suitable partition, without worrying about the emergence of internal and external fragmentation.
2. Little OS overhead:
Fixed partitioning requires little extra or indirect computational power.
Disadvantages of Fixed Partitioning –
1. Internal fragmentation:
Main memory is used inefficiently. Any program, no matter how small, occupies an entire partition, which causes internal fragmentation.
2. External fragmentation:
The total unused space (as described above) across the partitions cannot be used to load processes: space is available, but not in contiguous form (and spanning is not allowed).
3. Limited process size:
A process larger than the largest partition in main memory cannot be accommodated, since the partition size cannot be adjusted to the size of the incoming process. Hence, a process of size 32 MB is invalid in the example stated above.
4. Limitation on the degree of multiprogramming:
Partitions in main memory are created before execution, at system configuration time, so main memory is divided into a fixed number of partitions.
5. The number of processes cannot exceed the number of partitions in RAM in fixed partitioning.



Variable (or dynamic) Partitioning in Operating System

As discussed above, contiguous memory management techniques can be divided into:
1. Fixed (or static) partitioning
2. Variable (or dynamic) partitioning
Variable Partitioning –
It is a contiguous allocation technique used to alleviate the problems of fixed partitioning. In contrast with fixed partitioning, partitions are not created before execution or at system configuration time. Features of variable partitioning:
1. Initially RAM is empty, and partitions are made at run time according to each process's needs, rather than at system configuration time.
2. The size of each partition equals the size of the incoming process.
3. Because the partition size matches the need of the process, internal fragmentation is avoided, ensuring efficient utilization of RAM.
4. The number of partitions in RAM is not fixed; it depends on the number of incoming processes and the size of main memory.



There are some advantages and disadvantages of variable partitioning over fixed partitioning as
given below.

Advantages of Variable Partitioning –

1. No internal fragmentation:
In variable partitioning, space in main memory is allocated strictly according to the needs of the process, so there is no internal fragmentation: no unused space is left within a partition.
2. No restriction on the degree of multiprogramming:
More processes can be accommodated due to the absence of internal fragmentation; processes can be loaded as long as free memory remains.
3. No limitation on process size:
In fixed partitioning, a process larger than the largest partition could not be loaded, and a process cannot be divided, as that is invalid in a contiguous allocation technique. In variable partitioning, process size is not restricted, since the partition size is decided according to the process size.
Disadvantages of Variable Partitioning –
1. Difficult implementation:
Implementing variable partitioning is harder than fixed partitioning, as it involves allocating memory at run time rather than at system configuration time.
2. External fragmentation:
External fragmentation occurs despite the absence of internal fragmentation. For example, suppose that in the example above, process P1 (2 MB) and process P3 (1 MB) complete their execution, leaving two holes of 2 MB and 1 MB. Now suppose a process P5 of size 3 MB arrives. The empty space cannot be allocated, since no spanning is allowed in contiguous allocation: a process must be contiguously present in main memory to be executed. Hence P5 cannot be accommodated in spite of sufficient total free space, and the result is external fragmentation.



Non-Contiguous Allocation in Operating System

Paging and segmentation are the two ways that allow a process's physical address space to be non-contiguous. They have the advantage of reducing memory wastage, but they increase the overhead of address translation; memory access slows down because time is consumed in address translation.
In non-contiguous allocation, the operating system maintains, for each process, a table called the page table, which contains the base address of each block acquired by the process in memory. Different parts of a process are allocated to different places in main memory: spanning is allowed, which is not possible in techniques like dynamic or static contiguous memory allocation. That is why paging is needed to ensure effective memory allocation; paging removes external fragmentation.
Working:
Here a process can be spanned across different spaces in main memory in a non-consecutive manner. Suppose a process P has size 4 KB, and main memory has two empty slots of 2 KB each, for a total free space of 2 * 2 = 4 KB. In contiguous memory allocation, process P cannot be accommodated, as spanning is not allowed.
In contiguous allocation, memory must be allocated to the whole process; if that is impossible, the space remains unallocated. But in non-contiguous allocation, a process can be divided into parts that fill the holes in main memory. In this example, process P can be divided into two parts of 2 KB each: one part can be allocated to the first 2 KB hole and the other part to the second 2 KB hole. The diagram below illustrates this:



How a process is divided for placement in main memory is important to understand. If the process were divided only after analysing the number and sizes of the empty spaces in main memory, the division would be very time-consuming, because the number and sizes of the holes change continually as the processes already in main memory execute.
To avoid this time-consuming step, the process is divided in secondary memory in advance, before it reaches main memory for execution. Every process is divided into parts of equal size called pages, and main memory is divided into parts of equal size called frames. It is important that:

Size of a page in the process = Size of a frame in memory

although the numbers of pages and frames can differ. The diagram below illustrates this: consider an empty main memory in which each frame is 2 KB, and two processes P1 and P2 of 2 KB each.

The resulting main memory:



In conclusion, paging allows the memory address space of a process to be non-contiguous. Paging is more flexible, since only the pages of a process are moved, and it allows more processes to reside in main memory than contiguous memory allocation does.
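The division of a process into equal-size pages described above can be sketched in a couple of lines. This is a minimal illustration assuming the 4 KB process and 2 KB frames from the example; the helper name is invented.

```python
import math

def split_into_pages(process_size_kb: int, page_size_kb: int) -> int:
    """Number of fixed-size pages needed to hold the process."""
    return math.ceil(process_size_kb / page_size_kb)

# Process P of 4 KB with 2 KB frames yields 2 pages, each of which can be
# placed in any free frame, so non-contiguous placement works.
assert split_into_pages(4, 2) == 2

# A 5 KB process needs 3 pages; the last page is only partly used.
assert split_into_pages(5, 2) == 3
```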



Paging in Operating System

Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory. This scheme permits the physical address space of a process to be non-contiguous.
• Logical Address or Virtual Address (represented in bits): an address generated by the CPU
• Logical Address Space or Virtual Address Space (represented in words or bytes): the set of all logical addresses generated by a program
• Physical Address (represented in bits): an address actually available in the memory unit
• Physical Address Space (represented in words or bytes): the set of all physical addresses corresponding to the logical addresses
Example:
• If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words (1 G = 2^30)
• If Logical Address Space = 128 M words = 2^7 * 2^20 = 2^27 words, then Logical Address = log2(2^27) = 27 bits
• If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words (1 M = 2^20)
• If Physical Address Space = 16 M words = 2^4 * 2^20 = 2^24 words, then Physical Address = log2(2^24) = 24 bits
The mapping from virtual to physical address is done by the memory management unit (MMU)
which is a hardware device and this mapping is known as paging technique.
• The Physical Address Space is conceptually divided into a number of fixed-size blocks,
called frames.
• The Logical address Space is also split into fixed-size blocks, called pages.
• Page Size = Frame Size
Let us consider an example:
• Physical Address = 12 bits, then Physical Address Space = 4 K words
• Logical Address = 13 bits, then Logical Address Space = 8 K words
• Page size = frame size = 1 K words (assumption)



The address generated by the CPU is divided into:
• Page number (p): the number of bits required to represent the pages in the logical address space, i.e., the page number
• Page offset (d): the number of bits required to represent a particular word in a page, i.e., the page size of the logical address space, the word number within a page, or the page offset
The physical address is divided into:
• Frame number (f): the number of bits required to represent the frames of the physical address space, i.e., the frame number
• Frame offset (d): the number of bits required to represent a particular word in a frame, i.e., the frame size of the physical address space, the word number within a frame, or the frame offset
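The split of a logical address into page number and offset can be illustrated with the running example (a 13-bit logical address and 1 K-word pages, hence a 10-bit offset). The page-table contents here are invented for illustration.

```python
OFFSET_BITS = 10                   # 2^10 = 1024 words per page
logical_address = 0b1011010011010  # a 13-bit logical address (= 5786)

# High bits select the page, low bits select the word within the page.
page_number = logical_address >> OFFSET_BITS            # 5786 // 1024 = 5
offset = logical_address & ((1 << OFFSET_BITS) - 1)     # 5786 % 1024 = 666

# A hypothetical page table mapping page numbers to frame numbers.
page_table = {0: 2, 1: 0, 2: 3, 3: 1, 4: 2, 5: 1}
frame = page_table[page_number]                         # frame 1

# Physical address = frame number concatenated with the same offset.
physical_address = (frame << OFFSET_BITS) | offset
assert (page_number, offset, physical_address) == (5, 666, 1690)
```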

The page table can be implemented in hardware using dedicated registers, but registers are satisfactory only if the page table is small. If the page table contains a large number of entries, we can use a TLB (Translation Look-aside Buffer), a special, small, fast look-up hardware cache.
• The TLB is associative, high-speed memory.
• Each entry in the TLB consists of two parts: a tag and a value.
• When this memory is used, an item is compared with all tags simultaneously. If the item is found, the corresponding value is returned.



Let the main memory access time be m.
If the page table is kept in main memory,
Effective access time = m (to access the page table) + m (to access the word itself)
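This formula can be checked numerically and extended with a TLB. The access times and hit ratio below are assumptions for illustration, not values from the text.

```python
m = 100          # main-memory access time in ns (assumed)
tlb = 20         # TLB lookup time in ns (assumed)
hit_ratio = 0.9  # fraction of translations found in the TLB (assumed)

# Without a TLB: one access for the page table, one for the word itself.
eat_no_tlb = m + m
assert eat_no_tlb == 200

# With a TLB: on a hit, only the TLB lookup plus one memory access is paid;
# on a miss, the page-table access in memory is added.
eat_tlb = hit_ratio * (tlb + m) + (1 - hit_ratio) * (tlb + m + m)
assert abs(eat_tlb - 130.0) < 1e-9  # 0.9*120 + 0.1*220 = 130 ns
```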



Page Table Entries in Page Table

A page table consists of page table entries (PTEs), where each entry stores a frame number and optional status bits (such as protection bits). Many of these status bits are used by the virtual memory system. The most important field in a PTE is the frame number.
Page table entry has the following information –

1. Frame number – It gives the number of the frame in which the page you are looking for is present. The number of bits required depends on the number of frames. The frame field is also known as the address translation field.
2. Number of frames = size of physical memory / frame size; the number of bits for the frame number is the base-2 logarithm of this value.
3. Present/Absent bit – Present or absent bit says whether a particular page you are looking
for is present or absent. In case if it is not present, that is called Page Fault. It is set to 0 if
the corresponding page is not in memory. Used to control page fault by the operating
system to support virtual memory. Sometimes this bit is also known as valid/invalid bits.
4. Protection bit – The protection bits specify what kind of access is allowed on that page; these bits protect the page frame (read, write, etc.).
5. Referenced bit – Referenced bit will say whether this page has been referred in the last
clock cycle or not. It is set to 1 by hardware when the page is accessed.
6. Caching enabled/disabled – Sometimes fresh data is required. Say the user is typing information at the keyboard and the program must act on that input: the information arrives in main memory, so main memory holds the latest data, but a cached copy of that page may show stale information. Whenever freshness is required, caching (and extra levels of the memory hierarchy) should be avoided, because the information closest to the CPU and the information closest to the user might differ, and the CPU should see the user's input as soon as possible. That is the reason for disabling caching. This bit enables or disables caching of the page.
7. Modified bit – The modified bit says whether the page has been modified, i.e., whether something has been written to the page. If a page has been modified, then when it is replaced by another page, the modified contents must be written back to the hard disk. The bit is set to 1 by hardware on a write access to the page, and it is used to avoid writing back pages that were never modified when they are swapped out. This bit is also called the dirty bit.
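One plausible way to pack these fields into a single integer PTE is sketched below. The bit positions and widths are invented for illustration; real hardware layouts differ.

```python
# Assumed layout: 5 status bits in the low bits, frame number above them.
PRESENT        = 1 << 0  # present/absent bit
PROTECT_W      = 1 << 1  # write permitted (protection bit)
REFERENCED     = 1 << 2  # referenced bit
CACHE_DISABLED = 1 << 3  # caching disabled bit
MODIFIED       = 1 << 4  # modified ("dirty") bit
STATUS_BITS = 5

def make_pte(frame_number: int, flags: int) -> int:
    """Pack a frame number and status flags into one integer PTE."""
    return (frame_number << STATUS_BITS) | flags

def frame_of(pte: int) -> int:
    """Extract the frame number back out of a PTE."""
    return pte >> STATUS_BITS

pte = make_pte(42, PRESENT | REFERENCED)
assert frame_of(pte) == 42
assert pte & PRESENT           # the page is in memory
assert not (pte & MODIFIED)    # nothing has been written to it yet
```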



Inverted Page Table in Operating System

Most operating systems implement a separate page table for each process: for n processes running on a multiprocessing/timesharing operating system, there are n page tables stored in memory. When a process is very large and occupies much virtual memory, its page table also grows substantially with the size of the process.
Example: A process of size 2 GB with:
Page size = 512 Bytes
Size of page table entry = 4 Bytes, then
Number of pages in the process = 2 GB / 512 B = 2^31 / 2^9 = 2^22
Page Table Size = 2^22 * 2^2 = 2^24 bytes = 16 MB
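The arithmetic in this example can be checked directly (a quick sanity check, not part of the original example):

```python
# Page table size for a 2 GB process with 512 B pages and 4 B entries.
process_size = 2 * 1024**3   # 2 GB = 2^31 bytes
page_size    = 512           # 2^9 bytes
entry_size   = 4             # 2^2 bytes per page-table entry

num_pages  = process_size // page_size   # 2^22 pages
table_size = num_pages * entry_size      # 2^24 bytes

print(num_pages == 2**22, table_size == 2**24, table_size // 1024**2)  # True True 16
```

So a single process's page table already consumes 16 MB of main memory.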
Through this example, it can be concluded that for multiple processes running simultaneously in
an OS, a considerable part of memory is occupied by page tables only.
Operating Systems also incorporate multilevel paging schemes which further increase the
space required for storing the page tables and a large amount of memory is invested in storing
them. The amount of memory occupied by the page tables can turn out to be a huge overhead
and is always unacceptable as main memory is always a scarce resource. Various efforts are
made to utilize the memory efficiently and to maintain a good balance in the level of
multiprogramming and efficient CPU utilization.
Inverted Page Table –
An alternate approach is to use the Inverted Page Table structure that consists of one-page
table entry for every frame of the main memory. So the number of page table entries in the
Inverted Page Table reduces to the number of frames in physical memory and a single page
table is used to represent the paging information of all the processes.
Through the inverted page table, the overhead of storing an individual page table for every
process is eliminated, and only a fixed portion of memory is required to store the paging
information of all the processes together. This technique is called inverted paging because
the indexing is done with respect to the frame number instead of the logical page number.
Each entry in the page table contains the following fields.
• Page number – It specifies the page number range of the logical address.
• Process id – An inverted page table contains the address space information of all the
processes in execution. Since two different processes can have the same set of virtual
addresses, it becomes necessary in an Inverted Page Table to store the process id of each
process to identify its address space uniquely. This is done by using the combination of
PId and Page Number, so the Process Id acts as an address space identifier and ensures
that a virtual page of a particular process is mapped correctly to the corresponding
physical frame.
• Control bits – These bits are used to store extra paging-related information. These include
the valid bit, dirty bit, reference bits, protection and locking information bits.
• Chained pointer – It is sometimes possible that two or more processes share a part of
main memory. In this case, two or more logical pages map to the same page table entry,
and a chaining pointer is used to map the details of these logical pages to the root page table.
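A minimal sketch of an inverted page table (all names hypothetical): one entry per physical frame, keyed by (process id, page number), with a hash map added for faster lookup, as real implementations commonly do.

```python
class InvertedPageTable:
    """One entry per frame; lookups keyed by (process id, page number)."""
    def __init__(self, num_frames, page_size=4096):
        self.page_size = page_size
        self.frames = [None] * num_frames   # frame i -> (pid, page) or None
        self.index = {}                     # hash: (pid, page) -> frame number

    def map(self, pid, page, frame):
        self.frames[frame] = (pid, page)
        self.index[(pid, page)] = frame

    def translate(self, pid, page, offset):
        frame = self.index.get((pid, page))
        if frame is None:
            raise LookupError("no matching entry: fault")
        return frame * self.page_size + offset  # physical address (frame, offset)

ipt = InvertedPageTable(num_frames=8)
ipt.map(pid=1, page=3, frame=5)         # process 1's page 3 lives in frame 5
print(ipt.translate(1, 3, offset=100))  # 5*4096 + 100 = 20580
```

Note that the table has one entry per frame regardless of how many processes run, which is exactly the space saving the text describes.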

Working – The operation of an inverted page table is shown below.

The virtual address generated by the CPU contains the fields <Process Id, Page Number,
Offset>, and each page table entry contains the fields <Process Id, Page Number> along with
the other relevant information required by the paging mechanism. When a memory reference
takes place, this virtual address is matched by the memory-mapping unit: the Inverted Page
Table is searched for a matching <Process Id, Page Number>, and the corresponding frame
number is obtained. If the match is found at the i-th entry, then the physical address
<i, Offset> is sent as the real address; if no match is found, a Segmentation Fault is generated.
Note: Number of Entries in Inverted page table = Number of frames in Physical address
Space(PAS)
Examples – The Inverted Page table and its variations are implemented in various systems like
PowerPC, UltraSPARC and the IA-64 architecture. An implementation of the Mach operating
system on the RT-PC also uses this technique.
Advantages and Disadvantages:
• Reduced memory space –
Inverted page tables typically reduce the amount of memory required to store the page
tables to a size bounded by physical memory: the maximum number of entries is the
number of page frames in physical memory.
• Longer lookup time –
Inverted page tables are sorted in order of frame number, but memory look-ups take place
with respect to the virtual address, so it usually takes longer to find the appropriate
entry. These page tables are therefore often implemented using hash data structures for
faster lookup.
• Difficult shared memory implementation –
As the Inverted Page Table stores a single entry for each frame, it becomes difficult to
implement the shared memory in the page tables. Chaining techniques are used to map
more than one virtual address to the entry specified in order of frame number.

Segmentation in Operating System

A process is divided into Segments: the chunks that a program is divided into, which are not
necessarily all of the same size. Segmentation gives the user’s view of the process, which
paging does not; here the user’s view is mapped to physical memory.
There are two types of segmentation:
1. Virtual memory segmentation –
Each process is divided into a number of segments, not all of which are resident at any
one point in time.
2. Simple segmentation –
Each process is divided into a number of segments, all of which are loaded into memory at
run time, though not necessarily contiguously.
There is no simple relationship between logical addresses and physical addresses in
segmentation. A table stores the information about all such segments and is called Segment
Table.
Segment Table – It maps the two-dimensional logical address into a one-dimensional physical
address. Each of its entries has:
• Base Address: It contains the starting physical address where the segments reside in
memory.
• Limit: It specifies the length of the segment.

Translation of Two dimensional Logical Address to one dimensional Physical Address.

The address generated by the CPU is divided into:
• Segment number (s): the number of bits required to represent the segment.
• Segment offset (d): the number of bits required to represent the offset within the segment.
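Translation through the segment table can be sketched as follows; the base and limit values here are made up for illustration.

```python
# Hypothetical segment table: segment number -> (base address, limit).
segment_table = {0: (1400, 1000),
                 1: (6300, 400),
                 2: (4300, 1100)}

def translate(s, d):
    """Map a two-dimensional logical address (s, d) to a physical address."""
    base, limit = segment_table[s]
    if d >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + d

print(translate(2, 53))  # segment 2 starts at 4300, so 4300 + 53 = 4353
```

An offset at or beyond the limit traps, which is how segmentation enforces protection at segment granularity.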
Advantages of Segmentation –
• No Internal fragmentation.
• Segment Table consumes less space in comparison to Page table in paging.
Disadvantage of Segmentation –
• As processes are loaded and removed from the memory, the free memory space is broken
into little pieces, causing External fragmentation.

Virtual Memory in Operating System

Virtual Memory is a storage allocation scheme in which secondary memory can be addressed
as though it were part of main memory. The addresses a program may use to reference
memory are distinguished from the addresses the memory system uses to identify physical
storage sites, and program generated addresses are translated automatically to the
corresponding machine addresses.
The size of virtual storage is limited by the addressing scheme of the computer system and
by the amount of secondary memory available, not by the actual number of main storage
locations.
It is a technique that is implemented using both hardware and software. It maps memory
addresses used by a program, called virtual addresses, into physical addresses in computer
memory.
1. All memory references within a process are logical addresses that are dynamically
translated into physical addresses at run time. This means that a process can be swapped
in and out of main memory such that it occupies different places in main memory at
different times during the course of execution.
2. A process may be broken into a number of pieces, and these pieces need not be
contiguously located in main memory during execution. The combination of dynamic
run-time address translation and the use of a page or segment table permits this.
If these characteristics are present, then it is not necessary that all the pages or segments
be present in main memory during execution: a page is loaded into memory only when it is
needed. Virtual memory is implemented using Demand Paging or Demand Segmentation.

Demand Paging :
The process of loading the page into memory on demand (whenever page fault occurs) is
known as demand paging.

The process includes the following steps :

1. If the CPU tries to refer to a page that is currently not available in main memory, it
generates an interrupt indicating a memory access fault.
2. The OS puts the interrupted process into a blocked state. For execution to proceed,
the OS must bring the required page into memory.
3. The OS searches for the required page in the logical address space.
4. The required page is brought from the logical address space into the physical address
space. Page replacement algorithms are used to decide which page to replace in the
physical address space.
5. The page table is updated accordingly.
6. A signal is sent to the CPU to continue program execution, and the process is placed
back into the ready state.
Hence whenever a page fault occurs these steps are followed by the operating system and
the required page is brought into memory.
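The steps above can be condensed into a toy page-fault handler. All the structures here are hypothetical stand-ins; blocking the process and signalling the CPU are left implicit.

```python
# Toy demand-paging sketch: resident pages, pages on "disk", and free frames.
page_table    = {}                        # resident pages: page -> frame
backing_store = {0: "A", 1: "B", 2: "C"}  # pages held in secondary storage
free_frames   = [10, 11, 12]

def access(page):
    if page in page_table:          # page resident: no fault (step 1 not taken)
        return page_table[page]
    _ = backing_store[page]         # steps 3-4: locate and fetch the page from disk
    frame = free_frames.pop(0)      # replacement is omitted while free frames remain
    page_table[page] = frame        # step 5: update the page table
    return frame                    # step 6: execution continues

print(access(0), access(1), access(0))  # 10 11 10  (the third access is a hit)
```

The first access to each page faults and consumes a frame; repeated accesses hit the page table and return the same frame.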

Advantages :
• More processes may be maintained in the main memory: Because we are going to load
only some of the pages of any particular process, there is room for more processes. This
leads to more efficient utilization of the processor because it is more likely that at least one
of the more numerous processes will be in the ready state at any particular time.
• A process may be larger than all of main memory: One of the most fundamental
restrictions in programming is lifted. A process larger than the main memory can be
executed because of demand paging. The OS itself loads pages of a process in main
memory as required.

• It allows greater multiprogramming levels by using less of the available (primary) memory
for each process.
Page Fault Service Time :
The time taken to service a page fault is called the page fault service time. It includes
the time taken to perform all of the above six steps.
Let the main memory access time be m,
the page fault service time be s,
and the page fault rate be p.
Then, Effective memory access time = (p * s) + (1 - p) * m
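Plugging illustrative numbers into the formula shows why page faults must stay rare. The values m = 100 ns, s = 8 ms and p = 0.001 are assumptions chosen for the example, not from the text.

```python
m = 100          # main memory access time, in ns
s = 8_000_000    # page fault service time (8 ms), in ns
p = 0.001        # page fault rate: one fault per thousand accesses

eat = p * s + (1 - p) * m   # effective memory access time
print(round(eat, 1))        # 8099.9 ns: even a rare fault dominates the average
```

A fault rate of one in a thousand already makes the average access about 80 times slower than a pure memory access.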
Swapping:
Swapping a process out means removing all of its pages from memory, or marking them so
that they will be removed by the normal page replacement process. Suspending a process
ensures that it is not runnable while it is swapped out. At some later time, the system
swaps the process back from secondary storage to main memory. When a process spends its
time busily swapping pages in and out, the situation is called thrashing.

Thrashing :

At any given time, only a few pages of any process are in main memory, and therefore more
processes can be maintained in memory. Furthermore, time is saved because unused pages are
not swapped in and out of memory. However, the OS must be clever about how it manages this
scheme. In the steady state, practically all of main memory will be occupied with process
pages, so that the processor and OS have direct access to as many processes as possible.
Thus when the OS brings one page in, it must throw another out. If it throws out a page
just before it is used, then it will have to fetch that page again almost immediately.
Too much of this leads to a condition called Thrashing: the system spends most of its time
swapping pages rather than executing instructions. So a good page replacement algorithm is
required.

In the given diagram, up to some initial degree of multiprogramming (the point λ), CPU
utilization is very high and the system resources are utilized 100%. But if we increase the
degree of multiprogramming further, CPU utilization falls drastically: the system spends
more time only in page replacement, and the time taken to complete the execution of a
process increases. This situation is called thrashing.

Causes of Thrashing :
1. High degree of multiprogramming: If the number of processes in memory keeps
increasing, the number of frames allocated to each process decreases, so fewer frames
are available to each process. Due to this, page faults occur more frequently, more CPU
time is wasted just swapping pages in and out, and utilization keeps decreasing.
For example:
Let free frames = 400
Case 1: Number of process = 100
Then, each process will get 4 frames.
Case 2: Number of process = 400
Each process will get 1 frame.
Case 2 is a condition of thrashing: as the number of processes increases, the frames per
process decrease, and CPU time is consumed just swapping pages.

2. Lack of frames: If a process has too few frames, fewer of its pages can reside in
memory, so more frequent swapping in and out is required. This may lead to thrashing.
Hence a sufficient number of frames must be allocated to each process in order to
prevent thrashing.
Recovery from Thrashing :
• Do not allow the system to go into thrashing: instruct the long-term scheduler not to
bring processes into memory after the threshold.
• If the system is already thrashing, instruct the medium-term scheduler to suspend some
of the processes so that the system can recover from thrashing.

Page Replacement Algorithms in Operating Systems

In an operating system that uses paging for memory management, a page replacement
algorithm is needed to decide which page should be replaced when a new page comes in.

The page replacement algorithm decides which memory page is to be replaced. The process of
replacement is sometimes called swap out or write to disk. Page replacement is done when the
requested page is not found in the main memory (page fault).

There are two main aspects of virtual memory, Frame allocation and Page Replacement. It is
very important to have the optimal frame allocation and page replacement algorithm. Frame
allocation is all about how many frames are to be allocated to the process while the page
replacement is all about determining the page number which needs to be replaced in order to
make space for the requested page.

What if the algorithm is not optimal?

1. If the number of frames allocated to a process is not sufficient or accurate, there can
be a problem of thrashing. Due to the lack of frames, most of the pages will not be able to
reside in main memory, and therefore more page faults will occur.

However, if the OS allocates more frames to the process than it needs, there can be internal fragmentation.

2. If the page replacement algorithm is not optimal, there will also be the problem of
thrashing. If the pages replaced by the requested pages are referred to again in the near
future, there will be more swap-ins and swap-outs, and the OS has to perform more
replacements than usual, which degrades performance.

Therefore, the task of an optimal page replacement algorithm is to choose the page which can
limit the thrashing.
Types of Page Replacement Algorithms

There are various page replacement algorithms. Each algorithm has a different method by
which the pages can be replaced.

1. Optimal Page Replacement algorithm → this algorithm replaces the page that will not
be referred to for the longest time in the future. Although it cannot be practically
implemented, it can be used as a benchmark; other algorithms are compared to it in
terms of optimality.
2. Least recently used (LRU) page replacement algorithm → this algorithm replaces the
page that has not been referred to for a long time. It is just the opposite of the
optimal page replacement algorithm: here we look at the past instead of the future.
3. FIFO → in this algorithm, a queue is maintained. The page that was assigned a frame
first is replaced first; in other words, the page at the front of the queue is
replaced on every page fault.

Page Fault – A page fault happens when a running program accesses a memory page that is
mapped into the virtual address space, but not loaded in physical memory.
Since actual physical memory is much smaller than virtual memory, page faults happen. In case
of page fault, Operating System might have to replace one of the existing pages with the newly
needed page. Different page replacement algorithms suggest different ways to decide which
page to replace. The target for all algorithms is to reduce the number of page faults.
Page Replacement Algorithms :
• First In First Out (FIFO) –
This is the simplest page replacement algorithm. In this algorithm, the operating system
keeps track of all pages in the memory in a queue, the oldest page is in the front of the
queue. When a page needs to be replaced page in the front of the queue is selected for
removal.
Example-1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find
the number of page faults.

Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots
—> 3 page faults.
When 3 comes, it is already in memory so —> 0 page faults.
Then 5 comes; it is not available in memory, so it replaces the oldest page, 1 —> 1 page fault.
6 comes; it is also not available in memory, so it replaces the oldest page, 3 —> 1 page fault.
Finally, when 3 comes it is not available, so it replaces 0 —> 1 page fault.

Belady’s anomaly – Belady’s anomaly proves that it is possible to have more page faults
when increasing the number of page frames while using the First in First Out (FIFO) page
replacement algorithm. For example, if we consider reference string 3, 2, 1, 0, 3, 2, 4, 3, 2,
1, 0, 4 and 3 slots, we get 9 total page faults, but if we increase slots to 4, we get 10 page
faults.
• Optimal Page replacement –
In this algorithm, pages are replaced which would not be used for the longest duration of
time in the future.
Example-2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4
page frames. Find the number of page faults.

Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty
slots —> 4 page faults.
0 is already there so —> 0 page faults.
When 3 comes it takes the place of 7, because 7 is not used for the longest duration of
time in the future —> 1 page fault.
0 is already there so —> 0 page faults.
4 takes the place of 1 —> 1 page fault.

Now for the further page reference string —> 0 Page fault because they are already
available in the memory.
Optimal page replacement is perfect, but not possible in practice as the operating system
cannot know future requests. The use of Optimal Page replacement is to set up a
benchmark so that other replacement algorithms can be analyzed against it.
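The optimal policy above can be sketched as: on a fault, evict the resident page whose next use lies farthest in the future (pages never used again are evicted first). Run on Example-2's reference string:

```python
def optimal_faults(refs, num_frames):
    """Count page faults under Belady's optimal (MIN) replacement."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
        else:
            def next_use(p):                 # distance to p's next reference
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else len(refs)
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(optimal_faults(refs, 4))  # 6 page faults, matching the walk-through above
```

The look-ahead in `next_use` is exactly what makes the policy unimplementable in a real OS: the full future reference string is never known.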
• Least Recently Used –
In this algorithm page will be replaced which is least recently used.
Example-3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4
page frames. Find the number of page faults.

Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty
slots —> 4 page faults.
0 is already there so —> 0 page faults.
When 3 comes it takes the place of 7, because 7 is the least recently used —> 1 page fault.
0 is already in memory so —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
Now for the further page reference string —> 0 Page fault because they are already
available in the memory.
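LRU can be sketched with an ordered dictionary acting as the recency stack; here it is run on Example-3's reference string.

```python
from collections import OrderedDict

def lru_faults(refs, num_frames):
    """Count page faults under least-recently-used replacement."""
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # a hit makes the page most recent
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(lru_faults(refs, 4))  # 6 page faults, matching the walk-through above
```

The ordered dict keeps pages sorted by recency of use, so the least recently used page is always at the front and eviction is O(1).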
Belady’s Anomaly in Page Replacement Algorithms

In an operating system, process data is loaded in fixed-sized chunks, and each chunk is
referred to as a page. The processor loads these pages into fixed-sized chunks of memory
called frames; the size of each page is equal to the frame size.

A page fault occurs when a page is not found in the memory, and needs to be loaded from the
disk. If a page fault occurs and all memory frames have been already allocated, then
replacement of a page in memory is required on the request of a new page.

This is referred to as demand paging. The choice of which page to replace is specified by a
page replacement algorithm. Commonly used page replacement algorithms are FIFO, LRU,
optimal page replacement, etc.

Generally, on increasing the number of frames allocated to a process’ virtual memory, its
execution becomes faster, as fewer page faults occur. Sometimes the reverse happens, i.e.
more page faults occur when more frames are allocated to a process. This most unexpected
result is termed Belady’s Anomaly.

Bélády’s anomaly is the name given to the phenomenon where increasing the number of page
frames results in an increase in the number of page faults for a given memory access pattern.

This phenomenon is commonly experienced in the following page replacement algorithms:


1. First in first out (FIFO)
2. Second chance algorithm
3. Random page replacement algorithm

Reason for Belady’s Anomaly –

The other two commonly used page replacement algorithms are Optimal and LRU, but Belady’s
Anomaly can never occur in these algorithms for any reference string, as they belong to the
class of stack-based page replacement algorithms.

A stack based algorithm is one for which it can be shown that the set of pages in memory
for N frames is always a subset of the set of pages that would be in memory with N + 1 frames.

For LRU replacement, the set of pages in memory would be the n most recently referenced
pages. If the number of frames increases then these n pages will still be the most recently
referenced and so, will still be in the memory.
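The inclusion property can be checked empirically (a sketch written for this note): track the LRU recency stack over a reference string and verify that, after every reference, the resident set with n frames is a subset of the resident set with n+1 frames.

```python
def lru_resident_sets(refs, num_frames):
    """Resident set after each reference under LRU with num_frames frames."""
    stack, history = [], []
    for page in refs:
        if page in stack:
            stack.remove(page)
        stack.append(page)                        # most recent page on top
        history.append(set(stack[-num_frames:]))  # top n pages are resident
    return history

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
ok = all(s3 <= s4 for s3, s4 in
         zip(lru_resident_sets(refs, 3), lru_resident_sets(refs, 4)))
print(ok)  # True: the inclusion property holds at every step
```

Running the same subset check on FIFO resident sets fails for this string, which is precisely why FIFO can exhibit the anomaly.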

In FIFO, if a page named b came into physical memory before a page a, then the priority of
replacement of b is greater than that of a; but the resident set is not independent of the
number of page frames, so FIFO does not follow a stack page replacement policy and
therefore suffers from Belady’s Anomaly.
Example: Consider the following diagram to understand the behaviour of a stack-based page
replacement algorithm

The diagram illustrates that the set of pages in memory with 3 frames, i.e. {0, 1, 2}, is
not a subset of the set of pages in memory with 4 frames, {0, 1, 4, 5}, which violates the
property of stack-based algorithms. This situation can frequently be seen with the FIFO
algorithm.
Belady’s Anomaly in FIFO –
Assume a system that has no pages loaded in memory and uses the FIFO page replacement
algorithm. Consider the following reference string:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Case-1: If the system has 3 frames, the given reference string on using FIFO page replacement
algorithm yields a total of 9 page faults. The diagram below illustrates the pattern of the page
faults occurring in the example.

Case-2: If the system has 4 frames, the given reference string on using FIFO page replacement
algorithm yields a total of 10 page faults. The diagram below illustrates the pattern of the page
faults occurring in the example.

It can be seen from the above example that on increasing the number of frames while using the
FIFO page replacement algorithm, the number of page faults increased from 9 to 10.
Note – Not every reference string causes Belady’s Anomaly in FIFO, but certain kinds of
reference strings worsen FIFO performance as the number of frames increases.
Why stack-based algorithms do not suffer the anomaly –
Stack-based algorithms never suffer from Belady’s Anomaly because they assign to each page
a replacement priority that is independent of the number of page frames. Examples of such
policies are Optimal, LRU and LFU. These algorithms also have a good property for
simulation: the miss (or hit) ratio can be computed for any number of page frames with a
single pass through the reference string.

In the LRU algorithm, every time a page is referenced it is moved to the top of the stack,
so the top n pages of the stack are the n most recently used pages. Even if the number of
frames is incremented to n+1, the top of the stack will hold the n+1 most recently used pages.
A similar example can be used to calculate the number of page faults in the LRU algorithm.
Assume a system that has no pages loaded in memory and uses the LRU page replacement
algorithm. Consider the following reference string:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Case-1: If the system has 3 frames, the given reference string on using LRU page replacement
algorithm yields a total of 10 page faults. The diagram below illustrates the pattern of the page
faults occurring in the example.

Case-2: If the system has 4 frames, the given reference string on using LRU page replacement
algorithm, then total 8 page faults occur. The diagram shows the pattern of the page faults in the
example.

Conclusion –
Various factors substantially affect the number of page faults, such as the length of the
reference string and the number of free page frames available. Anomalies can also occur due
to a small cache size or a rapid rate of change of the cache contents. The situation where
the number of page faults stays fixed even after increasing the number of frames can also
be seen as an anomaly. Algorithms like random page replacement are also susceptible to
Belady’s Anomaly, because they may behave like the first in first out (FIFO) page
replacement algorithm. Stack-based algorithms, however, are generally immune to all such
situations, as they are guaranteed not to produce more page faults when the number of
frames is incremented.
Overlays in Memory Management

The main problem in fixed partitioning is that the size of a process is limited by the
maximum size of the partition, which means a process can never span more than one
partition. To solve this problem, an earlier solution called Overlays was used.
The concept of overlays is that a running process does not use the complete program at the
same time; it uses only some part of it.

The overlays technique says: whatever part you require, you load it, and once that part is
done, you unload it, i.e. pull it back and bring in the new part you require and run it.

Formally,
“The process of transferring a block of program code or other data into internal memory,
replacing what is already stored”.
Sometimes the size of the program is even larger than the size of the biggest partition;
in that case, you should go with overlays.

So overlay is a technique to run a program that is bigger than the size of the physical memory
by keeping only those instructions and data that are needed at any given time.
Divide the program into modules in such a way that not all modules need to be in the memory at
the same time.
Advantages –
• Reduces the memory requirement
• Reduces the time requirement
Disadvantage –
• The overlay map must be specified by the programmer
• The programmer must know the memory requirements
• Overlapped modules must be completely disjoint
• Programming design of the overlay structure is complex and not possible in all cases
Example –
The best example of overlays is an assembler. Consider an assembler with 2 passes: 2 passes
means at any time it is doing only one thing, either the 1st pass or the 2nd pass, so it
finishes the 1st pass first and then runs the 2nd pass. Let us assume that the available
main memory size is 150KB and the total code size is 200KB
Pass 1.......................70KB
Pass 2.......................80KB
Symbol table.................30KB
Common routine...............20KB
As the total code size is 200KB and the main memory size is 150KB, it is not possible to
keep both passes in memory together, so in this case we should use the overlays technique.
According to the overlays concept, at any time only one pass is in use, and both passes
always need the symbol table and the common routine. Now the question: if the overlay
driver* is 10KB, what is the minimum partition size required? For pass 1 the total memory
needed is (70KB + 30KB + 20KB + 10KB) = 130KB, and for pass 2 the total memory needed is
(80KB + 30KB + 20KB + 10KB) = 140KB. So if we have a partition of at least 140KB, we can
run this code easily.
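The partition arithmetic above can be checked directly:

```python
# Overlay memory requirements, all sizes in KB.
pass1, pass2 = 70, 80
symbol_table, common_routine, overlay_driver = 30, 20, 10

need_pass1 = pass1 + symbol_table + common_routine + overlay_driver
need_pass2 = pass2 + symbol_table + common_routine + overlay_driver
print(need_pass1, need_pass2, max(need_pass1, need_pass2))  # 130 140 140
```

The partition must accommodate whichever overlay configuration is largest, here pass 2 at 140KB.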
*Overlay driver: It is the user’s responsibility to take care of overlaying; the operating
system provides nothing. The user must specify which part is required in the 1st pass and,
once the 1st pass is over, write the code to pull out pass 1 and load pass 2. That
user-written code is what is known as the overlay driver. The overlay driver simply helps
move the various parts of the code in and out.
Question –
The overlay tree for a program is as shown below:

What will be the size of the partition (in physical memory) required to load (and
run) this program?
(a) 12 KB (b) 14 KB (c) 10 KB (d) 8 KB
Explanation –
Using the overlay concept we need not have the entire program inside main memory; we only
need the part required at that instant of time, i.e. either the Root-A-D, Root-A-E,
Root-B-F or Root-C-G part.
Root+A+D = 2KB + 4KB + 6KB = 12KB
Root+A+E = 2KB + 4KB + 8KB = 14KB
Root+B+F = 2KB + 6KB + 2KB = 10KB
Root+C+G = 2KB + 8KB + 4KB = 14KB
So if we have 14KB size of partition then we can run any of them.
Answer -(b) 14KB
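The answer can also be checked by computing the costliest root-to-leaf path in the overlay tree; the sizes below are taken from the four paths listed above.

```python
# Overlay tree: node sizes in KB and parent -> children edges.
sizes = {"Root": 2, "A": 4, "B": 6, "C": 8,
         "D": 6, "E": 8, "F": 2, "G": 4}
children = {"Root": ["A", "B", "C"], "A": ["D", "E"],
            "B": ["F"], "C": ["G"]}

def max_path(node):
    """Size of the costliest path from node down to a leaf."""
    kids = children.get(node, [])
    return sizes[node] + (max(max_path(k) for k in kids) if kids else 0)

print(max_path("Root"))  # 14 (KB): the minimum partition that can run every overlay
```

Since any overlay configuration loads exactly one root-to-leaf path at a time, the partition only needs to fit the heaviest path, not the whole tree.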
