
Operating Systems

Lecture 9 – Dt. 11th Oct 2023

Instructor:
Dr. F. Lalchhandama

School of Engineering, Jawaharlal Nehru University


Today’s Class
• Deadlock
• System model
• Deadlock Characterization
• Methods for handling deadlocks



Deadlock – System Model
• A system can have many resources, such as
• Physical resources – printers, tape drives, memory space, CPU time
• Logical (system) resources – semaphores, mutex locks, and files
• A process must request a resource before using it and must release the
resource after using it
• A process may request as many resources as it requires to carry out its designated
task
• The number of resources requested must not exceed the total number of resources
available in the system
• Under the normal mode of operation, a process may utilize a resource in
only the following sequence:
• Request
• Use
• Release



Deadlock – System Model
• A system table records whether each resource is free or allocated
• For each resource that is allocated, the table also records the process to
which it is allocated
• If a process requests a resource that is currently allocated to another
process, it can be added to a queue of processes waiting for this resource
• A set of processes is in a deadlocked state when every process in the set is
waiting for an event that can be caused by another process in the set
• E.g., two processes each hold one tape drive and each needs the other one – neither can proceed
• In a deadlock, processes never finish executing, and system resources are
tied up, preventing other jobs from starting


Deadlock Characterization
• A deadlock situation can arise if the following four necessary conditions
hold simultaneously in a system:
• Mutual exclusion: At least one resource must be held in a nonsharable mode; that is,
only one process at a time can use the resource. If another process requests that
resource, the requesting process must be delayed until the resource has been
released
• Hold and wait: A process must be holding at least one resource and waiting to
acquire additional resources that are currently being held by other processes
• No preemption: Resources cannot be preempted; a resource can be released only
voluntarily by the process holding it, after that process has completed its task
• Circular wait: A set {P0, P1, …, Pn} of waiting processes must exist such that P0 is
waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn-1 is
waiting for a resource held by Pn, and Pn is waiting for a resource held by P0



Deadlock Characterization
• All four conditions must hold for a deadlock to occur


Deadlock Characterization
• Deadlocks can be described more precisely in terms of a directed graph called a
system resource-allocation graph
• This graph consists of a set of vertices V and a set of edges E
• The set of vertices V is partitioned into two different types of nodes: P = {P1, P2,
…, Pn}, the set consisting of all the active processes in the system, and R = {R1, R2,
…, Rm}, the set consisting of all resource types in the system
• A directed edge from process Pi to resource type Rj is denoted by Pi → Rj; it
signifies that process Pi has requested an instance of resource type Rj and is
currently waiting for that resource
• A directed edge from resource type Rj to process Pi is denoted by Rj → Pi; it
signifies that an instance of resource type Rj has been allocated to process Pi
• A directed edge Pi → Rj is called a request edge; a directed edge Rj → Pi is called
an assignment edge
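As a concrete sketch, the two edge types can be stored as plain directed-edge data and separated by which side the edge starts from (the edge set below is taken from the example figure that follows; the variable names are my own):

```python
# Resource-allocation graph as directed edges: a request edge starts at a
# process, an assignment edge starts at a resource
processes = {"P1", "P2", "P3"}
resources = {"R1", "R2", "R3", "R4"}

edges = [("P1", "R1"), ("P2", "R3"),          # request edges (P -> R)
         ("R1", "P2"), ("R2", "P2"),          # assignment edges (R -> P)
         ("R2", "P1"), ("R3", "P3")]

request_edges = [(u, v) for u, v in edges if u in processes]
assignment_edges = [(u, v) for u, v in edges if u in resources]

print(request_edges)      # [('P1', 'R1'), ('P2', 'R3')]
print(assignment_edges)   # [('R1', 'P2'), ('R2', 'P2'), ('R2', 'P1'), ('R3', 'P3')]
```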



Deadlock Characterization
P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1 → R1, P2 → R3, R1 → P2, R2 → P2, R2 → P1, R3 → P3}

Fig.: Resource-allocation graph (R4 has no edges)
Deadlock Characterization
• If the graph contains no cycles, then no process in the system is
deadlocked
• If the graph contains a cycle, then a deadlock may exist
• If the cycle involves only a set of resource types, each of which has only a
single instance, then a deadlock has occurred – each process involved in the
cycle is deadlocked
• In this case, a cycle in the graph is both a necessary and sufficient condition for the
existence of deadlock
• If each resource type has several instances, then a cycle does not necessarily
imply that a deadlock has occurred
• In this case, a cycle in the graph is a necessary but not a sufficient condition for the
existence of deadlock
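The single-instance case can be sketched with a small depth-first search over the graph (the edge list is the one from the figure above; adding a hypothetical edge P3 → R2 closes a cycle):

```python
# DFS cycle detection on a resource-allocation graph; with single-instance
# resources, a cycle is both necessary and sufficient for deadlock
def find_cycle(edges):
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
    visiting, done = set(), set()
    def dfs(node):
        if node in visiting:          # back edge: a cycle exists
            return True
        if node in done:
            return False
        visiting.add(node)
        if any(dfs(nxt) for nxt in graph.get(node, [])):
            return True
        visiting.discard(node)
        done.add(node)
        return False
    return any(dfs(n) for n in list(graph))

no_cycle = [("P1", "R1"), ("P2", "R3"), ("R1", "P2"),
            ("R2", "P2"), ("R2", "P1"), ("R3", "P3")]
# Hypothetical extra request P3 -> R2 closes P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1
with_cycle = no_cycle + [("P3", "R2")]

print(find_cycle(no_cycle))    # False: no deadlock
print(find_cycle(with_cycle))  # True: deadlock
```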



Deadlock Characterization
• Consider the set of processes, resources, and their allocation given
below:
P = {P1, P2, P3, P4}
R = {R1, R2}
E = {P1 → R1, R1 → P3, P3 → R2, R2 → P1,
R1 → P2, R2 → P4}



Deadlock Characterization
• Consider the set of processes, resources, and their allocation given
below:
P = {P1, P2, P3, P4}
R = {R1, R2}
E = {P1 → R1, R1 → P3, P3 → R2, R2 → P1, R1 → P2, R2 → P4}

Fig.: Resource-allocation graph with a cycle but no deadlock
• Process P4 may release its instance of resource type R2
• That resource can then be allocated to P3, breaking the cycle
Deadlock Characterization
• Consider the set of processes, resources, and their allocation given
below:
P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1 → R1, R1 → P2, P2 → R3, R3 → P3, P3 → R2, R2 → P1, R2 → P2}; R4 has no edges

Q. Draw a resource-allocation graph for the above and find out whether there exists a cycle or not.
If yes, how many cycles are present, and state with reason whether a deadlock can occur or not.



Deadlock Characterization
• 2 cycles exist in the graph:
• P1 → R1 → P2 → R3 → P3 → R2 → P1
• P2 → R3 → P3 → R2 → P2
• Processes P1, P2, and P3 are deadlocked. Process P2 is waiting
for the resource R3, which is held by process P3. Process P3 is waiting
for either process P1 or process P2 to release resource R2. In addition,
process P1 is waiting for process P2 to release resource R1



Deadlock Characterization
• If the graph contains no cycles, then no process in the system is
deadlocked
• If the graph contains a cycle, then a deadlock may exist



Methods for Handling Deadlocks
• In general, the deadlock problem can be dealt with in one of three ways:
• Prevention or avoidance – ensure that the system will never enter a
deadlocked state
• Detection and recovery – allow the system to enter a deadlocked state,
detect it, and recover
• Ignore the problem altogether and pretend that deadlocks never occur in the
system



Methods for Handling Deadlocks
• In general, the deadlock problem can be dealt with in one of three ways:
• Prevention or avoidance – ensure that the system will never enter a
deadlocked state
• Detection and recovery – allow the system to enter a deadlocked state,
detect it, and recover
• Ignore the problem altogether and pretend that deadlocks never occur in the
system
• The third solution is the one used by most operating systems,
including Linux and Windows
• It is up to the application developer to write programs that handle
deadlocks



Deadlock Prevention
• Four necessary conditions must hold for a deadlock to occur
• By ensuring that at least one of these conditions cannot hold, we can
prevent the occurrence of a deadlock
• Mutual exclusion
• Hold and wait
• No preemption
• Circular wait



Deadlock Prevention
• Mutual exclusion
• Sharable resources do not require mutually exclusive access and thus cannot be
involved in a deadlock
• A process never needs to wait for a sharable resource
• However, some resources are intrinsically nonsharable, e.g., mutex locks



Deadlock Prevention
• Hold and wait
• First Solution: When a process requests a resource, it should not hold any
other resources
• Each process should request and be allocated all its resources before it begins
execution
• Second Solution: A process is allowed to request resources only when it has
none
• A process must release all the resources it holds before it requests any other
resources
• Disadvantages:
• Resource utilization is low as resources may be allocated but unused for a
long period
• Starvation is possible
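The first solution can be sketched with locks (all names here – printer, disk, acquire_all – are illustrative, not from the slides): a single allocation lock makes "request everything" atomic, so a process never holds one resource while waiting for another.

```python
import threading

# Hypothetical resources for illustration
printer = threading.Lock()
disk = threading.Lock()

# One allocation lock serializes all resource requests, so a process
# acquires everything it needs in one step (first solution above)
allocation_lock = threading.Lock()

def acquire_all(*resources):
    """Request and acquire all resources before the task begins."""
    with allocation_lock:
        for r in resources:
            r.acquire()

def release_all(*resources):
    for r in resources:
        r.release()

result = []

def copy_job():
    acquire_all(printer, disk)       # everything requested up front
    try:
        result.append("copied")      # use both resources
    finally:
        release_all(printer, disk)

t = threading.Thread(target=copy_job)
t.start(); t.join()
print(result)   # ['copied']
```

The cost visible even in this sketch matches the stated disadvantages: both locks are held for the whole job, even if the printer is needed only at the end.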



Deadlock Prevention
• No preemption:
• If a process is holding some resources and requests another resource that
cannot be immediately allocated to it (that is, the process must wait), then all
resources the process is currently holding are preempted
• This approach cannot generally be applied to resources such as mutex locks
and semaphores



Deadlock Prevention
• Circular wait
• Impose a total ordering of all resource types and to require that each process
requests resources in an increasing order of enumeration
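The ordering rule can be sketched as follows (the resource numbering and helper names are illustrative): every process sorts its requests, so all acquisitions happen in increasing order and no circular wait can form.

```python
import threading

# Illustrative total ordering: each resource type gets a unique number,
# and every process must acquire locks in increasing numeric order
locks = {1: threading.Lock(),   # e.g., tape drive
         2: threading.Lock(),   # e.g., disk drive
         3: threading.Lock()}   # e.g., printer

def acquire_in_order(*resource_ids):
    """Acquire resources strictly in increasing order of enumeration."""
    ordered = sorted(resource_ids)
    for rid in ordered:
        locks[rid].acquire()
    return ordered

def release(resource_ids):
    for rid in reversed(resource_ids):
        locks[rid].release()

held = acquire_in_order(3, 1)   # requested out of order...
print(held)                     # [1, 3] ...but acquired in the total order
release(held)
```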



Deadlock Prevention
• Possible side effects of preventing deadlocks by these methods are low
device utilization and reduced system throughput



Operating Systems

Lecture 10 – Dt. 12th Oct 2023

Instructor:
Dr. F. Lalchhandama



Today’s Class
• Deadlock Cont….
• Methods for handling deadlocks



Deadlock Avoidance
• A deadlock avoidance algorithm dynamically examines the resource-
allocation state to ensure that a circular-wait condition can never
exist
• The resource-allocation state is defined by the number of available
and allocated resources and the maximum demands of the processes
• Resource-allocation graph algorithm
• Banker’s algorithm



Deadlock Avoidance
• Resource-Allocation Graph Algorithm:
• In addition to the request and assignment edges, a new type of edge, called a
claim edge, is introduced
• A claim edge Pi → Rj indicates that process Pi may request resource Rj at some
time in the future, and is represented in the graph by a dashed line
• When process Pi requests resource Rj, the claim edge Pi → Rj is converted to a
request edge
• When a resource Rj is released by Pi, the assignment edge Rj → Pi is
reconverted to a claim edge Pi → Rj
• Before process Pi starts executing, all its claim edges must already appear in
the resource-allocation graph



Deadlock Avoidance
• Resource-Allocation Graph Algorithm:
• Applicable only to a resource-allocation system with a single instance of
each resource type



Deadlock Avoidance
• Banker’s Algorithm:
• Applicable to a resource-allocation system with multiple instances of each
resource type
• Less efficient than the resource-allocation graph scheme
• When a new process enters the system, it must declare the maximum number
of instances of each resource type that it may need
• This number may not exceed the total number of resources in the system
• When a user requests a set of resources, the system must determine whether
the allocation of these resources will leave the system in a safe state
• If it will, the resources are allocated; otherwise, the process must wait until
some other process releases enough resources



Deadlock Detection
• If a system does not employ either a deadlock-prevention or a
deadlock-avoidance algorithm, then a deadlock situation may occur
• In this environment, the system may provide:
• An algorithm that examines the state of the system to determine whether a
deadlock has occurred
• An algorithm to recover from the deadlock



Deadlock Detection
• Single instance of each resource type
• We can define a deadlock detection algorithm that uses a variant of the
resource-allocation graph, called a wait-for graph
• We obtain this graph from the resource-allocation graph by removing the
resource nodes and collapsing the appropriate edges
• A deadlock exists in the system if and only if the wait-for graph contains a
cycle
• To detect deadlocks, the system needs to maintain the wait-for graph and
periodically invoke an algorithm that searches for a cycle in the graph
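Both steps can be sketched together – collapsing the resource-allocation graph into a wait-for graph, then searching it for a cycle (function names and the example edges are illustrative):

```python
# Build a wait-for graph from a single-instance resource-allocation graph:
# Pi -> Rj -> Pk collapses into the edge Pi -> Pk
def wait_for_graph(request, assignment):
    """request: {process: resource it is waiting for}
       assignment: {resource: process currently holding it}"""
    wfg = {}
    for p, r in request.items():
        if r in assignment:
            wfg.setdefault(p, []).append(assignment[r])
    return wfg

def has_cycle(graph):
    """DFS cycle search; a cycle in the wait-for graph means deadlock."""
    visiting, done = set(), set()
    def dfs(node):
        if node in visiting:
            return True
        if node in done:
            return False
        visiting.add(node)
        for nxt in graph.get(node, []):
            if dfs(nxt):
                return True
        visiting.discard(node)
        done.add(node)
        return False
    return any(dfs(n) for n in list(graph))

# P1 waits for R1 (held by P2); P2 waits for R2 (held by P1): deadlock
wfg = wait_for_graph({"P1": "R1", "P2": "R2"},
                     {"R1": "P2", "R2": "P1"})
print(has_cycle(wfg))   # True
```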



Deadlock Detection
• Single instance of each resource type

Fig.: A resource-allocation graph and its corresponding wait-for graph



Deadlock Detection
• Several instances of a resource type:
• This algorithm simply investigates every possible allocation sequence for the
processes that remain to be completed
• If the number of available resources is not sufficient to fulfill the requests of
other processes, then a deadlock exists



Banker’s Algorithm
• Uses the system table:

Process | Allocation (R1 R2) | Need (R1 R2)
P1      | 1 0                | 0 1
P2      | 0 1                | 1 0
P3      | 0 1                | 0 0

Res = {R1 R2} = {1 2}
Avail = [0 0]

• Steps:
1. P3’s need [0 0] ≤ Avail [0 0]; P3 runs and releases R2:
   Avail = [0 0] + [0 1] = [0 1]
2. R2 → P1; P1 runs and releases R1:
   Avail = [0 1] + [1 0] = [1 1]
3. R1 → P2; P2 runs and releases R2:
   Avail = [1 1] + [0 1] = [1 2]
• No deadlock. Safe sequence is {P3, P1, P2}

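The walkthrough above can be sketched as a safety check (the allocation and need numbers are taken from the table; the function name is my own):

```python
# Banker's-style safety check: repeatedly find a process whose remaining
# need fits in the available vector, let it finish, and reclaim its allocation
def safe_sequence(avail, alloc, need):
    avail = avail[:]                        # work vector
    finished = {p: False for p in alloc}
    sequence = []
    while len(sequence) < len(alloc):
        progressed = False
        for p in alloc:
            if not finished[p] and all(n <= a for n, a in zip(need[p], avail)):
                # p can run to completion; it then releases what it holds
                avail = [a + h for a, h in zip(avail, alloc[p])]
                finished[p] = True
                sequence.append(p)
                progressed = True
        if not progressed:
            return None                     # no safe sequence: unsafe state
    return sequence

alloc = {"P1": [1, 0], "P2": [0, 1], "P3": [0, 1]}
need  = {"P1": [0, 1], "P2": [1, 0], "P3": [0, 0]}

print(safe_sequence([0, 0], alloc, need))   # ['P3', 'P1', 'P2']
```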


Recovery from Deadlock
• There are two options for breaking a deadlock
• To abort one or more processes to break the circular wait – process
termination
• To preempt some resources from one or more of the deadlocked
processes – resource preemption



Recovery from Deadlock
• Process termination:
• Abort all deadlocked processes
• Abort one process at a time until the deadlock cycle is eliminated
• We should abort those processes whose termination will incur the
minimum cost
• Many factors may affect which process is chosen, including:
• Priority of the process
• How long the process has run so far and how much longer it needs to complete
• Amount and types of resources the process has used
• How many more resources the process needs in order to complete
• How many processes will need to be terminated
• Whether the process is interactive or batch



Recovery from Deadlock
• Resource preemption:
• We successively preempt some resources from processes and give these
resources to other processes until the deadlock cycle is broken
• If preemption is required to deal with deadlocks, then three issues need to be
addressed:
• Selecting a victim
• Rollback
• Starvation



21ITPC403
Operating Systems
Lecture 11 – Dt. 19th Oct 2023



Today’s Class
• Memory Management
• Memory Organization
• Memory Hierarchy
• Memory Protection
• Address Binding
• Logical address and virtual address
• Swapping
• Contiguous memory allocation



Memory Management
• We have seen how the CPU can be shared by a set of processes – process
management
• To improve system performance, we need to keep several processes in
memory, i.e., processes must share memory
• We will learn various techniques to manage memory – these techniques are
hardware dependent

Memory Management
What are we going to learn?
• Basic Memory Management: logical vs. physical address space, protection,
contiguous memory allocation, paging, segmentation, segmentation with
paging
• Virtual Memory: background, demand paging, performance, page
replacement, page replacement algorithms (FCFS, LRU), allocation of frames,
thrashing
Background
• A program must be brought (from disk) into memory for the CPU to run its
fetch-decode-execute cycle
• The memory unit only sees a stream of addresses + read requests, or
addresses + data and write requests
• The sequence of memory addresses is generated by the running program

Memory Organization
• Registers are usually accessible within one cycle of the CPU clock
• Completing a memory access may take many cycles of the CPU clock
• Processor needs to stall, since it does not have the data required to
complete the instruction that it is executing
• Cache is introduced to solve this
• The concern is not only relative speed, but also ensuring correct
operation
• For proper system operation, we must protect the operating system from
access by user processes
• On multiuser systems, we must additionally protect user processes from
one another – this protection is provided by hardware
Memory Protection
• Memory protection is a mechanism in computer systems that ensures that processes
cannot access or modify memory that they are not authorized to access
• Memory protection is an essential feature of modern operating systems, as it provides a
secure environment for processes to run and prevents processes from interfering with
each other
• Memory protection is implemented through hardware mechanisms such as base and
limit registers, which define the memory regions that a process can access
• The base register contains the starting address of a memory region, and the limit register
contains the size of that memory region
• For example, if the base register contains the value 1000 and the limit register contains
the value 500, it means that the memory region starts at address 1000 and has a size of
500 bytes
• Any memory access instruction that tries to access a location outside this region will
trigger a memory protection fault or a trap to the OS, which treats the attempt as a fatal
error
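The base/limit check in the example can be sketched directly (the function name is illustrative; real hardware performs this comparison on every memory access):

```python
# Legal addresses satisfy base <= address < base + limit; anything else
# traps to the operating system as an addressing error
def check_access(address, base, limit):
    if base <= address < base + limit:
        return "ok"
    return "trap: addressing error"

BASE, LIMIT = 1000, 500          # region is bytes 1000..1499, as in the slide

print(check_access(1000, BASE, LIMIT))   # ok    (first legal byte)
print(check_access(1499, BASE, LIMIT))   # ok    (last legal byte)
print(check_access(1500, BASE, LIMIT))   # trap: addressing error
```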



Memory Protection
• The base and limit registers can be loaded only by the operating system,
which uses a special privileged instruction



Memory Protection

Fig. Hardware address protection with base and limit registers



Address Binding
• Address binding refers to the process of assigning a memory address
to a variable or a program entity such as a function or object during
the compilation, loading or execution of a program
• The process of address binding is critical in the management of
memory resources in computer systems
• Why assign?
• In most cases, a user program goes through several steps before being
executed
• Addresses may be represented in different ways during these steps
• Addresses in the source program are generally symbolic (such as the variable
count); these symbolic addresses must be bound to relocatable addresses
before they are finally mapped to absolute addresses in memory
Address Binding
• Binding of instructions and data to memory addresses can be done at any
step along the way:
• Compile-time binding: The memory address for a variable or function is assigned
during the compilation process. This method is efficient but inflexible, as the
memory addresses are fixed at compile-time and cannot be changed during the
execution of the program
• Load-time binding: The memory address for a variable or function is assigned during
the loading of the program into memory. This method allows for more flexibility than
compile-time binding, as the memory addresses can be changed before the
execution of the program. However, it still limits the ability to move the program
around in memory after it has been loaded
• Run-time binding: The memory address for a variable or function is assigned during
the execution of the program. This method is the most flexible as it allows for the
program to be moved around in memory during runtime. However, it is also the most
inefficient as it requires additional processing overhead to determine memory
locations during execution



Logical and Physical Address Space
• Logical Address:
• An address generated by the CPU is commonly referred to as a logical address
• It is used by the CPU to access memory
• Also called virtual address
• It is not necessarily related to the physical location of the memory in the computer
• Physical Address:
• The actual address of a memory location in the computer’s physical memory
• It corresponds to a specific location in the memory’s hardware
• Also called real address
• The compile-time and load-time address binding methods generate
identical logical and physical addresses
• The execution-time address binding scheme results in differing logical and
physical addresses – logical address is known as virtual address
Logical and Physical Address Space
• When a CPU needs to access a specific memory location, it generates
a logical address
• The memory management unit (MMU) in the CPU then translates the
logical address into a physical address
• This process is known as address translation, and it is performed by
hardware within the computer system



Memory-Management Unit (MMU)
• A hardware device that, at run time, maps virtual addresses to physical
addresses
• Many methods are possible
• To start, consider a simple scheme where the value in the relocation
register is added to every address generated by a user process at the
time it is sent to memory
• MS-DOS on Intel 80x86 used 4 relocation registers
• The user program deals with logical addresses (0 to max); it never sees
the real physical addresses (R to R+max)
• E.g., the logical address 25 is mapped to the physical address R + 25
• Execution-time binding occurs when a reference is made to a location in
memory – the logical address is bound to a physical address
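The relocation-register scheme can be sketched as follows (R = 14000 is an illustrative value, not from the slides):

```python
# The MMU adds the relocation register to every logical address the
# process generates; the process itself only ever sees 0..max
RELOCATION_REGISTER = 14000      # illustrative value of R

def translate(logical_address):
    return RELOCATION_REGISTER + logical_address

print(translate(0))    # 14000  (start of the process image in physical memory)
print(translate(25))   # 14025  (the "logical address 25" example above)
```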
Logical and Physical Address Space
• Logical Address Space:
• The logical address space is the set of addresses used by a program or process
to access memory
• It is typically a contiguous range of addresses that the process uses to
reference memory locations
• The logical address space is specific to each process and is managed by the
operating system
• Physical address space:
• The set of all physical addresses corresponding to these logical addresses
• The physical address space is shared by all processes running on the
computer, and it is managed by the hardware of the computer



Logical and Physical Address Space
• The operating system maps the logical address space to the physical
address space by using a technique called address translation
• The operating system uses a page table, which is a data structure that
contains information about the mapping between logical addresses
and physical addresses
• When a process accesses a memory location, the processor translates
the logical address into the corresponding physical address using the
page table
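Page-table translation can be sketched as follows (the page size, table contents, and names are illustrative): the logical address is split into a page number and an offset, and the page table maps the page to a frame.

```python
PAGE_SIZE = 1024
page_table = {0: 5, 1: 2, 2: 7}      # page number -> frame number (illustrative)

def translate(logical_address):
    # Split logical address into (page number, offset within the page)
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table[page]          # look up the frame in the page table
    return frame * PAGE_SIZE + offset

print(translate(1030))   # page 1, offset 6 -> frame 2 -> 2054
```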



Logical and Physical Address Space
• In summary, the logical address space is the memory address range
that a process uses to access memory, while the physical address
space is the actual memory address range that corresponds to the
logical address space.
• The operating system uses address translation techniques to map the
logical address space to the physical address space



• Memory Management
• Swapping
• Contiguous Memory Allocation



Swapping
• A process must be in memory to be executed
• A process can be swapped temporarily out of memory to a backing store and
then brought back into memory for continued execution
• Swapping involves moving parts of a program or data from the main memory
(RAM) to a secondary storage device (usually a hard disk) when the RAM is full or
when the operating system needs to free up memory for other processes
• When a program requires more memory than is available in the main memory,
the operating system selects some of the data/process that is not being actively
used and transfers it to the secondary storage device to make space for the new
program or data
• The swapped-out data is written to the disk (backing store) and marked as not
currently being in memory
• When the operating system needs to access the swapped-out data, it swaps it
back into the main memory from the disk


Swapping
• Swapping can be slow compared to accessing data from the main memory
because the secondary storage device is slower than the main memory
• Therefore, swapping is used only when there is no other way to get more
memory
• The process of swapping can also cause performance issues, as it involves
the movement of data between the disk and the memory, which can cause
delays in executing the program
• Standard swapping is not used in modern operating systems because it
requires too much swapping time
• Modified versions of swapping are found on many systems



Contiguous Memory Allocation
• The term "contiguous" refers to things that are touching or adjacent
to each other, without any space or gap in between – being in actual
contact



Contiguous Memory Allocation
• Main memory is usually divided into two partitions – operating
system at the lower part and user processes at the higher part
• Contiguous memory allocation is a memory management technique
used by operating systems to allocate a contiguous block of memory
to a process
• Each process is contained in a single contiguous section of memory
• The memory can be divided using either a fixed-size partitioning scheme
or a variable-size partitioning scheme



Contiguous Memory Allocation
• Fixed-size partition scheme:
• Each partition may contain exactly one process
• Degree of multiprogramming is bound by the number of partitions
• When a partition is free, a process is selected from the input queue and is
loaded into the free partition
• When the process terminates, the partition becomes available for another
process
• Partitioning can either be equal-size partition or unequal size partition



Contiguous Memory Allocation
• Fixed-size partition scheme:
• Say, we have a memory size of 64MB

(a) Equal-size partitions: OS 8M, followed by user partitions of 8M each
(b) Unequal-size partitions: OS 8M, followed by user partitions of
    2M, 4M, 6M, 8M, 10M, 12M, and 16M



Contiguous Memory Allocation
• Fixed-size partition scheme:
• Unequal-size partition:
• Each process is assigned to the smallest partition within which it will fit
• A queue for each partition minimizes wasted memory within a partition
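The smallest-partition placement rule can be sketched with the partition sizes and process queue from the figures (the helper name is my own):

```python
# Unequal-size partitions from the figure, in MB
partitions = [2, 4, 6, 8, 10, 12, 16]

def smallest_fit(process_size):
    """Return the smallest partition the process fits into."""
    for size in sorted(partitions):
        if process_size <= size:
            return size
    return None                      # process too large for any partition

queue = {"P1": 14, "P2": 2, "P3": 11, "P4": 8}   # process queue from the figure
placement = {p: smallest_fit(s) for p, s in queue.items()}
print(placement)   # {'P1': 16, 'P2': 2, 'P3': 12, 'P4': 8}
```

The leftover space inside each chosen partition (e.g., 12M − 11M for P3) is exactly the internal fragmentation discussed below.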



Contiguous Memory Allocation
• Fixed-size partition scheme:
• Unequal-size partition:
• Process queue: P1 (14M), P2 (2M), P3 (11M), P4 (8M)
• Memory partitions: OS 8M, then 2M, 4M, 6M, 8M, 10M, 12M, 16M



Contiguous Memory Allocation
• Fixed-size partition scheme:
• Unequal-size partition:
• Process queue: P1 (14M), P2 (2M), P3 (11M), P4 (8M)
• Placement: P2 → 2M partition, P4 → 8M partition, P3 → 12M partition,
  P1 → 16M partition



Contiguous Memory Allocation
• Fixed-size partition scheme:
• Advantages:
• Simple and easy to implement
• Supports multiprogramming as multiple processes can be placed inside the memory
• Easy to manage
• Disadvantages:
• Internal fragmentation
• Limitation on the size of the process
• External fragmentation
• Degree of multiprogramming is less



Contiguous Memory Allocation
• Fixed-size partition scheme:
• Unequal-size partition:

Fig. The same allocation, showing internal fragmentation: P3 (11M) in the 12M partition and P1 (14M) in the 16M partition leave unused space inside their partitions.
Contiguous Memory Allocation
• Variable-Size Partition Scheme
• Also known as dynamic partitioning
• Partitions are of variable size
• Partition size is not declared initially
• Whenever any process arrives, a partition of equal size to the size of
the process is created and then allocated to the process
• Size of the partition is equal to the size of the process
• Eventually, we have holes in the memory – external fragmentation



Contiguous Memory Allocation
• Variable-Size Partition Scheme
• Also known as dynamic partitioning

Fig. Initially only the OS (8M) is resident and 56M is free; processes P1 (20M), P2 (8M), P3 (20M), and P4 (8M) wait in the input queue.


Contiguous Memory Allocation
• Variable-Size Partition Scheme
• Also known as dynamic partitioning

Fig. Partitions are created exactly the size of each arriving process: P1 (20M), P2 (8M), P3 (20M), and P4 (2M). The leftover hole is external fragmentation.


Contiguous Memory Allocation
• Variable-Size Partition Scheme
• Advantages:
• No internal fragmentation
• Degree of multiprogramming is dynamic because of the absence of internal
fragmentation
• No limitation on the size of process
• Disadvantages:
• External fragmentation
• Difficult implementation and management as memory is allocated during run-
time



Contiguous Memory Allocation
• Dynamic Storage-Allocation
• First-fit: Allocate the first space that is big enough
• Best-fit: Allocate the smallest space that is big enough; must search entire
list, unless ordered by size
• Produces the smallest leftover space
• Worst-fit: Allocate the largest space; must also search entire list
• Produces the largest leftover space
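The three strategies can be sketched as small selection functions over a list of free holes. This is an illustrative Python sketch; the names `holes`, `first_fit`, `best_fit`, and `worst_fit` are made up for this example, not part of any OS API.

```python
# Each function returns the index of the hole chosen for a request of
# `size` units, or None if no hole is big enough.

def first_fit(holes, size):
    # Take the first hole that is big enough.
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    # Take the smallest hole that is big enough (smallest leftover).
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    # Take the largest hole (largest leftover).
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))  # index 1 (the 500 hole)
print(best_fit(holes, 212))   # index 3 (the 300 hole)
print(worst_fit(holes, 212))  # index 4 (the 600 hole)
```

The same request picks three different holes under the three policies, which is exactly the difference in leftover space the bullets above describe.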



Contiguous Memory Allocation
• Solution for fragmentation?
• Compaction
• Non-contiguous allocation



21ITPC403
Operating Systems
Lecture 12 – Dt. 25th Oct 2023



Today’s Class
• Memory Management
• Non-contiguous allocation
• Segmentation
• Paging



Segmentation
• It is a non-contiguous memory-management scheme
• The logical address space is divided into segments
• The logical address space of a process is divided into multiple segments, each of
which has a starting address and a length
• Each segment represents a different part of a process, such as code, data, and
stack, with a name and a length
• The addresses specify both the segment name and the offset within the segment
• Address is specified by two quantities: a segment number and an offset
• When a process needs to access a memory location, it specifies the segment
identifier and the offset within the segment. The operating system translates this
logical address to a physical address by adding the segment's starting address to
the offset value



Segmentation
• For mapping logical address to physical address a segment table is
required
• Segment table:
• Each entry in the segment table has a segment base and a segment limit
• Segment base – contains the starting physical address where the segment
resides in memory
• Segment limit – specifies the length of the segment



Segmentation
A logical address consists of two
parts:
• a segment number, s, and an
offset into that segment, d
• The segment number is used as an
index to the segment table
• The offset d of the logical address
must be between 0 and the
segment limit. If it is not, we trap
to the operating system
• When an offset is legal, it is added
to the segment base to produce
the address in physical memory of
the desired byte
Thus, the segment table is essentially an array of base-limit register pairs

Fig. Segmentation Hardware


Segmentation
• We have five segments
• The segments are stored in physical memory
as shown
• The segment table has a separate entry for
each segment, giving the beginning address
of the segment in physical memory (or base)
and the length of that segment or (limit)
• Segment 2 is 400 bytes long and begins at
location 4300
• So, a reference to byte 53 of segment 2 is
mapped onto location 4300 + 53 = 4353
• A reference to segment 3, byte 852, is
mapped to 3200 + 852 = 4052
• A reference to byte 999 of segment 0 would
result in????
• A reference to byte 1001 of segment 0
would result in????
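The check-and-add logic can be sketched in a few lines. Segment 2's base and limit (4300, 400) and segment 3's base (3200) come from the example above; segment 3's limit is an assumed value chosen only so the 852 reference is legal.

```python
# Sketch of segmentation address translation. Segment 3's limit (1100)
# is hypothetical; the slide only tells us it must be at least 853.
segment_table = {
    2: (4300, 400),    # (base, limit): 400 bytes starting at 4300
    3: (3200, 1100),   # base from the slide, limit assumed
}

def translate(s, d):
    base, limit = segment_table[s]
    if d >= limit:                   # illegal offset: trap to the OS
        raise MemoryError("trap: addressing error")
    return base + d                  # legal offset: add to segment base

print(translate(2, 53))   # 4300 + 53 = 4353
print(translate(3, 852))  # 3200 + 852 = 4052
```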



Segmentation
• Advantages:
• Flexibility – segmentation allows a process to access non-contiguous areas of
memory, making it easy to manage complex data structures
• Protection – each segment can be protected
• Sharing – segments can be shared among different processes, which reduces
memory requirements and improves performance
• Disadvantages:
• External fragmentation – segments of different sizes can lead to external
fragmentation
• Overhead – The OS needs to maintain a segment table, which can be time-
consuming and require additional memory
• Complexity – more complex than other memory management techniques



Paging
• It is also a non-contiguous memory-management scheme
• Paging avoids external fragmentation and the need for compaction
• It also solves the considerable problem of fitting memory chunks of
varying sizes onto the backing store
• Paging is implemented through cooperation between the operating
system and the computer hardware
• Paging is used in most OSs



Paging
• Paging is a memory management scheme used in operating systems
to manage physical memory efficiently and to provide a virtual
memory abstraction to processes
• It is a method of breaking up the physical memory (RAM) and the
logical memory (used by processes) into fixed-size blocks called pages
• These pages allow for more flexible memory allocation and address
translation



Paging
• Page Size:
• The physical memory and logical memory are divided into fixed-size pages
• This page size is typically a power of 2, such as 4 KB or 4 MB
• Both the physical and logical memory are divided into these equally sized
pages
• Page Tables:
• Each process has a page table
• The page table is used to map logical addresses (also called virtual addresses)
to physical addresses
• It contains an entry for each page in the logical address space of the process



Paging
• Address Translation:
• When a process accesses a memory location, the logical address is divided into two
parts: the page number and the offset within the page
• The page number is used to index the page table to find the corresponding entry,
which contains the physical page number
• The offset within the page remains the same
• The physical page number is combined with the offset to create the actual physical
memory address
• Swapping:
• Paging allows for efficient swapping of pages in and out of physical memory (RAM)
• When a page is not in physical memory and is needed, the operating system can
swap it in from secondary storage (usually a hard disk)
• This enables processes to use more memory than is physically available



Paging
• Memory Protection:
• Paging helps enforce memory protection
• Page Replacement:
• If physical memory is full and a new page needs to be brought in from
secondary storage, a page replacement algorithm is used to decide which
page in physical memory to replace
• Fragmentation: Paging helps reduce fragmentation



Paging
• Paging implementation involves breaking physical memory into fixed-size
blocks called frames and breaking logical memory into blocks of the same
size called pages

Page Size = Frame Size

• When a process is to be executed, its pages are loaded into any available
memory frames from their source (a file system or the backing store)
• The backing store is divided into fixed-size blocks that are the same size as
the memory frames or clusters of multiple frames
Paging



Paging
• Every address generated by the CPU is divided into two parts:
a page number (p) and a page offset (d)
• Page number(p): Determines which page of the process the CPU wishes to
read the data from
• Page offset (d): Defines which word on the page the CPU wants to read
• The page number is used as an index into a page table
• The page table contains the base address of each page in physical memory
• This base address is combined with the page offset to define the physical
memory address that is sent to the memory unit
Physical memory = frame no. x page size + offset
Paging

Fig. Paging hardware



Paging

Fig. Paging model of logical and physical memory: pages 0-3 of logical memory map through the page table (0→1, 1→4, 2→3, 3→7) to frames 1, 4, 3, and 7 of physical memory.


Paging
• The page size (like the frame size) is defined by the hardware
• The size of a page is a power of 2, varying between 512 bytes and 1 GB per page,
depending on the computer architecture
• The selection of a power of 2 as a page size makes the translation of a logical
address into a page number and page offset particularly easy
• If the size of the logical address space is 2^m, and a page size is 2^n bytes, then the
high-order m - n bits of a logical address designate the page number, and the n
low-order bits designate the page offset
• Thus, the logical address is as follows:

| page number p (m - n bits) | page offset d (n bits) |

where p is an index into the page table and d is the displacement within the page

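Because the page size is a power of 2, the split into high-order and low-order bits is just a shift and a mask. A minimal sketch, where m = 16 and n = 12 are illustrative values:

```python
# Splitting an m-bit logical address into p (high m - n bits) and
# d (low n bits); M = 16 and N = 12 are assumed for illustration.
M, N = 16, 12

def split(la):
    assert la < (1 << M)            # address fits in m bits
    p = la >> N                     # page number: high-order m - n bits
    d = la & ((1 << N) - 1)         # page offset: low-order n bits
    return p, d

print(split(0x3ABC))  # (3, 2748), i.e. page 0x3, offset 0xABC
```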


Paging

Fig. Example: a 16-byte logical memory (bytes a-p, 4-byte pages) and a 32-byte physical memory. The page table maps page 0→frame 5, 1→frame 6, 2→frame 1, 3→frame 2, so the frames hold: frame 1 = i j k l, frame 2 = m n o p, frame 5 = a b c d, frame 6 = e f g h.


Paging

Fig. The same example annotated with frame numbers: logical addresses 0-15 translate through the page table (0→5, 1→6, 2→1, 3→2) to physical addresses in frames 0-7.


Paging

Fig. Logical memory (bytes a-p in pages 0-3) and its page table: page 0→frame 5, page 1→frame 6, page 2→frame 1, page 3→frame 2.


Paging
• Using a page size of 4 bytes and a physical memory of 32 bytes (8 pages),
we show how the programmer’s view of memory can be mapped into
physical memory
• Logical address 0 is page 0, offset 0. Indexing into the page table, we find
that page 0 is in Frame 5.
Physical memory = frame no. x page size + offset = 5 x 4 + 0 = 20
Thus, logical address 0 maps to physical address 20
• Logical address 3 is page 0, offset 3. Indexing into the page table, we find
that page 0 is in Frame 5
Physical memory = frame no. x page size + offset = 5 x 4 + 3 = 23
Thus, logical address 3 maps to physical address 23
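The two mappings above can be checked with a few lines of code. The page table [5, 6, 1, 2] and the 4-byte page size come from the example; the function name is illustrative.

```python
# Reproducing the worked example: 4-byte pages, page table [5, 6, 1, 2].
PAGE_SIZE = 4
page_table = [5, 6, 1, 2]   # page number -> frame number

def logical_to_physical(la):
    p, d = divmod(la, PAGE_SIZE)          # page number and offset
    return page_table[p] * PAGE_SIZE + d  # frame no. x page size + offset

print(logical_to_physical(0))  # 20
print(logical_to_physical(3))  # 23
print(logical_to_physical(4))  # page 1 -> frame 6 -> 24
```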
Paging

Fig. The complete mapping into the 32-byte memory: every logical address in pages 0-3 is translated through the page table (0→5, 1→6, 2→1, 3→2) into one of the 8 frames.


Paging

Fig. Exercise: a 16-byte logical memory (page 0 = a h k n, page 1 = l e d j, page 2 = c v x s, page 3 = u y o p), page table 0→1, 1→4, 2→3, 3→5, and a 32-byte physical memory.


Paging

Fig. The resulting physical memory: frame 1 = a h k n (page 0), frame 3 = c v x s (page 2), frame 4 = l e d j (page 1), frame 5 = u y o p (page 3).


Paging
• Features of paging
• Mapping logical address to physical address
• The physical address space is divided into fixed-size blocks called frames
• The logical address space is divided into fixed-size blocks called pages
• Page size is equal to frame size
• Number of entries in a page table is equal to number of pages in logical
address space
• The page table entry contains the frame number
• All the page tables of the processes are kept in main memory


Paging
• No. of pages = LAS / Page Size
• No. of frames = PAS / Frame Size
• If LA = n bits, LAS = 2^n words
• If PA = n bits, PAS = 2^n bytes
• If LAS = n words, LA = log2(n) bits
• If PAS = n bytes, PA = log2(n) bits
• Page Table Size = No. of entries in page table x page table entry size
• No. of entries in page table = No. of pages in LAS; entry size = no. of bits required to represent a frame number in PAS


Paging
• No. of pages = LAS / Page Size
• No. of frames = PAS / Frame Size
• If LA = n bits, LAS = 2^n words; if PA = n bits, PAS = 2^n bytes
  (LA or PA = n bits means the number of bits required to represent the logical or physical address)
• If LAS = n words, LA = log2(n) bits; if PAS = n bytes, PA = log2(n) bits
• Physical memory = frame no. x page size + offset
• Page table size = No. of entries in page table x page table entry size
• Page table size = No. of entries in page table x frame number
• Frame number is the no. of bits required to represent the frame in the PAS


Paging
Q1. Consider a system which has a logical address of 27 bits and
physical address of 21 bits. If the page size is 4 KW, then calculate the
number of pages and the number of frames.



Paging
Q1. Consider a system which has a logical address of 27 bits and physical address of
21 bits. If the page size is 4 KW, then calculate the number of pages and the
number of frames.
Soln.: LA = 27 bits, PA = 21 bits, Page Size = 4 KW = Frame size
LAS = 2^27 words
PAS = 2^21 bytes
No. of Pages = LAS / Page Size = 2^27 / (2^2 x 2^10) = 2^15 = 32K
No. of Frames = PAS / Frame Size = 2^21 / (2^2 x 2^10) = 2^9 = 512
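The same arithmetic can be done with integer powers of two; the variable names below are illustrative.

```python
# Q1 redone with explicit powers of two (word-addressable logical space,
# byte-addressable physical space, as in the slide).
LA_BITS, PA_BITS = 27, 21
PAGE = 4 * 1024               # 4 KW page size = frame size

num_pages = 2**LA_BITS // PAGE
num_frames = 2**PA_BITS // PAGE
print(num_pages)    # 32768, i.e. 32K
print(num_frames)   # 512
```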


Paging
Q2. Consider a system where the number of pages = 2K and page size is 8 KW. If the
physical address is 18 bits, then calculate the logical address and the number of
frames.



Paging
Q2. Consider a system where the number of pages = 2K and page size is 8 KW. If the
physical address is 18 bits, then calculate the logical address and the number of frames.
Soln.: PA = 18 bits; No. of pages = 2K; Page Size = 8 KW = Frame Size
PAS = 2^18 bytes
No. of Pages = LAS / Page Size, so LAS = No. of pages x Page Size
LAS = (2 x 2^10) x (2^3 x 2^10) = 2^24 words
LA = log2(2^24) = 24 bits
No. of Frames = PAS / Frame Size = 2^18 / (2^3 x 2^10) = 2^5 = 32


Paging
Q2 (note): Instead of computing the log, read the answer off the exponent: LAS = 2^24 words means 24 bits are required to represent the logical address, so LA = 24 bits.


Paging
Q3. Consider a system with logical address of 32 bits and physical address
space of 64MB and page size of 4KB. If the memory is byte addressable, then
what is the approximate size of page table in bytes?
Soln.:
Page table Size = No. of entries in page table x page table entry size
Page table Size = No. of entries in page table x Frame number
Frame number is the no. of bits required to represent the frames in the PAS
Also, no. of entries in the page table is the same as no. of pages in LAS



Paging
Q3. Consider a system with logical address of 32 bits and physical address space of 64MB and page size of 4KB. If the memory
is byte addressable, then what is the approximate size of page table in bytes?
Soln.:
Page table size = No. of entries in page table x page table entry size
We know that the no. of entries in the page table is the same as the no. of pages in the LAS
No. of pages = LAS / Page Size = 2^32 / (4 x 2^10) = 2^20
We know that the page table entry contains the frame number, so page table entry size = frame number
No. of frames = PAS / Frame Size = (64 x 2^20) / (4 x 2^10) = 2^26 / 2^12 = 2^14
So, the no. of bits required to represent a frame = frame number = 14 bits
Therefore, page table size = 2^20 x 14 bits ≈ 2^20 x 16 bits = 2^20 x 2 bytes = 2 MB
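A sketch of the same calculation, rounding the 14-bit entry up to whole bytes just as the approximation above does; the variable names are illustrative.

```python
import math

# Q3 as code: page-table size = (entries) x (entry size), where the entry
# holds a frame number rounded up to whole bytes.
LAS = 2**32            # 32-bit logical addresses, byte addressable
PAS = 64 * 2**20       # 64 MB of physical memory
PAGE = 4 * 2**10       # 4 KB pages (= frame size)

entries = LAS // PAGE                        # 2^20 page-table entries
frame_bits = int(math.log2(PAS // PAGE))     # 14 bits to name a frame
entry_bytes = math.ceil(frame_bits / 8)      # rounded up to 2 bytes
print(entries * entry_bytes)                 # 2097152 bytes = 2 MB
```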


Paging
Q4. Consider a system with byte-addressable memory, 32 bit logical
addresses, 4 kilobyte page size and page table entries of 4 bytes each.
The size of the page table in the system in megabytes is



Paging
Q4. Consider a system with byte-addressable memory, 32 bit logical
addresses, 4 kilobyte page size and page table entries of 4 bytes each.
The size of the page table in the system in megabytes is
Soln.:
Page table size = no. of entries in the page table x page table entry size
Page table entry size = 4 bytes
No. of entries in the page table = no. of pages = 2^32 / (4 x 2^10) = 2^20
Page table size = 2^20 x 4 bytes = 4 MB



Paging
Q5. Consider a system having a page table with 4K entries. The logical
address is 29 bits. What is the number of bits required to represent the
physical address if the system has 512 frames?



Paging
Q5. Consider a system having a page table with 4K entries. The logical address is 29 bits. What is the
number of bits required to represent the physical address if the system has 512 frames?
Soln.:
To calculate the number of bits required to represent the PA, we need to know the PAS.
We know that PAS = No. of frames x Frame Size
We also know that Frame Size = Page Size, and that the number of entries in the page table = no. of pages in the LAS
So, Page Size = LAS / No. of pages = 2^29 / (4 x 2^10) = 2^17
Then, PAS = 512 x 2^17 = 2^9 x 2^17 = 2^26 bytes
Therefore, the number of bits required to represent the physical address is 26 bits



Paging
• When a process needs to access a page that is not currently in
memory, a page fault occurs, and the operating system loads the
required page from disk into a free page frame in memory
• If there are no free page frames available, the operating system must
choose a page to evict from memory to make room for the new page



Operating Systems

Lecture 13 – Dt. 26th Oct 2023



Today’s Class
• Memory Management
• Paging Cont…



Paging – Hardware Support
• Each OS has its own method for storing page tables
• Some allocate a page table for each process
• A pointer to the page table is stored with the other register values in the PCB
• In the simplest case, the page table is implemented as a set of dedicated registers
• These registers should be built with very high-speed logic to make the paging-
address translation efficient
• The use of registers for the page table is satisfactory if the page table is
reasonably small (for example, 256 entries)
• However, page tables on contemporary computers may contain on the order of a million entries
• For these machines, the use of fast registers to implement the page table is not
feasible



Paging – Hardware Support
• The page table is kept in main memory, and a page-table base register
(PTBR) points to the page table
• Problem – time required to access a user memory location
• If we want to access location i, we must first index into the page table,
using the value in the PTBR offset by the page number for i
• This is combined with the page offset to produce the actual address
• We can then access the desired place in memory
• With this scheme, two memory accesses are needed to access a byte
• If m is the main memory access time
then, effective memory access time (EMAT) = 2m
• This delay would be intolerable under most circumstances



Paging – Hardware Support
• Solution – to use a special, small, fast lookup hardware cache called a
translation look-aside buffer (TLB), which is associative, high-speed
memory
• The TLB stores the most recently used logical-to-physical memory
address translations (page numbers and corresponding frame
numbers), so that the MMU can quickly retrieve them without having
to look them up in the page table
• This can significantly speed up memory access times and improve
overall system performance as TLB access time will be significantly
lesser than main memory access time



Paging – Hardware Support



Paging – Hardware Support
• If the page number is not available in TLB, we have a TLB miss and we need
to access main memory. Once it is located, it will be moved to the TLB
• If the page number is available in TLB, we have a TLB hit and we do not
need to access main memory
• The percentage of times that the page number of interest is found in the
TLB is called the TLB hit ratio
• Intel i7 processor has a 128-entry L1 instruction TLB and a 64-entry L1 data
TLB
• Suppose, TLB access time = c, and TLB hit ratio = x
• Then, the effective memory access time can be calculated as
EMAT = x(c + m) + (1 - x)(c + 2m)
Paging – Hardware Support
Q1. Consider a system where the main memory access time = 100ns
and TLB access time = 20ns. The TLB hit ratio is 95%. Calculate effective
memory access time with and without TLB.



Paging – Hardware Support
Q1. Consider a system where the main memory access time = 100ns
and TLB access time = 20ns. The TLB hit ratio is 95%. Calculate effective
memory access time with and without TLB.
Soln.:
EMAT = x(c + m) + (1 - x)(c + 2m)
m = 100ns; c = 20ns; TLB hit ratio = x = 95% = 0.95

EMAT without TLB = 2m = 2 x 100ns = 200ns


EMAT with TLB = 0.95(20 + 100) + (1 – 0.95)(20 + 200) = 114 + 11 = 125 ns
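Both calculations can be scripted directly from the EMAT formula; this is a small illustrative sketch, with `emat` a made-up name.

```python
# Evaluating EMAT = x(c + m) + (1 - x)(c + 2m); names mirror the slide.
def emat(x, c, m):
    return x * (c + m) + (1 - x) * (c + 2 * m)

print(emat(0.95, 20, 100))   # ~125 ns, matching Q1

# Q2: solve 250 = x(60 + 150) + (1 - x)(60 + 300) for the hit ratio x
x = (360 - 250) / (360 - 210)
print(round(x, 2))           # 0.73, i.e. about 73%
```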



Paging – Hardware Support
Q2. Consider a system where the TLB access time = 60ns. What TLB hit
ratio is required to reduce the EMAT from 300ns without TLB to 250ns
with TLB?



Paging – Hardware Support
Q2. Consider a system where the TLB access time = 60ns. What TLB hit ratio is required to
reduce the EMAT from 300ns without TLB to 250ns with TLB?
Soln.:
EMAT = x(c + m) + (1 - x)(c + 2m)
c = 60ns; TLB hit ratio = x = ?; m = ?
EMAT without TLB = 300ns
EMAT with TLB = 250ns

300 ns = 2m; m = 150ns


Then, 250 = x(60 + 150) + (1 - x)(60 + 2 x 150)
250 = 210x + 360 – 360x
x = 110/150 = 0.73 = 73%



Multilevel Paging
• To avoid the overhead of maintaining the large size page table, multilevel paging is
implemented
• Paging divides memory into small fixed-size blocks called pages
• Multilevel paging pages the page table itself, creating a hierarchy
of page tables – multiple levels
• Each level contains a subset of the page table, which is used to translate a virtual address
to a physical address
• The number of levels in the page table depends on the size of the virtual address space
and the size of the physical memory
• For example, a 32-bit virtual address can be divided into two 10-bit page-table
indices plus a 12-bit offset, giving a two-level page table
• Each level contains entries that point to the next level of the page table, until the final
level contains the page frame number that corresponds to the virtual address



Multilevel Paging

The logical address in n-level paging is divided into one page-table index per level plus an offset:

| Level 1 | Level 2 | ... | Level n | Offset |


Multilevel Paging
• Except for the last component, the size of each part is the same as the size
of the frame
• The page table pages are then stored in various frames of main memory
• Another page table is maintained to keep track of the frames that store the
pages of the divided page table
• Multilevel paging allows for more efficient use of memory and reduces the
size of the page table by breaking it up into smaller tables
• However, it also requires more memory overhead for the page tables
themselves, as well as additional processing time to traverse the page table
hierarchy to translate virtual addresses to physical addresses
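The traversal can be sketched as a toy two-level translation, assuming an illustrative 10 + 10 + 12-bit split of a 32-bit address; the table contents below are made up.

```python
# Two-level translation sketch: 10-bit outer index, 10-bit inner index,
# 12-bit offset (an assumed split, for illustration only).
P1_BITS, P2_BITS, OFF_BITS = 10, 10, 12

outer = {3: {7: 0x2A}}   # outer table -> inner table -> frame number

def translate(va):
    p1 = va >> (P2_BITS + OFF_BITS)          # index into outer table
    p2 = (va >> OFF_BITS) & ((1 << P2_BITS) - 1)  # index into inner table
    d = va & ((1 << OFF_BITS) - 1)           # offset within the page
    frame = outer[p1][p2]                    # two lookups (two memory refs)
    return (frame << OFF_BITS) | d

va = (3 << 22) | (7 << 12) | 0x123
print(hex(translate(va)))  # 0x2a123
```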



Multilevel Paging

Fig. The CPU generates a logical address (p1, p2, p3, d); successive page-table lookups yield the frame number f, which is combined with the offset d to address main memory.


Multilevel Paging
Q1. Consider a system using 2-level paging. The page table is divided
into 2K pages and each page is having 4K entries. The PAS is 64MW,
which is divided into 16K frames. The memory is word addressable. If
the page table entry size in both the levels is 2W, calculate
a. Length of logical address
b. Length of physical address
c. Page table size of 1st level
d. Page table size of 2nd level

Hints: Frame size gives the offset (d); the no. of page-table pages gives p1; the no. of entries in each page gives p2.
LA = p + d = p1 + p2 + d;  PA = log2(PAS)
PTS for level 1 = No. of entries in PT1 x page table entry size = 2^p1 x 2W
PTS for level 2 = No. of entries in PT2 x page table entry size = 2^p2 x 2W



Multilevel Paging
Q2. Consider a system using 2-level paging. The page table is divided
into 8K pages and each page is having 16K entries. The PAS is 128MB,
which is divided into 4KB frames. The memory is byte addressable. If
the page table entry size in both the levels is 32 bits, calculate
a. Length of logical address
b. Length of physical address
c. Page table size of 1st level
d. Page table size of 2nd level



Multilevel Paging
• Performance:
• Main memory access time = m
• Without TLB (2-level paging):
EMAT = 3m; in general, (n + 1)m for n-level paging
• When TLB is included
• TLB access time = c, and TLB hit ratio = x
• For n-level,
EMAT = x(c + m) + (1 - x)(c + (n + 1)*m)



Multilevel Paging
• Important points to remember when solving a problem
• In multilevel paging, whatever may be the levels of paging, all the PTE
contains frame no.
• If the page size is not mentioned, then the page size will be same in all the
levels



Segmented Paging
• It combines the benefits of both segmentation and paging to provide a more flexible and efficient
memory management system
• The logical address space is split into variable-size segments, which are subsequently partitioned into
smaller fixed-size pages
• Each segment has a page table, and each process has many page tables
• When a program requests a memory address, the segmented paging system first looks up the
segment ID to determine the corresponding segment
• It then looks up the page number within that segment to locate the physical address of the
requested memory location
• This allows the system to use a combination of segmentation and paging to manage memory
more efficiently
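The two-step lookup can be sketched as follows; the segment table contents, frame numbers, and 4 KB page size below are all illustrative.

```python
# Segmented paging sketch: each segment has its own page table.
PAGE = 4096
seg_table = {            # segment -> (limit in bytes, page table)
    0: (3 * PAGE, [5, 9, 2]),   # hypothetical limit and frame numbers
}

def translate(s, offset):
    limit, page_table = seg_table[s]
    if offset >= limit:                  # offset outside the segment
        raise MemoryError("trap: segment overflow")
    p, d = divmod(offset, PAGE)          # page number within the segment
    return page_table[p] * PAGE + d      # frame no. x page size + offset

print(translate(0, 4100))  # page 1 -> frame 9 -> 9*4096 + 4 = 36868
```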



Segmented Paging



Operating Systems

Lecture 14 – Dt. 1st Nov 2023



Today’s Class
• Memory Management
• Virtual Memory



Background
• Address Binding:
• Compile time
• Load time
• Execution time
• The execution time address binding scheme results in differing logical
and physical addresses. Logical address, in this case, is referred to as
virtual address



Virtual Memory
• Virtual memory is a computer memory management technique that allows a computer
to use more memory than it physically has available
• Secondary memory can be addressed as though it were part of the main memory
• Virtual memory abstracts main memory into an extremely large, uniform array of storage
• Virtual memory involves the separation of logical memory as perceived by users from
physical memory
• Virtual memory enables computers to run larger applications or multiple applications
simultaneously, without requiring more physical memory
• Only part of the program needs to be in memory for execution
• Logical address space can therefore be much larger than physical address space



Virtual Memory
• Benefits
• Allows address spaces to be shared by several processes – a region of
memory shared by processes
• Allows for more efficient process creation – pages can be shared during
process creation with the fork()system call
• Programs are not constrained by the amount of physical memory that is
available
• User program could take less physical memory and due to this more programs
could be run at the same time – increased CPU utilization and throughput
• Less I/O would be needed to load or swap user programs into memory, so
each user program would run faster



Fig.: Virtual Memory



Fig.: Virtual Address Space



Fig.: Shared library using virtual memory



Virtual Memory
• Implementation
• Demand Paging or Demand Segmentation



Virtual Memory
• The operating system divides memory into small, fixed-size blocks
called pages, and keeps track of which pages are currently in use by
programs
• When a program needs more memory than is physically available, the
operating system swaps out some of the pages that are not currently
in use to the hard disk, freeing up space in RAM for the program to
use
• When the program needs to access a page that has been swapped
out, the operating system swaps it back into RAM



Demand Paging
• Pages are only brought into memory from disk when they are actually
needed (i.e., when a page fault occurs), rather than loading them all
into memory at once
• This approach helps to conserve memory by loading only the data
that is immediately required by the running program
• Demand paging can also lead to performance issues if there are
frequent page faults, which can result in a delay as the operating
system retrieves the necessary pages from disk
• Solution – page replacement algorithms



Demand Paging

Fig.: Steps in handling a page fault



Demand Paging
• Hardware support – same as the hardware for paging and swapping –
page table and secondary memory
• Performance –
• As long as we have no page faults, the effective memory access time is equal
to the main memory access time (m)
• The time required to service a page fault is called the page-fault service time (s)
• If p is the rate of page fault, then

EMAT = p x s + (1 – p) x m



Demand Paging
Q. Consider a system where the main memory access time is 20ns with
a page hit ratio of 95%. If the page fault service time is 100ns, what is
the effective memory access time?
(a) 24ns (b) 25ns (c) 26ns (d) 27ns
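As a worked check (the helper function below is my own sketch, not from the slides): a 95% hit ratio means p = 0.05, so EMAT = 0.05 × 100 + 0.95 × 20 = 24 ns, option (a).

```python
# EMAT = p*s + (1-p)*m, using the formula from the previous slide.
def emat(p, s, m):
    """Effective memory access time given page-fault rate p,
    page-fault service time s, and main-memory access time m."""
    return p * s + (1 - p) * m

# Hit ratio 95% => page-fault rate p = 0.05
print(round(emat(0.05, 100, 20), 6))  # -> 24.0 ns, option (a)
```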



Demand Paging
Q. Suppose an instruction takes i µsec and additional j µsec. What is
the effective memory access time if a page fault occurs on an average
for every k instructions
(a) i + j/k (b) j + i/k (c) k + i/j (d) j + k/i
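Reasoning sketch: every instruction costs i µsec, and one fault per k instructions spreads an extra j µsec over k instructions, giving i + j/k on average — option (a). A small numeric check (values and function name are mine):

```python
# Average time per instruction when one page fault occurs every k
# instructions: the extra j usec is amortized over k instructions.
def avg_instruction_time(i, j, k):
    return i + j / k

# Sample values: i = 1 usec, j = 10 usec, k = 5 -> 1 + 10/5
print(avg_instruction_time(1, 10, 5))  # -> 3.0
```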



Page Replacement
• When a page fault occurs
• The required page has to be brought from the secondary memory into the
main memory
• A page has to be replaced if all the frames of main memory are already
occupied
• Page replacement is a technique used in virtual memory systems
when there is not enough physical memory available to hold all the
pages needed by running programs
• Goal – minimize page faults



Page Replacement
1. Find the location of the desired page on disk
2. Find a free frame:
   a. If there is a free frame, use it
   b. If there is no free frame, use a page replacement algorithm to select
      a victim frame
3. Read the desired page into the (newly) free frame. Update the page and
   frame tables.
4. Restart the process



Page Replacement
• We evaluate an algorithm by running it on a particular string of memory references and
computing the number of page faults
• The string of memory references is called a reference string
• Reference strings can be generated artificially, or we can trace a given system and record
the address of each memory reference
• First – only the page number is considered, rather than the entire address
• Second – if a page p is referenced, then any references to page p that immediately follow will
never cause a page fault
• For example, if we trace a particular process, we might record the following address
sequence:
0100, 0432, 0101, 0612, 0102, 0103, 0104, 0101, 0611, 0102, 0103, 0104, 0101,
0610, 0102, 0103, 0104, 0101, 0609, 0102, 0105
• At 100 bytes per page, this sequence is reduced to the following reference string:
1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1
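The reduction above can be sketched in code (a minimal illustration; the function name is mine): page number = address ÷ 100, and immediately repeated pages are dropped since they cannot fault.

```python
def to_reference_string(addresses, page_size=100):
    """Reduce an address trace to a page-reference string:
    keep only page numbers and drop immediate repeats."""
    refs = []
    for addr in addresses:
        page = addr // page_size
        if not refs or refs[-1] != page:  # an immediate repeat never faults
            refs.append(page)
    return refs

trace = [100, 432, 101, 612, 102, 103, 104, 101, 611, 102, 103, 104,
         101, 610, 102, 103, 104, 101, 609, 102, 105]
print(to_reference_string(trace))  # -> [1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1]
```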



Page Replacement
• To determine the number of page faults for a particular reference
string and page-replacement algorithm, we also need to know the
number of page frames available
• As the number of frames available increases, the number of page
faults decreases



Fig.: Ideal graph of page faults versus number of frames



Page Replacement Algorithms
• Algorithms
• FIFO
• Optimal Page Replacement
• Least Recently Used
• Most Recently Used



First-In First-Out
• The oldest page (page at the head of the queue) is replaced
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
• Initially, the three frames are empty
• Hence, the first three references (7, 0, 1) cause page faults and are
brought into these empty frames
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1





First-In First-Out
• The oldest page (page at the head of the queue) is replaced
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
• Initially, the three frames are empty
• Hence, the first three references (7, 0, 1) cause page faults and are
brought into these empty frames
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

7 7
0



First-In First-Out
• The oldest page (page at the head of the queue) is replaced
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
• Initially, the three frames are empty
• Hence, the first three references (7, 0, 1) cause page faults and are
brought into these empty frames
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

7 7 7
0 0
1



First-In First-Out
• The oldest page (page at the head of the queue) is replaced
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
• The next reference (2) replaces page 7, because page 7 was brought in first
• After this, 0 is the next reference and since it is already in memory, we have no
fault for this reference
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

7 7 7
0 0
1



First-In First-Out
• The oldest page (page at the head of the queue) is replaced
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
• The next reference (2) replaces page 7, because page 7 was brought in first
• After this, 0 is the next reference and since it is already in memory, we have no
fault for this reference
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

7 7 7 2 2
0 0 0 0
1 1 1



First-In First-Out
• The oldest page (page at the head of the queue) is replaced
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
• The next reference (3) replaces page 0, because page 0 is now first in line

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

7 7 7 2 2
0 0 0 0
1 1 1



First-In First-Out
• The oldest page (page at the head of the queue) is replaced
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
• The next reference (3) replaces page 0, because page 0 is now first in line

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

7 7 7 2 2 2
0 0 0 0 3
1 1 1 1



First-In First-Out
• The oldest page (page at the head of the queue) is replaced
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
• The next reference (0) replaces page 1, because page 1 is now first in line

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

7 7 7 2 2 2
0 0 0 0 3
1 1 1 1



First-In First-Out
• The oldest page (page at the head of the queue) is replaced
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
• The next reference (0) replaces page 1, because page 1 is now first in line

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

7 7 7 2 2 2 2
0 0 0 0 3 3
1 1 1 1 0



First-In First-Out
• The oldest page (page at the head of the queue) is replaced
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
• This process continues as shown below
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

7 7 7 2 2 2 2 4 4 4 0 0 0 0 0 0 0 7 7 7
0 0 0 0 3 3 3 2 2 2 2 2 1 1 1 1 1 0 0
1 1 1 1 0 0 0 3 3 3 3 3 2 2 2 2 2 1



First-In First-Out
• The oldest page (page at the head of the queue) is replaced
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
Number of page faults = 15
• This process continues as shown below
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

7 7 7 2 2 2 2 4 4 4 0 0 0 0 0 0 0 7 7 7
0 0 0 0 3 3 3 2 2 2 2 2 1 1 1 1 1 0 0
1 1 1 1 0 0 0 3 3 3 3 3 2 2 2 2 2 1



First-In First-Out
• A bad replacement choice increases the page-fault rate and slows
process execution
• Belady’s anomaly – for some page-replacement algorithms, the page-
fault rate may increase as the number of allocated frames increases
• Consider the following reference string:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
The number of faults for four frames is ten, which is greater than the
number of faults for three frames (nine)
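A minimal FIFO simulator (my own sketch, not from the slides) reproduces the anomaly on this reference string:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement with nframes frames."""
    frames, queue, faults = set(), deque(), 0
    for p in refs:
        if p in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            frames.remove(queue.popleft())  # evict the oldest page
        frames.add(p)
        queue.append(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # -> 9 10 (Belady's anomaly)
```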



First-In First-Out

Fig.: Page-fault curve for FIFO replacement on a reference string



Optimal Page Replacement
• Replace the page that will not be used for the longest period of time
• This algorithm guarantees the lowest possible page-fault rate for a
fixed number of frames
• Does not suffer from Belady’s anomaly
• However, it is difficult to implement, as it requires future knowledge
of the reference string



Optimal Page Replacement
• Replace the page that will not be used for the longest period of time
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
• Initially, the three frames are empty
• Hence, the first three references (7, 0, 1) cause page faults and are brought into these empty
frames
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1



Optimal Page Replacement
• Replace the page that will not be used for the longest period of time
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
• Initially, the three frames are empty
• Hence, the first three references (7, 0, 1) cause page faults and are brought into these empty
frames
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

7 7 7
0 0
1



Optimal Page Replacement
• Replace the page that will not be used for the longest period of time
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
• The next reference (2) replaces page 7, as it is not used for the longest period of time
• The subsequent reference (0) is already in memory

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

7 7 7 2 2
0 0 0 0
1 1 1



Optimal Page Replacement
• Replace the page that will not be used for the longest period of time
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
• The next reference (3) replaces page 1, as it is not used for the longest period of time
• The subsequent reference (0) is already in memory

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

7 7 7 2 2 2 2
0 0 0 0 0 0
1 1 1 3 3



Optimal Page Replacement
• Replace the page that will not be used for the longest period of time
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
• This process continues as shown below

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

7 7 7 2 2 2 2 2 2 2 2 2 2 2 2 2 2 7 7 7
0 0 0 0 0 0 4 4 4 0 0 0 0 0 0 0 0 0 0
1 1 1 3 3 3 3 3 3 3 3 1 1 1 1 1 1 1



Optimal Page Replacement
• Replace the page that will not be used for the longest period of time
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
Number of page faults = 9
• This process continues as shown below

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

7 7 7 2 2 2 2 2 2 2 2 2 2 2 2 2 2 7 7 7
0 0 0 0 0 0 4 4 4 0 0 0 0 0 0 0 0 0 0
1 1 1 3 3 3 3 3 3 3 3 1 1 1 1 1 1 1
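The trace above can be checked with a small simulator (a sketch of the idea; the function name is mine): on a fault with all frames full, evict the page whose next use lies farthest in the future, or that is never used again.

```python
def opt_faults(refs, nframes):
    """Optimal (Belady) replacement: evict the page whose next
    use is farthest in the future (or that is never used again)."""
    frames, faults = set(), 0
    for i, p in enumerate(refs):
        if p in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            def next_use(q):
                # distance to q's next reference; unused pages sort last
                rest = refs[i + 1:]
                return rest.index(q) if q in rest else len(rest)
            frames.remove(max(frames, key=next_use))
        frames.add(p)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))  # -> 9
```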



Least Recently Used Page Replacement
• LRU associates with each page the time of that page’s last use
• Replace the page that has not been used for the longest period of
time
• Two implementations are feasible:
• Counter – whenever a reference to a page is made, the contents of the clock
register (counter) are copied to the time-of-use field in the page-table entry
for that page. The clock is incremented for every reference
• Stack – whenever a page is referenced, it is removed from the stack and put
on the top. Most recently used page is always at the top of the stack and least
recently used page is always at the bottom
• Does not suffer from Belady’s anomaly



Least Recently Used Page Replacement
• Replace the page that has not been used for the longest period of time
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
• Initially, the three frames are empty
• Hence, the first three references (7, 0, 1) cause page faults and are brought into these empty
frames
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

7 7 7
0 0
1



Least Recently Used Page Replacement
• Replace the page that has not been used for the longest period of time
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
• The next reference (2) replaces page 7, as it has not been used for the longest period of time
• The subsequent reference (0) is already in memory, but the time associated with page 0 will be
updated
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

7 7 7 2 2
0 0 0 0
1 1 1



Least Recently Used Page Replacement
• Replace the page that has not been used for the longest period of time
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
• The next reference (3) replaces page 1, as it has not been used for the longest period of time

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

7 7 7 2 2 2
0 0 0 0 0
1 1 1 3



Least Recently Used Page Replacement
• Replace the page that has not been used for the longest period of time
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
• The next reference (0) is already in memory, but the time associated with page 0 will be updated

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

7 7 7 2 2 2 2
0 0 0 0 0 0
1 1 1 3 3



Least Recently Used Page Replacement
• Replace the page that has not been used for the longest period of time
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
• The next reference (4) replaces page 2, as it has not been used for the longest period of time

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

7 7 7 2 2 2 2 4
0 0 0 0 0 0 0
1 1 1 3 3 3



Least Recently Used Page Replacement
• Replace the page that has not been used for the longest period of time
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
• This process continues as shown below

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

7 7 7 2 2 2 2 4 4 4 0 0 0 1 1 1 1 1 1 1
0 0 0 0 0 0 0 0 3 3 3 3 3 3 0 0 0 0 0
1 1 1 3 3 3 2 2 2 2 2 2 2 2 2 7 7 7



Least Recently Used Page Replacement
• Replace the page that has not been used for the longest period of time
• Consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
for a memory with three page frames
Number of page faults = 12
• This process continues as shown below

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

7 7 7 2 2 2 2 4 4 4 0 0 0 1 1 1 1 1 1 1
0 0 0 0 0 0 0 0 3 3 3 3 3 3 0 0 0 0 0
1 1 1 3 3 3 2 2 2 2 2 2 2 2 2 7 7 7
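The stack implementation described earlier can be sketched with an ordered map (my own illustration, not from the slides): the most recently used page moves to the end, so the least recently used page is always at the front.

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """LRU via an ordered map: referencing a page moves it to the end,
    so the front always holds the least recently used page."""
    frames, faults = OrderedDict(), 0
    for p in refs:
        if p in frames:
            frames.move_to_end(p)       # refresh time of use (stack idea)
            continue
        faults += 1
        if len(frames) == nframes:
            frames.popitem(last=False)  # evict least recently used
        frames[p] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # -> 12
```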



Most Recently Used Page Replacement
• Replace the page that is most recently used
• May suffer from Belady’s anomaly



Thrashing
• If the OS throws out a page just before it is used, then it will just have
to get that page again almost immediately
• Too much of this leads to a condition called Thrashing
• The system spends most of its time swapping pages rather than
executing instructions
• This high paging activity is called thrashing
• Causes:
• High degree of multiprogramming – as the number of processes increases,
the number of available frames decreases, leading to frequent page faults
• Lack of frames



Fig.: Thrashing



Operating Systems

Lecture 15 – Dt. 2nd Nov 2023



Today’s Class
• File Management



• What is a file?
• Different types of file?
• Attributes of a file?
• What is a file directory?
• What operations can be taken upon a file?
• What is file system?



File System
• A method that an operating system uses to store, organize, and
manage files and directories on a storage device
• FAT – File Allocation Table
• NTFS – New Technology File System
• EXT – Extended File System
• HFS – Hierarchical File System
• APFS – Apple File System
• ZFS – Zettabyte File System
• YAFFS – Yet Another Flash File System
• Advantage
• Disadvantage
File Concept
• File system consists of two distinct parts:
• A collection of files, each storing related data
• Directory structure, which organizes and provides information about all the
files in the system
• File –
• A logical storage unit abstract from the physical properties of storage devices
• A collection of records (related information)
• Data cannot be written to secondary storage unless they are within a file
• Files represent programs and data
• Information in a file is defined by its creator



File Concept
• Types of file:
  • Text file – a sequence of characters organized into lines
  • Source file – a sequence of functions, each of which is further organized
    as declarations followed by executable statements
  • Executable file – a series of code sections that the loader can bring
    into memory and execute
• Attributes of file:
  • Name
  • Identifier
  • Type
  • Location
  • Size
  • Protection
  • Time, date, and user identification



Fig.: Common file types



File Concept
• File operations:
• Creating a file
• Writing a file
• Reading a file
• Repositioning within a file
• Deleting a file
• Truncating a file



Directory Structure
• Single-Level Directory:
• All files are contained in the same directory
• Two files cannot have the same name
• Two-level Directory:
• Separate directory for each user
• Copies of the same file in two different directories must both be updated to
avoid inconsistency
• Tree-Structured Directory:
• A tree hierarchy with a single root directory at the top
• Provides clear hierarchical organization for files and directories
• Acyclic Graph Directory:
• A shared directory or file exists in the file system in two or more places at once
• The same file may be in two different directories





Access Methods
• Sequential Access
• Information is processed in order, one record after the other
• Editors and compilers usually access files in this fashion
• Direct access
• Random access – no particular order



File Allocation Methods
• Contiguous allocation
• Linked Allocation
• Indexed Allocation



Contiguous Allocation
  Advantages:
  • Simple and easy to implement
  • Fast access times, as the entire file is stored in one continuous block
  Disadvantages:
  • Fragmentation can occur, leading to wasted space
  • Difficulty in accommodating growing or changing file sizes

Linked Allocation
  Advantages:
  • No fragmentation, as each block is allocated dynamically
  • Simple to implement and manage free space
  Disadvantages:
  • Slower access times, as data blocks are not stored sequentially
  • Inefficient storage of small files, as each block has some overhead

Indexed Allocation
  Advantages:
  • Efficient access times, as the index block provides direct access to data blocks
  • Less fragmentation compared to contiguous allocation
  Disadvantages:
  • Requires additional space for index blocks
  • Limited by the number of pointers that can be stored in the index block



File Sharing
• Multiple Users
• Remote File Systems
• Client-Server model
• Distributed Information Systems



Operating Systems

Lecture 16 – Dt. 8th Nov 2023



Today’s Class
• I/O Management



• I/O Devices
• Organization of the I/O function
• Disk I/O
• Operating System design Issues



I/O Devices
• I/O (input/output) devices are hardware components that allow data
to be entered into a computer system (input) or sent out of the
computer system (output)
• Keyboard, mouse, monitor, printer, scanner, speakers, microphone,
webcam, touchscreen, joystick



Organization of I/O function
• The organization of I/O function in a computer system involves
several components working together to manage the flow of data
between the computer and its I/O devices
• Device driver
• I/O controller
• Interrupt handler
• Bus
• Buffer
• I/O scheduler
Organization of I/O function
• The organization of I/O function in a computer system involves several
components working together to manage the flow of data between the computer
and its I/O devices
• Device driver:
• A device driver is a software component that communicates with the I/O device to manage
its operations. It acts as an intermediary between the device and the operating system,
allowing the computer to control and communicate with the device
• I/O controller
• Interrupt handler
• Bus
• Buffer
• I/O scheduler
Organization of I/O function
• The organization of I/O function in a computer system involves several
components working together to manage the flow of data between the
computer and its I/O devices
• Device driver
• I/O controller:
• The I/O controller is a hardware component that manages the transfer of data
between the I/O device and the computer system. It is responsible for coordinating
the transfer of data and ensuring that the data is sent and received correctly
• Interrupt handler
• Bus
• Buffer
• I/O scheduler



Organization of I/O function
• The organization of I/O function in a computer system involves several
components working together to manage the flow of data between the
computer and its I/O devices
• Device driver
• I/O controller
• Interrupt handler:
• An interrupt handler is a software component that responds to hardware interrupts
generated by I/O devices
• Bus
• Buffer
• I/O scheduler
Organization of I/O function
• The organization of I/O function in a computer system involves several
components working together to manage the flow of data between the
computer and its I/O devices
• Device driver
• I/O controller
• Interrupt handler
• Bus:
• A bus is a communication pathway that connects the I/O devices to the computer
system. It is responsible for transferring data and commands between the devices
and the system
• Buffer
• I/O scheduler





Organization of I/O function
• The organization of I/O function in a computer system involves several
components working together to manage the flow of data between the computer
and its I/O devices
• Device driver
• I/O controller
• Interrupt handler
• Bus
• Buffer:
• A buffer is a temporary storage area used to hold data that is being transferred between the
I/O device and the computer system. Buffers are used to smooth out differences in data
transfer rates between the devices and the system, ensuring that data is transferred
smoothly and efficiently
• I/O scheduler



Organization of I/O function
• The organization of I/O function in a computer system involves several
components working together to manage the flow of data between the computer
and its I/O devices
• Device driver
• I/O controller
• Interrupt handler
• Bus
• Buffer
• I/O scheduler:
• An I/O scheduler is a component of the operating system that manages the order and priority
of I/O operations. It determines which I/O operations are executed first and manages the
allocation of system resources to ensure that I/O operations are completed as efficiently as
possible



I/O Buffering
• Benefits:
• Reduced overhead
• Improved throughput
• Improved responsiveness
• Reduced contention



I/O Buffering
• Benefits:
• Reduced overhead:
• By buffering I/O data, the operating system can reduce the number of system calls
needed to transfer data between the device and memory. This can result in lower
overhead and improved performance
• Improved throughput
• Improved responsiveness
• Reduced contention



I/O Buffering
• Benefits:
• Reduced overhead
• Improved throughput:
• I/O buffering can improve the overall throughput of I/O operations by allowing the
system to transfer data in larger chunks, rather than transferring small amounts of data
at a time
• Improved responsiveness
• Reduced contention



I/O Buffering
• Benefits:
• Reduced overhead
• Improved throughput
• Improved responsiveness:
• I/O buffering can improve the responsiveness of the system by allowing applications to
continue running while I/O operations are being performed
• Reduced contention



I/O Buffering
• Benefits:
• Reduced overhead
• Improved throughput
• Improved responsiveness
• Reduced contention:
• I/O buffering can reduce contention for system resources, such as the CPU and memory,
by allowing multiple I/O operations to be performed concurrently



Operating Systems

Lecture 17 – Dt. 22nd Nov 2023



Today’s Class
• I/O Management
• Disk I/O
• Operating System Design Issues



Disk I/O
• Disk I/O (Input/Output) refers to the process of reading and writing data to
and from a disk storage device, such as a hard disk drive (HDD) or solid-
state drive (SSD)
• Loading and saving files, installing software, and running applications
• Several steps:
• The operating system receives a request for disk I/O from an application
• The operating system checks the file system to locate the requested data on the disk
• The operating system initiates the disk I/O operation, which involves moving the
read/write head of the disk to the correct location and reading or writing data to or
from the disk
• Once the data is read or written, it is transferred to or from the application's memory
using I/O buffering techniques
• The operating system signals the application that the disk I/O operation has
completed



Geometry of a Disk



Geometry of a Disk
• Platter: a circular hard surface on which data is stored persistently by
inducing magnetic changes to it
• A disk may have one or multiple platters
• Surface: One side of a platter
• Data is encoded on each surface
• Tracks: A surface is divided into concentric tracks
• Many thousands of tracks on a surface
• Hundreds of tracks fit into the width of a human hair
• Cylinder: The stack of tracks at the same radius across all surfaces



Geometry of a Disk
• Head/Arm: Reading or writing is accomplished by a disk head
attached to a disk arm
• One head per surface
• Heads record and sense data along tracks
• Generally only one head is active at a time
• Sector: A track is divided into 512-byte blocks called sectors
• Sectors are numbered from 0 to n − 1 on an n-sector disk
• Multi-sector operations are possible (e.g., updating 4 KB at a time)
• A sector is the granularity for atomic operations
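Given these geometry terms, a sector's linear (LBA) number can be derived from its (cylinder, head, sector) coordinates. A small sketch of the classic conversion (the geometry numbers in the calls are made up for illustration):

```python
# Classic CHS -> LBA conversion. In CHS addressing, sectors within a
# track are traditionally numbered starting from 1, hence (sector - 1).
def chs_to_lba(cylinder, head, sector, heads_per_cylinder, sectors_per_track):
    return (cylinder * heads_per_cylinder + head) * sectors_per_track + (sector - 1)

print(chs_to_lba(0, 0, 1, 16, 63))   # first sector on the disk -> 0
print(chs_to_lba(0, 1, 1, 16, 63))   # first sector on the next surface -> 63
```

Walking through heads before advancing the cylinder reflects the physical layout: all tracks of one cylinder can be read without moving the arm.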



Accessing a Sector
• Seek Time:
• The time it takes to move the head from its current track to the track
containing the target sector
• Rotational Delay:
• Time for the target sector to pass under the disk head
• Rotating speed of modern disks: 7,200 RPM to 15,000 RPM (RPM = rotations per minute)
• Transfer time: Time for I/O to take place

I/O Time = Seek time + Rotational delay + Transfer time



Q. A disk system has an average seek time of 30 ns and a rotation rate of 360 RPM. Each track of the disk
has 512 sectors, each of size 512 Bytes. What is the time taken to read 4 successive sectors? Also
compute the effective data transfer rate.



Q. A disk system has an average seek time of 30 ns and a rotation rate of 360 RPM. Each track of the disk
has 512 sectors, each of size 512 Bytes. What is the time taken to read 4 successive sectors? Also
compute the effective data transfer rate.
Soln.:
Given: seek time = 30 ns; rotation rate = 360 RPM; sectors per track = 512; sector capacity = 512 Bytes
We have to find the effective data transfer rate and the access time for 4 successive sectors
Data to read = 4 × 512 B = 2048 Bytes
360 rotations take 60 secs, so 1 rotation takes 1/6 sec
In 1 rotation, we read 1 track, i.e., 512 sectors, i.e., 512 × 512 B = 256 KB
So, to read 256 KB, we need 1/6 sec
Hence, in 1 sec, we read 256 KB × 6 = 1536 KB
Therefore, the data transfer rate is 1536 KB/sec



Q. A disk system has an average seek time of 30 ns and a rotation rate of 360 RPM. Each track of
the disk has 512 sectors, each of size 512 Bytes. What is the time taken to read 4 successive
sectors? Also compute the effective data transfer rate.
Soln.:
Given: seek time = 30 ns; rotation rate = 360 RPM; sectors per track = 512; sector capacity = 512
Bytes
We have to find the time taken to read 4 successive sectors
Data to read = 4 × 512 B = 2048 Bytes
Since the data transfer rate is 1536 KB/sec,
1 B of data can be transferred in 1/(1536 × 10³) sec
So, 2048 B can be transferred in 2048/(1536 × 10³) sec = 0.00133 sec
We know that 1 rotation takes 1/6 sec, so the avg. rotational latency = 1/(6 × 2) sec = 0.0833 sec
Disk I/O time = Avg. seek time + Avg. rotational latency + Transfer time
= 30 ns + 0.0833 sec + 0.00133 sec
≈ 0.0846 sec
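The arithmetic above can be checked with a small Python sketch (the function name and structure are illustrative, not part of the lecture):

```python
# Reproduce the worked example: 30 ns seek, 360 RPM, 512 sectors/track,
# 512-byte sectors, reading 4 successive sectors.
def disk_io_time(seek_s, rpm, sectors_per_track, sector_bytes, n_sectors):
    rotation_s = 60.0 / rpm                           # one full rotation
    avg_rot_latency_s = rotation_s / 2                # half a rotation on average
    bytes_per_rotation = sectors_per_track * sector_bytes
    transfer_rate = bytes_per_rotation / rotation_s   # bytes per second
    transfer_s = n_sectors * sector_bytes / transfer_rate
    return seek_s + avg_rot_latency_s + transfer_s, transfer_rate

t, rate = disk_io_time(30e-9, 360, 512, 512, 4)
print(rate)   # 1572864.0 B/s, i.e., 1536 KB/s
print(t)      # ≈ 0.0846 s (the nanosecond seek time is negligible)
```

Note that for *successive* sectors only one seek and one rotational latency are paid; the exercise with random sectors below pays them per sector.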



Q. A disk system has an average seek time of 60 ns and a rotation rate of 3600 RPM.
Each track of the disk has 256 sectors, each of size 2 KB. What is the time taken to
read 1200 random sectors? Also compute the effective data transfer rate.



Operating System Design Issues
• Memory management
• Process management
• Device management
• File management
• Security
• User interface
• Performance
• Scalability
• Reliability



Operating System Design Issues
• Memory management:
• The operating system must manage the allocation and deallocation of memory to programs
and ensure that memory is used efficiently. This involves techniques such as virtual memory,
paging, and swapping
• Process management
• Device management
• File management
• Security
• User interface
• Performance
• Scalability
• Reliability



Operating System Design Issues
• Memory management
• Process management:
• The operating system must manage the creation, execution, and termination of processes.
This involves scheduling algorithms, synchronization mechanisms, and interprocess
communication
• Device management
• File management
• Security
• User interface
• Performance
• Scalability
• Reliability



Operating System Design Issues
• Memory management
• Process management
• Device management:
• The operating system must manage the interaction between devices and programs.
This involves device drivers, interrupt handling, and device allocation
• File management
• Security
• User interface
• Performance
• Scalability
• Reliability



Operating System Design Issues
• Memory management
• Process management
• Device management
• File management:
• The operating system must manage the creation, deletion, and organization of files
on disk. This involves file systems, access control, and directory structures
• Security
• User interface
• Performance
• Scalability
• Reliability



Operating System Design Issues
• Memory management
• Process management
• Device management
• File management
• Security:
• The operating system must provide security features to protect the system from
unauthorized access and malicious attacks. This involves authentication, access control, and
encryption
• User interface
• Performance
• Scalability
• Reliability



Operating System Design Issues
• Memory management
• Process management
• Device management
• File management
• Security
• User interface:
• The operating system must provide a user interface that allows users to interact with the
system. This involves the design of graphical user interfaces (GUIs), command-line interfaces,
and system utilities
• Performance
• Scalability
• Reliability



Operating System Design Issues
• Memory management
• Process management
• Device management
• File management
• Security
• User interface
• Performance:
• The operating system must be designed to perform efficiently, with minimal overhead and
optimal use of system resources. This involves optimizing algorithms, reducing context
switching, and minimizing system calls
• Scalability
• Reliability



Operating System Design Issues
• Memory management
• Process management
• Device management
• File management
• Security
• User interface
• Performance
• Scalability:
• The operating system must be designed to scale with the growth of the system,
supporting larger numbers of users, processes, and devices
• Reliability



Operating System Design Issues
• Memory management
• Process management
• Device management
• File management
• Security
• User interface
• Performance
• Scalability
• Reliability:
• The operating system must be designed to be reliable and fault-tolerant, with
mechanisms to handle hardware failures and software errors

