
B.Sc.

Final Year SEMESTER VI Operating Systems SYLLABUS


Unit I
Introduction: Computer System Architecture, Computing Environments.
Operating System Structures: Operating System Services, User Interface for Operating System, System Calls,
Types of System Calls, Operating System Structure.
Process Management: Process Concept, Process Scheduling, Operations on Processes, Interprocess Communication, Examples – Producer-Consumer Problem.

Unit II
CPU Scheduling: Concepts, Scheduling Criteria, Scheduling Algorithms.
Process Synchronization: Critical-Section Problem, Peterson's Solution, Synchronization, Semaphores, Monitors.
Deadlocks: System Model, Deadlock Characterization, Methods for Handling Deadlocks, Deadlock Prevention,
Deadlock Avoidance, Deadlock Detection, Recovery from Deadlock.

Unit III
Main Memory: Introduction, Swapping, Contiguous Memory Allocation, Segmentation, Paging.
Virtual Memory: Introduction, Demand Paging, Page Replacement, Allocation of Frames, Thrashing.

Unit IV
Mass Storage Structure: Overview, Disk Scheduling, RAID Structure.
File Systems: File Concept, Access Methods, Directory and Disk Structure, File-System Mounting, Protection.
File System Implementation, Directory Implementation, Allocation Methods, Free-Space Management.

Operating Systems Lab


1.a) Use vi editor to create different files, writing data into files, modifying data in files.
b) Use different types of Unix commands on the files created in first program.
2.Write shell programs using ‘case’, ‘then’ and ‘if’ & ’else’ statements.
3.Write shell programs using while, do-while and for loop statements.
4.a) Write a shell script that accepts two integers as its arguments and computes the value of the first number raised to
the power of the second number.
b) Write a shell script that takes a command-line argument and reports on whether it is a directory, a file, or
something else.
5.a)Write a shell script that accepts a file name, starting and ending line numbers as arguments and displays all the
lines between the given line numbers.
b) Write a shell script that deletes all lines containing a specified word
in one or more files supplied as arguments to it.
6.a)Write a shell script that displays a list of all the files in the current directory to which the user has read, write
and execute permissions.
b) Develop an interactive script that asks for a word and a file name and then tells how many times that word
occurs in the file.
7. Write a program to simulate the UNIX commands like ls, mv, cp.
8. Write a program to convert upper case to lower case letters of a given ASCII file.
9. Write a program to search for a given pattern in a file.
10. Write a program to demonstrate FCFS process schedules on the given data.
11.Write a program to demonstrate SJF process schedules on the given data.
12.Write a program to demonstrate Priority Scheduling on the given burst time and arrival times.
13.Write a program to demonstrate Round Robin Scheduling on the given burst time and arrival times.
14. Write a program to implement the Producer and Consumer problem using Semaphores.
15.Write a program to simulate FIFO, LRU, LFU Page replacement algorithms.
16.Write a program to simulate Sequential, Indexed and Linked file Allocation.

UNIT-1
Operating system:
• An operating system is a program that manages the computer hardware.
• It provides a basis for application programs and acts as an intermediary between the computer user and the computer
hardware.
• A computer system can be divided into 4 components: the hardware, the operating system, the application
programs, and the users.
• The hardware (the CPU, memory, and I/O devices) provides the basic computing resources for the
system.
• The application programs, such as word processors, spreadsheets, compilers, and web browsers, define the ways
in which these resources are used to solve users' computing problems.
• The operating system controls the hardware and coordinates its use among the various application programs for
the various users.

OS AND KERNEL
The OS is the software package that communicates directly with the hardware and with our applications.
The kernel is the lowest level of the operating system. It is the main part of the operating system and is responsible
for translating commands into something that can be understood by the computer.
The kernel is the internal core of the OS.

Bootstrap program
It is a program that initializes the operating system during start-up. It is the first code that is executed when the
computer system is started.
It is stored in ROM or EEPROM, which is non-volatile memory. The OS is loaded into RAM by the bootstrap
program when the system is powered up or rebooted.
It doesn't require any outside input to start. The bootstrapping process involves self-tests, loading the BIOS,
configuration settings, etc.

Kernel mode vs User mode


Kernel mode
It is also called supervisor mode, system mode, or privileged mode.
At system boot time, the hardware starts in kernel mode.
When the CPU is in kernel mode, the code being executed can access any memory address and any hardware resource.
Hence kernel mode is a very privileged and powerful mode.
If a program crashes in kernel mode, the entire system will be halted.
User mode
User applications start in user mode. Whenever a trap or interrupt occurs, the hardware switches from user mode
to kernel mode.

When the CPU is in user mode, programs don't have direct access to memory and hardware resources. In user
mode, if a program crashes, only that particular program is halted. That means the system is in a safe state even
if a program in user mode crashes.
Hence most programs in an OS run in user mode.

Computer system architectures:


A computer system may be organised in a number of different ways. We can categorize systems according to the
number of general-purpose processors used.

1. Single processor system:


→Most systems use a single processor. These systems range from PDAs to mainframes.
→On a single-processor system, there is one main CPU, capable of executing a general-purpose instruction set, including
instructions from user processes.
→They also have special-purpose processors, in the form of device-specific processors and I/O processors.
→These special-purpose processors do not run user processes.
→These processors relieve the main CPU of overhead.

2. Multi processor system:


Also known as parallel systems or tightly coupled systems, they have 2 or more processors in close
communication, sharing the computer bus, clock, memory and peripheral devices.
→Multiprocessor systems are of 2 types:
a) Asymmetric multiprocessing (AMP):
Each processor is assigned a specific task. A master processor controls the system; this scheme defines
a master-slave relationship in which the master processor schedules and allocates work to the slave processors.
b) Symmetric multiprocessing (SMP):
Each processor performs all tasks within the OS. All processors are peers; no master-slave relationship
exists between processors.
→It is the most common in use.
Ex: Windows XP, Linux, Mac OS X

Advantages
i) Increased throughput:
By increasing the number of processors, we expect to get more work done in less time.
ii) Economy of scale:
They cost less than equivalent multiple single processor systems, because they share peripherals, mass storage
and power supplies.
iii) Increased reliability:
If functions can be distributed properly among several processors, then the failure of one processor will not halt
the system, only slow it down.

3. Clustered systems:

→Clustered systems gather together multiple CPUs to accomplish computational work.


→They are composed of 2 or more individual systems/nodes joined together.
→They have storage and are closely linked via network.

→Clustering can be structured asymmetrically or symmetrically.
→In asymmetric clustering, one machine is in hot-standby mode while the other is running the applications. The
hot-standby host machine does nothing but monitor the active server.
→In symmetric clustering, 2 or more hosts are running applications and are monitoring each other. This is more
efficient, as it uses all of the available hardware.

Advantages
→It provides high-availability service, i.e. service will continue even if one or more systems in the cluster fail.
→It provides high performance computing environment because they are capable of running an application
concurrently on all computers in the cluster.

Computing Environments
1. Personal computing environment
In this, a single computer is used by a single person. Such a computer is called a personal computer.
All hardware devices are present at a single location and are packed as a single unit.

2. Time sharing computing


In this, multiple computers are connected to a single large computer called a server. The server shares its resources with the
other systems. It allows multiprogramming and multitasking.

3. Client-server computing:
In client-server computing, the client requests a resource and the server provides that resource. A server may
serve multiple clients at the same time, while a client is in contact with only one server.
Both the client and server usually communicate via a computer network but sometimes they may reside in the
same system.

4. Peer-to-peer computing:
In this model, all nodes within the system are considered peers, and each may act as either a client or a
server, depending on whether it is requesting or providing a service.
→In this, services can be provided by several nodes distributed throughout the network.
→It is a distributed application architecture.
→The drawback of this model is that it is difficult to back up the data, as it is stored in different computer systems and there
is no central server.
→It is difficult to provide security.

5. Web-based computing:


This model consists of ultra-thin clients networked over the internet or an intranet.
→Applications in this model consist of code on servers distributed to thin clients containing a browser.
→Web-based applications require a web server, which is a piece of software that uses HTTP to deliver HTML or
XML pages over the network.
→It is limited by scalability and availability.

6. Cloud computing environment:


Cloud computing is the use of various services, such as software development platforms, servers, storage and
software, over the internet, often referred to as the "cloud".
→In this, resources are retrieved from the internet through web-based tools and applications.
→It is a type of computing that relies on shared computing resources, rather than having local servers or personal
devices handle applications.

Operating system services


An OS provides an environment for the execution of programs. It provides certain services to programs and
the users of those programs.
1. User interface:
Almost all operating systems have a user interface, and it can take several forms.
→CLI-command line interface, which uses text commands.
→Batch interface in which commands and directives to control those commands are entered into files and those
files are executed.
→GUI-graphical user interface, In which commands are selected from menus using pointing device.

2. Program execution:
The system must be able to load a program into memory and to run that program.
→The program must be able to end its execution, either normally or abnormally.

3. I/O operations:
→A running program may require i/o, which may involve a file or an i/o device.
→For efficiency and protection, users usually cannot control i/o devices directly so OS provides a means to do i/o.

4. File-system manipulation:
→Programs need to read and write files and directories
→They also need to create and delete them by name, search for a given file and list file information.
→Some programs include permissions management to allow or deny access to files or directories based on file
ownership.
→OS provides a variety of file systems to do all these tasks.

5. Communications:
→There are many situations in which processes need to communicate, whether they are executing on the same computer
or on different systems.
→Communications may be implemented via shared memory or through message passing.
→Packets of information are moved between processes by OS.

6. Error detection:
→Errors may occur in the CPU and memory hardware in I/O devices and in the user program.
→For each type of error, the OS should take an appropriate action to ensure correct and consistent computing.

7. Resource allocation:
Many different types of resources are managed by the OS: CPU cycles, main memory, file storage, I/O
devices, printers, modems, USB storage devices, etc.
→When there are multiple users or multiple jobs running at the same time, resources must be allocated to each of
them by OS using scheduling routines.

8. Accounting:
The OS keeps track of which users use how much and what kinds of computer resources.
→These statistics are a valuable tool for researchers who wish to reconfigure the system to improve computing
services.

9. Protection and security:


Protection involves ensuring that all access to system resources is controlled.
→Security involves authenticating each user, by means of a password, to gain access to system resources.

User operating system interface


There are several ways for users to interface with the OS. The 2 fundamental approaches are the command-line
interface and the graphical user interface.

1. Command line interface/command interpreter (CLI):


→It allows users to directly enter commands to be performed by OS.
→The main function of the command interpreter is to get and execute the next user-specified command.
→Many of the commands are used for manipulating files: create, delete, list, print, copy, execute, and so on.
→These commands can be implemented in 2 ways.
→In one approach, the command interpreter itself contains the code to execute the command.
→In the second approach, the OS implements most commands through system programs. Here the command interpreter does not
understand the command; it merely uses the command to identify a file to be loaded into memory and executed.
→Programs can add new commands to the system easily by creating new files.

2. Graphical user interface:


→Another strategy for interfacing with the OS is the GUI, a user-friendly interface.
→In this, users work with a mouse-based window-and-menu system. The user moves the mouse to position its
pointer on images, or icons, on the screen.
→These icons represent programs, files, directories and system functions.
→These programs are invoked by clicking mouse button.
→The user interface can vary from system to system and even from user to user within a system.

System calls
• System calls provide an interface between a process and the operating system. These calls are generally available
as routines written in the C and C++ languages.
• When a program makes a system call, the mode is switched from user mode to kernel mode; this is known as
a mode switch.
• When a program in user mode requires access to RAM or a hardware resource, it must ask the kernel to provide
access to that resource. This is done via a system call.
• Generally, system calls are made by user-level programs in the following situations:
o Creating, opening, closing and deleting files.
o Creating a connection in the network, sending and receiving packets.
o Requesting access to a hardware device.
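
As a concrete illustration, here is a minimal C sketch using POSIX system calls (the file name input.txt and the buffer size are illustrative); each call to open(), read(), write() and close() traps from user mode into the kernel:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[256];
    int fd = open("input.txt", O_RDONLY);     /* system call: open a file    */
    if (fd < 0) {
        perror("open");                       /* open failed                 */
        return 1;
    }
    ssize_t n = read(fd, buf, sizeof buf);    /* system call: read bytes     */
    if (n > 0)
        write(STDOUT_FILENO, buf, n);         /* system call: write bytes    */
    close(fd);                                /* system call: close the file */
    return 0;
}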

API (Application Programming Interface):


• Application developers design programs according to an API
• API specifies a set of functions that are available to an application programmer, including the parameters that
are passed to each function and the return values the programmers can expect.
• Three of the most common APIs available to application programmers are the Win32 API, the POSIX API, and the Java API.
• Each OS has its own name for each system call.
• Functions of API invoke the actual system calls on behalf of the application programmer.
• For example, the Win32 function CreateProcess() invokes the NTCreateProcess() system call in the Windows kernel.

Benefits of using an API rather than actual system calls:
1. Program portability:
A program designed using an API can be compiled and run on any system that supports the same API.
2. Actual system calls are often more detailed and difficult to work with than an API.

System call interface


The run-time support system for most programming languages provides a system-call interface.
It serves as the link to the system calls made available by the OS.
It intercepts function calls in the API & invokes the necessary system calls within the OS.
A number is associated with each system call, and the system-call interface maintains a table indexed according to
these numbers.
The system-call interface then invokes the intended system call in the OS kernel & returns the status of the system call
& any return values.
The caller need know nothing about how the system call is implemented or what it does during execution.

Types of system calls


1. Process control
• end, abort
• load, execute
• create process, terminate process
• get process attributes, set process attributes
• wait for time, wait event, signal event
• allocate & free memory
2. File management
• create file, delete file
• open, close
• Read, write, reposition
• get file attributes, set file attributes
3. Device management
• Request device, release device
• Read, write, reposition
• get device attributes, set device attributes
• logically attach/ detach devices
4. Information maintenance
• get time/date, set time/date
• get system data, set system data
• get process, file or device attributes
• set process, file or device attributes
5. Communications:
• create, delete, communication connection
• send, receive messages
• transfer status information
• attach/detach remote devices
6. Protection
• Set / get permission
• Allow / deny user

System calls can be grouped into 6 categories.


1. Process control:
• A process or job executing one program may want to load or execute another program.
A running program needs to be able to halt its execution either normally/abnormally.
• Sometimes a new process may need to wait to finish its execution for certain amount of time or wait for a
specific event to occur.
These tasks are done by system calls.
2. File management:
• System calls for file management requires the name of a file & some file attributes.
• We may also read, write or reposition & finally we need to close the file.
• Similar system calls and operations are needed for directories.
3. Device management:
• A process may need several resources to execute; they may be main memory, disk drives, access to files & so
on.
• Various system calls are implemented to request a resource, to grant a resource, to wait until resource is
available & read, write, reposition the device.
4. Information maintenance:
• These system calls are used for the purpose of transferring information between the user program &
OS
• They give system date, time, no. of current users, version number, amount of free memory or disk space & so
on…
• Some are helpful in debugging a program, to dump memory, a time profile of a program etc.
5. Communication:
• There are 2 models of inter process communication : the message passing model & the shared memory model.
System calls are needed to get host Id, and process Id for open connection, close connection, read & write
messages & so on..
6. Protection:
Protection provides a mechanism for controlling access to the resources provided by a computer system. It is very
important in multiprogrammed computer systems with several users.
System calls are needed to set permissions, get permissions, & allow or deny users.

Examples of Windows & Unix system calls
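
Representative examples (a sketch following the standard textbook pairing; exact names vary between versions):

Category                  Windows                                   Unix
Process control           CreateProcess(), ExitProcess(),           fork(), exit(), wait()
                          WaitForSingleObject()
File manipulation         CreateFile(), ReadFile(),                 open(), read(), write(), close()
                          WriteFile(), CloseHandle()
Device manipulation       SetConsoleMode(), ReadConsole(),          ioctl(), read(), write()
                          WriteConsole()
Information maintenance   GetCurrentProcessID(), SetTimer(),        getpid(), alarm(), sleep()
                          Sleep()
Communication             CreatePipe(), CreateFileMapping(),        pipe(), shm_open(), mmap()
                          MapViewOfFile()
Protection                SetFileSecurity(),                        chmod(), umask(), chown()
                          InitializeSecurityDescriptor()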

OS structure
1. Monolithic systems
• In this, the entire OS runs as a single program in kernel mode, i.e. it is not divided into modules
• The OS is written as a collection of procedure, linked together into a single large executable binary
program
• Each procedure in system is free to call any other one
• In this approach, every procedure is visible to every other procedure
• It is difficult to implement and maintain, limited by hardware functionality
• Used in MSDOS, earlier version of UNIX

2. Layered approach
• In this OS is broken into a number of layers. The bottom layer is the hardware & the highest layer is the
user interface
• Each layer consists of data structures & operations which are invoked by higher level layers
• A layer does not need to know how these operations are implemented; it needs to know only what these
operations do.
• If layer 0 i.e hardware is running correctly, then its services can be used by layer 1. Now the layer 1 is
debugged and if any bug is found , the error must be in that layer because the layer below it is already
debugged
• The design & implementation of system are simplified
• In this, construction & debugging is simple
• It simplifies system verification
• Adding new functionalities or removing is very easy

Limitations:
• It needs careful planning because a layer can use only lower level layers
• This approach is less efficient than other types, because as parameters are passed from one layer to the next
they may be modified at each layer, data may need to be copied, and so on.
• Each layer adds overhead to a system call, so system calls take longer to run
• It is not always possible to divide the functionality
• A large number of functionalities needs more layers, which leads to degradation in performance.
• No communication between non-adjacent layers
Eg: OS/2, Windows NT

3. Microkernels
• The kernel is broken down into separate processes. This approach removes all non-essential components from
the kernel and implements them as system-level and user-level programs, so that some run in kernel space and
some run in user space.
• The main function of the micro kernel is to provide a communication facility between them
• Communication is provided by message passing.
• Micro kernels provide minimal process and memory management.
• The advantage of this is that it provides flexibility and extendibility.
• Any new service can be added to user space without modification of kernel
• It also increases portability of OS from one machine to another
• It provides more security and reliability, since most services run at user level rather than as kernel processes.
• If a service fails, the rest of OS remains untouched i.e. other servers can still work efficiently.
• Micro kernel can suffer from performance decreases due to increased system function overhead
Ex.: Tru64 UNIX, QNX real-time OS

4. Modules
• The best current methodology for OS design involves using object oriented programming techniques to create
a modular kernel
• In this, kernel has a set of core components & links in additional services either during boot time or during
runtime
• This strategy uses dynamically loadable modules
• In this, any module can call any other module, which is not possible in the layered approach
• The primary module has only core functions & knowledge of how to load & communicate with other modules.
• It is more efficient, because modules do not need to invoke message passing in order to communicate, as in the
microkernel approach.

Monolithic Vs Micro kernel

Process concept
• Process is the fundamental concept of OS structure. A process is an instance of an executing program
• Each process has its own virtual CPU.
• Process is an active entity
• Two processes may be associated with the same program & considered as 2 separate execution sequences
• A process includes the current values of program counter & processor registers
• A process includes the following:
o Stack - contains temporary data such as function parameters, return addresses & local variables
o Data section - contains global and static variables
o Heap - memory that is allocated dynamically at run time
o Text - the program code, together with the current activity represented by the value of the program counter & the
contents of the processor's registers
• Each process is given an integer identifier termed as process identifier PID

Process states
As a process executes, it changes state. The state of a process is defined as the current activity of the process.
1.New:
A newly created process is one which has not yet been loaded into main memory, though its associated process
control block (PCB) has been created.
2.Ready:
A process in ready state is waiting for an opportunity to be executed. All the ready processes are placed in the
ready queue
3.Running:
A process is said to be in running state if it is being executed by the processor
4.Blocked:
Here the process waits for the occurrence of an event; until that event completes, it
cannot proceed further.
5.Exit
A process is said to be in exit state if it is aborted or halted due to some reason. An exit process must be freed from
the pool of executable processes by the OS.
Only one process can be running on any processor at any instant. Many processes may be ready and waiting.
Program Vs Process
1. A program is a passive entity; a process is an active entity.
2. A program is a set of instructions written in a computer language; a process is a program in execution.
3. A program does nothing until it gets executed; a process is an instance of an executing program & performs a
specific action.
4. A program has a longer life span, because it is stored on disk until it is manually deleted; a process has a
shorter, limited life span, because it gets terminated after the completion of its task.
5. A program requires only memory on disk to store it as a file, and no other resources; a process requires CPU,
memory address space, disk, I/O, etc.

Process control block(PCB):


Each process is represented in the OS by a PCB, also called a task control block.
It contains all the information needed to keep track of a process. The PCB serves as the repository for any information
that may vary from process to process.

• Process ID: unique identification for each process in the OS.
• Process state: the current state of the process, i.e. ready, running, waiting, etc.
• Pointer: a pointer to the parent process.
• Program counter: a pointer to the address of the next instruction to be executed for this process.
• CPU registers: the registers vary in number & type, depending on the computer architecture; they include
accumulators, index registers, general-purpose registers, etc. Their contents must be saved when the process leaves
the running state so that it can later be resumed.
• Process privileges: required to allow/disallow access to system resources.
• CPU scheduling information: includes the process priority, pointers to scheduling queues &
any other scheduling parameters.
• Memory management information: includes the values of the base & limit registers, the page tables
or the segment tables, depending on the memory system used by the OS.
• Accounting information: includes the amount of CPU & real time used, time limits, account
numbers, job/process numbers & so on.
• I/O status information: includes the list of I/O devices allocated to the process, a list of
open files & so on.
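
As an illustration only, a much-simplified, hypothetical C sketch of such a structure (field names and sizes are invented here; real kernels use far larger structures):

struct pcb {
    int pid;                           /* unique process identifier           */
    enum { NEW, READY, RUNNING, WAITING, TERMINATED } state;
    struct pcb *parent;                /* pointer to the parent process       */
    unsigned long program_counter;     /* address of the next instruction     */
    unsigned long registers[16];       /* saved CPU registers (count varies)  */
    int priority;                      /* CPU-scheduling information          */
    unsigned long base, limit;         /* memory-management registers         */
    unsigned long cpu_time_used;       /* accounting information              */
    int open_files[16];                /* I/O status: open file descriptors   */
};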

Thread
A single thread of control allows the process to perform only one task at a time.
Many modern OSs have extended the process concept to allow a process to have multiple threads of execution &
thus to perform more than one task at a time.
On a system that supports threads, the PCB is expanded to include information for each thread.
A thread is a lightweight process.
A thread shares the resources of its parent process.

Process scheduling
The objective of multiprogramming is to have some process running at all times to maximize CPU utilization.
The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each
program while it is running.
To meet these objectives, the process scheduler selects an available process for program execution on the CPU.

Scheduling queues
1.Job queue:
As processes enter the system, they are put into a job queue, which consists of all processes in the system.
2.Ready queue:
The processes that are residing in main memory & are ready and waiting to execute are kept on a list called the ready
queue. It is stored as a linked list.
3.Device queue:
It contains list of process waiting for a particular I/O device. Each device has its own queue.

Queuing diagram:
• Each rectangular box in the diagram represents a queue. The circles represent the resources that serve the queues, &
the arrows indicate the flow of processes in the system.
• A new process is initially put in the ready queue. It waits there until it is selected for execution or is
dispatched. Once the process is allocated the CPU and is executing, one of several events could occur.
o The process could issue an I/O request and then be placed in an I/O queue
o The process could create a new sub process & wait for the sub process termination
o The process could be removed forcibly from CPU , as a result of an interrupt, & be put back in the
ready queue.
• In the first two cases, the process eventually switches from the waiting state to the ready state & is then put back
in the ready queue.
• A process continues this cycle until it terminates, at which time it is removed from all queues & has its PCB &
resources deallocated.

Schedulers Types
In general, processes can be of 2 types: I/O bound and CPU bound.
1. I/O bound Process
It is one that spends more of its time doing I/O than doing computations. If all processes are I/O bound,
then the ready queue is almost always empty.
2. CPU bound process
It generates I/O requests infrequently and it spends most of time doing computations.

Schedulers are in 2 ways


1. The long Term Scheduler or Job Scheduler
It selects processes from the disk pool and loads them into memory for execution.
2. The short-term scheduler or CPU scheduler
It selects from among the processes that are ready to execute and allocates the CPU to one of them.

• The distinction between these two schedulers lies in the frequency of execution. The short-term scheduler must
select a new process for the CPU frequently; a process may execute for only a few milliseconds. Because of
the short time between process executions, it must be fast.
• The long-term scheduler executes much less frequently; minutes may separate two of its invocations. It
controls the degree of multiprogramming, i.e. the number of processes in memory. It needs to be invoked only
when a process leaves the system.
• On some systems , long term scheduler may be absent or minimal. For example Unix, Windows.

Context Switch
• Interrupts cause the OS to take the CPU away from its current task and run a kernel routine.
• The system then needs to save the current context of the process running on the CPU so that it can restore that context
when its processing is done, essentially suspending the process and then resuming it.
• The context is represented in the PCB of the process; it includes the value of the CPU registers, the process state,
memory-management information, etc.
• Switching the CPU to another process requires performing a state save of the current process and a state
restore of a different process. This task is known as context Switch.
• When a context switch occurs, the kernel saves the content of the old process in its PCB and loads the saved
context of the new process scheduled to run.
• If OS provides multiple sets of registers, then context switch simply requires changing the pointer to the
current register set.
• Context switch times are highly dependent on hardware support

Operations on processes
The processes in systems can execute concurrently and they may be created and deleted dynamically. Thus systems
provide a mechanism for process creation and termination.
1. Process Creation
• There are 4 possibilities to create a process
o when the system is initialized
o when execution of process creation system call by a running process
o when user requests to create a new process
o when a batch job is initiated
• A process may create several new processes via system calls during execution. The creating process is called
the parent process and the new processes are called the child processes. Each of these new processes may in turn
create other processes, forming a tree of processes.
• When a new process is created, OS assigns a unique process identifier (PID) to it and inserts a new entry in
process table.
• A process will need certain resources like CPU time, memory, files I/O devices to perform its task.
• Child process may obtain its resources directly from OS or it may use / share resources of the parent process.
• In addition to physical and logical resources that a process obtains when it is created, initialization data may be
passed along by parent to child process.
• When a process creates a new process, either:
• The parent continues to execute concurrently with its children, or
• The parent waits until some or all of its children have terminated.
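
A minimal POSIX sketch of this (the program the child runs, /bin/ls, is only an example): the parent creates a child with fork(), the child replaces its image via exec, and the parent waits for the child to terminate.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                  /* system call: create a child process */
    if (pid < 0) {                       /* fork failed                         */
        perror("fork");
        exit(1);
    } else if (pid == 0) {               /* child: load a new program image     */
        execlp("/bin/ls", "ls", (char *)NULL);
        _exit(1);                        /* reached only if exec fails          */
    } else {                             /* parent: wait for the child to end   */
        wait(NULL);
        printf("child %d terminated\n", (int)pid);
    }
    return 0;
}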

Process Termination
• A process terminates when it finishes executing its final statement and asks the OS to delete it by using the exit() system call
• Then all the resources of the process, such as physical and virtual memory, open files and I/O buffers, are
deallocated by the OS
• A process can be terminated either by OS or by parent process.
• A parent may terminate a child process due to one of the following reasons.
• When the task given to the child is no longer required
• When the child has taken more resources than its limit
• The parent is exiting, and as a result all its children are deleted. This is called cascaded termination.

Interprocess communication
• Processes executing concurrently in the OS may be either independent processes or co-operating processes.
• Any process that does not share data with any other process is called an independent process. It cannot affect or
be affected by other processes.
• Any process that shares data with other processes is a co-operating process.
• Reasons for using cooperating processes
o Information sharing, for several users who need the same information
o Speed-up of computation, by breaking a task into subtasks and executing them in parallel
o Modularity, i.e. dividing system functions into separate processes or threads
o Convenience, i.e. even an individual user may work on many tasks at the same time. For example, a user
may be editing, printing and compiling in parallel
• Cooperating processes require an inter process communication (IPC) mechanism that will allow them to
exchange data and information.
• There are 2 models of inter process communication
1. Shared memory 2. Message passing
1. Shared memory
• In this model, a region of memory that is shared by cooperating processes is established
• Processes can then exchange information by reading and writing data to the shared region
• Shared region resides in the address space of the process creating the shared memory segment.
• It allows maximum speed and convenience of communication
• It is faster than message passing
• System calls are required only to establish shared memory regions
• Kernel assistance is not required after shared memory is established

2. Message Passing
• In this model, communication takes place by means of messages exchanged between the cooperating
processes.
• It provides both communication and synchronization between processes
• It is useful where the communicating processes may reside on different computers connected by network.
• A message passing facility provides at least 2 operations. They are send message and receive message.
• It is useful for exchanging smaller amount of data
• It is easier to implement
• It is implemented using a larger number of system calls than the shared-memory method
• It is more time-consuming, as it requires kernel intervention.

Producer consumer problem


• The concept of co operating processes can be illustrated using producer consumer problem.
• A producer process produces information that is consumed by a consumer process.
• Producer-consumer provides a useful metaphor for the client-server paradigm: think of the server as a producer and the
client as a consumer.
• For example, a web server produces (provides) HTML files and images which are consumed( i.e. read) by the
client web browser requesting the resource.
• One solution to the producer-consumer problem uses shared memory. To allow producer and consumer
processes to run concurrently, we must have available a buffer of items that can be filled by the producer and
emptied by the consumer.
• This buffer will reside in region of memory that is shared by the producer and consumer processes
• A producer can produce one item while the consumer is consuming another item.
• The producer and consumer must be synchronized , so that consumer does not try to consume an item that has
not yet been produced
• 2 types of buffers can be used. They are unbounded buffer and bounded buffer.
• The unbounded buffer places no practical limit on the size of the buffer. The consumer may have to wait for
new items, but the producer can always produce new items
• The bounded buffer assumes a fixed buffer size. The consumer must wait if the buffer is empty and the
producer must wait if the buffer is full.
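
A sketch of the classic shared-memory bounded-buffer scheme described above, for one producer and one consumer; in a real system the buffer and the two index variables would live in a shared-memory segment and the two routines would run in different processes. Note that this version holds at most BUFFER_SIZE - 1 items, since one empty slot is used to tell a full buffer from an empty one.

#define BUFFER_SIZE 10

int buffer[BUFFER_SIZE];   /* shared circular buffer                     */
int in = 0;                /* next free slot (written by the producer)   */
int out = 0;               /* next filled slot (read by the consumer)    */

void produce(int item)
{
    while (((in + 1) % BUFFER_SIZE) == out)
        ;                                  /* buffer full: busy-wait     */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
}

int consume(void)
{
    while (in == out)
        ;                                  /* buffer empty: busy-wait    */
    int item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return item;
}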

UNIT-2
CPU Scheduling:
When a computer is multiprogrammed, 2 or more processes are often in the ready state at the same time,
competing for the CPU. The part of the OS that makes the choice of which process to run next is called the scheduler,
and the algorithm it uses is called the scheduling algorithm. The scheduler picks the right
process to run and makes efficient use of the CPU.
When to schedule:
→When a new process is created, a decision needs to be made whether to run the parent or the child process,
since both are in the ready state.
→When a process exits, i.e. can no longer run, some other process must be chosen from the set of
ready processes.
ready processes.
→When a process blocks on I/O or for some reason, another process has to be selected to run.
→When an I/O interrupt occurs, a scheduling decision may be made. (i.e, running state to ready state)

CPU-I/O Burst Cycle


→The success of CPU scheduling depends on a property of processes. Process execution consists of a
cycle of CPU execution and I/O wait. Processes alternate between these 2 states.
→Process execution begins with a CPU burst, followed by I/O burst and so on. Final CPU burst ends
with a system request to terminate execution.
→The duration of CPU bursts vary from process to process and from computer to computer.

Scheduling schemes
Scheduling algorithms can be divided into 2 types with respect to how they deal with clock
interrupts, because a scheduling decision can be made at each clock interrupt.
i) Non-preemptive or Cooperative:
This algorithm picks a process to run and then just lets it run until it blocks (either on I/O or
waiting for another process) or until it voluntarily releases the CPU. It will not be forcibly suspended, so
no scheduling decisions are made during clock interrupts.
ii) Preemptive scheduling:
This algorithm picks a process and lets it run for a maximum of some fixed time. If it is still
running at the end of that time interval, it is suspended and the scheduler picks another process to run.

Dispatcher:
One of the components involved in the CPU scheduling function is the dispatcher. The dispatcher is
the module that gives control of the CPU to the process selected by the short term scheduler. This
function involves the following
→ Switching context
→Switching to user mode
→Jumping to the proper location in the user program to restart the program
The dispatcher should be as fast as possible, since it is invoked during every process switch. The time it
takes for the dispatcher to stop one process and start another running is known as dispatch latency.

Scheduling criteria
Different CPU-scheduling algorithms have different properties. In choosing which algorithm to use
in a particular situation, we rely on some criteria, which include the following.

a) CPU utilization:
The selected algorithm should keep the CPU as busy as possible.
b) Throughput:
The number of processes that are completed per time unit is called throughput. Using this, we
can measure how much work is being done by the CPU.
c) Turnaround time:
The interval from the time of submission of a process to the time of completion is the
turnaround time.
→Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready
queue, executing on the CPU and doing I/O.
d) Waiting time:
The amount of time that a process spends waiting in the ready queue,
→Waiting time is the sum of the periods spent waiting in the ready queue.
e) Response time:
The amount of time taken from the submission of a request until the first response is produced.

Scheduling Algorithms
1. First-come, First-served scheduling:
→This is the simplest of all CPU-scheduling algorithms.
→The process that requests the CPU first is allocated the CPU first.
→The implementation is managed with FIFO queue.
→The code for FCFS scheduling is simple to write and understand.
→When the running process blocks, the first process in the queue runs next. When a blocked process
becomes ready, it is put at the end of the queue like a newly arrived one.
→This algorithm is non pre-emptive.

Example
Process no.   Arrival time   Burst time   Completion time   Turnaround time   Waiting time
1             0              4            4                 4                 0
2             1              3            7                 6                 3
3             2              1            8                 6                 5
4             3              2            10                7                 5
5             4              5            15                11                6

Turnaround time = completion time - arrival time
Waiting time = turnaround time - burst time

Gantt chart
 P1 | P2 | P3 | P4 | P5
0    4    7    8    10   15

Drawback:
There is a convoy effect, as all the other processes wait for one big process to get off the CPU.
This effect results in lower CPU and device utilization.
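
A minimal C sketch of the FCFS computation (as in lab program 10), run on the sample data above; processes are assumed to be listed in arrival order:

#include <stdio.h>

int main(void)
{
    int at[] = {0, 1, 2, 3, 4};           /* arrival times (sample data)   */
    int bt[] = {4, 3, 1, 2, 5};           /* burst times (sample data)     */
    int n = 5, time = 0;

    printf("P\tCT\tTAT\tWT\n");
    for (int i = 0; i < n; i++) {
        if (time < at[i])
            time = at[i];                 /* CPU idle until the arrival    */
        time += bt[i];                    /* completion time               */
        int tat = time - at[i];           /* turnaround = CT - AT          */
        int wt = tat - bt[i];             /* waiting = TAT - BT            */
        printf("P%d\t%d\t%d\t%d\n", i + 1, time, tat, wt);
    }
    return 0;
}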

2. Shortest-job-first scheduling (S J F):


→When the CPU is available, it is assigned to the process that has smallest next CPU burst.
→If the next CPU bursts of 2 processes are the same, FCFS scheduling is used to break the tie.
→It gives minimum average waiting time for a given set of process.
→The difficulty with the SJF algorithm is knowing the length of the next CPU burst.
→SJF scheduling is used frequently in long-term scheduling; it cannot be implemented at the level of short-term
scheduling, because there is no way to know the length of the next CPU burst in advance.
→SJF can be either pre-emptive or non pre-emptive.

Example
Process no.   Arrival time   Burst time   Completion time   Turnaround time   Waiting time
1             1              7            8                 7                 0
2             2              5            16                14                9
3             3              1            9                 6                 5
4             4              2            11                7                 5
5             5              8            24                19                11

Gantt chart
 idle | P1 | P3 | P4 | P2 | P5
0      1    8    9    11   16   24
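
A sketch of the selection step for non-pre-emptive SJF (the driver loop, not shown, is assumed to advance the clock to the next arrival whenever -1 is returned):

/* Among the arrived, unfinished processes, pick the smallest burst time. */
int pick_sjf(int n, const int at[], const int bt[], const int done[], int time)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (done[i] || at[i] > time)
            continue;                 /* skip finished / not-yet-arrived   */
        if (best == -1 || bt[i] < bt[best])
            best = i;                 /* smaller burst wins; FCFS on ties  */
    }
    return best;                      /* -1 means no process is ready      */
}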

3. Shortest-Remaining-Time first scheduling (SRTF):


A pre-emptive SJF algorithm will pre-empt the currently executing process, whereas a non-pre-emptive
SJF algorithm will allow the currently running process to finish its CPU burst.
Pre-emptive SJF scheduling is sometimes called shortest-remaining-time-first scheduling.

Example (ties on remaining time are broken in FCFS order)
Process no.   Arrival time   Burst time   Completion time   Turnaround time   Waiting time
1             0              7            20                20                13
2             1              5            14                13                8
3             2              3            6                 4                 1
4             3              1            4                 1                 0
5             4              2            8                 4                 2
6             5              2            10                5                 3

Gantt chart
 P1 | P2 | P3 | P4 | P3 | P5 | P6 | P2 | P1
0    1    2    3    4    6    8    10   14   20

4. Priority scheduling:
→A priority is associated with each process, and the CPU is allocated to the process with the highest
priority.
→Equal priority processes are scheduled in FCFS order.
→A process with a large CPU burst has low priority, and vice versa.
→Priorities are indicated by some fixed range of numbers.
→Priorities can be defined either internally or externally, i.e. statically or dynamically.
→It can be either pre-emptive or non pre-emptive.
→A major problem with this algorithm is indefinite blocking, or starvation.
→A solution to the problem of indefinite blocking of low-priority processes is aging, i.e. gradually
increasing the priority of processes that wait for a long time.

Example (non-pre-emptive) (assume that a higher number represents higher priority)
Process   Arrival time   Burst time   Priority   Completion time   Turnaround time   Waiting time
1         0              4            4          4                 4                 0
2         1              5            5          16                15                10
3         2              1            7 (high)   5                 3                 2
4         3              2            2          18                15                13
5         4              3            1          21                17                14
6         5              6            6          11                6                 0

Gantt chart
 P1 | P3 | P6 | P2 | P4 | P5
0    4    5    11   16   18   21

Example (pre-emptive) (assume that a higher number represents lower priority)

Process   Arrival time   Burst time   Priority   Completion time   Turnaround time   Waiting time
1         0              8            3          12                12                4
2         1              1            1 (high)   2                 1                 0
3         2              3            2          5                 3                 0
4         3              2            3          14                11                9
5         4              6            4 (low)    20                16                10

Gantt chart
 P1 | P2 | P3 | P1 | P4 | P5
0    1    2    5    12   14   20
Starvation:
→A major problem in priority scheduling algorithm is starvation or indefinite blocking.
→A process that is ready to run but waiting for the CPU can be considered blocked.
→A priority scheduling algorithm can leave some low-priority processes waiting indefinitely, because a steady stream of
higher-priority processes can prevent a low-priority process from ever getting the CPU; this is known as starvation.
→A solution to this problem is aging. Aging is a technique of gradually increasing the priority of
processes that wait in the system for a long time.
→For example, if priorities range from 127 (low) to 0 (high), we could increase the priority of a waiting
process by 1 every 15 minutes.

5. Round Robin Scheduling: (RR)


→This algorithm is designed especially for time-sharing systems.
→It is pre-emptive scheduling.
→In this, a small unit of time, called a time quantum or time slice is defined. It is generally from 10 to
100ms.
→The ready queue is treated as circular queue, the CPU scheduler goes around the ready queue,
allocating the CPU to each process for a time interval of up to 1 time quantum.
→Some processes may have CPU burst of less than 1 time quantum. In this case, the process itself will
release the CPU voluntarily. The scheduler will then proceed to the next process in the ready queue.
→Otherwise, if the CPU burst of the currently running process is longer than 1 time quantum, the timer
will go off and will cause an interrupt to the OS.
→A context switch will be executed, and the process will be put at the tail of the ready queue. The
CPU scheduler will then select the next process in the ready queue.
→The average waiting time in this is often long.
→The performance of this algorithm depends heavily on the size of the time quantum.
→If the time quantum is large, the RR policy is the same as the FCFS policy. If the time quantum is small,
frequent context switching results, which hurts performance.

Example (assume that quantum = 2)

Process no.   Arrival time   Burst time   Completion time   Turnaround time   Waiting time
P1            0              3            7                 7                 4
P2            1              5            18                17                12
P3            2              2            6                 4                 2
P4            3              5            19                16                11
P5            4              5            20                16                11

Gantt chart
 P1 | P2 | P3 | P1 | P4 | P5 | P2 | P4 | P5 | P2 | P4 | P5
0    2    4    6    7    9    11   13   15   17   18   19   20
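
A C sketch of Round Robin with quantum 2 on the data above; it reproduces the Gantt chart shown. The ready queue is a simple array, sized generously for this example, and the CPU is assumed never to go idle (true for this data):

#include <stdio.h>

#define N 5

int main(void)
{
    int at[N] = {0, 1, 2, 3, 4};          /* arrival times                  */
    int bt[N] = {3, 5, 2, 5, 5};          /* burst times                    */
    int rem[N], queue[64], head = 0, tail = 0;
    int time = 0, done = 0, q = 2, next = 1;

    for (int i = 0; i < N; i++) rem[i] = bt[i];
    queue[tail++] = 0;                    /* P1 arrives at time 0           */

    while (done < N) {
        int i = queue[head++];            /* dequeue the front process      */
        int run = rem[i] < q ? rem[i] : q;
        time += run;
        rem[i] -= run;
        while (next < N && at[next] <= time)
            queue[tail++] = next++;       /* admit new arrivals first ...   */
        if (rem[i] > 0)
            queue[tail++] = i;            /* ... then re-queue the old one  */
        else {
            done++;
            printf("P%d: CT=%d TAT=%d WT=%d\n",
                   i + 1, time, time - at[i], time - at[i] - bt[i]);
        }
    }
    return 0;
}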

6. Multilevel queue scheduling:


→A multilevel queue scheduling algorithm partitions the ready queue into several separate queues.
→The processes are permanently assigned to one queue, generally based on some property of the
process, such as memory size, process priority, or process type.
→Each queue has its own scheduling algorithm.

→For example, separate queues might be used for foreground and background processes. The
foreground queue might be scheduled by the RR algorithm, while the background queue is scheduled by the FCFS
algorithm.
→In addition, there must be scheduling among the queues, which is commonly implemented as fixed-
priority pre-emptive scheduling.

Fig: Multilevel queue scheduling. Fig: Multilevel feedback queue scheduling.

7. Multilevel feedback queue scheduling:


→This scheduler is similar to multilevel queue scheduling but allows a process to move between
queues.
→The idea is to separate processes according to the characteristics of their CPU bursts.
→If a process uses too much CPU time, it will be moved to a lower-priority queue; this scheme leaves
I/O-bound and interactive processes in the higher-priority queues.
→In addition, a process that waits too long in a lower priority queue may be moved to a higher priority
queue. This form of aging prevents starvation.

Process Synchronization
Race condition:
A situation where several processes access and manipulate the same data concurrently, and the
outcome of the execution depends on the particular order in which the accesses take place, is called a race
condition.
To guard against race conditions, we need to ensure that only one process at a time can
manipulate the common data.
To make such a guarantee, we require that the processes be synchronized.

The critical-section problem:


→Consider a system consisting of 'n' processes. Each process has a segment of code, called the critical
section, in which the process may be changing common variables, updating a table, writing a file, and so
on.
→When one process is executing in its critical section, no other process is to be allowed to execute in
its critical section.
→The critical section problem is to design a protocol that the process can use to cooperate. Each
process must request permission to enter its critical section.
→The section of code implementing this request is the entry section. The critical section may be followed
by an exit section. The remaining code is the remainder section.
→A solution to the critical section problem must satisfy the following 3 requirements
1. Mutual exclusion:
If process Pi is executing in its critical section, then no other process can be executing in its
critical section.
2. Progress:
If no process is executing in its critical section and some processes wish to enter their critical sections,
then only those processes that are not executing in their remainder sections can participate in deciding
which will enter its critical section next, and this selection cannot be postponed indefinitely.
3. Bounded waiting
There exists a bound , or limit, on the number of times that other processes are allowed to enter their
critical sections after a process has made a request to enter its critical section and before that request is
granted.
→Two general approaches are used to handle critical sections in operating systems: (1) pre-emptive
kernels and (2) non-pre-emptive kernels.
→A pre-emptive kernel allows a process to be pre-empted while it is running in kernel mode.
→A non-pre-emptive kernel does not allow a process running in kernel mode to be pre-empted; a
kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the
CPU.
→A non-pre-emptive kernel is free from race conditions on kernel data structures, because only one process is active in the
kernel at a time.
→A pre-emptive kernel must be designed carefully to be free from race conditions. A pre-emptive kernel is
suitable for real-time programming and is more responsive.

Peterson’s solution:
→A classic software-based solution to the critical-section problem is known as Peterson's solution.
→It is restricted to 2 processes that alternate execution between their critical sections and remainder
sections.
→Let us assume that Pi and Pj are the 2 processes.
→Peterson's solution requires the 2 processes to share 2 data items:
int turn;
boolean flag[2];
→The variable turn indicates whose turn it is to enter its critical section; the flag array is used to indicate whether
a process is ready to enter its critical section.
→To enter the critical section, process Pi first sets flag[i] to true and then sets turn to the value j.
→If both processes try to enter at the same time, turn will be set to both i and j at roughly the same time, but only
one of these assignments will last; the other will occur but be immediately overwritten.
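
The entry and exit code of Peterson's solution, sketched in C for processes 0 and 1 (on modern hardware the shared variables would additionally need to be volatile/atomic with memory barriers, as discussed in the next section):

int turn;                  /* whose turn it is to enter                    */
int flag[2] = {0, 0};      /* flag[i] = 1 means process i wants to enter   */

void enter_region(int i)   /* i is 0 or 1 */
{
    int j = 1 - i;         /* the other process                            */
    flag[i] = 1;           /* announce interest                            */
    turn = j;              /* give the other process priority              */
    while (flag[j] && turn == j)
        ;                  /* busy-wait while the other is inside          */
    /* critical section follows */
}

void leave_region(int i)
{
    flag[i] = 0;           /* no longer in the critical section            */
}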

To prove that this solution is correct, we need to show that


1. Mutual exclusion is preserved
2. The progress requirement is satisfied
3. The bounded waiting requirement is met

→To prove property 1, note that Pi enters its critical section only if either flag[j]=false or turn=i. The two processes
cannot be in their critical sections at the same time, since the value of turn can be either i or j but cannot be both.
→To prove properties 2 and 3, we note that Pi can be prevented from entering its critical section
only if it is stuck in the while loop with the condition flag[j]=true and turn=j.
If Pj is not ready to enter its critical section, then flag[j]=false, and Pi can enter its critical section.
Once Pi exits its critical section, it will reset flag[i] to false, allowing Pj to enter its critical section.
If Pi then sets flag[i] back to true, it must also set turn to j; thus the solution provides progress and bounded waiting.

Synchronization hardware:
A software-based solution such as Peterson's is not guaranteed to work on modern computer architectures.
→Another solution to the critical-section problem uses a tool called a lock. Race conditions are prevented by requiring that
a process acquire a lock before entering its critical section and release the lock when it exits.

→In uniprocessor environments, the problem can be solved by preventing interrupts from occurring while
a shared variable is being modified. This approach is taken by non-pre-emptive kernels.
→In multiprocessor environments, disabling interrupts is time-consuming and decreases system
efficiency. Therefore, such machines provide special atomic hardware instructions.
(i) One scheme provides 2 instructions, TestAndSet() and Swap(), executed atomically.
→If 2 TestAndSet() instructions are executed simultaneously (each on a different CPU), they will be executed
sequentially in some arbitrary order. We can implement mutual exclusion by declaring a Boolean variable lock, initialized to false.
→The Swap() instruction operates on 2 words. Each process has a local Boolean variable key.
These simple schemes do not satisfy the bounded-waiting requirement.
To satisfy all the requirements, use the data structures
boolean waiting[n];
boolean lock;
→Process Pi can enter its critical section only if either waiting[i]=false or key=false. The value of key
can become false only if TestAndSet() is executed.
waiting[i] becomes false only if another process leaves its critical section while all the other entries remain true;
thus mutual exclusion is provided.
→A process exits the critical section by either setting lock to false or setting waiting[j]=false. Both allow a
process that is waiting to enter its critical section to proceed.
→To prove that bounded waiting is met, note that when a process leaves its critical section, it scans the
array waiting[] in the cyclic ordering (i+1, i+2, ..., n-1, 0, ..., i-1). It designates the first process in this ordering that
is in its entry section as the next one to enter the critical section.
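
A sketch of the simple TestAndSet()-based lock described above; the atomicity of test_and_set() is supplied by the hardware, and the C body below only spells out its semantics:

int lock = 0;                     /* 0 = free, 1 = held                    */

int test_and_set(int *target)     /* executed atomically by the hardware   */
{
    int old = *target;
    *target = 1;
    return old;                   /* old value tells us if it was free     */
}

void acquire(void)
{
    while (test_and_set(&lock))
        ;                         /* spin until the lock was free          */
}

void release(void)
{
    lock = 0;                     /* give the lock back                    */
}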

Semaphores
In 1965, Dijkstra proposed a new and very significant technique for managing concurrent processes by
using the value of a simple integer variable to synchronize the progress of interacting processes. This
integer variable is called a semaphore.
It is basically a synchronizing tool, and it is accessed only through 2 standard atomic operations,
wait and signal, designated by P(S) and V(S) respectively.
A semaphore is a synchronization tool.
A semaphore is a variable which can hold only a non-negative integer value, shared between all the
threads, with the operations wait and signal.
A semaphore S is an integer variable that, apart from initialization, is accessed only through the 2
standard atomic operations wait() and signal().
The wait() operation was originally termed P; signal() was originally termed V.

→All modifications to the integer value of semaphores in the wait() and signal() operations must be
executed indivisibly. That is, when one process modifies semaphore value, no other process can
simultaneously modify that same semaphore value.
Wait() decrements the value of its argument S, but only when the result would remain non-negative; otherwise the caller waits.
Signal() increments the value of its argument S, possibly waking a process blocked on the semaphore's queue.
→In addition, in the case of wait(S), the testing of the integer value of S (S<=0), as well as its possible
modification (S--), must be executed without interruption.
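
The classical busy-waiting definitions, with the understanding that each operation executes atomically:

wait(S) {
    while (S <= 0)
        ;          /* busy-wait */
    S--;
}

signal(S) {
    S++;
}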
→Semaphores can be of 2 types: counting semaphores and binary semaphores.
The value of a counting semaphore can range over an unrestricted domain; the value of a binary semaphore can range only between 0 and 1.
Properties
• It is simple and always have non-negative integer value.
• Works with many processes.
• Can have many different critical sections with different semaphores
• Each critical section has unique access semaphores
• Can permit multiple processes into the critical section at once, if desirable
• Less complicated

Types of semaphores
Semaphores are mainly of 2 types
1. Binary semaphore
It is a special form of semaphore used for implementing mutual exclusion , hence it is often called a
mutex. A binary semaphore is initialized to 1 and only takes the values 0 and 1 during execution of a
program.
It is used to deal with multiple processes.

2. Counting semaphores
These are used to implement bounded concurrency. It is used to control access to a given resource
consisting of finite number of instances. The semaphore is initialized to the number of resources
available.
Each process that wishes to use a resource performs a wait() operation on the semaphore (thereby
decrementing the count).
When a process releases a resource, it performs a signal() operation (incrementing the count)
When the count for the semaphore goes to 0, all resources are being used. After that , processes that
wish to use a resource will block until the count becomes greater than 0.
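
A small sketch using POSIX counting semaphores to guard N identical resource instances (N and the function names are illustrative):

#include <semaphore.h>

#define N 5                        /* number of resource instances        */
sem_t resources;

void init(void)
{
    sem_init(&resources, 0, N);    /* start with N instances free         */
}

void use_one(void)
{
    sem_wait(&resources);          /* acquire an instance (count--)       */
    /* ... use the resource ... */
    sem_post(&resources);          /* release it (count++)                */
}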
Limitations of semaphores
Priority inversion is a big limitation of semaphores.
Their use is not enforced, but is by convention only.
With improper use, a process may block indefinitely. Such a situation is called a deadlock.

Monitors
A monitor is one of the ways to achieve process synchronization.
Monitors are supported by programming languages to achieve mutual exclusion between processes.
A monitor is a collection of condition variables and procedures combined together in a special kind of module
or package.
Processes running outside the monitor cannot access the internal variables of the monitor, but they can call
the procedures of the monitor.
Only one process at a time can execute code inside the monitor. A monitor is an abstract data type.
Syntax
Monitor Demo
{
Variables ;
Condition variables;
Procedure p1 {---------}
Procedure p2{----------}
}
Condition variables
Two different operations are performed on the condition variables of the monitor: wait() and signal().
Procedures in the monitor help the OS to synchronize the processes.
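Since monitors are a language construct, C code can only approximate one. Below is a sketch using a
pthreads mutex plus a condition variable; the function and variable names are hypothetical:

/* A monitor-like module approximated in C with pthreads:
   the mutex gives the "one process inside at a time" property,
   and the condition variable supports wait()/signal(). */
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonzero = PTHREAD_COND_INITIALIZER;
static int count = 0;

void deposit(void)                 /* a "procedure" of the monitor */
{
    pthread_mutex_lock(&lock);     /* enter the monitor */
    count++;
    pthread_cond_signal(&nonzero); /* signal() on the condition */
    pthread_mutex_unlock(&lock);   /* leave the monitor */
}

void withdraw(void)
{
    pthread_mutex_lock(&lock);
    while (count == 0)
        pthread_cond_wait(&nonzero, &lock);  /* wait() releases the lock */
    count--;
    pthread_mutex_unlock(&lock);
}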

Deadlocks
SYSTEM MODEL
A system consists of a finite number of resources to be distributed among a number of competing
processes.
The resources are partitioned into several types. Memory space, CPU cycles, files and I/O devices are
examples of resource types.
A process must request a resource before using it and must release the resource after using it. A
process may request as many resources as it requires to carry out its designated tasks.
Under the normal mode of operation, a process may utilize a resource in the sequence: request, use, release.

A system table records whether each resource is free or allocated. If a resource is allocated, the table
records the process to which it is allocated. If a process requests a resource that is currently allocated to
another process, it can be added to a queue of processes waiting for the resource.
A set of processes is in a deadlocked state when every process in the set is waiting for an event that can
be caused only by another process in the set.
In a deadlock, processes never finish executing, and system resources are tied up, preventing other
jobs from starting.

Necessary conditions for deadlock


A deadlock situation can arise if the following 4 conditions hold simultaneously in a system:
1. Mutual exclusion
At least one resource must be held in a non-sharable mode, i.e. only one process at a time can use the
resource. If another process requests that resource, the requesting process must be delayed until the
resource has been released.
2.Hold and wait
A process must be holding at least one resource and waiting to acquire additional resources that are
currently being held by other processes
3. No pre-emption
Resources can not be pre-empted i.e. a resource can be released only voluntarily by the process holding
it, after that process has completed its task.
4. Circular wait
P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, and P2 is waiting for a
resource held by P0, forming a cycle.

Resource allocation graph


Deadlocks can be described using a directed graph called a system resource allocation graph. It consists
of a set of vertices V and a set of edges E.
The set of vertices is partitioned into a set of active processes and a set of resource types in the system.
A directed edge from process Pi to resource type Rj, denoted Pi → Rj, signifies that Pi has
requested an instance of resource type Rj and is currently waiting for it. It is called a request edge.
Rj → Pi signifies that an instance of resource type Rj has been allocated to Pi; it is called an assignment edge.
Pictorially, each process is represented as a circle, and each resource type as a rectangle. If a resource
type has more than one instance, each instance is represented as a dot within the rectangle.
If the graph contains no cycles, then no process in the system is deadlocked. If the graph contains a
cycle, then a deadlock may or may not exist.
If each resource type has exactly one instance, then a cycle implies a deadlock. If each resource type
has several instances, then a cycle does not necessarily imply a deadlock.

Fig 1: no cycle, no deadlock

Fig 2, Fig 3: graphs containing cycles

In Fig 2, two cycles exist:


1. P1→R1→P2→R3→P3→R2→P1
2. P2→R3→P3→R2→P2
Processes P1, P2 and P3 are deadlocked.
In Fig 3, the cycle
P1→R1→P3→R2→P1
exists, but there is no deadlock, because P4 may release its instance of R2, which can then be
allocated to P3, breaking the cycle.

Methods for handling deadlocks


The deadlock problem can be dealt with in one of 3 ways:
1. We can use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a
deadlocked state.
2. We can allow the system to enter a deadlocked state, detect it, and recover.
3. We can ignore the problem altogether and pretend that deadlocks never occur in the system.
The third option is used by most operating systems.

Deadlock Prevention


We can prevent the occurrence of deadlock by ensuring that at least one of the 4 deadlock conditions
cannot hold.
1. Mutual exclusion
The mutual exclusion condition must hold for non-sharable resources.
An example of a non-sharable resource is a printer; an example of a sharable resource is a read-only file.
A process never needs to wait for a sharable resource.
We cannot prevent deadlocks by denying this condition, because some resources are inherently
non-sharable.
2. Hold and Wait
To avoid deadlock, ensure that the hold-and-wait condition never occurs in the system.
One protocol requires each process to request and be allocated all its resources before it
begins execution, but resource utilization may be low with this approach.
An alternative protocol requires that before a process can request additional resources, it
must release all the resources it currently holds. This can lead to starvation.
3. No preemption
Ensure that this condition does not hold by using a protocol.
If a process is holding some resources and requests another resource that cannot be allocated to it
immediately, then all the resources the process is currently holding are preempted. The process will be
restarted only when it can regain all of its resources.
This approach cannot be applied to resources such as printers and tape drives.
4. Circular wait
To avoid circular wait, assign a unique integer number to each resource type.
Each process can request resources only in an increasing order of enumeration, as the sketch below illustrates.
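A small C sketch of this resource-ordering protocol, assuming pthreads mutexes stand in for
resources; the lock table and helper names are illustrative, not from the text:

/* Circular-wait prevention by global resource ordering: every
   process must acquire resources in increasing index order. */
#include <pthread.h>

#define NRES 3
pthread_mutex_t res[NRES];  /* resource i has the fixed number i;
                               initialize each with pthread_mutex_init() */

void acquire_pair(int a, int b)
{
    int lo = a < b ? a : b, hi = a < b ? b : a;
    pthread_mutex_lock(&res[lo]);   /* always lock the lower number first */
    pthread_mutex_lock(&res[hi]);
}

void release_pair(int a, int b)
{
    pthread_mutex_unlock(&res[a]);
    pthread_mutex_unlock(&res[b]);
}

Because every process orders its requests the same way, no cycle of waiting processes can form.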

Deadlock avoidance
Deadlock prevention results in low device utilization and reduced system throughput.
Deadlock avoidance algorithms require every process to declare in advance the maximum number of
resources of each type that it may need. Based on this information, we may decide whether a process
should wait for a resource or not, and thus avoid any chance of circular wait.
a. Safe state
If a system is already in safe state, we can try to stay away from an unsafe state and avoid
deadlock.
Deadlocks cannot be avoided in an unsafe state.
A system can be considered to be in a safe state if it is not in a state of deadlock and there exists
some order in which resources can be allocated to each process up to its maximum.
A safe sequence of processes and resource allocations ensures a safe state.
These algorithms do not allocate resources to a process if doing so would put the system in an unsafe
state.
With this method, resource utilization may be low, because whenever a process requests a resource
that is currently available, the system must decide whether the resource can be allocated immediately
or whether the process must wait. The request is granted only if the allocation leaves the system in a
safe state.

b. Resource allocation graph algorithm


A resource allocation graph is used to avoid deadlock. If there are no cycles in the graph, then there
are no deadlocks. If there are cycles, there may be a deadlock.
If there is only one instance of every resource, then a cycle implies a deadlock.
The vertices of the graph are resources and processes. The graph has request edges, assignment edges
and claim edges.
An edge from a process to a resource is a request edge (Pi → Rj).
An edge from a resource to a process is an assignment edge (Rj → Pi).
A claim edge denotes that a request may be made in the future and is represented as a dashed line
(Pi → Rj).
Based on claim edges, we can see if there is a chance of a cycle, and then grant a request only if the
system will remain in a safe state.
The resource allocation graph algorithm is not useful if there are multiple instances of a resource type.

c. Banker's algorithm
Banker's algorithm is a resource allocation and deadlock avoidance algorithm which tests every request
made by processes for resources.
It checks for a safe state: if the system remains in a safe state after granting the request, the request is allowed.
If granting the request would leave the system with no safe state, the request is not allowed.
Inputs to Banker's algorithm:
Maximum need of resources by each process
Resources currently allocated to each process
Maximum free available resources in the system
A request will only be granted under the conditions below:
The request made by the process is less than or equal to the remaining need of that process (request <= need)
The request made by the process is less than or equal to the freely available resources in the system (request <=
available)

Example
Total resources in system
A B C D
6 5 7 6
Available system resources are
A B C D
3 1 1 2
Processes (currently allocated resources)
A B C D
P1 1 2 2 1
P2 1 0 3 3
P3 1 2 1 0
Process ( maximum resources)
A B C D
P1 3 3 2 2
P2 1 2 3 4
P3 1 3 5 0

Need = Maximum Resources - Currently Allocated


Processes (Need Resources)

A B C D

P1 2 1 0 1
P2 0 2 0 1
P3 0 1 4 0

EXAMPLE 2
The resources in the system
A B C
10 5 7
Allocation Maximum
A B C A B C
P0 0 1 0 7 5 3
P1 2 0 0 3 2 2
P2 3 0 2 9 0 2
P3 2 1 1 2 2 2
P4 0 0 2 4 3 3

Available
A B C
3 3 2

Need
     A B C
P0   7 4 3
P1   1 2 2
P2   6 0 0
P3   0 1 1
P4   4 3 1

The system is currently in a safe state. Indeed, the sequence <P1, P3, P4, P2, P0> satisfies the safety
criteria.
Suppose P1 requests one additional instance of resource type A and 2 instances of type C, i.e.
Request1 = (1, 0, 2).

      Allocation   Need      Available
      A B C        A B C     A B C
P0    0 1 0        7 4 3     2 3 0
P1    3 0 2        0 2 0
P2    3 0 2        6 0 0
P3    2 1 1        0 1 1
P4    0 0 2        4 3 1

To decide whether this request can be granted immediately, we first check Request <= Available, i.e.
(1,0,2) <= (3,3,2), which is true.
The new state is safe: the sequence <P1, P3, P4, P0, P2> satisfies the safety criteria, so the request of
process P1 is granted (note that Available is now (2,3,0)).
In this new state, a request by P4 for (3,3,0) cannot be granted, since the resources are not available.
A request by P0 for (0,2,0) also cannot be granted, even though the resources are available, because
the resulting state would be unsafe.
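A sketch in C of the safety check at the heart of the Banker's algorithm, hard-coding the data of
Example 2 above purely for illustration (the function and matrix names are my own, not from the syllabus):

/* Safety check of the Banker's algorithm: try to find an order in
   which every process can finish with the currently free resources. */
#include <stdbool.h>
#include <stdio.h>

#define N 5   /* processes */
#define M 3   /* resource types */

bool is_safe(int avail[M], int alloc[N][M], int need[N][M])
{
    int work[M];
    bool finish[N] = { false };
    for (int j = 0; j < M; j++) work[j] = avail[j];

    for (int done = 0; done < N; ) {
        bool progressed = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool ok = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { ok = false; break; }
            if (ok) {                      /* pretend Pi runs to completion */
                for (int j = 0; j < M; j++) work[j] += alloc[i][j];
                finish[i] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;     /* no process can finish: unsafe */
    }
    return true;                           /* a safe sequence exists */
}

int main(void)
{
    int avail[M] = {3, 3, 2};
    int alloc[N][M] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    int need[N][M]  = {{7,4,3},{1,2,2},{6,0,0},{0,1,1},{4,3,1}};
    printf(is_safe(avail, alloc, need) ? "safe\n" : "unsafe\n");
}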

Deadlock Detection


In this approach, the OS doesn't apply any mechanism to avoid or prevent deadlocks; the system
accepts that a deadlock may occur.
In order to get rid of deadlocks, a deadlock detection algorithm and recovery algorithms are used.

a. Single instance of each Resource type.


• If all resource types have only a single instance, then we use a graph called a wait-for graph, which is
a variant of the resource allocation graph.
• Here vertices represent processes, and a directed edge from P1 to P2 indicates that P1 is waiting for a
resource held by P2.
• If the graph contains a cycle, it indicates a deadlock. So the system can maintain the wait-for graph
and check it for cycles periodically to detect any deadlock.
• It is not applicable to a system having multiple instances of a resource type.

Fig: Resource Allocation Graph    Fig: Wait-for Graph

b. Multiple instances of Resource type


It is similar to Banker’s algorithm
1. Let Work and Finish be vectors of length m and n respectively. Initialize Work = Available. For
i = 0, 1, ..., n-1, if Allocation_i ≠ 0, then Finish[i] = false; otherwise Finish[i] = true.
2. Find an index i such that both
   o Finish[i] == false
   o Request_i <= Work
   If no such i exists, go to step 4.
3. Work = Work + Allocation_i
   Finish[i] = true; go to step 2.
4. If Finish[i] == false for some i, then process Pi is deadlocked.
Example
The resources in system
A B C
7 2 6
      Allocation   Request   Available
      A B C        A B C     A B C
P0    0 1 0        0 0 0     0 0 0
P1    2 0 0        2 0 2
P2    3 0 3        0 0 0
P3    2 1 1        1 0 0
P4    0 0 2        0 0 2
The system is not in a deadlocked state: the sequence <P0, P2, P3, P1, P4> results in Finish[i] = true
for all i.
Suppose now that process P2 makes one additional request for an instance of type C. The Request
matrix is modified as follows.
REQUEST
A B C
P0 0 0 0
P1 2 0 2
P2 0 0 1
P3 1 0 0
P4 0 0 2
Now the system is deadlocked: although we can reclaim the resources held by P0, the requests of
processes P1, P2, P3 and P4 cannot be satisfied, so Finish[i] remains false for them.

Recovery from deadlock


If the system is in deadlock state, some methods for recovering it from the deadlock state must be
applied. There are 2 options for breaking a deadlock.
1. Process Termination or Killing
a. Abort all deadlocked processes
This method certainly breaks the deadlock cycle, but at great expense: all aborted processes must
execute again from the start.
It is fast, but a lot of process work is lost, leading to inefficiency.
b. Abort one process at a time until the deadlock cycle is eliminated
This involves more work to resolve a deadlock, since a detection algorithm must be re-invoked after
each abort.
A bigger problem is deciding which process to abort.
2. Resource Pre-emption
Pre-empt some resources from processes and give these resources to other processes until the deadlock
cycle is broken.

If preemption is used, then 3 issues need to be considered.
a. Selecting a victim
We must determine the order of preemption to minimize cost. Cost factors may include the
number of resources a deadlocked process is holding, the amount of time the process has
executed so far, and so on.
b. Rollback
Whenever a deadlock is detected, it is easy to see which resources are needed. To recover from the
deadlock, a process that owns a needed resource is rolled back to a point in time
before it acquired that resource, by restarting it from one of its earlier checkpoints.
c. Starvation
To avoid starvation, ensure that resources are not always preempted from the same process.

UNIT-3
MAIN MEMORY
➢ Main memory and registers built in the processor itself are the only storage that the CPU can
access directly.
➢ The machine instructions can take memory addresses as arguments, and none can take disk
addresses. If the data are not in memory, they must be moved there before the CPU can
operate on them.
➢ Registers built into the CPU are generally accessible within one cycle of the CPU clock.
➢ Data from main memory are accessible only after many cycles of the CPU clock.
➢ The protection of memory space is accomplished by CPU hardware using 2 registers: the base
register holds the smallest legal physical memory address, and the limit register specifies the size of
the range.

LOGICAL VS PHYSICAL ADDRESS SPACE


An address generated by the CPU is commonly referred to as a logical address or virtual
address. The compile time and load time address binding methods generate identical logical
and physical addresses. The set of all logical addresses generated by a program is a logical
address space.
An address seen by the memory unit, i.e. the one loaded into the memory-address register
of the memory, is referred to as a physical address. The set of all physical addresses corresponding
to these logical addresses is a physical address space.
Basis for comparison   Logical address                             Physical address
Basic                  The virtual address generated by the CPU    A location in a memory unit
Address space          The set of all logical addresses generated  The set of all physical addresses
                       by the CPU in reference to a program        mapped to the corresponding
                                                                   logical addresses
Visibility             The user can view the logical address       The user can never view the
                       of a program                                physical address of a program
Access                 The user uses the logical address to        The user cannot directly access
                       access the physical address                 the physical address
Generation             Generated by the CPU                        Computed by the MMU

SWAPPING
➢ Swapping is mechanisms in which a process can be swapped temporarily out of main memory
to secondary storage and make that memory available to other processes. At some later time,
the system swaps back the process from the secondary storage to main memory.
➢ In a multiprogramming environment with a round-robin scheduling algorithm, when each process
finishes its quantum, it will be swapped with another process.
➢ A swapping policy is also used for priority-based scheduling algorithms: if a higher priority process
arrives, the memory manager can swap out a lower priority process and then load and
execute the higher priority process.
➢ A process that is swapped out will be swapped back into the same memory space it occupied
previously. This restriction is dictated by the method of address binding.
➢ Swapping requires a backing store. The backing store is a fast disk. The system maintains a
ready queue consisting of all processes that are ready to run and whose images are on disk.
➢ Whenever the CPU scheduler decides to execute a process, it calls the dispatcher. The dispatcher
checks to see whether the next process in the queue is in memory. If it is not, and if there is no
free memory region, the dispatcher swaps out a process currently in memory and swaps in the
desired process.
➢ The context-switch time in such a swapping system is high. The major part of the swap time is
transfer time, which is directly proportional to the amount of memory swapped. If we want to
swap a process, we must be sure that it is completely idle, in particular that it has no pending I/O
operations.
➢ Most modern operating systems no longer use standard swapping, because it is too slow and there
are faster alternatives available.
Eg: paging

CONTIGUOUS MEMORY ALLOCATION


In contiguous memory allocation each process is contained in a single contiguous block of memory.
Memory is divided into several fixed size partitions. Each partition contains exactly one process.
When a partition is free, a process is selected from the input queue and loaded into free partition.
A free block of memory is known as a hole. The set of holes is searched to determine which hole is
best to allocate.

Memory protection
With relocation (base) and limit registers, each logical address must be less than the value in the limit
register. The MMU maps the logical address dynamically by adding the value in the relocation
register, and this mapped address is sent to memory.
The main aim of memory protection is to prevent a process from accessing memory that has not been
allocated to it.
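A minimal sketch of the base/limit check that the MMU performs in hardware; the function name and
trap handling here are illustrative only:

/* Sketch of the base/limit (relocation) check done by the MMU:
   every logical address is bounds-checked, then relocated. */
#include <stdio.h>
#include <stdlib.h>

unsigned long translate(unsigned long logical,
                        unsigned long base, unsigned long limit)
{
    if (logical >= limit) {          /* outside the process's space */
        fprintf(stderr, "trap: addressing error\n");
        exit(1);
    }
    return base + logical;           /* relocate into physical memory */
}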
Memory allocation can be done using different strategies:
1) First fit
Allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes
or at location where the previous first fit search ended. It is faster.

2) Best fit
Allocate the smallest hole that is big enough. We must search the entire list, unless the list is ordered
by size.
3) Worst fit
Allocate the largest hole. We must search the entire list, unless it is ordered by size.
First fit and best fit are better than worst fit in terms of storage utilization.

FRAGMENTATION
➢ Fragmentation occurs in a dynamic memory allocation system while using first fit or best fit
strategies.
➢ It occurs when most of the free blocks are too small to satisfy any request. It is generally
termed as inability to use the available memory.
➢ As processes are loaded and removed from memory, the free memory space is broken into little
pieces. As a result, enough total memory space may exist to satisfy a request, but it is not
contiguous: the memory is fragmented into a large number of small holes. This phenomenon is
known as external fragmentation.
➢ Memory fragmentation can be external as well as internal.
➢ At times the physical memory is broken into fixed size blocks and memory is allocated in unit
of block sizes. The memory allocated to a process may be slightly larger than the requested
memory. The difference between allocated and required memory is known as internal
fragmentation. The memory that is internal to a partition but is of no use.
➢ One solution to the problem of external fragmentation is compaction. The goal is to shuffle the
memory contents so as to place all free memory together in one large block. It is possible only
if relocation is dynamic.
Compaction algorithm produces one large hole of available memory.
It is expensive.
➢ Another solution is to permit the logical address space of processes to be non-contiguous thus
allowing a process to be allocated physical memory whenever such memory is available.

PAGING
➢ Paging is a memory management scheme that permits the physical address space of a process
to be non-contiguous. Paging avoids external fragmentation and the need for compaction. It also
solves the problem of fitting memory chunks of varying sizes onto the backing store.
➢ The basic method for implementing paging involves breaking physical memory into fixed-sized
blocks called frames and breaking logical memory into blocks of the same size called pages.
➢ The backing store is divided into fixed-sized blocks that are of the same size as the memory
frames.
➢ Paging is handled by hardware.
➢ Every address generated by the CPU is divided into 2 parts: a page number and a page offset.
The page number is used as an index into a page table. The page table contains the base
address of each page in physical memory.
➢ The page size is defined by the hardware. The size of a page is typically a power of 2, varying
between 512 bytes and 16 MB per page, depending on the computer architecture.
➢ If the size of the logical address space is 2^m and the page size is 2^n addressing units, then the
high-order m-n bits of a logical address designate the page number, and the n low-order bits
designate the page offset (see the sketch after this list).
➢ When we use a paging scheme, we have no external fragmentation. Any frame can be allocated
to a process that needs it. But we may have some internal fragmentation.
➢ To avoid internal fragmentation, small page sizes are desirable. But smaller pages mean more
pages, and hence more overhead in each page table.
➢ When a process arrives in the system to be executed, its size, expressed in pages, is examined.
Each page of the process needs one frame. Thus, if the process requires ‘n’ pages, at least ‘n’
frames must be available in memory. If ‘n’ frames are available, they are allocated to a process.
The first page of the process is loaded into one of the allocated frames, and the frame number
is put into the page table and so on.
➢ The user program views memory as one single space, containing only this one program. In fact,
the program is scattered throughout physical memory.
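The page-number/offset split described above can be sketched in C with shifts and masks, assuming a
4 KB page size (n = 12) purely for illustration:

/* Splitting a logical address into page number and offset. */
#include <stdio.h>

#define OFFSET_BITS 12                     /* page size = 2^12 = 4096 */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

int main(void)
{
    unsigned int logical = 0x12345;
    unsigned int page    = logical >> OFFSET_BITS;   /* high m-n bits */
    unsigned int offset  = logical &  OFFSET_MASK;   /* low n bits    */
    printf("page %u, offset %u\n", page, offset);    /* page 18, offset 837 */
}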

Frame Table
• The operating system manages the physical memory. So it must be aware of the allocation details
of physical memory-which frames are allocated, which frames are available, how many total
frames there are, and so on.
• This information is generally kept in data structure called a frame table.

• The frame table has one entry for each physical page frame, indicating whether the latter is free or
allocated and, if it is allocated, to which page of which process or processes.
• The OS also maintains a copy of the page table for each process.
Paging example for 32 byte memory with 4 byte pages

STRUCTURE OF THE PAGE TABLE


Most modern computer systems support a large logical address space (2^32 to 2^64). In such an
environment, the page table itself becomes excessively large, and we would not want to allocate the
page table contiguously in main memory.
Hence we divide the page table into smaller pieces. This can be done in several ways.

1) Hierarchical paging
➢ One way is to use 2-level paging algorithm, in which the page table itself is also paged.
➢ For example, consider a system with a 32-bit logical address space and a page size of 4 KB. A logical
address is divided into a page number consisting of 20 bits and a page offset consisting of 12 bits.
Because we page the page table, the page number is further divided into a 10-bit page number and a
10-bit page offset.
➢ Here p1 is an index into the outer page table and p2 is the displacement within the page of the
outer page table. Because address translation works from the outer page table inward, this scheme is
also known as a forward-mapped page table.

2. Hashed page tables
➢ A common approach for handling address spaces larger than 32 bits is to use a hashed page
table, with the hash value being the virtual page number.
➢ Each entry in the hash table contains a linked list of elements that hash to the same location.
➢ Each element consists of 3 fields: 1) the virtual page number, 2) the value of the mapped page
frame, 3) a pointer to the next element in the linked list.
➢ The virtual page number in the virtual address is hashed into the hash table. The virtual page
number is compared with field 1 in the first element in the linked list.
➢ If there is a match, the corresponding page frame is used to form the desired physical address.
If there is no match, subsequent entries in the linked list are searched for a matching virtual
page number.

3 .Inverted page tables


Each process has an associated page table. The page table has one entry for each page that the
process is using. Each page table may consist of millions of entries. These tables may consume
large amounts of physical memory just to keep track of how other physical memory is being
used.
➢ This problem can be solved by using inverted page table.
➢ It has one entry for each real page/frame of memory. Each entry consists of the virtual address
of the page stored in that real memory location; with information about the process that owns
the page.
➢ Thus there is only one page table in the system, and it has only one entry for each page of physical
memory.
➢ Each inverted page-table entry is a pair <process-id, page number>.
➢ This scheme decreases the amount of memory needed to store each page table, but it increases
the amount of time needed to search the table when a page reference occurs, because the table is
sorted by physical address while lookups occur on virtual addresses.
➢ This scheme also makes shared memory difficult to implement, because there is only one virtual
page entry for every physical page.

SEGMENTATION
➢ In operating systems, segmentation is a memory management technique in which, the memory
is divided into the variable size parts. Each part is known as segment which can be allocated to
a process.
➢ The details about each segment are stored in a table called the segment table. The segment table is
stored in one or more of the segments.
➢ The segment table contains mainly 2 pieces of information about each segment:
1. Base - the base address of the segment.
2. Limit - the length of the segment.
➢ Paging is a memory management technique that is closer to the OS than to the user. It
divides all processes into pages regardless of the fact that a process may have related parts or
functions which need to be loaded on the same page.
➢ The OS doesn't care about the user's view of the process. It may divide the same function into
different pages, and those pages may or may not be loaded at the same time into memory. This
decreases the efficiency of the system.
➢ It is better to have segmentation, which divides the process into segments. Each segment
contains the same type of content; for example, the main function can be included in one segment
and the library functions in other segments.
➢ The CPU generates a logical address which contains 2 parts:
1. Segment number 2. Offset
➢ The segment number is mapped to the segment table. The limit of the respective segment is
compared with the offset.
➢ If the offset is less than the limit then the address is valid otherwise it throws an error as the
address is invalid.
➢ In the case of valid address, the base address of the segment is added to the offset to get the
physical address of actual word in the main memory.
Advantages
1. No internal fragmentation.
2. The average segment size is larger than the actual page size.
3. Less overhead.
4. It is easier to relocate segments than the entire address space.
5. The segment table is of smaller size compared to the page table in paging.
Disadvantages
1. It can suffer from external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized partitions.
3. Costly memory management algorithms.

Paging Vs segmentation
Paging                                            Segmentation
Non-contiguous memory allocation                  Non-contiguous memory allocation
Divides the program into fixed-size pages         Divides the program into variable-size segments
The OS is responsible                             The compiler is responsible
Paging is faster than segmentation                Segmentation is slower
Closer to the OS                                  Closer to the user
Suffers from internal fragmentation,              Suffers from external fragmentation,
no external fragmentation                         no internal fragmentation
Logical address is divided into page number       Logical address is divided into segment number
and page offset                                   and segment offset
A page table maintains the page information      A segment table maintains the segment information
A page table entry has the frame number and       A segment table entry has the base address of the
flag bits with details about the page             segment and protection bits for the segment
Page size is specified by the hardware            Segment size is specified by the user

VIRTUAL MEMORY
A computer can address more memory than the amount physically installed on the system. This extra
memory is called virtual memory.
➢ Virtual memory is a space where large programs can store themselves in the form of pages during
their execution, while only the required pages or portions of processes are loaded into main
memory.
➢ This technique is useful, as a large virtual memory is provided for user programs even when only a
small physical memory is available.
➢ Most processes never need all their pages at once, for the following reasons:
- Error handling code is not needed unless that specific error occurs, and some errors
are quite rare.
- Arrays are often over-sized for worst-case scenarios, and only a small fraction of the
array is actually used in practice.
- Certain features of certain programs are rarely used.

Benefits of having virtual memory
1. Large programs can be written, as the virtual space available is huge compared to physical memory.
2. Less I/O is required, which leads to faster and easier swapping of processes.
3. More physical memory is available, as programs are stored in virtual memory and occupy very
little space in actual physical memory.
4. Since each user program takes less physical memory, more programs can be run at the same time,
which increases CPU utilization and throughput.

Virtual address space


➢ It refers to the logical view of how a process is stored in memory. In this view a process begins
at a certain logical address and exists in contiguous memory.
➢ Heap may grow upward in memory as it is used for dynamic memory allocation.
Stack may grow downward in memory through successive function calls.
➢ The large blank space (hole) between the heap and the stack is part of the virtual address space.
➢ Virtual memory involves the separation of logical memory from physical memory.
➢ Virtual memory allows files and memory to be shared by two or more processes through page
sharing, for example system libraries.

Demand paging
➢ Demand paging is a technique which is used in virtual memory systems. With this, pages
are loaded only when they are demanded during program execution; pages that are never
accessed are thus never loaded into physical memory.
➢ A demand-paging system is similar to a paging system with swapping, where processes reside in
secondary memory (disk).
➢ It can be termed a lazy swapper, because it never swaps a page into memory unless that page
will be needed. Since it deals with individual pages, pager is a more accurate term than
swapper.
➢ Initially, the pager loads only the pages which will be required by the process immediately,
instead of swapping in the whole process. This decreases the swap time and the amount of
physical memory needed.
➢ This scheme needs some hardware support to distinguish between valid and invalid pages. The
pages that are moved into memory are marked valid; the pages that are not in memory are
marked invalid in the page table.

Page fault
If a process tries to access a page that was not brought into memory, a page fault occurs: access to a
page marked invalid causes a trap to the OS.
When a page-fault trap is triggered, the following steps are followed.
Steps for handling a page fault
1. The memory address requested by the process is first checked, to verify whether the
reference was valid or invalid.
2. If it is invalid, the process is terminated. If it was valid but the page is not yet in memory, we continue.
3. We find a free frame
4. We schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, we modify the internal table kept with the process and the page
table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the trap. The process can now access the page as
though it has always been in memory.

Hardware for demand paging


1. Page table: this table has the ability to mark an entry invalid through a valid-invalid bit or
a special value of protection bits.
2. Secondary memory: it holds those pages that are not present in main memory.

Pure demand paging


➢ In the extreme case, we can start executing a process with no pages in memory. When the
OS sets the instruction pointer to the first instruction of the process, which is on a
non-memory-resident page, the process immediately faults for the page.
➢ After this page is brought into memory, the process continues to execute, faulting as
necessary until every page that it needs is in memory.
➢ At that point, it can execute with no more faults. This scheme is pure demand paging:
never bring a page into memory until it is required.

Page Replacement
➢ Page replacement is basic to demand paging.
➢ Page replacement takes the following approach.
If no frame is free, we find one that is not currently being used and free it. We can free a frame
by writing its contents to swap space and changing the page table to indicate that the page is no
longer in memory.
➢ We can now use the freed frame to hold the page for which the process faulted.

Steps for page replacement


1. Find the location of the desired page on the disk.
2. Find a free frame
a) If there is a free frame, use it
b) If there is no free frame, use a page replacement algorithm to select a victim frame.
c) Write the victim frame to the disk, change the page and frame tables accordingly.
3. Read the desired page into the newly freed frame; change the page and frame tables
4. Restart the user process.

Page Replacement Algorithms
Reference string
We evaluate an algorithm by running it on a particular string of memory references and computing
the number of page faults.
We can generate a reference string artificially, for example by using a random-number generator, or we
can trace a given system and record the address of each memory reference.
The latter choice produces a large amount of data. To reduce it, we note 2 things:
➢ For a given page size, we need to consider only the page number, not the entire address.
➢ If we have a reference to a page p, then any immediately following references to page p will
never cause a page fault, since page p will be in memory after the first reference.
Ex: for the sequence of addresses 123, 215, 600, 1234, 76, 96 with page size 100, the reference string
is 1, 2, 6, 12, 0, 0.
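This conversion is just integer division by the page size, as the following sketch shows for the example
above:

/* Deriving the reference string from an address trace (page size 100). */
#include <stdio.h>

int main(void)
{
    int addr[] = {123, 215, 600, 1234, 76, 96};
    int n = sizeof addr / sizeof addr[0];
    for (int i = 0; i < n; i++)
        printf("%d ", addr[i] / 100);   /* prints: 1 2 6 12 0 0 */
    printf("\n");
}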

FIFO Page Replacement


➢ This is the simplest page replacement algorithm: the OS keeps track of all pages in memory in a
queue, with the oldest page at the front of the queue.
➢ When a page needs to be replaced, the page at the front of the queue is selected for removal.
➢ It is easy to understand and program.
➢ Its performance is not always good.
➢ A bad replacement choice increases the page-fault rate and slows process execution.
➢ This algorithm suffers from Belady's anomaly.
Ex:
Reference string 1, 3, 0, 3, 5, 6 and 3 page slots.
➢ Initially all slots are empty, so when 1, 3, 0 arrive they are allocated to the empty slots → 3 page
faults.
➢ When 3 comes, it is already in memory → 0 page faults.
➢ When 5 comes, it is not available in memory, so it replaces the oldest page, i.e. 1 → 1
page fault.
➢ Finally 6 comes; it is also not available in memory, so it replaces the oldest page, i.e. 3 → 1
page fault.
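A sketch in C of FIFO replacement on this reference string, counting the faults (5 in total, matching
the tally above):

/* FIFO page replacement with 3 frames. */
#include <stdio.h>

#define FRAMES 3

int main(void)
{
    int ref[] = {1, 3, 0, 3, 5, 6};
    int n = sizeof ref / sizeof ref[0];
    int frame[FRAMES], next = 0, faults = 0;
    for (int i = 0; i < FRAMES; i++) frame[i] = -1;  /* empty slots */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < FRAMES; j++)
            if (frame[j] == ref[i]) hit = 1;
        if (!hit) {
            frame[next] = ref[i];        /* evict the oldest page */
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    printf("page faults: %d\n", faults); /* prints 5 for this string */
}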


Belady’s anomaly
It is a phenomenon, in which increasing the number of page frames results in an increase in the
number of page faults for certain memory access patterns
This phenomenon is commonly experienced with the first-in first-out (FIFO) page
replacement algorithm.

Optimal Page Replacement


➢ The optimal page replacement algorithm replaces the page that will not be used for the longest
period of time.
➢ It has been called OPT or MIN.
➢ It results in the lowest page-fault rate.
➢ It never suffers from Belady's anomaly.
➢ It is roughly twice as good as FIFO replacement.
➢ But it is difficult to implement, because it requires future knowledge of the reference string.

LRU Page Replacement
➢ This algorithm replaces the page that has not been used for the longest period of time.
➢ LRU associates with each page the time of that page's last use.
➢ When a page must be replaced, LRU chooses the page that has not been used for the longest
period of time.
➢ It requires substantial hardware assistance.
➢ It never suffers from Belady’s anomaly.
➢ It can be implemented using counters or stack.
➢ A drawback is that to identify the page to replace, you need to find the minimum timestamp
value, as the sketch below illustrates.
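A sketch of a timestamp-based LRU simulation; the reference string here is illustrative:

/* LRU page replacement with per-frame timestamps: on a fault,
   evict the frame with the smallest (oldest) last-use time. */
#include <stdio.h>

#define FRAMES 3

int main(void)
{
    int ref[] = {1, 3, 0, 3, 5, 6, 3};
    int n = sizeof ref / sizeof ref[0];
    int frame[FRAMES], last[FRAMES], faults = 0;
    for (int i = 0; i < FRAMES; i++) { frame[i] = -1; last[i] = -1; }

    for (int t = 0; t < n; t++) {
        int hit = -1, victim = 0;
        for (int j = 0; j < FRAMES; j++) {
            if (frame[j] == ref[t]) hit = j;
            if (last[j] < last[victim]) victim = j;  /* oldest use */
        }
        if (hit >= 0) last[hit] = t;                 /* refresh timestamp */
        else { frame[victim] = ref[t]; last[victim] = t; faults++; }
    }
    printf("page faults: %d\n", faults);
}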

Second Chance Algorithm


➢ This is a simple modification of FIFO that keeps a reference bit associated with each entry in the
page table.
➢ When a page has been selected, we inspect its reference bit.
➢ If the value is 0, we proceed to replace this page; but if the reference bit is set to 1, we give the
page a second chance and move on to select the next FIFO page.
➢ When a page gets a second chance, its reference bit is cleared, and its arrival time is reset to the
current time.
➢ Thus, a page that is given a second chance will not be replaced until all other pages have been
replaced or given second chances.
➢ It can be implemented using a circular queue.
➢ It suffers from Belady's anomaly.

Counting Based Page Replacement


This approach keeps a counter of the number of references that have been made to each page, and
develops the following 2 schemes.
a) LFU (Least Frequently Used) page replacement algorithm
➢ It requires that the page with the smallest count be replaced.
➢ The reason for this selection is that an actively used page should have a large reference count.
➢ LFU is a type of cache algorithm used to manage memory within a computer whenever the cache
overflows.
➢ In LFU, we check the age of a page as well as its frequency; if the frequency of a page is larger
than that of the old page, we do not remove it.
➢ If all old pages have the same frequency, we fall back to FIFO and remove the oldest of those
pages.

b) MFU (Most Frequently Used) page replacement


It can perform better than LFU in some cases.
MFU is based on the argument that the page with the smallest count was probably just brought in
and has yet to be used.

Page buffering algorithms


➢ To get a process started quickly, keep a pool of free frames.
➢ On a page fault, select a page to be replaced.
➢ Write the new page into a frame from the free pool, update the page table, and restart the process.
➢ Then write the dirty victim page out to disk and place the frame holding the replaced page into the
free pool.
➢ It is used in some systems along with the FIFO replacement algorithm.
➢ When the FIFO algorithm mistakenly replaces a page that is in active use, that page is quickly
retrieved from the free-frame pool and no I/O is necessary.

Allocation of Frames
➢ Virtual memory is implemented using demand paging. Demand paging necessitates the
development of a page replacement algorithm and a frame allocation algorithm.
➢ Frame allocation algorithms are used when there are multiple processes; they decide how
many frames to allocate to each process.
➢ You cannot allocate more than the total number of available frames.
➢ At least a minimum number of frames must be allocated to each process.
➢ If fewer frames are allocated, the page-fault rate increases, slowing the process's execution.
➢ There should be enough frames to hold all the different pages that any single instruction
can reference.

Frame Allocation Algorithms


2 algorithms commonly used to allocate frames to processes are:
1) Equal allocation
In a system, with ‘m’ frames and ‘n’ processes, each process gets equal number of frames i.e. m/n.

For instance, if there are 93 frames and 5 processes, each process will get 18 frames. The 3 leftover
frames can be used as a free-frame buffer pool.
Disadvantage
In systems with processes of varying sizes, it does not make much sense to give each process equal
frames.
Allocating a large number of frames to a small process leads to the wastage of a large number of
allocated but unused frames.

2) Proportional allocation
Frames are allocated to each process according to the process size.
For process pi of size si, the number of allocated frames is ai = (si / S) × m,
where m is the number of frames in the system
and S is the sum of the sizes of all the processes.
Ex:
Let m = 62 frames, the size of process 1 be 10 KB (10 pages), the size of process 2 be 127 KB, and
the page size be 1 KB.
The number of frames allocated to p1 is (10/137) × 62 ≈ 4.
The number of frames allocated to p2 is (127/137) × 62 ≈ 57.
In this way the processes share the available frames according to their needs rather than equally.
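The same arithmetic in a short C sketch, using the numbers from the example (integer division
truncates, matching the approximations above):

/* Proportional frame allocation: a_i = (s_i / S) * m. */
#include <stdio.h>

int main(void)
{
    int size[] = {10, 127}, m = 62, S = 0;
    int n = sizeof size / sizeof size[0];
    for (int i = 0; i < n; i++) S += size[i];
    for (int i = 0; i < n; i++)
        printf("process %d gets %d frames\n", i + 1, size[i] * m / S);
    /* prints 4 and 57 frames respectively */
}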

Global Vs Local Allocation


The number of frames allocated to a process can change dynamically, depending on whether global
replacement or local replacement is used for replacing pages in case of a page fault.

a) Global Replacement
It allows a process to select a replacement frame from the set of all frames, even if that frame is
currently allocated to some other process. That is, one process can take a frame from another.
➢ It does not hinder a process, results in greater system throughput, and is the more common
method.
➢ A problem with it is that a process cannot control its own page-fault rate.
➢ The set of pages in memory for a process depends not only on the paging behaviour of that
process but also on the paging behaviour of other processes.
➢ A higher priority process can take frames from lower priority processes.

b) Local Replacement
When a process needs a page which is not in memory, it can bring in the new page and allocate
it a frame from its own set of allocated frames only.
➢ In this case, the set of pages in memory for a process is affected by the paging behaviour of only
that process.
➢ It can hinder a process by not making available to it other, less-used pages of memory.

Thrashing
A process that is spending more time paging than executing is said to be thrashing; i.e., very high
paging activity is called thrashing.
➢ When a process doesn't have enough frames to hold all the pages it needs for execution, it
swaps pages in and out very frequently in order to keep executing. Sometimes, pages which will
be required in the near future have to be swapped out.
➢ Initially, when CPU utilization is low, the process scheduling mechanism increases the
degree of multiprogramming by loading new processes into memory.
➢ The new processes get started by taking frames from running processes if the system is
using a global replacement algorithm.
➢ This may cause more page faults and a longer queue for the paging device.
➢ As a result, CPU utilization drops even further, and the CPU scheduler tries to increase the
degree of multiprogramming still more.
➢ Thrashing has occurred.
➢ Page fault rate increases tremendously. So effective memory access time increases. No work
is getting done, because the processes are spending all their time paging.
➢ To increase CPU utilization and stop thrashing, we must decrease the degree of multi
programming.
➢ We can limit the effect of thrashing by using local replacement algorithm.
➢ If processes are thrashing, they will be in the queue for the paging device most of the time.
The average service time for a page fault will increase.
➢ The effective access time will increase even for a process that is not thrashing.
➢ To prevent thrashing, we must provide a process with as many frames as it needs.
➢ The working-set strategy starts by looking at how many frames a process is actually using.
This approach is based on the locality model of process execution.

Working-set model
➢ This approach defines the locality model of process execution. To prevent thrashing, a process
should be provided with as many frames as it needs.
The working-set model is one of the techniques used to know how many frames a process is using.
➢ The locality model states that as a process executes, it moves from locality to locality. A locality is
a set of pages that are actively used together. A program is composed of several different
localities, which may overlap.
➢ For example, when a function is called, it defines a new locality; when we exit the function, the
process leaves that locality.
➢ We can allocate enough frames to a process to accommodate its current locality. It will fault for
the pages in its locality until these pages are in memory; then it will not fault again until it
changes localities.
➢ If we do not allocate enough frames to accommodate the size of the current locality, the process
will thrash, since it cannot keep in memory all the pages that it is actively using.

Page fault frequency
• Thrashing has a high page-fault rate. When the rate is too high, we know that the process needs
more frames. If the page-fault rate is too low, then the process may have too many frames.
• We can establish upper and lower bounds on the desired page-fault rate.
• If the actual page-fault rate exceeds the upper limit, we allocate the process another frame.
• If the page-fault rate falls below the lower limit, we remove a frame from the process. Thus we
can directly measure and control the page-fault rate to prevent thrashing.
• As with the working-set strategy, we may have to suspend a process: if the page-fault
rate increases and no free frames are available, we must select some process and suspend it.

UNIT- 4

MASS STORAGE STRUCTURE


Magnetic Disk
➢ Magnetic disks provide the bulk of secondary storage for modern computer systems. Each disk
platter has a circular shape like a CD.
➢ The 2 surfaces of a platter are covered with a magnetic material; we store information by recording
it magnetically on the platters.
➢ A read-write head flies just above each surface of every platter. The heads are attached to a disk
arm that moves all the heads as a unit.
➢ The surface of a platter is logically divided into circular tracks, which are subdivided into sectors.
The set of tracks that are at one arm position makes up a cylinder.
➢ The storage capacity of common disk drives is measured in gigabytes.
➢ When the disk is in use, a drive motor spins it at high speed. Disk speed has 2 parts: the transfer
rate is the rate at which data flow between the drive and the computer, and
the positioning time (or random-access time) consists of the time necessary to move the disk arm
to the desired cylinder, called the seek time, plus the time for the desired sector to rotate under the
head, called the rotational latency.
➢ A disk can be removable or fixed. Removable disks consist of one platter.
➢ A disk drive is attached to a computer by a set of wires called an I/O bus. The data transfers on a
bus are carried out by special electronic processors called controllers: the host controller and the
disk controller.
➢ The number of sectors per track has been increasing as disk technology improves.
Sector 0 is the first sector of the first track on the outermost cylinder.
➢ Computers access disk storage in 2 ways. One way is via I/O ports (or host-attached storage); this
is common on small systems.
The other way is via a remote host in a distributed file system; this is referred to as network-attached
storage.

Disk scheduling
The disk bandwidth is the total number of bytes transferred, divided by the total time between the first
request for service and the completion of the last transfer.
• We can improve both the access time and the bandwidth by managing the order in which disk
I/O requests are serviced.
• Whenever a process needs I/O to or from the disk, it issues a system call to the OS. The request
specifies
o whether this operation is input or output
o what the disk address for the transfer is
o what the memory address for the transfer is
o what the number of sectors to be transferred is
• If the desired disk drive and controller are available, the request can be serviced immediately.
If the drive /controller are busy, any new requests for service will be placed in the queue of
pending requests for that drive.
• For a multiprogramming system with many processes the disk queue may often have several
pending requests.
• The main purpose of a disk scheduling algorithm is to select a disk request from the queue of I/O
requests and decide when this request will be serviced.
• The goals of a disk scheduling algorithm are fairness, high throughput, and minimal head-travel time.
1. FCFS scheduling (First come first served)
➢ It is the simplest form of disk scheduling algorithm.
➢ It services the I/O requests in the order in which they arrive.
➢ There is no starvation: every request is serviced, so it is fair.
Disadvantages
➢ Does not optimize the seek time
➢ Does not provide fastest service
➢ May not provide best possible service.
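A sketch of computing the total head movement under FCFS; the request queue and the starting head
position are illustrative values, not from the text:

/* FCFS disk scheduling: requests serviced strictly in arrival order. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int queue[] = {98, 183, 37, 122, 14, 124, 65, 67};
    int n = sizeof queue / sizeof queue[0];
    int head = 53, movement = 0;     /* assumed starting head position */

    for (int i = 0; i < n; i++) {
        movement += abs(queue[i] - head);
        head = queue[i];
    }
    printf("total head movement: %d cylinders\n", movement); /* 640 */
}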

2. SSTF Scheduling (Shortest seek Time First)


This algorithm selects the request with the least seek time from the current head position, since seek
time increases with the number of cylinders traversed by the head.
SSTF chooses the pending request closest to the current head position.
It reduces the total seek time compared to FCFS and thus improves performance.
Disadvantages
➢ It may cause starvation for some requests
➢ It is not optimal.
➢ Overhead to calculate seek time in advance.

3. SCAN Scheduling
➢ In this, the disk arm starts at one end of the disk and moves towards the other end, servicing
requests as it reaches each cylinder, until it gets to the other end of the disk.
➢ At the other end, the direction of head movement is reversed, and servicing continues.
➢ The head continuously scans back and forth across the disk. It is also called the elevator algorithm.
➢ It gives high throughput and a good average response time.
Disadvantage
Long waiting time for requests for locations just visited by disk arm.

4 .C-SCAN Scheduling (Circular-SCAN)


• In the SCAN algorithm the disk arm re-scans the path that has just been scanned after reversing its
direction, so it may be that many requests are waiting at the other end, while zero or few requests
are pending in the recently scanned area.
• C-SCAN moves the head from one end of the disk to the other, servicing requests along the
way.
• When the head reaches the other end, it immediately returns to the beginning of the disk without
servicing any requests on the return trip.
• This algorithm treats the cylinders as a circular list that wraps around from the final cylinder to
the first one.
SCAN and C-SCAN perform better for systems that place a heavy load on the disk.
➢ There is less of a starvation problem, and C-SCAN provides a more uniform waiting time compared to SCAN.


5. Look scheduling
It is like the SCAN scheduling algorithm, except that the arm of the disk stops moving inwards
(or outwards) when no more requests exist in that direction.
This algorithm tries to overcome the overhead of the SCAN algorithm, which forces the disk arm to
move in one direction till the end regardless of whether any requests exist in that direction or not.

6. C-Look scheduling
The C-Look algorithm is similar to the C-SCAN algorithm. In this, the arm of the disk moves
outwards servicing requests until it reaches the highest request cylinder; then it jumps to the lowest
request cylinder without servicing any requests, and then it again starts moving outwards servicing the
remaining requests.
➢ In contrast, C-SCAN forces the disk arm to move to the last cylinder regardless of whether any
request is to be serviced on that cylinder or not.

RAID
➢ RAID refers to a variety of disk-organization techniques. RAID stands for Redundant Arrays of
Independent Disks.
Originally, the term RAID was defined as Redundant Array of Inexpensive Disks.
➢ RAID is a way of storing the same data in different places on multiple hard disks to protect
data in the case of a drive failure.
➢ RAID provides higher reliability and a higher data-transfer rate. The main methods of storing
data in a RAID are mirroring and striping.
➢ If we store only one copy of data, then each disk failure will result in the loss of a significant amount
of data. The solution to the problem of reliability is to introduce redundancy. The simplest way of
introducing redundancy is to duplicate every disk. This technique is called mirroring.
➢ To the OS, the array of disks can be presented as a single disk: with mirroring, 2 physical disks
appear as one logical disk. Every write is carried out on both disks. If one of the disks fails, the
data can be read from the other. Mirroring provides high reliability, but it is expensive.
➢ With multiple disks, we can also improve the transfer rate by striping data across the
disks.
➢ Striping means splitting the flow of data into bits or blocks of a certain size and writing them onto
multiple disks. Splitting the bits of each byte across multiple disks is called bit-level striping;
every disk then participates in every read/write access.
➢ In block-level striping, the blocks of a file are striped across multiple disks; this is the most
common form. Striping results in parallelism and increased throughput, and reduces the response
time of large accesses, but by itself it lowers reliability.
➢ Parity is a storage technique which utilizes striping and checksum methods. A certain parity
function is calculated for the data blocks. If a drive fails, the missing block is recalculated from the
checksum, providing the RAID with fault tolerance.
➢ RAID can be created using hardware or software. Software RAID is the cheapest and is part of
OS.
RAID Levels
Selecting a suitable RAID level for an application depends on the following:
Reliability - how many disk faults can the system tolerate?
Availability - what fraction of the total time is the system up and usable?
Performance - how good is the response time? How high is the throughput?
Capacity - how much useful capacity is available to the user?

Levels
There are different RAID levels, each optimized for a specific situation. RAID can be classified into
different levels based on its operation and the level of redundancy provided.
RAID Level 0 (Striping)
➢ Blocks are striped across disks. Instead of placing just one block onto one disk at a time, we can
work with multiple disks.
➢ Does not provide any kind of redundancy: it has no mirroring and no parity.
➢ Not fault tolerant.
➢ Provides high performance.
➢ Easy to implement.
➢ Reliability is 0.
➢ The entire disk space is used.
➢ Minimum number of disks: 2.
RAID Level 1 (Mirroring)
➢ It makes heavy use of mirroring: all data on one drive is duplicated to another drive.
➢ Striping and parity are not used.
➢ It provides reliability.
➢ Only half the space is used to store data; the other half is just a mirror of the already stored
data. You need at least 2 drives.
➢ Improves read performance.
➢ Simple technology.
➢ A software RAID 1 solution does not allow a hot swap of a failed drive; that is, the failed drive can
only be replaced after powering down the computer it is attached to.
RAID Level 2
➢ It is also known as memory-style error-correcting-code (ECC) organization. Errors are
detected by using parity bits.
➢ ECC stores 2 or more extra bits and can reconstruct the data if a single bit is damaged.
➢ Bit-level striping is used.
➢ Level 2 is not used in practice.
➢ Minimum number of disks: 2.
RAID Level 3 (Bit-interleaved parity organization)
➢ Improved version of level 2.
➢ Whereas in level 2 the memory system detects errors, in level 3 the disk controllers can detect the
errors.
➢ Only a single parity bit is used for error correction and detection, so it has reduced storage
overhead.
➢ Level 3 is less expensive, as it requires fewer extra disks.
➢ RAID level 3 supports fewer I/Os per second, since every disk has to participate in every I/O
request.
➢ It is expensive in computing and writing the parity.
➢ Best for single-user systems with long-record applications.
➢ Data recovery is accomplished by calculating the XOR of the information on the other devices.
RAID Level 4 (Block-interleaved parity organization)
➢ Uses block-level striping instead of bit-level striping.
➢ A parity block is kept on a separate disk for the corresponding blocks from the N other disks.
➢ It allows recovery from at most 1 disk failure. If more than one disk fails, there is no way to
recover the data, so reliability is 1.
➢ For a given set of N disks, one disk is reserved for storing the parity and N-1 disks are available
for data storage.
➢ The data-transfer rate for each access is slower.

RAID Level 5 (Block-interleaved distributed parity)


➢ It is the most common secure RAID level.
➢ It requires a minimum of 3 drives and can support up to 32.
➢ The parity data are not written to a fixed drive; they are spread across all drives.
➢ For each block, one of the disks stores the parity and the others store data.
➢ For example, with an array of 5 disks, the parity for the nth block is stored in disk (n mod 5) + 1;
the nth blocks of the other 4 disks store the actual data for that block.
➢ A parity block cannot store parity for blocks on the same disk.

➢ The read rate is much better than the write rate, because reads can be served at the combined rate
of all the disks used.
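A tiny sketch of the XOR parity idea behind RAID levels 3 to 6: the parity block is the XOR of the
data blocks, so any one lost block can be rebuilt from the rest (the byte values are illustrative):

/* XOR parity: parity = d0 ^ d1 ^ d2, and any single lost block
   equals the XOR of all remaining blocks plus the parity. */
#include <stdio.h>

int main(void)
{
    unsigned char d0 = 0x5A, d1 = 0x3C, d2 = 0xF0;
    unsigned char parity = d0 ^ d1 ^ d2;          /* written to the parity disk */

    unsigned char rebuilt_d1 = d0 ^ d2 ^ parity;  /* recover a failed disk */
    printf("recovered: 0x%02X (expected 0x3C)\n", rebuilt_d1);
}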
RAID Level 6 (P+Q redundancy scheme)
➢ Stores extra redundant information to guard against multiple disk failures. Double distributed
parity is used.
➢ This is a complex technology; rebuilding an array in which one drive has failed can take a long
time.
➢ It can sustain 2 drive failures instead of 1.
➢ Uses block-level striping.
➢ Minimum number of disks: 4.
➢ Because of the overhead of parity, performance is lower for write-heavy workloads.
RAID 10
➢ It is a combination of RAID 1 and RAID 0.
➢ It combines redundancy and increased performance, suitable where both high
performance and security are required.
➢ Minimum number of disks: 4. Fault tolerant.
➢ Half of the storage capacity goes to mirroring.
Hot swapping
Hot swapping is a term used to describe the ability to replace a failed disk drive without rebooting the
machine. Hot swapping enables you to replace a component without interrupting the normal operation
of a server machine.

FILE:
• A file is a named collection of related information that is recorded on secondary storage such as
magnetic disk or tape.
• From the user's perspective, a file is the smallest allotment of logical secondary storage; that is, data
cannot be written to secondary storage unless they are within a file.
• Files represent programs and data.
• Files may be free-form, such as text files, or may be formatted rigidly.
• A file is a sequence of bits, bytes, lines or records whose meaning is defined by the file's creator and
user.
• Many different types of information may be stored in a file.
• A file has a certain defined structure, which depends on its type.
• A text file is a sequence of characters organized into lines.
• A source file is a sequence of subroutines and functions, each of which is further organized as
declarations followed by executable statements.
• An object file is a sequence of bytes organized into blocks understandable by the system linker.
• An executable file is a series of code sections that the loader can bring into memory and execute.
• The information about all files is kept in the directory structure, which also resides on secondary
storage.
FILE NAME:
• A file is named for the convenience of its users and is referred to by its name.
• A name is usually a string of characters.
• Some systems differentiate between uppercase and lowercase characters in names, and some
systems do not.
• When a file is named, it becomes independent of the process, the user and even the system that created
it.
• A name should begin with a letter.

FILE ATTRIBUTES:
File attributes vary from one operating system to another.
Name: the symbolic file name is the only information kept in human-readable form.
Identifier: a unique tag, usually a number, that identifies the file within the file system.
Type: this is needed for systems that support different types of files.
Location: a pointer to a device and to the location of the file on that device.
Size: the current size of the file.
Protection: access-control information that determines who can do reading, writing and executing.
Time, date and user identification: this information may be kept for creation, last modification and
last use. These data are useful for protection, security and usage monitoring.

FILE TYPES:
• File type refers to the ability of the operating system to distinguish different types of files, such as text
files, source files, binary files etc.
• Many operating systems support many types of files.
• A common technique for implementing file types is to include the type as part of the file name.
o Ex: first.java
• In this way, the user and the OS can tell from the name alone what the type of a file is and what
operations can be done on that file.
• Operating systems like MS-DOS and UNIX have the following types of files.

Ordinary files:
1. These are files that contain user information.
2. These may contain text, databases or executable programs.
3. The user can apply various operations on such files like add, modify, delete etc.
Directory files
These files contain list of file names and other information related to these files.
Special files:
1. These files are also known as device files.
2. These files represent physical devices like disks, printers, terminals etc.

FILE STRUCTURE
File types can be used to indicate the internal structure of the file.
• Source and object files have structures that match the expectations of the programs that process them.
• Certain files must conform to a required structure that is understood by the OS. Ex: executable files.
• If an OS supports multiple file structures, the size of the OS is large, because it needs to contain the
code to support all of these file structures.

• Some OSs support a minimal number of structures. All operating systems must support at least
one structure, the executable file, so that the system is able to load and run programs.

It is useful for an operating system to support structures that will be used frequently, since that saves
programmer effort.
Too few structures make programming inconvenient, whereas too many overburden the operating
system and confuse the programmer.
• Internally, locating an offset within a file is done by defining a block size. All disk I/O is
performed in units of one block, and all blocks are the same size.
• The UNIX operating system defines all files to be simply streams of bytes. Each byte is
individually addressable by its offset from the beginning or end of the file.

FILE OPERATIONS:
Files are used to store information for later use. There are many file operations that can
be performed by the operating system. Some of them are:
1. Creating a file:
Creating a file needs 2 steps. First, space must be found for the file in the file system. Second, an entry for
the new file must be made in the directory.

2. Writing a file:
To write a file, we make a system call specifying both the name of the file and the information to be written
to the file. The system searches the directory to find the file's location. The system keeps a write pointer to
the location in the file where the next write is to take place.
3. Reading a file:
To read a file, we use a system call that specifies the name of the file and where in memory the next
block of the file should be put.
The system searches the directory to find the file's location on disk.
The system keeps a read pointer to the location in the file where the next read is to take place.
4. Repositioning within the file:
The directory is searched for the appropriate entry, and the current-file-position pointer is repositioned
to a given value.
It need not involve any actual I/O.
This operation is also known as a file seek.
5. Deleting a file:
To delete a file, we search the directory for the named file. If it is found, we release all file space, so that it
can be reused by other files, and erase the directory entry.
6. Truncating a file:
This operation deletes/erases the contents of the file but keeps its attributes.
The file length is reset to zero and its file space is released.
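These six operations map directly onto system calls on a UNIX-like system. The following is a minimal sketch using POSIX calls, assuming a Unix environment; the file name demo.txt is an invented example.

#include <fcntl.h>      // open
#include <unistd.h>     // write, read, lseek, ftruncate, close, unlink

int main()
{
    // create: find space and make a directory entry for the new file
    int fd = open("demo.txt", O_CREAT | O_RDWR, 0644);
    if (fd < 0) return 1;

    write(fd, "hello", 5);        // write: advances the write pointer

    lseek(fd, 0, SEEK_SET);       // reposition (file seek): no actual I/O
    char buf[6] = {0};
    read(fd, buf, 5);             // read: advances the read pointer

    ftruncate(fd, 0);             // truncate: erase contents, keep attributes
    close(fd);
    unlink("demo.txt");           // delete: erase the directory entry
    return 0;
}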

FILE ACCESS METHODS


Files store information. When it is used, this information must be accessed and read into computer
memory. The information in a file can be accessed in several ways.
1. Sequential access:
• Information in the file is processed in order, one record after the other.
• This is the most common and simple method.
• A read operation causes a pointer to be moved ahead by one.
• A write operation allocates space for the record and moves the pointer to the new end of file.
• It is suitable for tapes.
• Editors and compilers usually access files in this way.

2. Direct access or relative access:
• This method is useful for disks; it allows random access to any file block.
• The file is viewed as a numbered sequence of blocks or records.
• There are no restrictions on the order of reading or writing.
• It is useful for immediate access to large amounts of information. Databases use this type of
accessing.
• The block number is a relative block number, i.e., an index relative to the beginning of the file.
• Thus the first relative block is 0, the next is 1, and so on.

3. Indexed sequential method:


• It involves the construction of an index for the file. The index is like the index of a book.
• The index contains pointers to the various blocks.
• To find a record in the file, we first search the index and then use the pointer to access the file
directly and to find the desired record.
• We can use a binary search of the index.
• This method allows searching a large file while doing little I/O.
• It is built on top of sequential access (see the sketch below).
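A sketch of the idea in C++ (the index entries and keys are invented for illustration): binary-search the in-memory index, then read only the one data block the pointer names.

#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

struct IndexEntry { int firstKey; int blockNo; };

int main()
{
    // index: lowest key held in each data block, kept sorted
    vector<IndexEntry> index = {{10,0},{40,1},{70,2},{100,3}};
    int wanted = 55;

    // binary search: first entry whose firstKey exceeds the wanted key...
    auto it = upper_bound(index.begin(), index.end(), wanted,
        [](int k, const IndexEntry& e){ return k < e.firstKey; });
    // ...then step back to the block that must contain the key
    int block = (--it)->blockNo;

    cout << "key " << wanted << " is in block " << block << endl;  // block 1
    return 0;
}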

Directory and disk structure:


Storage structure:
• Files are stored on storage devices such as hard disks and optical disks.
• A storage device can be used in its entirety for a file system.
• A hard disk can be divided into a number of partitions of different sizes. Partitions are also
known as slices or minidisks.
• A file system can be created on each of these parts of the disk. Any entity containing a file system is
generally known as a volume.
• The volume may be a subset of a device, a whole device, or multiple devices linked together into a
RAID set. Each volume can be thought of as a virtual disk.
• Volumes can also store multiple operating systems, allowing a system to boot and run more than one
operating system.
• Each volume that contains a file system must also contain information about the files in the
system. This information is kept in entries in a device directory or volume table of contents.
• The device directory records information such as name, location, size and type for all files on that
volume.

Operations on directory:
A directory can be defined as a listing of the related files on the disk. The directory may store some or
all of the file attributes.
A directory can be viewed as a file which contains the metadata of a bunch of files.
1. Search a file:
We need to be able to search a directory structure to find the entry for a particular file
2. Create a file:
New files need to be created and added to the directory

3. Delete a file:
When a file is no longer needed, we want to be able to remove it from the directory

4. List a directory:
We need to be able to list the files in a directory and the contents of the directory entry for each file in
the list
5. Rename a file:
The name of a file represents its contents to its users.
We can change the name when the contents or the use of the file changes.
Renaming a file may also allow its position within the directory structure to be changed.
6. Traverse the file system:
For reliability, it is good to save the contents and structure of the entire file system at regular intervals
by copying them to magnetic tape.
This technique provides a backup copy in case of system failure. If a file is no longer in use, it can be
copied to tape and the file's disk space released for reuse by another file.

Directory structure
There are many types of directory structure in OS. They are as follows:
1.Single level directory:
• This is the simplest directory structure.
• All files are contained in the same directory, i.e., there is only one directory.
• It is easy to support and understand.
• File names are limited in length.
• Keeping track of so many files is a daunting task.
• Since all files are in the same directory, they must have unique names.
• If 2 users call their data file 'test', then the unique-name rule is violated.
• Even a single user may find it difficult to remember the names of all the files as the number of files
increases.
• Protection cannot be implemented for multiple users.
• There is no way to group files of the same kind.
• If the directory is big, searching for a file may take a long time.

2. Two level directory:


• In this, we create a separate directory for each user, known as the user file directory (UFD).
• There is one master file directory (MFD) which contains separate directories dedicated to each user.
• For each user, there is a different directory present at the second level, containing that user's group of
files.
• The system does not allow a user to enter another user's directory without permission.
• Different users can have the same file name; the name-collision problem is solved.
• Searching is efficient, since each user has their own search path.
• Due to the two levels, there is a path name (user name and file name) for every file, used to locate that file.
• This structure effectively isolates one user from another. This is a disadvantage when users want to
cooperate on some task and to access one another's files.
• It looks like an inverted tree: the root of the tree is the MFD, the UFDs are its descendants, and the
descendants of the UFDs are the files themselves, i.e., the leaves of the tree.

3. Tree structured directories


• It allows users to create their own subdirectories and to organize their files accordingly.
• It is the most common directory structure.
• Any directory entry can be either a file or a subdirectory. One bit in each directory entry defines the
entry as a file (0) or as a subdirectory (1).
• Files of a similar kind can now be grouped in one directory, and there is no naming collision.
• Searching is more efficient. The concept of a current directory is used, and there is grouping
capability.
• A file can be accessed by two types of path, either relative or absolute.
• An absolute path name begins at the root and follows a path down to the specified file, giving the
directory names on the path.
• A relative path name defines a path from the current directory.
• In this, users can be allowed to access the files of other users by specifying the path names of those files.
• A path to a file in this structure can be longer than a path in a two-level directory structure.
• Special system calls are used to create and delete directories.
• It prohibits the sharing of files or directories.

4. Acyclic graph directory:


• An acyclic graph is a graph with no cycles. This structure allows directories to share subdirectories and
files: the same file or subdirectory may be in two different directories.
• With a shared file, only one actual file exists, so any changes made by one person are immediately
visible to the other. This is not the same as having two copies of the file.
• It is used in situations where people are working as a team; all the files they want to share can be
put into one directory.
• Another situation is when a single user requires that some files be placed in different
subdirectories.

• Shared files/directories can be implemented in several ways. One way is to create a new directory
entry called a link, i.e., a pointer to another file/subdirectory.
• Another approach to implementing shared files is to duplicate all information about them in both
sharing directories. This approach has a consistency problem when a file is modified.
• An acyclic-graph directory is more flexible, but it is also more complex.
• A file may have multiple absolute path names. This becomes a problem when we traverse the entire file
system to find a file, or copy all files to backup storage.
• The deletion of a link need not affect the original file; only the link is removed.
• If the file entry itself is deleted, the space for the file is de-allocated, leaving dangling pointers.
• It needs garbage collection.
• Searching is expensive.

5. General graph directory


• In this directory structure, cycles are allowed: multiple directories can be
derived from more than one parent directory.
• The main problem with this kind of directory structure is calculating the total size or space that has
been taken by the files and directories.
• It allows cycles.
• It is more flexible than the other directory structures.
• It is more costly than the other directory structures.
• It needs garbage collection.
• If cycles are allowed, search algorithms can go into infinite loops.

FILE SYSTEM MOUNTING


• A file system must be mounted before files can be available to processes on the system.
• The directory structure may be built out of multiple volumes, which must be mounted to make
them available within the file-system name space.
• The operating system is given the name of the device and the mount point, the location within the
file structure where the file system is to be attached.
• Mounting is the process by which the operating system makes the files and directories on a storage
device available for users to access via the computer's file system.
• The process of mounting comprises acquiring access to the storage medium; recognizing, reading
and processing the file-system structure and metadata on it; and then registering them with the virtual file
system (VFS) component.
• The exact location in the VFS at which the newly mounted medium is registered is called the mount point.
When the mounting process is completed, the user can access the files and directories on the medium
from there.
• The opposite process is called unmounting, in which the operating system cuts off all user
access to files and directories on the mount point, writes the remaining queue of user data to the
storage device, and refreshes the file-system metadata. It then relinquishes access to the device,
making the storage safe for removal.
• Normally, when the computer is shutting down, every mounted storage device undergoes an
unmounting process.
• The basic idea behind mounting file systems is to combine multiple file systems into one large tree
structure.
• Unmounting of certain devices like CDs and DVDs is done automatically once the disc is ejected.

FILE SHARING
• File sharing is desirable for users who want to collaborate and to reduce the effort required to
achieve a common computing goal.
• To implement sharing in a multi-user operating system, the system must maintain more file and
directory attributes, such as owner, user and group.
• The owner is the user who can change attributes and grant access, and who has the most control
over the file. The group defines a subset of users who can share access to the file.
• Networking allows the sharing of resources around the world.
• The first implemented method involves transferring files between machines via programs like FTP.
• FTP is used for both anonymous and authenticated access. Anonymous access allows a user to
transfer files without having an account on the remote system.
• The second major method uses a distributed file system (DFS), in which remote directories are
visible from a local machine. It involves tighter integration between the machine that is accessing
the remote files and the machine providing the files.
• The third method, the World Wide Web (WWW), is a reversion to the first. A browser is needed to
gain access to the remote files, and separate operations are used to transfer files. It uses anonymous
file exchange.
Consistency semantics specify how multiple users of a system are to access a shared file
simultaneously. They deal with consistency between the views of shared files on a networked system:
when one user changes the file, when do the others see the changes?

• Consistency semantics are directly related to the process-synchronization algorithms.


• A successful implementation of complex sharing semantics can be found in the Andrew file
system (AFS).
• In AFS, writes to an open file are not immediately visible to other users. Once a file is closed, the
changes made to it are visible only to users who open the file at a later time.

Protection
• Protection is needed to keep the information stored in the system safe from physical damage (the issue of
reliability) and improper access.
• File systems can be damaged by hardware problems, power failures, head crashes, dirt and
temperature extremes.
• Files may be deleted accidentally. Bugs in the file-system software can also cause file contents to
be lost. To overcome this problem, many systems provide duplicate copies of files, i.e.,
reliability.
• A protection mechanism provides controlled access by limiting the types of file access that can be
made. Several different types of operations may be controlled, such as read, write, execute, append,
delete, list, renaming, copying and editing.
• The common approach to the protection problem is to make access dependent on the identity of the
user.
• This is done by maintaining an access-control list (ACL) specifying user names and the types of access
allowed for each user.
• Most systems recognize 3 classes of users in connection with each file: owner (the user who
created the file), group (a set of users who are sharing the file) and universe (all other users in the system).
• Another approach to the protection problem is to associate a password with each file; however, the
number of passwords that a user has to remember may become large.

File system structure


• Disks provide the bulk of secondary storage. A disk can be rewritten in place, and it can directly access
any block of information. This allows accessing any file either sequentially or randomly.
• To improve I/O efficiency, I/O transfers between memory and disk are performed in units
of blocks. The usual block size is 512 bytes, though it may vary.
• A file system is composed of different levels. Each level in the design uses the features of lower
levels to create new features for use by higher levels.
1. I/O control:
The lowest level consists of device drivers and interrupt handlers that transfer
information between main memory and the disk system.
2. Basic file system:
It needs only to issue generic commands to the appropriate device driver to read and write physical
blocks on the disk. Each physical block is identified by its numeric disk address.
It also manages the memory buffers and caches that hold various file-system, directory and data
blocks.
Caches are used to hold frequently used file-system metadata to improve performance.
3. File-organization module:
This layer knows about files and their logical blocks as well as physical blocks.
It translates logical block addresses to physical block addresses for the basic file system to transfer.
It includes the free-space manager, which tracks unallocated blocks and provides them when requested.
4. Logical file system:
It manages metadata information and the directory structure. It maintains file structure via file control
blocks (FCBs).
An FCB contains information about the file, including its ownership, permissions and the location of the
file contents.
It is responsible for protection and security.
5. Application programs:
When an application program asks for a file, the request is first directed to the logical file system. If the
application program does not have the required permissions on the file, an error is thrown.
The layered structure minimizes duplication of code, but it introduces overhead and can decrease
performance.
FILE SYSTEM IMPLEMENTATION
• Several on-disk and in-memory structures are used to implement a file system. These structures
vary depending on the OS and the file system.
• On disk, the file system may contain information about how to boot an OS stored there, the
total number of blocks, the number and location of free blocks, the directory structure, and individual
files.
• A boot control block (per volume) can contain information needed by the system to boot an OS
from that volume. If the disk does not contain an OS, this block can be empty. It is typically the
first block of a volume. In UFS, it is called the boot block; in NTFS, it is the partition boot sector.
• A volume control block (per volume) contains volume/partition details such as the number of blocks
in the partition, the size of the blocks, a free-block count and free-block pointers, and a free-FCB
count and FCB pointers. In UFS, it is called a superblock; in NTFS, it is stored in the master file
table.
• A directory structure (per file system) is used to organize the files. In UFS, this includes file
names and associated inode numbers; in NTFS, it is stored in the master file table.
• A per-file FCB contains many details about the file and has a unique identifier number to allow
association with a directory entry. In NTFS, this information is actually stored within the master
file table, which uses a relational database structure, with a row per file.

In-memory
The in-memory information is used for both file-system management and performance improvement via
caching. The data are loaded at mount time, updated during file-system operations, and discarded at
dismount. Several types of structures may be involved:
An in-memory mount table contains information about each mounted volume.
An in-memory directory-structure cache holds the directory information of recently accessed
directories.
The system-wide open-file table contains a copy of the FCB of each open file, as well as other information.
The per-process open-file table contains a pointer to the appropriate entry in the system-wide open-file
table, as well as other information.
Buffers hold file-system blocks when they are being read from disk or written to disk.
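As a rough picture of what an FCB holds, here is a sketch of one as a C++ struct. The field names are assumptions for illustration; real systems such as UFS inodes or NTFS file records differ in detail.

// a sketch of a file control block; field names are illustrative only
struct FileControlBlock {
    unsigned long id;            // unique identifier, links FCB to a directory entry
    int owner, group;            // file ownership
    int permissions;             // read/write/execute bits for owner/group/universe
    long size;                   // current file size in bytes
    long created, modified;      // dates for protection and usage monitoring
    long firstDataBlock;         // location of the file contents on disk
};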

PARTITIONS AND MOUNTING
• The layout of a disk can have many variations, depending on the OS. A disk can be sliced into
multiple partitions, or a volume can span multiple partitions on multiple disks.
• Each partition can be either raw, containing no file system, or cooked, containing a file system.
Raw disk is used where no file system is appropriate.
• Raw disk can hold information needed by disk RAID systems. Boot information can be stored in a
separate partition having its own format, because at boot time the system does not have the file-
system code loaded; the boot information is therefore loaded as an image into memory.
• The boot loader is able to find and load the kernel and start executing it.
• A disk can have multiple partitions, each containing a different type of file system and a different
OS.
• The root partition, which contains the OS kernel and sometimes other system files, is mounted at
boot time. Other volumes can be automatically mounted at boot or manually mounted later,
depending on the OS.
• As part of a successful mount operation, the OS verifies that the device contains a valid file system.
Finally, the OS notes in its in-memory mount table that a file system is mounted, along with the type of
the file system.

VIRTUAL FILE SYSTEM (VFS)


• A VFS is a programming layer that forms an interface between an OS's kernel and a file system.
• The VFS serves as an abstraction layer that gives applications access to different types of file systems,
and to local and network storage devices, in a uniform way.
• The VFS is also known as the VFS switch.
• The VFS maintains a cache of directory lookups to enable easy location of frequently accessed
directories.
• Through the VFS, client applications can access different file systems.
• The VFS works as a manageable container that virtually provides the functionality of a file system. During
each file system's initialization, the file system registers itself with the VFS.
• The VFS is a kernel software layer that handles all system calls related to file systems.
• It provides a common interface to several kinds of file systems.

DIRECTORY IMPLEMENTATION
The selection of directory allocation and directory management algorithms affects the performance
of the file system.
These algorithms are classified according to the data structure they are using. There are mainly 2
algorithms:
1. Linear list:
In this algorithm, all the files in a directory are maintained as a singly linked list. Each file contains
the pointers to the data blocks assigned to it and a pointer to the next file in the directory.
1. When a new file is created, the entire list is checked to see whether the new file name matches
an existing file name. If it doesn't, the new file can be created at the beginning or at the
end of the list. Searching for a unique name is therefore a big concern, because traversing the whole list
takes time.
2. The list needs to be traversed for every operation (creation, deletion, updating etc.) on the files,
so the system becomes inefficient.

Disadvantages

• This method is simple to program, but time consuming to execute.


• Finding a file requires a linear search.

2. Hash table:
• To overcome the drawback of single linked list implementations of directories, there is an
alternative approach that is hash table.
• This approach suggests using hash table along with linked list.
• A key-value pair for each file in the directory gets generated and stored in the hash table. The key
can be determined by applying the hash function on the file name while the key points to the
corresponding file stored in the directory.
• Now searching becomes efficient, because the entire list need not be searched on every operation.
• Only the hash-table entries are checked using the key, and if an entry is found, the corresponding file
is fetched using the value (see the sketch below).
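A sketch of this scheme using C++'s built-in hash table; the file names and block numbers are invented. The file name is the key, and the value stands in for the directory entry.

#include <iostream>
#include <string>
#include <unordered_map>
using namespace std;

int main()
{
    // directory as a hash table: file name -> (here) its first block number
    unordered_map<string, int> directory;

    directory["first.java"] = 17;            // create: one hashed insertion
    directory["notes.txt"]  = 42;

    auto it = directory.find("first.java");  // search: one lookup, no list scan
    if (it != directory.end())
        cout << it->first << " -> block " << it->second << endl;

    directory.erase("notes.txt");            // delete: remove the key-value pair
    return 0;
}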

ALLOCATION METHODS
Many files are stored on the same disk. The allocation methods define how the files are stored in the
disk blocks. The goals are efficient disk-space utilization and fast access to the file blocks.

1. Contiguous allocation:
• In this method, each file occupies a contiguous set of blocks on the disk. If the file is n blocks long and
starts at location b, then it occupies blocks b, b+1, ..., b+n-1.
• This means that the starting block and the length of a file determine the blocks occupied by the file.
• The directory entry for each file in this method contains:
o Address of the starting block
o Length of the area allocated for the file

E.g., the file 'mail' in the figure starts at block 19 with length = 6 blocks; therefore it
occupies blocks 19, 20, 21, 22, 23, 24.
Advantages:
• Both sequential and direct access are supported. For direct access, the address of the kth
block of a file which starts at block b can easily be obtained as b+k (see the sketch after this list).
• This is extremely fast, since the number of seeks is minimal because of the contiguous allocation of
the blocks.
Disadvantages:
• This method suffers from both internal and external fragmentation, which makes it inefficient in terms of
memory utilization.
• Increasing the file size is difficult, because it depends on the availability of contiguous memory at a
particular instant.
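The b+k rule makes the logical-to-physical mapping pure arithmetic. A tiny sketch using the 'mail' example (start block b = 19, length n = 6):

#include <iostream>
using namespace std;

int main()
{
    int b = 19, n = 6;                // file 'mail': start block 19, 6 blocks
    for (int k = 0; k < n; k++)       // kth logical block is at physical b + k
        cout << "logical " << k << " -> physical " << b + k << endl;  // 19..24
    return 0;
}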

2. Linked allocation:
• In this scheme, each file is a linked list of disk blocks, which need not be contiguous. The disk blocks
can be scattered anywhere on the disk. The directory entry contains pointers to the first and
last blocks of the file. Each block contains a pointer to the next block occupied by the file.
• Thus, if each block is 512 bytes in size and a disk address requires 4 bytes, then the user sees blocks of
508 bytes.
Advantages:
• This is very flexible in terms of file size; the file size can be increased easily, since the system does not
have to look for a contiguous chunk of memory.
• This method does not suffer from external fragmentation.
• This makes it relatively better in terms of memory utilization, and it supports sequential access well.
Disadvantages:
• Because the file blocks are distributed randomly on the disk, a large number of seeks is needed to
access every block individually, which makes it slower.
• It does not support random/direct access. We cannot directly access the blocks of a file: block
k of a file can be accessed only by traversing k blocks sequentially from the starting block of the file via
the block pointers (see the sketch below).
• The pointers require extra space. The scheme also suffers from reliability problems: a file can be lost if a
pointer is lost or damaged because of a software or hardware failure.
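The cost of direct access under linked allocation can be seen in a small sketch; the next[] array and block numbers are invented, and on a real disk each pointer chase would be a seek and a read.

#include <iostream>
using namespace std;

int main()
{
    int next[32];                                  // per-block "next" pointers
    for (int i = 0; i < 32; i++) next[i] = -1;     // -1 marks end of file

    // a file scattered over blocks 9 -> 1 -> 10 -> 25
    next[9] = 1; next[1] = 10; next[10] = 25;

    int k = 2, cur = 9;                            // want logical block 2
    for (int step = 0; step < k; step++)
        cur = next[cur];                           // one disk read per step
    cout << "logical " << k << " -> physical " << cur << endl;   // 10
    return 0;
}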

3. Indexed allocation:
• In this scheme, a special block known as the index block contains pointers to all the blocks
occupied by a file.
• Each file has its own index block. The ith entry in the index block contains the disk address of the
ith file block (see the sketch below).
• The directory entry contains the address of the index block.
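Contrast this with linked allocation: with an index block (contents invented for illustration), any logical block is one table lookup away.

#include <iostream>
using namespace std;

int main()
{
    // index block: ith entry is the disk address of the ith file block
    int indexBlock[5] = {9, 1, 10, 25, 17};

    int i = 3;                                     // direct access, no chain
    cout << "logical " << i << " -> physical " << indexBlock[i] << endl;  // 25
    return 0;
}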
Advantages:
• This supports direct access to the blocks occupied by the file and therefore provides fast access to the
file blocks.
• It overcomes the problem of external fragmentation.
Disadvantages:
• The pointer overhead for indexed allocation is greater than for linked allocation.
• For very small files, say files that span only 2 or 3 blocks, indexed allocation keeps one
entire block (the index block) for the pointers, which is inefficient in terms of memory utilization.
• In linked allocation, by contrast, we lose the space of only 1 pointer per block.
• For files that are very large, a single index block may not be able to hold all the pointers.
• It is more complex.

The following mechanisms can be used to resolve this:


1. Linked scheme:
• This scheme links 2 or more index blocks together for holding the pointers.
• Every index block would then contain a pointer or the address to the next index block
2. Multilevel index:
• In this policy, a first level index block is used to point to the second index block which in turn
points to the disk blocks occupied by the file
• This can be extended to 3 or more levels depending on the maximum file size
3. Combined scheme:
• In this scheme, a special block called the inode (information node) contains all the information about the
file, such as name, size, authority etc., and the remaining space of the inode is used to store the disk-block
addresses that locate the actual file contents.
• The first few of these pointers in the inode point to direct blocks, i.e., the pointers contain the
addresses of the disk blocks that contain data of the file.
• The next few pointers point to indirect blocks.
• Indirect blocks may be single indirect, double indirect or triple indirect.
• A single indirect block does not contain file data but the disk addresses of the
blocks that do contain the file data.
• A double indirect block contains the addresses of blocks that in turn contain the addresses of the blocks
containing the file data.

FREE SPACE MANAGEMENT


• Since disk space is limited, we need to reuse the space from deleted files for new files if possible.
• To keep track of free disk space, the system maintains a free-space list. The free-space list records
all free disk blocks, i.e., those not allocated to any file or directory.
• To create a new file, we search the free-space list for the required amount of space and allocate that
space to the new file; this space is then removed from the free-space list.
• When a file is deleted, its disk space is added to the free-space list.
The free-space list can be implemented mainly as:
1. Bit vector:
Frequently, the free-space list is implemented as a bit map or bit vector. Each block is represented
by 1 bit: if the block is free, the bit is 1; if the block is allocated, the bit is 0.
Advantages:
• Simple to understand.
• Finding the first free block or n consecutive free blocks on the disk is efficient.
• It requires scanning the words (groups of bits, here 8 bits) in the bit map for a non-zero word. The first free
block is then found by scanning for the first 1 bit in the non-zero word.
• The block number can be calculated as:
(number of bits per word) x (number of 0-value words) + (offset of the first 1 bit in the non-zero word)
Disadvantages:
• Bit vectors are inefficient unless the entire vector is kept in main memory, which is possible only for
smaller disks.
Example: consider a disk where blocks 5, 6, 7, 14 and 15 are free. It can be represented by a bit map of
16 bits as 0000111000000110.
The first group of 8 bits (00001110) constitutes a non-zero word, since not all of its bits are 0. After the
non-zero word is found, we look for the first 1 bit. This is the 5th bit of the non-zero word, so the offset = 5.
Therefore the first free block number = 8 x 0 + 5 = 5 (see the sketch below).
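The worked example above can be coded directly. A sketch in C++ (bits numbered from 1 within each 8-bit word, 1 = free, as in the text):

#include <iostream>
using namespace std;

int main()
{
    unsigned char words[2] = {0x0E, 0x06};     // bit map 00001110 00000110

    for (int w = 0; w < 2; w++)
        if (words[w] != 0)                     // scan for a non-zero word
            for (int bit = 1; bit <= 8; bit++)
                if (words[w] & (1 << (8 - bit)))   // first 1 bit -> offset
                {
                    cout << "first free block = " << 8 * w + bit << endl; // 5
                    return 0;
                }
    cout << "no free blocks" << endl;
    return 0;
}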

2. Linked list:
• In this approach, the free disk blocks are linked together, i.e., a free block contains a pointer to
the next free block.
• The block number of the very first free disk block is stored at a separate location on disk and is also
cached in memory.
Drawback:
It is not efficient to traverse the list; we must read each block, which requires substantial I/O
time.

3. Grouping:
• This approach stores the addresses of free blocks in the first free block. The first free block stores the
addresses of, say, n free blocks.
• Out of these n blocks, the first (n-1) blocks are actually free, and the last block contains the addresses
of the next n free blocks.
Advantage:
The addresses of a large number of free blocks can be found quickly.

4. Counting:
This approach stores the address of the first free disk block and the number n of free contiguous disk
blocks that follow it.
Every entry in the list contains:
1) the address of the first free disk block
2) a number n, i.e., the count
For example, the first entry of the free-space list would be (address of block 5, 2), because 2 contiguous
free blocks follow block 5.

UNIX OPERATING SYSTEM

The UNIX operating system has for many years formed the backbone of the Internet, especially for
large servers and most major university campuses. However, a free version of UNIX called Linux has
been making significant gains against Macintosh and the Microsoft Windows 95/98/NT environments,
so often associated with personal computers. UNIX commands can often be grouped together to make
even more powerful commands, with capabilities known as I/O redirection (< for getting input from a
file and > for outputting to a file) and piping, using | to feed the output of one command as input
to the next. Please investigate the manuals in the lab for more examples than the few offered here.

Unix Commands
Command Example Description

1. ls ls Lists files in current directory


ls -alF List in long format

2. cd cd tempdir Change directory to tempdir


cd .. Move back one directory
cd ~dhyatt/web-docs Move into dhyatt's web-docs
directory

3. mkdir mkdir graphics Make a directory called graphics

4. rmdir rmdir emptydir Remove directory (must be empty)

5. cp cp file1 web-docs Copy file into directory


cp file1 file1.bak Make backup of file1

6. rm rm file1.bak Remove or delete file


rm *.tmp Remove all files ending in .tmp

7. mv mv old.html new.html Move or rename files

8. more more index.html Look at file, one page at a time

9. lpr lpr index.html Send file to printer

10. man man ls Online manual (help) about a command

11 grep <str><files> grep "bad word" * Find which files contain a certain
word

12. chmod <opt> <file> chmod 644 *.html Change file permissions to read-only
chmod 755 file.exe Change file permissions to executable

13. passwd passwd Change passwd

14. ps <opt> ps aux List all running processes by #ID

ps aux | grep dhyatt List processes run by dhyatt
15. kill <opt> <ID> kill -9 8453 Kill process with ID #8453

Command Example Description

16. who who Lists who is logged on your machine

17. history history Lists commands you've done recently

18. date date Print out current date

19. cal <mo> <yr> cal 9 2000 Print calendar for September 2000

20. logout (exit) logout or exit How to quit a UNIX shell

Use the 'sed' command - it looks for a pattern and then you can 'delete' the line by
preventing the input line from going to the output (sed is a filter program). For example,
sed -e '/word/d' file1 file2 file3 > file.out
will remove any line containing the word 'word' from the three files by not copying it to the
output file 'file.out'

What is vi?
The default editor that comes with the UNIX operating system is called vi (visual editor). [Alternate
editors for UNIX environments include pico and emacs, a product of GNU.]

The UNIX vi editor is a full screen editor and has two modes of operation:

1. Command mode: commands cause action to be taken on the file, and
2. Insert mode: entered text is inserted into the file.
In the command mode, every character typed is a command that does something to the text file being
edited; a character typed in the command mode may even cause the vi editor to enter the insert
mode. In the insert mode, every character typed is added to the text in the file; pressing the <Esc>
(Escape) key turns off the Insert mode.
While there are a number of vi commands, just a handful of these is usually sufficient for beginning
vi users. To assist such users, this section contains a sampling of basic vi commands. The most
basic and useful commands are marked with an asterisk (* or star) in the tables below. With practice,
these commands should become automatic.

NOTE: Both UNIX and vi are case-sensitive. Be sure not to use a capital letter in place of a lowercase
letter; the results will not be what you expect.

To Start vi
To use vi on a file, type in vi filename. If the file named filename exists, then the first page (or
screen) of the file will be displayed; if the file does not exist, then an empty file and screen are created
into which you may enter text.
* vi filename edit filename starting at line 1
vi -r filename recover filename that was being edited when system crashed
To Exit vi
Usually the new or modified file is saved when you leave vi. However, it is also possible to quit vi
without saving the file.

Note: The cursor moves to bottom of screen whenever a colon (:) is typed. This type of command is
completed by hitting the <Return> (or <Enter>) key.
* :x<Return> quit vi, writing out modified file to file named in original invocation
:wq<Return> quit vi, writing out modified file to file named in original invocation
:q<Return> quit (or exit) vi
* :q!<Return> quit vi even though latest changes have not been saved for this vi call

Syntax case/esac

case "test-string" in
patterns1 )
command_list1
;;
patterns2 )
command_list2
;;
patterns3 )
command_list3
;;
* ) # the "default" if nothing else matches
command_list_default
;;
esac

Syntax if

if testing_command_list ; then
zero_command_list
else
nonzero_command_list
fi

Syntax while

while testing_command_list ; do
zero_command_list
done

Syntax for

for … do … done loop statement:
for name [ in word... ; ] do
command_list
done

PRACTICAL QUESTIONS
Q2 Write a shell program using 'case'

# take a number from user
echo "Enter number:"
read num
case $num in
1) echo "It's one!"
;;
2) echo "It's two!"
;;
3) echo "It's three!"
;;
*) echo "It's something else!"
;;
esac
echo "End of script."

Output
$ sh num.sh
Enter number:
10
It's something else!
End of script.

$ sh num.sh
Enter number:
2
It's two!
End of script.

Q2 write shell program using if and else


echo Enter 3 numbers with spaces in between
read a b c
l=$a
if [ $b -gt $l ]
then
l=$b
fi
if [ $c -gt $l ]
then
l=$c
fi
echo Largest of $a $b $c is $l

Q3 Write a shell program using while
# use of while loop
echo "Using while loop..."
j=1
while [ $j -le 10 ]
do
echo -n "$j "
j=$(( j + 1 )) # increase number by 1
done
echo ""

Q3 Write a shell program using for
echo "Using for loop"
for (( i=1; i<=10; i++ ))
do
echo -n "$i "
done
echo ""

4(a)write shell script that takes two integers as its arguments and compute the value of the first number
raised to the power of 2nd number
echo "Input number"
no=$1
echo "Input power"
power=$2

counter=0
ans=1
while [ $power -ne $counter ]
do
ans=`expr $ans \* $no`
counter=`expr $counter + 1`
done

echo "$no power of $power is $ans"

4(b) Write a shell script that takes a command–line argument and reports on whether it is directory, a file, or
something else.
PASSED=$1

if [ -d "${PASSED}" ] ; then
echo "$PASSED is a directory";
else
if [ -f "${PASSED}" ]; then
echo "${PASSED} is a file";
else
echo "${PASSED} is not valid";
exit 1
fi
fi

Q5 a)Write a Shell script that accepts a filename, starting and ending line numbers as
arguments and displays all the lines between the given line numbers.

echo "enter the filename"


read fname
echo "enter the starting line number"
read s
echo "enter the ending line number"
read n
sed -n $s,$n\p $fname | cat > newline
cat newline

output:
enter the filename
sales.dat
enter the starting line number
2
enter the ending line number
4

1 computers 9161
1 textbooks 21312
2 clothing 3252

Q 5(b) Write a Shell script that deletes all lines containing a specified word in one or more files
supplied as arguments to it.
if [ $# -eq 0 ]
then
echo "Please enter one or more filenames as argument"
exit
fi
echo "Enter the word to be searched in files"
read word
for file in $*
do
sed "/$word/d" $file | tee tmp
mv tmp $file
done

Q6 Write a Shell script that displays list of all the files in the current directory to which the user has read,
write and execute permissions.
for File in *
do
if [ -r $File -a -w $File -a -x $File ]
then
echo $File
fi
done

Q7. Write a program to simulate the UNIX commands like ls, mv, cp.
#copying
echo -n "Enter soruce file name : "
read src
echo -n "Enter target file name : "
read targ

if [ ! -f $src ]
then
echo "File $src does not exists"
exit 1
elif [ -f $targ ]
then
echo "File $targ exist, cannot overwrite"
exit 2
fi

# copy file
cp $src $targ

# store exit status of above cp command. It is used to
# determine if the shell command operation was successful or not
status=$?

if [ $status -eq 0 ]
then
echo 'File copied successfully'
else
echo 'Problem copying file'
fi

Q8 Write a program to convert upper case to lower case letters of a given ASCII file
clear
echo "Enter the File :\c"
read f1
if [ -f $f1 ]
then
echo "Converting Upper case to Lower Case"
tr '[A-Z]' '[a-z]' < $f1
else
echo "$f1 file does not exist"
fi

Output:
Enter the File :HELLO
Converting Upper case to Lower Case
how r u ....
nice meeting u.
bye

Q 9. Write a program to program to search the given pattern in a file.

if grep -q '<Pattern>' '<file>'
then
echo "Pattern found"
else
echo "Pattern not found"
fi

Q10. Write a program to demonstrate FCFS process schedules on the given data.
#include<iostream>
using namespace std;
int main()
{
int n,bt[20],wt[20],tat[20],avwt=0,avtat=0,i,j;
cout<<"Enter total number of processes(maximum 20):";
cin>>n;

cout<<"\nEnter Process Burst Time\n";


for(i=0;i<n;i++)
{
cout<<"P["<<i+1<<"]:";
cin>>bt[i];
}

wt[0]=0; //waiting time for first process is 0

//calculating waiting time


for(i=1;i<n;i++)
{
wt[i]=0;
for(j=0;j<i;j++)
wt[i]+=bt[j];
}

cout<<"\nProcess\t\tBurst Time\tWaiting Time\tTurnaround Time";

//calculating turnaround time


for(i=0;i<n;i++)
{
tat[i]=bt[i]+wt[i];
avwt+=wt[i];
avtat+=tat[i];
cout<<"\nP["<<i+1<<"]"<<"\t\t"<<bt[i]<<"\t\t"<<wt[i]<<"\t\t"<<tat[i];
}

avwt/=i;
avtat/=i;
cout<<"\n\nAverage Waiting Time:"<<avwt;
cout<<"\nAverage Turnaround Time:"<<avtat;

return 0;
}
Output
Enter total number of processes(maximum 20):3
Enter Process Burst Time
P[1]:24
P[2]:3
P[3]:3
Process Burst Time Waiting Time Turnaround Time
P[1] 24 0 24
P[2] 3 24 27
P[3] 3 27 30

Average Waiting Time:17
Average Turnaround Time:27
Q 11 Write a program to demonstrate SJF process schedules on the given data.
#include <iostream>
using namespace std;
void SJF_NP(int n, int burst[], int arrival[], int throughput)


{
cout << "Output for SJF_Non_Preemptive scheduling algorithm" << endl;
int i, j, temp, tot;
double avgwait, avgturnaround, avgresponse, tp;

//array instantiations
int start[n], end[n], wait[n];
//calculations
for(i=1;i<=n;i++)
{ for(j=i+1;j<=n;j++)
{
if (i>=2 && burst[i-1]>burst[j-1])
{
temp = burst[i-1];
burst[i-1] = burst[j-1];
burst[j-1] = temp;
temp = arrival[i-1];
arrival[i-1] = arrival[j-1];
arrival[j-1] = temp;
}
}
if(i==1)
{
start[0]=0;
end[0]=burst[0];
wait[0]=0;
}
else
{
start[i-1]=end[i-2];
end[i-1]=start[i-1]+burst[i-1];
wait[i-1]=start[i-1]-arrival[i-1];
}
//throughput
if (start[i+1] <= throughput)
tp = i+1;
}

//output
cout << "\n\nPROCESS \t BURST TIME\tARRIVAL TIME\tWAIT TIME\tSTART TIME\tEND TIME\n";
for (i=0;i<n;i++){
cout << "\nP[" << i + 1 << "]" << "\t\t" << burst[i] << "\t\t" << arrival[i] << "\t\t" << wait[i] << "\t\t" <<
start[i] << "\t\t" << end[i];
}
//avg wait time
for(i=1,tot=0;i<=n;i++)
tot+=wait[i-1];
avgwait=(double)tot/n;
//avg turnaround time
for(i=1,tot=0;i<=n;i++)
tot+=end[i-1];
avgturnaround=(double)tot/n;
//avg response time
for(i=1,tot=0;i<=n;i++)
tot+=start[i-1];
avgresponse=(double)tot/n;
cout << "\n\nAverage Wait Time: " << avgwait;
cout << "\nAverage Response Time: " << avgturnaround;
cout << "\nAverage Turnaround Time: " << avgresponse;
cout << "\nThroughput for (" << throughput << "): " << tp << endl;
}
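The listing above defines only SJF_NP; it needs a driver to run. A minimal sketch of one (the burst/arrival values and the throughput cutoff are assumed sample data, not from the text):

int main()
{
    int burst[]   = {6, 8, 7, 3};    // sample burst times
    int arrival[] = {0, 1, 2, 3};    // sample arrival times
    SJF_NP(4, burst, arrival, 20);   // 4 processes, throughput measured at t=20
    return 0;
}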

12.Write a program to demonstrate Priority Scheduling on the given burst time and arrival times.

#include<iostream>
using namespace std;
int main()
{
int bt[20],p[20],wt[20],tat[20],pr[20],i,j,n,total=0,pos,temp,avg_wt,avg_tat;
cout<<"Enter Total Number of Process:";
cin>>n;
cout<<"\nEnter Burst Time and Priority\n";
for(i=0;i<n;i++)
{
cout<<"\nP["<<i+1<<"]\n";
cout<<"Burst Time:";
cin>>bt[i];
cout<<"Priority:";
cin>>pr[i];
p[i]=i+1; //contains process number
}

//sorting burst time, priority and process number in ascending order using selection sort
for(i=0;i<n;i++)
{
pos=i;
for(j=i+1;j<n;j++)
{
if(pr[j]<pr[pos])
pos=j;
}

temp=pr[i];
pr[i]=pr[pos];
pr[pos]=temp;

temp=bt[i];
bt[i]=bt[pos];
bt[pos]=temp;

temp=p[i];
p[i]=p[pos];
p[pos]=temp;
}
wt[0]=0; //waiting time for first process is zero
//calculate waiting time
for(i=1;i<n;i++)
{
wt[i]=0;
for(j=0;j<i;j++)
wt[i]+=bt[j];

total+=wt[i];
}
avg_wt=total/n; //average waiting time
total=0;
cout<<"\nProcess\t Burst Time \tWaiting Time\tTurnaround Time";
for(i=0;i<n;i++)
{
tat[i]=bt[i]+wt[i]; //calculate turnaround time
total+=tat[i];
cout<<"\nP["<<p[i]<<"]\t\t "<<bt[i]<<"\t\t "<<wt[i]<<"\t\t\t"<<tat[i];
}

avg_tat=total/n; //average turnaround time


cout<<"\n\nAverage Waiting Time="<<avg_wt;
cout<<"\nAverage Turnaround Time="<<avg_tat;
return 0;
}

Output
Enter total number of processes :4
enter burst time and priority
p[1]
burst time :6
priority :3
p[2]
burst time :2
priority :2
p[3]
burst time :14
priority :1
p[4]
burst time :6
priority :4

process Burst time Waiting time Turnaround time


P[3] 14 0 14
P[2] 2 14 16
P[1] 6 16 22
P[4] 6 22 28
Average waiting time =13
Average turnaround time =20

13.Write a program to demonstrate Round Robin Scheduling on the given burst time and arrival times.
#include<iostream>
using namespace std;

// Function to find the waiting time for all


// processes
void findWaitingTime(int processes[], int n,
int bt[], int wt[], int quantum)
{
// Make a copy of burst times bt[] to store remaining
// burst times.
int rem_bt[n];

for (int i = 0 ; i < n ; i++)
rem_bt[i] = bt[i];

int t = 0; // Current time

// Keep traversing processes in round robin manner


// until all of them are done.
while (1)
{
bool done = true;

// Traverse all processes one by one repeatedly


for (int i = 0 ; i < n; i++)
{
// If burst time of a process is greater than 0
// then only need to process further
if (rem_bt[i] > 0)
{
done = false; // There is a pending process

if (rem_bt[i] > quantum)


{
// Increase the value of t i.e. shows
// how much time a process has been processed
t += quantum;

// Decrease the burst_time of current process


// by quantum
rem_bt[i] -= quantum;
}

// If burst time is smaller than or equal to


// quantum. Last cycle for this process
else
{
// Increase the value of t i.e. shows
// how much time a process has been processed
t = t + rem_bt[i];

// Waiting time is current time minus time


// used by this process
wt[i] = t - bt[i];

// As the process gets fully executed


// make its remaining burst time = 0
rem_bt[i] = 0;
}
}
}

// If all processes are done


if (done == true)
break;
}
}

// Function to calculate turn around time


void findTurnAroundTime(int processes[], int n,
int bt[], int wt[], int tat[])
{
// calculating turnaround time by adding
// bt[i] + wt[i]
for (int i = 0; i < n ; i++)
tat[i] = bt[i] + wt[i];
}

// Function to calculate average time


void findavgTime(int processes[], int n, int bt[],
int quantum)
{
int wt[n], tat[n], total_wt = 0, total_tat = 0;

// Function to find waiting time of all processes


findWaitingTime(processes, n, bt, wt, quantum);

// Function to find turn around time for all processes


findTurnAroundTime(processes, n, bt, wt, tat);

// Display processes along with all details


cout << "Processes "<< " Burst time "
<< " Waiting time " << " Turn around time\n";

// Calculate total waiting time and total turn


// around time
for (int i=0; i<n; i++)
{
total_wt = total_wt + wt[i];
total_tat = total_tat + tat[i];
cout << " " << i+1 << "\t\t" << bt[i] <<"\t "
<< wt[i] <<"\t\t " << tat[i] <<endl;
}

cout << "Average waiting time = "


<< (float)total_wt / (float)n;
cout << "\nAverage turn around time = "
<< (float)total_tat / (float)n;
}

// Driver code
int main()
{
// process id's
int processes[] = { 1, 2, 3};
int n = sizeof processes / sizeof processes[0];

// Burst time of all processes


int burst_time[] = {10, 5, 8};
// Time quantum
int quantum = 2;
findavgTime(processes, n, burst_time, quantum);
return 0;
}

Output:
Processes Burst time Waiting time Turn around time
1 10 13 23
2 5 10 15
3 8 13 21
Average waiting time = 12
Average turn around time = 19.6667

14. Write a program to implementing Producer and Consumer problem using Semaphores.

#include<iostream>
#include<cstdlib>
using namespace std;
int mutex=1,full=0,empty=3,x=0;
int main()
{
int n;
void producer();
void consumer();
int wait(int);
int signal(int);
cout<<"\n1.Producer\n2.Consumer\n3.Exit";
while(1) Output
{ 1.Producer
2.Consumer
Cout<<"\nEnter your choice:"; 3.Exit
Cin>>n; Enter your choice:1
switch(n) Producer produces the item 1
{ Enter your choice:2
case 1: if((mutex==1)&&(empty!=0)) Consumer consumes item 1
Enter your choice:2
producer(); Buffer is empty!!
else Enter your choice:1
cout<<"Buffer is full!!"; Producer produces the item 1
break; Enter your choice:1
case 2: if((mutex==1)&&(full!=0)) Producer produces the item 2
Enter your choice:1
consumer(); Producer produces the item 3
else Enter your choice:1
cout<<"Buffer is empty!!"; Buffer is full!!
break; Enter your choice:3
case 3:
exit(0);
break;
}
}
return 0;
}

int wait(int s)
{
return (--s);
}

int signal(int s)
{
return(++s);
}

void producer()
{
mutex=wait(mutex);
full=signal(full);
empty=wait(empty);
x++;
cout<<"\nProducer produces the item “<<x;
mutex=signal(mutex);
}

void consumer()
{
mutex=wait(mutex);

53
full=wait(full);
empty=signal(empty);
cout<<"\nConsumer consumes item <<x);
x--;
mutex=signal(mutex);
}
Q15 Write a program to simulate FIFO, LRU, LFU Page replacement algorithms
#include<iostream>
using namespace std;
int n,nf;
int in[100];
int p[50];
int hit=0;
int i,j,k;
int pgfaultcnt=0;

void getData()
{
cout<<"\nEnter length of page reference sequence:";
cin>>n;
cout<<"\nEnter the page reference sequence:";
for(i=0; i<n; i++)
cin>>in[i];
cout<<"\nEnter no of frames:";
cin>>nf;
}

void initialize()
{
pgfaultcnt=0;
for(i=0; i<nf; i++)
p[i]=9999;
}

int isHit(int data)


{
hit=0;
for(j=0; j<nf; j++)
{
if(p[j]==data)
{
hit=1;
break;
}
}
return hit;
}

int getHitIndex(int data)


{
int hitind;
for(k=0; k<nf; k++)
{
if(p[k]==data)
{
hitind=k;
break;
}
}
return hitind;
}

void dispPages()
{
for(k=0; k<nf; k++)
{
if(p[k]!=9999)
cout<<p[k]<<" ";
}
}

void dispPgFaultCnt()
{
cout<<"\nTotal no of page faults:"<<pgfaultcnt;
}

void fifo()
{
initialize();
for(i=0; i<n; i++)
{
cout<<"\nFor :"<<in[i];
if(isHit(in[i])==0)
{
for(k=0; k<nf-1; k++)
p[k]=p[k+1];
p[k]=in[i];
pgfaultcnt++;
dispPages();
}
else
cout<<"No page fault";
}
dispPgFaultCnt();
}

void lru()
{
initialize();
int least[50];
for(i=0; i<n; i++)
{
cout<<"\nFor :"<<in[i];
if(isHit(in[i])==0)
{
for(j=0; j<nf; j++)
{
int pg=p[j];
int found=0;
for(k=i-1; k>=0; k--)
{
if(pg==in[k])
{
least[j]=k;
found=1;
break;
}
else
found=0;
}
if(!found)
least[j]=-9999;
}
int min=9999;
int repindex;
for(j=0; j<nf; j++)
{
if(least[j]<min)
{
min=least[j];
repindex=j;
}
}
p[repindex]=in[i];
pgfaultcnt++;
dispPages();
}
else
cout<<"No page fault!";
}
dispPgFaultCnt();
}

int main()
{
int choice;
while(1)
{
cout<<"\nPage Replacement Algorithms\n1.Enter data\n2.FIFO\n3.LRU\n4.Exit\nEnter your choice:";
cin>>choice;
switch(choice)
{
case 1:
getData();
break;
case 2:
fifo();
break;
case 3:
lru();
break;
default:
return 0;
}
}
}

OUTPUT:
Page Replacement Algorithms
1.Enter data
2.FIFO
3.LRU
4.Exit
Enter your choice:1
Enter length of page reference sequence:8
Enter the page reference sequence:2
3
4
2
3
5
6
2
Enter no of frames:3
Page Replacement Algorithms
1.Enter data
2.FIFO
3.LRU
4.Exit
Enter your choice:2
For 2 : 2
For 3 : 2 3
For 4 : 2 3 4
For 2 :No page fault
For 3 :No page fault
For 5 : 3 4 5
For 6 : 4 5 6
For 2 : 5 6 2
Total no of page faults:6
Page Replacement Algorithms
1.Enter data
2.FIFO
3.LRU
4.Exit
Enter your choice:3
For 2 : 2
For 3 : 2 3
For 4 : 2 3 4
For 2 :No page fault!
For 3 :No page fault!
For 5 : 2 3 5
For 6 : 6 3 5
For 2 : 6 2 5
Total no of page faults:6
