Unit II
CPU Scheduling: Concepts, Scheduling Criteria, Scheduling Algorithms.
Process Synchronization: Critical-Section Problem, Peterson's Solution, Synchronization, Semaphores, Monitors.
Deadlocks: System Model, Deadlock Characterization, Methods for Handling Deadlocks, Deadlock Prevention,
Deadlock Avoidance, Deadlock Detection, Recovery from Deadlock.
Unit III
Main Memory: Introduction, Swapping, Contiguous Memory Allocation, Segmentation, Paging.
Virtual Memory: Introduction, Demand Paging, Page Replacement, Allocation of Frames, Thrashing.
Unit IV
Mass Storage Structure: Overview, Disk Scheduling, RAID Structure.
File Systems: File Concept, Access Methods, Directory and Disk Structure, File-System Mounting, Protection.
File System Implementation, Directory Implementation, Allocation Methods, Free-Space Management.
UNIT-1
Operating system:
• An operating system is a program that manages the computer hardware.
• It provides a basis for application programs and acts as an intermediary between the computer user and the
computer hardware.
• A computer system can be divided into four components: the hardware, the operating system, the application
programs, and the users.
• The hardware (CPU, memory, and I/O devices) provides the basic computing resources for the
system.
• The application programs, such as word processors, spreadsheets, compilers, and web browsers, define the ways
in which these resources are used to solve the users' computing problems.
• The operating system controls the hardware and coordinates its use among the various application programs for
the various users.
OS AND KERNEL
The OS is the software package that communicates directly with the hardware and with our applications.
The kernel is the lowest level of the operating system. It is the main part of the operating system and is responsible
for translating commands into something that can be understood by the computer.
The kernel is the internal core of the OS.
Bootstrap program
It is the program that initializes the operating system during start-up. It is the first code executed when the
computer system is started.
It is stored in ROM or EEPROM, which is non-volatile memory. The OS is loaded into RAM by the bootstrap
program when the system is powered up or rebooted.
It doesn't require any outside input to start. The bootstrapping process involves self-tests, loading the BIOS,
reading configuration settings, etc.
When the CPU is in user mode, programs don't have direct access to memory and hardware resources. In user
mode, if a program crashes, only that particular program is halted. That means the system is in a safe state even
if a program in user mode crashes.
Hence most programs in an OS run in user mode.
Advantages
i) Increased throughput:
By increasing the number of processors, we expect to get more work done in less time.
ii) Economy of scale:
They cost less than equivalent multiple single processor systems, because they share peripherals, mass storage
and power supplies.
iii) Increased reliability:
If functions can be distributed properly among several processors, then the failure of one processor will not halt
the system, only slow it down.
3. Clustered systems:
→Clustering can be structured asymmetrically or symmetrically.
→In asymmetric clustering, one machine is in hot-standby mode while the other is running the application. The
hot-standby host machine does nothing but monitor the active server.
→In symmetric clustering, two or more hosts are running applications and are monitoring each other. It is efficient as
it uses all of the available hardware.
Advantages
→It provides high-availability service, i.e., service will continue even if one or more systems in the cluster fail.
→It provides high performance computing environment because they are capable of running an application
concurrently on all computers in the cluster.
Computing Environments
1. Personal computing environment
In this, a single computer is used by a single person. Such a computer is called a personal computer.
All hardware devices are present at a single location and are packed as a single unit.
3. Client-server computing:
In client-server computing, the client requests a resource and the server provides that resource. A server may
serve multiple clients at the same time, while a client is in contact with only one server.
Both the client and server usually communicate via a computer network but sometimes they may reside in the
same system.
4. Peer-to-peer computing:
In this model, all nodes within the system are considered peers, and each may act as either a client or a
server, depending on whether it is requesting or providing a service.
→In this, services can be provided by several nodes distributed throughout the network.
→It is a distributed application architecture.
→A drawback of this model is that it is difficult to back up the data, as it is stored on different computer systems and
there is no central server.
→It is difficult to provide security.
2. Program execution:
The system must be able to load a program into memory and to run that program.
→The program must be able to end its execution, either normally or abnormally.
3. I/O operations:
→A running program may require I/O, which may involve a file or an I/O device.
→For efficiency and protection, users usually cannot control I/O devices directly, so the OS provides a means to do I/O.
4. File-system manipulation:
→Programs need to read and write files and directories
→They also need to create and delete them by name, search for a given file and list file information.
→Some programs include permissions management to allow or deny access to files or directories based on file
ownership.
→OS provides a variety of file systems to do all these tasks.
5. Communications:
→There are many situations in which communication is needed between processes executing on the same computer
or between processes executing on different systems.
→Communications may be implemented via shared memory or through message passing.
→Packets of information are moved between processes by OS.
6. Error detection:
→Errors may occur in the CPU and memory hardware in I/O devices and in the user program.
→For each type of error, the OS should take an appropriate action to ensure correct and consistent computing.
7. Resource allocation:
Many different types of resources are managed by the OS: CPU cycles, main memory, file storage, I/O
devices, printers, modems, USB storage devices, etc.
→When there are multiple users or multiple jobs running at the same time, resources must be allocated to each of
them by OS using scheduling routines.
8. Accounting:
The OS keeps track of which users use how much and what kinds of computer resources.
→These statistics are a valuable tool for researchers who wish to reconfigure the system to improve computing
services.
System calls
• System calls provide an interface between a process and the operating system. These calls are generally available
as routines written in the C and C++ languages.
• When a program makes a system call, the mode is switched from user mode to kernel mode. This is called a
mode switch.
• When a program in user mode requires access to RAM or a hardware resource, it must ask the kernel to provide
access to that resource. This is done via a system call.
• Generally system calls are made by user level programs in the following situations:
o Creating, opening, closing, and deleting files.
o Creating a connection in the network, sending and receiving packets.
o Requesting access to hardware device.
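These situations can be sketched with Python's os module, whose functions are thin wrappers over the underlying system calls (the file name and helper function below are illustrative, not part of any standard):

```python
import os
import tempfile

def file_syscall_demo() -> bytes:
    """Exercise the file-related system calls (open, write, read, close,
    unlink) through Python's thin wrappers over them."""
    path = os.path.join(tempfile.mkdtemp(), "demo.txt")
    fd = os.open(path, os.O_CREAT | os.O_WRONLY)  # open (create) system call
    os.write(fd, b"hello")                        # write system call
    os.close(fd)                                  # close system call
    fd = os.open(path, os.O_RDONLY)
    data = os.read(fd, 5)                         # read system call
    os.close(fd)
    os.unlink(path)                               # unlink (delete) system call
    return data
```

Each of these calls switches the CPU into kernel mode, performs the requested work, and returns to user mode.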
Benefits to the programmer of using an API rather than actual system calls:
1. Program portability:
A program designed using an API can be compiled and run on any system that supports the same API.
2. Ease of use: actual system calls are often more detailed and difficult to work with than an API.
Os structure
1. Monolithic systems
• In this, the entire OS runs as a single program in kernel mode, i.e., it is not divided into modules
• The OS is written as a collection of procedures, linked together into a single large executable binary
program
• Each procedure in the system is free to call any other one
• In this approach, every procedure is visible to every other procedure
• It is difficult to implement and maintain, and is limited by hardware functionality
• Used in MS-DOS and earlier versions of UNIX
2. Layered approach
• In this, the OS is broken into a number of layers. The bottom layer is the hardware & the highest layer is the
user interface
• Each layer consists of data structures & operations which are invoked by higher-level layers
• A layer does not need to know how these operations are implemented; it only needs to know what these
operations do.
• If layer 0, i.e., the hardware, is running correctly, then its services can be used by layer 1. Now layer 1 is
debugged, and if any bug is found, the error must be in that layer, because the layer below it has already been
debugged
• The design & implementation of system are simplified
• In this, construction & debugging is simple
• It simplifies system verification
• Adding new functionalities or removing is very easy
Limitations:
• It needs careful planning because a layer can use only lower-level layers
• This approach is less efficient than other types because parameters are passed from one layer to another,
and at each layer the parameters may be modified, data may need to be passed, and so on.
• Each layer adds overhead to the system call, so it takes a longer time to run
• It is not always possible to divide the functionality
• A large number of functionalities needs more layers, which leads to degradation in performance.
• No communication between non-adjacent layers
E.g., OS/2, Windows NT
3. Microkernels
• The kernel is broken down into separate processes. This approach removes all non-essential components from
the kernel and implements them as system-level and user-level programs, so that some run in kernel space and
some run in user space.
• The main function of the micro kernel is to provide a communication facility between them
• Communication is provided by message passing.
• Micro kernels provide minimal process and memory management.
• The advantage of this approach is that it provides flexibility and extensibility.
• Any new service can be added to user space without modification of kernel
• It also increases portability of OS from one machine to another
• It provides more security and reliability, since most services run at user level rather than in the kernel.
• If a service fails, the rest of OS remains untouched i.e. other servers can still work efficiently.
• Micro kernel can suffer from performance decreases due to increased system function overhead
Ex.: Tru64 UNIX, QNX real-time OS
4. Modules
• The best current methodology for OS design involves using object oriented programming techniques to create
a modular kernel
• In this, kernel has a set of core components & links in additional services either during boot time or during
runtime
• This strategy uses dynamically loadable modules
• In this any module can call any other module which is not possible in layered approach
• The primary module has only core functions & knowledge of how to load & communicate with other modules.
• It is more efficient because modules do not need to invoke message passing in order to communicate, as in
the microkernel approach.
Process concept
• Process is the fundamental concept of OS structure. A process is an instance of an executing program
• Each process has its own virtual CPU.
• Process is an active entity
• Two processes may be associated with the same program & considered as 2 separate execution sequences
• A process includes the current values of program counter & processor registers
• A process includes the following:
o Stack - contains temporary data such as function parameters, return addresses & local variables
o Data section - contains global and static variables
o Heap - memory that is allocated dynamically at run time
o Text - contains the program code; the current activity is represented by the value of the program counter &
the contents of the processor's registers
• Each process is given an integer identifier termed the process identifier (PID)
Process states
As a process executes, it changes state. The state of a process is defined as the current activity of the process.
1. New:
A newly created process is one which has not yet been loaded into main memory, though its associated process
control block (PCB) has been created
2. Ready:
A process in the ready state is waiting for an opportunity to be executed. All ready processes are placed in the
ready queue
3. Running:
A process is said to be in the running state if it is being executed by the processor
4. Blocked:
Here the process waits for the occurrence of an event; until that event completes, it cannot proceed further
5. Exit:
A process is said to be in the exit state if it is aborted or halted for some reason. An exited process must be freed
from the pool of executable processes by the OS.
Only one process can be running on any processor at any instant. Many processes may be ready and waiting.
Program Vs process
Program:
1. It is a passive entity.
2. It is a set of instructions written in a computer language.
3. It does nothing until it gets executed.
4. It has a longer life span because it is stored on disk until it is manually deleted.
5. Its resource requirement is only memory on disk to store the program as a file; it does not require any other resources.
Process:
1. It is an active entity.
2. It is a program in execution.
3. It is an instance of an executing program & performs a specific action.
4. It has a shorter and limited life span because it gets terminated after the completion of its task.
5. The resource requirements for a process are CPU, memory address space, disk, I/O, etc.
• Process ID : Unique identification for each process in the OS.
• Process state : Current state of the process, i.e., ready, running, waiting, etc.
• Pointer : A pointer to the parent process
• Program counter : A pointer to the address of the next instruction to be executed for this process
• CPU registers : The registers vary in number & type depending on the computer architecture; they include
accumulators, index registers, general-purpose registers, etc. Their contents must be saved when the process leaves
the running state so that it can continue correctly later
• Process privileges: This is required to allow/disallow access to system resources
• CPU scheduling information: This information includes a process priority, pointers to scheduling queues &
any other scheduling parameters
• Memory management information : This information includes the value of base & limit registers, page tables
or the segment tables, depending on the memory system used by OS
• Accounting information: This information includes the amount of CPU & real time used, time limits, account
numbers, job/process numbers & so on..
• I/O status information: This information includes the list of I/O devices allocated to the process, a list of
open files& so on..
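As a sketch, the PCB fields listed above might be grouped into a record like the following (a hypothetical structure for illustration only; real kernels use far richer layouts):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Hypothetical process control block grouping the fields above."""
    pid: int                                        # process ID
    state: str = "new"                              # process state
    program_counter: int = 0                        # next instruction address
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                               # CPU-scheduling information
    base: int = 0                                   # memory-management info
    limit: int = 0                                  # (base & limit registers)
    cpu_time_used: float = 0.0                      # accounting information
    open_files: list = field(default_factory=list)  # I/O status information
```

On a context switch, the kernel would fill in registers and program_counter before moving the process out of the running state.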
Thread
A single thread of control allows the process to perform only one task at a time
Many modern OSs have extended the process concept to allow a process to have multiple threads of execution &
thus to perform more than one task at a time.
On a system that supports threads, the PCB is expanded to include Information for each thread.
A thread is a lightweight process.
A thread shares the resources of its parent process.
Process scheduling
The objective of multiprogramming is to have some process running at all times to maximize CPU utilization.
The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each
program while it is running.
To meet these objectives, the process scheduler selects an available process for program execution on the CPU.
Scheduling queues
1. Job queue:
As processes enter the system, they are put into a job queue, which consists of all processes in the system
2. Ready queue:
The processes that are residing in main memory & are ready and waiting to execute are kept on a list called the
ready queue. It is stored as a linked list.
3.Device queue:
It contains the list of processes waiting for a particular I/O device. Each device has its own queue.
Queuing diagram:
• Each rectangular box in this represents a queue. The circles represent the resources that serve the queues, & the
arrows indicate the flow of processes in the system.
• A new process is initially put in the ready queue. It waits there until it is selected for execution or is
dispatched. Once the process is allocated the CPU and is executing, one of several events could occur.
o The process could issue an I/O request and then be placed in an I/O queue
o The process could create a new sub process & wait for the sub process termination
o The process could be removed forcibly from CPU , as a result of an interrupt, & be put back in the
ready queue.
• In the first two cases, the process eventually switches from the waiting state to the ready state & is then put back
in the ready queue.
• A process continues this cycle until it terminates, at which time it is removed from all queues & has its PCB &
resources deallocated.
Schedulers Types
In general, processes can be of 2 types: I/O bound and CPU bound.
1. I/O bound Process
It is one that spends more of its time doing I/O than doing computations. If all processes are I/O bound,
then the ready queue is almost empty.
2. CPU bound process
It generates I/O requests infrequently and it spends most of time doing computations.
• The distinction between these two schedulers lies in the frequency of execution. The short-term scheduler must
select a new process for the CPU frequently. A process may execute for only a few milliseconds. Because of
the short time between process executions, it must be fast
• The long-term scheduler executes much less frequently; the gap between two invocations may be minutes. It
controls the degree of multiprogramming, i.e., the number of processes in memory. It needs to be invoked only
when a process leaves the system.
• On some systems, the long-term scheduler may be absent or minimal, for example, in UNIX and Windows.
Context Switch
• Interrupts cause the OS to change a CPU from its current task and to run a kernel routine.
• Then system needs to save current context of the process running on the CPU so that it can restore that context
when its processing is done, essentially suspending the process and then resuming it.
• The context is represented in the PCB of the process; it includes the value of the CPU registers, the process state,
memory-management information, etc.
• Switching the CPU to another process requires performing a state save of the current process and a state
restore of a different process. This task is known as context Switch.
• When a context switch occurs, the kernel saves the content of the old process in its PCB and loads the saved
context of the new process scheduled to run.
• If the hardware provides multiple sets of registers, then a context switch simply requires changing the pointer to
the current register set.
• Context switch times are highly dependent on hardware support
Operations on processes
The processes in systems can execute concurrently and they may be created and deleted dynamically. Thus systems
provide a mechanism for process creation and termination.
1. Process Creation
• There are 4 situations in which a process may be created:
o when the system is initialized
o when a running process executes a process-creation system call
o when a user requests the creation of a new process
o when a batch job is initiated
• A process may create several new processes via system calls during execution. The creating process is called the
parent process and the new processes are called child processes. Each of these new processes may in turn
create other processes, forming a tree of processes
• When a new process is created, OS assigns a unique process identifier (PID) to it and inserts a new entry in
process table.
• A process will need certain resources like CPU time, memory, files, and I/O devices to perform its task.
• Child process may obtain its resources directly from OS or it may use / share resources of the parent process.
• In addition to physical and logical resources that a process obtains when it is created, initialization data may be
passed along by parent to child process.
• When a process creates a new process, either
• The parent continues to execute concurrently with its children, or
• The parent waits until some or all of its children have terminated.
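The parent-waits case can be sketched with Python's multiprocessing module (the task and function names are our own; on POSIX systems Process() is built on the fork() system call):

```python
import multiprocessing as mp

def child_task(q) -> None:
    # the child performs its task and reports the result back to the parent
    q.put(sum(range(10)))

def spawn_and_wait() -> int:
    q = mp.Queue()
    child = mp.Process(target=child_task, args=(q,))  # create the child process
    child.start()
    result = q.get()   # receive the child's result
    child.join()       # the parent waits until the child terminates
    return result
```

Dropping the join() call would instead give the concurrent-execution case, where parent and child run side by side.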
Process Termination
• A process terminates when it finishes executing its final statement and asks the OS to delete it by using the exit()
system call
• Then all the resources of the process, like physical and virtual memory, open files, and I/O buffers, are
deallocated by the OS
• A process can be terminated either by OS or by parent process.
• A parent may terminate a child process due to one of the following reasons.
• When the task given to the child is no longer required
• When the child has taken more resources than its limit
• When the parent is exiting; as a result all its children are deleted. This is called cascaded termination.
Interprocess communication
• Processes executing concurrently in the OS may be either independent processes or co-operating processes.
• Any process that does not share data with any other process is called an independent process. It cannot affect or
be affected by other processes.
• Any process that shares data with other processes is a co-operating process.
• Reasons for using cooperating processes
o Information sharing, for several users who need the same information
o Speed-up of computation, by dividing a task into subtasks and executing them in parallel
o Modularity, i.e., dividing system functions into separate processes or threads
o Convenience, i.e., even an individual user may work on many tasks at the same time. For example, a user
may be editing, printing and compiling in parallel
• Cooperating processes require an inter process communication (IPC) mechanism that will allow them to
exchange data and information.
• There are 2 models of inter process communication
1. Shared memory 2. Message passing
1. Shared memory
• In this model, a region of memory that is shared by cooperating processes is established
• Processes can then exchange information by reading and writing data to the shared region
• Shared region resides in the address space of the process creating the shared memory segment.
• It allows maximum speed and convenience of communication
• It is faster than message passing
• System calls are required only to establish shared memory regions
• Kernel assistance is not required after shared memory is established
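A minimal sketch of the shared-memory model, using multiprocessing.Value as the shared region (names and values are illustrative):

```python
import multiprocessing as mp

def writer(shared) -> None:
    # the cooperating process writes into the established shared region
    with shared.get_lock():
        shared.value += 100

def shared_memory_demo() -> int:
    shared = mp.Value("i", 42)   # kernel involved only to establish the region
    p = mp.Process(target=writer, args=(shared,))
    p.start()
    p.join()
    return shared.value          # the creator reads the writer's update directly
```

After the region is set up, both processes read and write it with ordinary memory accesses; no further system calls are needed for the data exchange itself.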
2. Message Passing
• In this model, communication takes place by means of messages exchanged between the cooperating
processes.
• It provides both communication and synchronization between processes
• It is useful where the communicating processes may reside on different computers connected by network.
• A message passing facility provides at least 2 operations. They are send message and receive message.
• It is useful for exchanging smaller amount of data
• It is easier to implement
• It is implemented using a larger number of system calls than shared memory
• It is more time-consuming, since it requires kernel intervention.
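The two operations, send and receive, can be sketched with a kernel-mediated queue (multiprocessing.Queue here; the message content is illustrative):

```python
import multiprocessing as mp

def sender(q) -> None:
    q.put("ping")   # send(message)

def message_passing_demo() -> str:
    q = mp.Queue()  # a kernel-mediated message channel
    p = mp.Process(target=sender, args=(q,))
    p.start()
    msg = q.get()   # receive(message): blocks until a message arrives
    p.join()
    return msg
```

Unlike the shared-memory sketch, every put and get here crosses into the kernel, which is why message passing costs more per exchange.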
UNIT-2
CPU Scheduling:
When a computer is multiprogrammed, two or more processes may be in the ready state,
competing for the CPU at the same time. The part of the OS that chooses which process to run next
is called the scheduler. The algorithm it uses is called the scheduling algorithm. The scheduler picks
the right process to run and makes efficient use of the CPU.
When to schedule:
→When a new process is created, a decision needs to be made whether to run the parent or the child process,
since both are in the ready state.
→When a process exits, i.e., can no longer run, some other process must be chosen from the set of
ready processes.
→When a process blocks on I/O or for some other reason, another process has to be selected to run.
→When an I/O interrupt occurs, a scheduling decision may be made (the process waiting for that I/O moves
from the blocked state to the ready state).
Scheduling schemes
Scheduling algorithms can be divided into 2 types with respect to how they deal with clock
interrupts because scheduling decision can be made at each clock interrupt.
i) Non-preemptive or Cooperative:
This algorithm picks a process to run and then just lets it run until it blocks (either on I/O or
waiting for another process) or until it voluntarily releases the CPU. It will not be forcibly suspended, so
no scheduling decisions are made during clock interrupts.
ii) Preemptive scheduling:
This algorithm picks a process and lets it run for a maximum of some fixed time. If it is still
running at the end of the time interval, it is suspended and the scheduler picks another process to run.
Dispatcher:
One of the components involved in the CPU-scheduling function is the dispatcher. The dispatcher is
the module that gives control of the CPU to the process selected by the short-term scheduler. This
function involves the following:
→ Switching context
→ Switching to user mode
→ Jumping to the proper location in the user program to restart that program
The dispatcher should be as fast as possible, since it is invoked during every process switch. The time it
takes for the dispatcher to stop one process and start another running is known as dispatch latency.
Scheduling criteria
Different CPU-scheduling algorithms have different properties. The choice of algorithm to use
in a particular situation is based on some criteria, which include the following:
a) CPU utilization:
The selected algorithm should keep the CPU as busy as possible.
b) Throughput:
The number of processes that are completed per time unit is called throughput. Using this, we
can measure the amount of work being done by the CPU.
c) Turnaround time:
The interval from the time of submission of a process to the time of completion is the
turnaround time.
→Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready
queue, executing on the CPU and doing I/O.
d) Waiting time:
The amount of time that a process spends waiting in the ready queue.
→Waiting time is the sum of the periods spent waiting in the ready queue.
e) Response time:
The amount of time taken from the submission of a request until the first response is produced.
Scheduling Algorithms
1. First-come, First-served scheduling:
→This is the simplest of all CPU-scheduling algorithms.
→The process that requests the CPU first is allocated the CPU first.
→The implementation is managed with FIFO queue.
→The code for FCFS scheduling is simple to write and understand.
→When the running process blocks, the first process in the queue is run next. When a blocked process
becomes ready, it is put at the end of the queue like a newly arrived process.
→This algorithm is non pre-emptive.
Example
Process   Arrival time   Burst time   Completion time   Turnaround time   Waiting time
P1        0              4            4                 4                 0
P2        1              3            7                 6                 3
P3        2              1            8                 6                 5
P4        3              2            10                7                 5
P5        4              5            15                11                6
Gantt chart
P1 P2 P3 P4 P5
0 4 7 8 10 15
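The table above can be reproduced with a short simulation (a sketch; each process is a (pid, arrival, burst) tuple, and the result maps pid to (completion, turnaround, waiting) times):

```python
def fcfs(processes):
    """First-come, first-served: run each process to completion in
    arrival order. processes: list of (pid, arrival, burst)."""
    result, clock = {}, 0
    for pid, arrival, burst in sorted(processes, key=lambda p: p[1]):
        clock = max(clock, arrival) + burst       # run to completion
        turnaround = clock - arrival              # TAT = completion - arrival
        result[pid] = (clock, turnaround, turnaround - burst)  # WT = TAT - burst
    return result
```

Running it on the five processes above yields exactly the completion, turnaround, and waiting times in the table.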
Drawback:
There is a convoy effect, as all the other processes wait for one big process to get off the CPU.
This effect results in lower CPU and device utilization.
2. Shortest-Job-First (SJF) scheduling:
→The CPU is allocated to the waiting process with the smallest next CPU burst; it is non-pre-emptive in this form.
Example
Process   Arrival time   Burst time   Completion time   Turnaround time   Waiting time
P1        1              7            8                 7                 0
P2        2              5            16                14                9
P3        3              1            9                 6                 5
P4        4              2            11                7                 5
P5        5              8            24                19                11
Gantt chart
idle P1 P3 P4 P2 P5
0 1 8 9 11 16 24
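The schedule in this Gantt chart (idle until t=1, then shortest burst first among the ready processes) can be reproduced with a sketch simulation:

```python
def sjf(processes):
    """Non-preemptive shortest-job-first.
    processes: list of (pid, arrival, burst); returns {pid: (ct, tat, wt)}."""
    remaining = sorted(processes, key=lambda p: p[1])
    result, clock = {}, 0
    while remaining:
        ready = [p for p in remaining if p[1] <= clock]
        if not ready:                      # CPU idles until the next arrival
            clock = min(p[1] for p in remaining)
            continue
        pid, arrival, burst = min(ready, key=lambda p: p[2])  # shortest burst
        clock += burst                     # run the chosen job to completion
        result[pid] = (clock, clock - arrival, clock - arrival - burst)
        remaining.remove((pid, arrival, burst))
    return result
```

Ties on burst time fall back to arrival order, matching FCFS tie-breaking.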
3. Shortest-Remaining-Time-First (pre-emptive SJF) scheduling:
Example
Process   Arrival time   Burst time   Completion time   Turnaround time   Waiting time
P1        0              7            19                19                12
P2        1              5            13                12                7
P3        2              3            6                 4                 1
P4        3              1            4                 1                 0
P5        4              2            9                 5                 3
P6        5              1            7                 2                 1
Gantt chart
P1 P2 P3 P4 P3 P3 P6 P5 P2 P1
0 1 2 3 4 5 6 7 9 13 19
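The pre-emptive schedule above can be reproduced by re-evaluating the remaining times at every time unit (a sketch; ties are broken by earlier arrival, which matches the Gantt chart):

```python
def srtf(processes):
    """Pre-emptive SJF: at each time unit run the ready process with the
    shortest remaining time. processes: list of (pid, arrival, burst)."""
    arrival = {pid: at for pid, at, _ in processes}
    burst = {pid: bt for pid, _, bt in processes}
    remaining = dict(burst)
    result, clock = {}, 0
    while remaining:
        ready = [pid for pid in remaining if arrival[pid] <= clock]
        if not ready:                 # CPU idles until the next arrival
            clock += 1
            continue
        pid = min(ready, key=lambda p: (remaining[p], arrival[p]))
        remaining[pid] -= 1           # run the chosen process for one time unit
        clock += 1
        if remaining[pid] == 0:       # finished: record its times
            del remaining[pid]
            tat = clock - arrival[pid]
            result[pid] = (clock, tat, tat - burst[pid])
    return result
```

The one-unit granularity is what allows a newly arrived shorter job to pre-empt the running one.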
4. Priority scheduling:
→A priority is associated with each process, and the CPU is allocated to the process with the highest
priority.
→Equal priority processes are scheduled in FCFS order.
→A larger CPU burst means a lower priority, and vice versa.
→Priorities are indicated by some fixed range of numbers.
→Priorities can be defined either internally or externally, i.e., statically or dynamically.
→It can be either pre-emptive or non pre-emptive.
→Major problem with this algorithm is indefinite blocking or starvation.
→A solution to the problem of indefinite blocking of low-priority processes is aging, i.e., gradually
increasing the priority of processes that wait for a long time
Example (non-pre-emptive; assume that a higher number represents a higher priority)
Process   Arrival time   Burst time   Priority   Completion time   Turnaround time   Waiting time
P1        0              4            4          4                 4                 0
P2        1              5            5          16                15                10
P3        2              1            7 (high)   5                 3                 2
P4        3              2            2          18                15                13
P5        4              3            1          21                17                14
P6        5              6            6          11                6                 0
Gantt chart
P1 P3 P6 P2 P4 P5
0 4 5 11 16 18 21
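The non-pre-emptive priority schedule above can be reproduced with a sketch simulation (each process is a (pid, arrival, burst, priority) tuple, larger number meaning higher priority):

```python
def priority_np(processes):
    """Non-preemptive priority scheduling (larger number = higher priority).
    processes: list of (pid, arrival, burst, priority)."""
    remaining = list(processes)
    result, clock = {}, 0
    while remaining:
        ready = [p for p in remaining if p[1] <= clock]
        if not ready:                           # idle until the next arrival
            clock = min(p[1] for p in remaining)
            continue
        proc = max(ready, key=lambda p: p[3])   # highest priority first
        pid, arrival, burst, _ = proc
        clock += burst                          # run to completion
        result[pid] = (clock, clock - arrival, clock - arrival - burst)
        remaining.remove(proc)
    return result
```

Note that P5, the lowest-priority process, always loses the selection until last; with a steady stream of higher-priority arrivals it would starve, which is the motivation for aging.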
GANTT CHART
P1 P2 P3 P1 P4 P5 P2 P4 P5 P2 P4 P5
0 2 4 6 7 9 11 13 15 17 18 19 20
5. Multilevel queue scheduling:
→In this, the ready queue is partitioned into several separate queues.
→For example, separate queues might be used for foreground and background processes. The
foreground queue might be scheduled by an RR algorithm, while the background queue is scheduled by an FCFS
algorithm.
→In addition, there must be scheduling among the queues, which is commonly implemented as fixed-
priority pre-emptive scheduling.
Process Synchronization
Race condition:
A situation where several processes access and manipulate the same data concurrently, and the
outcome of the execution depends on the particular order in which the accesses take place, is called a race
condition.
To guard against race conditions, we need to ensure that only one process at a time can
manipulate the common variable.
To make such a guarantee, we require that the processes be synchronized.
Peterson’s solution:
→A classic software-based solution to the critical-section problem is known as Peterson's solution.
→It is restricted to 2 processes that alternate execution between their critical sections and remainder
sections.
→Let us assume that Pi and Pj are the 2 processes.
→Peterson's solution requires the 2 processes to share two data items:
int turn;
boolean flag[2];
→The variable turn indicates whose turn it is to enter its critical section; the flag array is used to indicate whether
a process is ready to enter its critical section.
→To enter the critical section, process Pi first sets flag[i] to true and then sets turn to the value j.
→If both processes try to enter at the same time, turn will be set to both i and j, but one of these assignments will
be overwritten by the other; the final value of turn decides which process enters first.
→To prove property 1 (mutual exclusion), note that Pi enters its critical section only if either flag[j]=false or
turn=i. The two processes cannot execute their critical sections at the same time, since the value of turn can be
either i or j but not both.
→To prove properties 2 and 3 (progress and bounded waiting), we note that Pi can be prevented from entering
the critical section only if it is stuck in the while loop with the condition flag[j]=true and turn=j.
If Pj is not ready to enter its critical section, then flag[j]=false and Pi can enter its critical section.
Once Pj exits its critical section, it will reset flag[j] to false, allowing Pi to enter its critical section.
If Pj sets flag[j] back to true, it must also set turn to i; thus Pi will enter after at most one entry by Pj, which
provides progress and bounded waiting.
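The protocol above can be sketched with two Python threads protecting a shared counter. This relies on CPython's global interpreter lock giving effectively sequentially consistent memory ordering; on real hardware, compilers and CPUs reorder memory operations, so Peterson's algorithm would additionally need memory barriers:

```python
import threading
import time

flag = [False, False]   # flag[i]: process i is ready to enter its critical section
turn = 0                # whose turn it is to defer
counter = 0             # shared data protected by the critical section

def worker(i: int, n: int) -> None:
    global turn, counter
    j = 1 - i                       # index of the other process
    for _ in range(n):
        flag[i] = True              # announce intent to enter
        turn = j                    # give the other process the turn
        while flag[j] and turn == j:
            time.sleep(0)           # busy-wait, yielding so the other thread runs
        counter += 1                # critical section
        flag[i] = False             # exit section

def run(n: int = 10_000) -> int:
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(i, n)) for i in (0, 1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```

Since `counter += 1` is not atomic, interleaved increments without mutual exclusion could be lost; with Peterson's entry and exit sections, the final count is exact.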
Synchronization hardware:
A software-based solution such as Peterson's solution is not guaranteed to work on modern computer architectures.
→Another solution for the critical-section problem uses a tool called a lock. Race conditions are prevented by
acquiring a lock before a process enters its critical section and releasing the lock when it exits.
→In uniprocessor environments, the problem can be solved by disabling interrupts while a
shared variable is being modified. This is the approach taken by non-pre-emptive kernels.
→In multiprocessor environments, disabling interrupts is time consuming and decreases system
efficiency. Therefore, such systems provide special hardware instructions.
(i) One approach provides 2 instructions, TestAndSet() and Swap(), which are executed atomically.
→If 2 TestAndSet() instructions are executed simultaneously (each on a different CPU), they will be executed
sequentially in some arbitrary order. Mutual exclusion can then be implemented by declaring a Boolean variable
lock, initialized to false.
→The Swap() instruction operates on 2 words. Each process has a local Boolean variable key.
They do not satisfy bounded waiting requirement.
To satisfy all the requirements, use data structures
Boolean waiting[n];
Boolean lock;
→Process Pi can enter its critical section only if either waiting[i]=false or key=false. The value of
key can become false only if TestAndSet() is executed.
waiting[i] becomes false only when another process leaves its critical section; all other entries
remain true. This provides mutual exclusion.
→A process exits the critical section by either setting lock to false or setting waiting[j]=false. Both
allow a process that is waiting to enter its critical section to proceed.
→To prove that bounded waiting is met, note that when a process leaves its critical section, it scans
the array waiting[] in cyclic order (i+1, i+2, ..., n-1, 0, ..., i-1). It designates the first process in this
order that is in its entry section as the next one to enter the critical section.
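The cyclic scan described above can be sketched as a small helper (the contents of the waiting array here are made-up example data):

```python
# On leaving the critical section, process i scans waiting[] cyclically from
# i+1 and picks the first waiting process; if none is waiting, the lock is
# simply released (returned as None here).
def next_to_enter(i, waiting, n):
    j = (i + 1) % n
    while j != i and not waiting[j]:
        j = (j + 1) % n
    return j if waiting[j] else None

n = 5
waiting = [False, True, False, True, True]   # processes 1, 3 and 4 are waiting
print(next_to_enter(0, waiting, n))          # 1: first waiting process after 0
print(next_to_enter(3, waiting, n))          # 4: first waiting process after 3
print(next_to_enter(1, [False] * n, n))      # None: no one waits
```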
Semaphores
In 1965, Dijkstra proposed a new and very significant technique for managing concurrent processes,
using the value of a simple integer variable to synchronize the progress of interacting processes. This
integer variable is called a semaphore.
So it is basically a synchronizing tool, accessed only through 2 standard atomic operations,
wait and signal, designated by P(S) and V(S) respectively.
Semaphore is a synchronization tool.
Semaphore is a variable which can hold only a non-negative integer value, shared between all the
threads, with operations wait and signal.
A semaphore S is an integer variable that, apart from initialization, is accessed only through 2
standard atomic operations wait() and signal().
The wait() operation was originally termed P, signal () was termed as V.
→All modifications to the integer value of semaphores in the wait() and signal() operations must be
executed indivisibly. That is, when one process modifies semaphore value, no other process can
simultaneously modify that same semaphore value.
wait(S) decrements the value of its argument S as soon as doing so would leave S non-negative;
otherwise the caller waits.
signal(S) increments the value of its argument S, possibly unblocking a process waiting on the
semaphore.
→In addition, in the case of wait(S), the testing of the integer value of S (S<=0), as well as its possible
modification (S--), must be executed without interruption.
→Semaphores can be of 2 types: counting semaphores and binary semaphores.
The value of a counting semaphore can range over an unrestricted domain; a binary semaphore can
take only the values 0 and 1.
Properties
• It is simple and always have non-negative integer value.
• Works with many processes.
• Can have many different critical sections with different semaphores
• Each critical section has unique access semaphores
• Can permit multiple processes into the critical section at once, if desirable
• Less complicated
Types of semaphores
Semaphores are mainly of 2 types
1. Binary semaphore
It is a special form of semaphore used for implementing mutual exclusion, hence it is often called a
mutex. A binary semaphore is initialized to 1 and only takes the values 0 and 1 during execution of a
program.
It is used to deal with multiple processes.
2. Counting semaphores
These are used to implement bounded concurrency. It is used to control access to a given resource
consisting of finite number of instances. The semaphore is initialized to the number of resources
available.
Each process that wishes to use a resource performs a wait() operation on the semaphore (thereby
decrementing the count).
When a process releases a resource, it performs a signal() operation (incrementing the count).
When the count for the semaphore goes to 0, all resources are being used. After that, processes that
wish to use a resource will block until the count becomes greater than 0.
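A rough sketch of a counting semaphore guarding a finite resource pool, using Python's threading.Semaphore (the pool size of 3 and the 10 workers are arbitrary example values):

```python
import threading

pool = threading.Semaphore(3)   # initialized to the number of instances
guard = threading.Lock()        # protects the bookkeeping counters below
in_use = 0
peak = 0

def worker():
    global in_use, peak
    pool.acquire()              # wait(): decrement; block when count == 0
    with guard:
        in_use += 1
        peak = max(peak, in_use)
    with guard:                 # ... the resource would be used here ...
        in_use -= 1
    pool.release()              # signal(): increment; wake one blocked waiter

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)  # never exceeds 3, the number of resource instances
```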
Limitations of semaphores
Priority inversion is a big limitation of semaphores
Their use is not enforced, but is by convention only
With improper use, a process may block indefinitely. Such a situation is called deadlock.
Monitors
Monitor is one the ways to achieve process synchronization
Monitor is supported by programming languages to achieve mutual exclusion between processes
It is the collection of condition variables and procedures combined together in a special kind of module
or a package.
The processes running outside the monitor cannot access the internal variables of the monitor but can
call the procedures of the monitor.
Only one process at a time can execute code inside a monitor. A monitor is an abstract data type.
Syntax
Monitor Demo
{
Variables ;
Condition variables;
Procedure p1 {---------}
Procedure p2{----------}
}
Condition variables
Two different operations are performed on the condition variables of the monitor: wait() and
signal().
Procedures in the monitor help the OS to synchronize the processes.
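A monitor-like structure can be sketched in Python, where a Condition's lock gives the one-process-inside-the-monitor rule and wait()/notify() play the roles of the wait()/signal() operations above. The BoundedCounter class is a made-up example, not a standard construct:

```python
import threading

class BoundedCounter:
    def __init__(self, limit):
        self._cond = threading.Condition()   # monitor lock + condition variable
        self.count = 0
        self.limit = limit

    def increment(self):
        with self._cond:                     # enter the monitor
            while self.count >= self.limit:
                self._cond.wait()            # wait() on the condition variable
            self.count += 1

    def decrement(self):
        with self._cond:
            self.count -= 1
            self._cond.notify()              # signal(): wake one waiter

bc = BoundedCounter(limit=2)
bc.increment(); bc.increment()               # counter is now at its limit
t = threading.Thread(target=bc.increment)    # this call must wait
t.start()
bc.decrement()                               # signal lets the waiter proceed
t.join()
print(bc.count)  # 2
```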
Dead Locks
SYSTEM MODEL
A system consists of finite number of resources to be distributed among a number of competing
processes.
The resources are partitioned into several types. Memory space, CPU cycles, files and I/O devices are
examples of resource types.
A process must request a resource before using it and must release the resource after using it. A
process may request as many resources as it requires to carry out its designated tasks.
Under the normal mode of operation, a process may utilize a resource in request, use, release sequence.
A system table records whether each resource is free or allocated and, if a resource is allocated, to
which process. If a process requests a resource that is currently allocated to another process, it can be
added to a queue of processes waiting for that resource.
A set of processes is in dead locked state when every process in the set is waiting for an event that can
be caused only by another process in the set.
In a deadlock , processes never finish executing , and system resources are tied up, preventing other
jobs from starting.
Fig 1: no cycle, no deadlock. Fig 2 and Fig 3: resource-allocation graphs.
Deadlock Prevention
1. Mutual exclusion
An example of a non-sharable resource is a printer; an example of a sharable resource is a read-only
file. A process never needs to wait for a sharable resource.
We cannot prevent deadlocks by denying this condition, because some resources are intrinsically
non-sharable.
2. Hold and Wait
To avoid deadlock, ensure that the hold-and-wait condition never occurs in the system.
One protocol requires each process to request and be allocated all its resources before it begins
execution. But resource utilization may be low using this.
An alternative protocol requires that, before a process can request additional resources, it must
release the resources that are currently allocated to it. This can lead to starvation.
3. No preemption
Ensure that this condition does not hold by using the following protocol.
If a process is holding some resources and requests another resource that cannot be allocated,
then all the resources the process is holding are pre-empted. The process will be restarted only
when it can regain all its resources.
This cannot be applied to resources such as printers and tape drives.
4. Circular wait
To avoid circular wait, assign a unique integer number to each resource type.
Each process can request resources only in an increasing order of enumeration
Deadlock avoidance
Deadlock prevention results in low device utilization and reduced system throughput.
Deadlock-avoidance algorithms need every process to tell in advance the maximum number of
resources of each type that it may need. Based on all this information, we may decide whether a
process should wait for a resource or not, and thus avoid the chance of a circular wait.
a. Safe state
If a system is in a safe state, we can try to stay away from unsafe states and so avoid
deadlock. Deadlock cannot be ruled out in an unsafe state.
A system is in a safe state if it is not deadlocked and there is some order in which it can allocate
resources to each process, up to each process's maximum.
A safe sequence of processes and allocation of resources ensures a safe state.
These algorithms try not to allocate resources to a process if doing so would put the system in an
unsafe state. With this method resource utilization may be low, because whenever a process requests
a resource that is currently available, the system must decide whether the resource can be allocated
immediately or whether the process must wait. The request is granted only if the allocation leaves
the system in a safe state.
b. Resource-allocation-graph algorithm
An edge from a resource to a process (Rj → Pi) is an allocation edge.
A claim edge denotes that a request may be made in the future and is represented as a dashed line
(Pi → Rj).
Based on claim edges, we can see whether granting a request could create a cycle, and grant the
request only if the system will again be in a safe state.
The resource-allocation-graph algorithm is not useful if there are multiple instances of a resource.
c. Banker's algorithm
Banker's algorithm is a resource-allocation and deadlock-avoidance algorithm which tests every
request made by processes for resources.
It checks for a safe state: if, after granting the request, the system remains in a safe state, the request
is allowed; if there is no safe state, the request made by the process is not allowed.
Inputs to Banker’s Algorithm
Maximum need of resources by each process
Currently allocated resources by each process
Maximum free available resources in the system
A request will only be granted under the conditions below:
the request made by the process is less than or equal to the remaining need of that process
(request <= need), and
the request made by the process is less than or equal to the freely available resources in the system
(request <= available).
Example
Total resources in the system:
    A B C D
    6 5 7 6
Available system resources:
    A B C D
    3 1 1 2
Currently allocated resources:
      A B C D
  P1  1 2 2 1
  P2  1 0 3 3
  P3  1 2 1 0
Maximum resources:
      A B C D
  P1  3 3 2 2
  P2  1 2 3 4
  P3  1 3 5 0
Need (maximum - allocated):
      A B C D
  P1  2 1 0 1
  P2  0 2 0 1
  P3  0 1 4 0
EXAMPLE 2
Total resources in the system:
    A  B  C
    10 5  7
Allocation:
      A B C
  P0  0 1 0
  P1  2 0 0
  P2  3 0 2
  P3  2 1 1
  P4  0 0 2
Maximum:
      A B C
  P0  7 5 3
  P1  3 2 2
  P2  9 0 2
  P3  2 2 2
  P4  4 3 3
Available:
    A B C
    3 3 2
Need (maximum - allocation):
      A B C
  P0  7 4 3
  P1  1 2 2
  P2  6 0 0
  P3  0 1 1
  P4  4 3 1
The system is currently in a safe state: the sequence <P1, P3, P4, P2, P0> satisfies the safety
criteria.
Suppose P1 requests one additional instance of resource type A and 2 instances of type C, i.e.
request = (1, 0, 2). To decide whether this request can be granted, we pretend it has been allocated
and check whether the resulting state is still safe.
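The safety check and the request check for Example 2 can be sketched as follows (a minimal Banker's algorithm; the greedy scan may find a different safe sequence than the one quoted above, and any safe sequence suffices):

```python
# Safety algorithm: repeatedly look for a process whose need fits in work.
def is_safe(available, allocation, need):
    work, finish, order = available[:], [False] * len(allocation), []
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]  # release
                finish[i] = True
                order.append(i)
                progress = True
    return all(finish), order

available  = [3, 3, 2]
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
maximum    = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]

safe, order = is_safe(available, allocation, need)
print(safe, order)   # True; this greedy scan finds <P1, P3, P4, P0, P2>

# P1 requests (1, 0, 2): pretend to grant it, then re-check safety.
req, i = [1, 0, 2], 1
ok = (all(r <= n for r, n in zip(req, need[i])) and       # request <= need
      all(r <= a for r, a in zip(req, available)))        # request <= available
if ok:
    avail2 = [a - r for a, r in zip(available, req)]
    alloc2 = [row[:] for row in allocation]
    alloc2[i] = [a + r for a, r in zip(allocation[i], req)]
    need2 = [row[:] for row in need]
    need2[i] = [nd - r for nd, r in zip(need[i], req)]
    print(is_safe(avail2, alloc2, need2)[0])  # True: the request can be granted
```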
If pre-emption is used to recover from deadlock, then 3 issues need to be considered.
a. Selecting a victim
We must determine the order of pre-emption so as to minimize cost. Cost factors may include the
number of resources a deadlocked process is holding and the amount of time the process has
executed so far.
b. Rollback
Whenever a deadlock is detected, it is easy to see which resources are needed. To recover from the
deadlock, a process that owns a needed resource is rolled back to a point in time before it acquired
that resource, by restarting it from one of its earlier checkpoints.
c. Starvation
To avoid starvation, ensure that resources are not always pre-empted from the same process.
UNIT-3
MAIN MEMORY
➢ Main memory and registers built in the processor itself are the only storage that the CPU can
access directly.
➢ The machine instructions can take memory addresses as arguments, and none can take disk
addresses. If the data are not in memory, they must be moved there before the CPU can
operate on them.
➢ Registers built into the CPU are generally accessible within one cycle of the CPU clock.
➢ The data from main memory are accessible with many cycles of the CPU clock.
➢ The protection of memory space is accomplished by CPU hardware: we can provide this
protection by using 2 registers. The base register holds the smallest legal physical memory
address, and the limit register specifies the size of the range.
SWAPPING
➢ Swapping is a mechanism in which a process can be swapped temporarily out of main memory
to secondary storage, making that memory available to other processes. At some later time,
the system swaps the process back from secondary storage into main memory.
➢ In a multiprogramming environment with a round-robin scheduling algorithm, when each process
finishes its quantum, it will be swapped with another process.
➢ Swapping policy is also used for priority based scheduling algorithms if higher priority process
arrives, the memory manager can swap out the lower priority process and then load and
execute higher priority process.
➢ A process that is swapped out will be swapped back into the same memory space it occupied
previously. This restriction is dictated by the method of address binding.
➢ Swapping requires a backing store, commonly a fast disk. The system maintains a
ready queue consisting of all processes that are ready to run and whose memory images are on
the backing store or in memory.
➢ Whenever the CPU scheduler decides to execute a process, it calls the dispatcher. The dispatcher
checks whether the next process in the queue is in memory. If it is not, and if there is no
free memory region, the dispatcher swaps out a process currently in memory and swaps in the
desired process.
➢ The context-switch time in such a swapping system is high. The major part of the swap time is
transfer time, which is directly proportional to the amount of memory swapped. If we want to
swap a process, we must be sure that it is completely idle, in particular that it has no pending
I/O operations.
➢ Most modern operating systems no longer use this form of swapping, because it is too slow and
there are faster alternatives available, e.g. paging.
Memory protection
With relocation (base) and limit registers, each logical address must be less than the limit register. The
MMU maps logical address dynamically by adding the value in the relocation register. This mapped
address is sent to memory.
The main aim of memory protection is to prevent a process from accessing memory that has not been
allocated to it.
Memory allocation can be done using different strategies:
1) First fit
Allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes
or at the location where the previous first-fit search ended. It is fast.
2) Best fit
Allocate the smallest hole that is big enough. We must search the entire list, unless the list is ordered
by size.
3) Worst fit
Allocate the largest hole. We must search the entire list unless it is ordered by size.
First fit and best fit are better than worst fit in terms of storage utilization.
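The three strategies can be sketched over a list of free-hole sizes (the hole sizes and the request are made-up numbers):

```python
# Each function returns the index of the chosen hole, or None if none fits.
def first_fit(holes, req):
    for i, h in enumerate(holes):
        if h >= req:
            return i                            # first hole big enough
    return None

def best_fit(holes, req):
    fits = [(h, i) for i, h in enumerate(holes) if h >= req]
    return min(fits)[1] if fits else None       # smallest adequate hole

def worst_fit(holes, req):
    fits = [(h, i) for i, h in enumerate(holes) if h >= req]
    return max(fits)[1] if fits else None       # largest hole

holes = [100, 500, 200, 300, 600]               # free holes, in KB
print(first_fit(holes, 212), best_fit(holes, 212), worst_fit(holes, 212))
# 1 3 4: first fit takes the 500 KB hole, best fit 300 KB, worst fit 600 KB
```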
FRAGMENTATION
➢ Fragmentation occurs in a dynamic memory allocation system while using first fit or best fit
strategies.
➢ It occurs when most of the free blocks are too small to satisfy any request. It is generally
termed as inability to use the available memory.
➢ As processes are loaded and removed from memory, the free memory space is broken into
little pieces. As a result, enough free memory may exist to satisfy a request, but it is
non-contiguous: the memory is fragmented into a large number of small holes. This
phenomenon is known as external fragmentation.
➢ Memory fragmentation can be external as well as internal.
➢ At times the physical memory is broken into fixed-size blocks and memory is allocated in units
of the block size. The memory allocated to a process may then be slightly larger than the requested
memory. The difference between the allocated and the requested memory is known as internal
fragmentation: memory that is internal to a partition but is of no use.
➢ One solution to the problem of external fragmentation is compaction. The goal is to shuffle the
memory contents so as to place all free memory together in one large block. It is possible only
if relocation is dynamic.
Compaction algorithm produces one large hole of available memory.
It is expensive.
➢ Another solution is to permit the logical address space of processes to be non-contiguous thus
allowing a process to be allocated physical memory whenever such memory is available.
PAGING
➢ Paging is a memory-management scheme that permits the physical address space of a process
to be non-contiguous. Paging avoids external fragmentation and the need for compaction. It also
solves the problem of fitting memory chunks of varying sizes onto the backing store.
➢ The basic method for implementing paging involves breaking physical memory into fixed-sized
blocks called frames and breaking logical memory into blocks of the same size, called pages.
➢ The backing store is divided into fixed-sized blocks that are of the same size as the memory
frames.
➢ Paging is handled by hardware.
➢ Every address generated by the CPU is divided into 2 parts: a page number and a page offset.
The page number is used as an index into a page table. The page table contains the base
address of each page in physical memory.
➢ The page size is defined by the hardware. The size of a page is typically a power of 2, varying
between 512 bytes and 16 MB per page, depending on the computer architecture.
➢ If the size of the logical address space is 2^m and the page size is 2^n addressing units, then the
high-order m-n bits of a logical address designate the page number, and the n low-order bits
designate the page offset.
➢ When we use a paging scheme, we have no external fragmentation. Any frame can be allocated
to a process that needs it. But we may have some internal fragmentation.
➢ To reduce internal fragmentation, small page sizes are desirable. But smaller pages mean more
pages, and hence more page-table entries, adding overhead.
➢ When a process arrives in the system to be executed, its size, expressed in pages, is examined.
Each page of the process needs one frame. Thus, if the process requires ‘n’ pages, at least ‘n’
frames must be available in memory. If ‘n’ frames are available, they are allocated to a process.
The first page of the process is loaded into one of the allocated frames, and the frame number
is put into the page table and so on.
➢ The user program views memory as one single contiguous space, containing only this one
program. In fact, the program is scattered throughout physical memory.
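The page-number/offset split and the page-table lookup described above can be sketched as follows (n = 12, i.e. 4 KB pages, and the two-entry page table are assumed example values):

```python
n = 12                         # offset bits: page size is 2^12 = 4096 bytes
page_size = 1 << n

def split(addr):
    # high-order bits are the page number, low-order n bits the offset
    return addr >> n, addr & (page_size - 1)

def translate(addr, page_table):
    page, offset = split(addr)
    frame = page_table[page]              # page table: page -> frame
    return (frame << n) | offset          # frame base + offset

page_table = {0: 5, 1: 2}
print(split(0x1ABC))                      # (1, 0xABC)
print(hex(translate(0x1ABC, page_table))) # 0x2abc: page 1 maps to frame 2
```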
Frame Table
• The operating system manages the physical memory. So it must be aware of the allocation details
of physical memory-which frames are allocated, which frames are available, how many total
frames there are, and so on.
• This information is generally kept in data structure called a frame table.
• The frame table has one entry for each physical page frame, indicating whether the frame is free or
allocated and, if it is allocated, to which page of which process.
• The OS also maintains a copy of the page table for each process.
Paging example for 32 byte memory with 4 byte pages
Structure of the page table
1) Hierarchical paging
➢ One way is to use a 2-level paging algorithm, in which the page table itself is also paged.
➢ For example, consider a system with a 32-bit logical address space and a page size of 4 KB. A
logical address is divided into a page number consisting of 20 bits and a page offset consisting
of 12 bits. Because we page the page table, the page number is further divided into a 10-bit page
number and a 10-bit page offset. Thus, a logical address is as follows:
where p1 is an index into the outer page table and p2 is the displacement within the page of the
outer page table. Because address translation works from the outer page table inward, this scheme is
also known as a forward-mapped page table.
2. Hashed page tables
➢ A common approach for handling address spaces larger than 32 bits is to use a hashed page
table, with the hash value being the virtual page number.
➢ Each entry in the hash table contains a linked list of elements that hash to the same location.
➢ Each element consists of 3 fields: 1) the virtual page number, 2) the value of the mapped page
frame, and 3) a pointer to the next element in the linked list.
➢ The virtual page number in the virtual address is hashed into the hash table. The virtual page
number is compared with field 1 in the first element in the linked list.
➢ If there is a match, the corresponding page frame is used to form the desired physical address.
If there is no match, subsequent entries in the linked list are searched for a matching virtual
number.
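A toy hashed page table along these lines (the bucket count and contents are made-up; a real implementation would use a better hash than simple modulo):

```python
NBUCKETS = 8
table = [[] for _ in range(NBUCKETS)]    # each bucket chains (vpn, frame) pairs

def insert(vpn, frame):
    table[vpn % NBUCKETS].append((vpn, frame))

def lookup(vpn):
    for v, f in table[vpn % NBUCKETS]:   # walk the chain in this bucket
        if v == vpn:
            return f                     # match: frame forms the physical address
    return None                          # no match anywhere: page fault

insert(3, 42)
insert(11, 7)                            # 3 and 11 collide (11 % 8 == 3)
print(lookup(3), lookup(11), lookup(19)) # 42 7 None
```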
SEGMENTATION
➢ In operating systems, segmentation is a memory-management technique in which the memory
is divided into variable-size parts. Each part is known as a segment, which can be allocated to
a process.
➢ The details about each segment are stored in a table called the segment table. The segment table
is itself stored in one (or more) of the segments.
➢ The segment table contains mainly 2 pieces of information about each segment:
1. Base - the base address of the segment.
2. Limit - the length of the segment.
➢ Paging is a memory-management technique that is closer to the OS than to the user. It
divides the whole process into pages regardless of the fact that a process may have
related parts or functions which need to be loaded on the same page.
➢ The OS doesn't care about the user's view of the process. It may divide the same function into
different pages, and those pages may or may not be loaded into memory at the same time. This
decreases the efficiency of the system.
➢ It is better to have segmentation, which divides the process into segments. Each segment
contains the same type of content: for example, the main function can be included in one
segment and the library functions in another.
➢ The CPU generates a logical address which contains 2 parts:
1. Segment number 2. Offset
➢ The segment number is mapped to the segment table. The limit of the respective segment is
compared with the offset.
➢ If the offset is less than the limit then the address is valid otherwise it throws an error as the
address is invalid.
➢ In the case of valid address, the base address of the segment is added to the offset to get the
physical address of actual word in the main memory.
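The translation just described can be sketched as follows (the segment-table contents are made-up example values):

```python
# Segment table: segment number -> (base, limit).
def translate(seg, offset, seg_table):
    base, limit = seg_table[seg]
    if offset >= limit:
        raise RuntimeError("trap: offset beyond segment limit")  # invalid address
    return base + offset                  # valid: base + offset = physical address

seg_table = {0: (1400, 1000), 1: (6300, 400)}
print(translate(0, 53, seg_table))        # 1453
print(translate(1, 399, seg_table))       # 6699
```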
Advantages
1. No internal fragmentation.
2. The average segment size is larger than the average page size.
3. Less overhead.
4. It is easier to relocate segments than the entire address space.
5. The segment table is of smaller size compared to the page table in paging.
Disadvantages
1. It can suffer from external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized partitions.
3. Costly memory management algorithms.
Paging vs segmentation
Both use non-contiguous memory allocation.
Paging divides the program into fixed-size pages; segmentation divides it into variable-size
segments.
The OS is responsible for paging; the compiler is responsible for segmentation.
Paging is faster than segmentation, which is slower.
Paging is closer to the OS; segmentation is closer to the user.
Paging suffers from internal fragmentation and no external fragmentation; segmentation suffers
from external fragmentation and no internal fragmentation.
In paging, the logical address is divided into page number and page offset; in segmentation, into
segment number and segment offset.
A page table maintains the page information; a segment table maintains the segment information.
A page-table entry has the frame number and some flag bits representing details about the page; a
segment-table entry has the base address of the segment and some protection bits for the segment.
Page size is specified by the hardware; segment size is specified by the user.
VIRTUAL MEMORY
A computer can address more memory than the amount physically installed on the system. This extra
memory is called virtual memory.
➢ Virtual memory is a space where large programs can store themselves in form of pages while
their execution and only the required pages or portions of processes are loaded onto the main
memory.
➢ This technique is useful because a large virtual memory is provided for user programs even
when only a small physical memory is available.
➢ Most processes never need all their pages at once, for the following reasons:
- Error-handling code is not needed unless that specific error occurs, and some errors
are quite rare.
- Arrays are often over-sized for worst-case scenarios, and only a small fraction of an
array is actually used in practice.
- Certain features of certain programs are rarely used.
Benefits of having virtual memory
1. Large programs can be written, as the virtual space available is huge compared to physical memory.
2. Less I/O is required, leading to faster and easier swapping of processes.
3. More physical memory is available, as programs are stored in virtual memory and occupy very
little space in actual physical memory.
4. Each user program takes less physical memory, so more programs can be run at the same time,
increasing CPU utilization and throughput.
Demand paging
➢ Demand paging is a technique used in virtual-memory systems. With it, pages
are loaded only when they are demanded during program execution; pages that are never
accessed are thus never loaded into physical memory.
➢ A demand-paging system is similar to a paging system with swapping, where processes reside in
secondary memory (disk).
➢ It can be termed lazy swapping because the demand-paging technique never swaps a page
into memory unless that page will be needed; pager is a more accurate term than swapper here.
➢ Initially, the pager loads only the pages that the process will require immediately, instead of
swapping in the whole process. This decreases swap time and the amount of physical
memory needed.
➢ This scheme needs some hardware support to distinguish between valid and invalid pages. The
pages that have been moved into memory are marked as valid; the pages that are not in
memory are marked as invalid in the page table.
Page fault
If a process tries to access a page that was not brought into memory, a page fault occurs:
because that page is marked as invalid, the access causes a trap to the OS.
When a page-fault trap is triggered, the following steps are taken.
Steps for handing page fault
1. The memory address requested by the process is first checked, to verify whether the
reference was valid or invalid.
2. If the reference is invalid, the process is terminated. If it was valid but the page has not yet
been brought in, we page it in.
3. We find a free frame
4. We schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, we modify the internal table kept with the process and the page
table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the trap. The process can now access the page as
though it has always been in memory.
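The steps above can be modelled with a toy page-fault handler (all names here are illustrative, not OS APIs; the disk read of steps 4-5 is elided):

```python
# page_table holds only resident pages; valid_pages stands in for the
# process's legal address space.
def access(page, page_table, free_frames, valid_pages):
    if page in page_table:
        return page_table[page], False       # resident: no fault
    if page not in valid_pages:
        raise RuntimeError("invalid reference: terminate process")  # step 2
    frame = free_frames.pop()                # step 3: take a free frame
    # steps 4-5: the disk read would happen here; then update the page table
    page_table[page] = frame
    return frame, True                       # step 6: restart the access

pt, free = {}, [0, 1, 2]
print(access(7, pt, free, valid_pages={7, 8}))  # (2, True): first touch faults
print(access(7, pt, free, valid_pages={7, 8}))  # (2, False): now resident
```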
Page Replacement
➢ Page replacement is basic to demand paging.
➢ Page replacement takes the following approach:
if no frame is free, we find one that is not currently being used and free it. We can free a frame
by writing its contents to swap space and changing the page table to indicate that the page is no
longer in memory.
➢ We can then use the freed frame to hold the page for which the process faulted.
Page Replacement Algorithms
Reference string
We evaluate an algorithm by running it on a particular string of memory references and computing
the number of page faults.
We can generate a reference string artificially, e.g. using a random-number generator, or we can
trace a given system and record the address of each memory reference.
The latter choice produces a large amount of data, about which we note 2 things:
➢ For a given page size, we need to consider only the page number, not the entire address.
➢ If we have a reference to page p, then any immediately following references to page p will
never cause a page fault: page p will be in memory after the first reference.
Ex: sequence of addresses - 123, 215, 600, 1234, 76, 96.
If the page size is 100, then the reference string is 1, 2, 6, 12, 0, 0.
Belady’s anomaly
It is a phenomenon in which increasing the number of page frames results in an increase in the
number of page faults for certain memory-access patterns.
This phenomenon is most commonly experienced with the first-in first-out (FIFO) page-replacement
algorithm.
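Belady's anomaly can be demonstrated by simulating FIFO replacement on the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5:

```python
from collections import deque

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.popleft())  # evict the oldest page
            frames.add(p)
            queue.append(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10: more frames, MORE faults
```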
LRU Page Replacement
➢ This algorithm replaces the page that has not been used for the longest period of time.
➢ LRU associates with each page the time of that page's last use.
➢ When a page must be replaced, LRU chooses the page that has not been used for the longest
period of time.
➢ It requires substantial hardware assistance.
➢ It never suffers from Belady’s anomaly.
➢ It can be implemented using counters or stack.
➢ A drawback is that, to identify the page to replace, we must find the minimum time-stamp
value.
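An LRU simulation along the lines of the stack implementation mentioned above; on the same reference string that exposes Belady's anomaly under FIFO, adding a frame never increases the LRU fault count:

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    frames, faults = OrderedDict(), 0    # most recently used at the end
    for p in refs:
        if p in frames:
            frames.move_to_end(p)               # now most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)      # evict least recently used
            frames[p] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3), lru_faults(refs, 4))  # 10 8: no Belady's anomaly
```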
Allocation of Frames
➢ Virtual memory is implemented using demand paging. Demand paging necessitates the
development of page replacement algorithm and frame allocation algorithm.
➢ Frame allocation algorithms are used if you have multiple processes. It helps to decide how
many frames to allocate to each process.
➢ You cannot allocate more than the total number of available frames.
➢ At least a minimum number of frames should be allocated to each process.
➢ If fewer frames are allocated, the page-fault rate increases, slowing the process's execution.
➢ There should be enough frames to hold all the different pages that any single instruction
can reference.
1) Equal allocation
Split the m frames among the n processes equally. For instance, if there are 93 frames and 5
processes, each process will get 18 frames. The 3 leftover frames can be used as a free-frame
buffer pool.
Disadvantage
In systems with processes of varying sizes, it does not make much sense to give each process equal
frames.
Allocating a large number of frames to a small process leads to the wastage of a large number of
allocated but unused frames.
2) Proportional allocation
Frames are allocated to each process in proportion to the process's size.
For process pi of size si, the number of allocated frames is ai = (si / S) * m,
where m is the number of frames in the system
and S is the sum of the sizes of all the processes.
Ex:
Let m = 62 frames, the size of process 1 be 10 KB (10 pages), the size of process 2 be 127 KB
(127 pages), and the page size be 1 KB. Then S = 137.
Number of frames allocated to p1 = 10/137 * 62 ≈ 4.
Number of frames allocated to p2 = 127/137 * 62 ≈ 57.
In this way the processes share the available frames according to their needs rather than equally.
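The computation in the example can be sketched directly (integer division truncates, matching the approximations above):

```python
# Proportional allocation: process i of size s_i gets s_i * m // S frames.
def proportional(sizes, m):
    S = sum(sizes)
    return [s * m // S for s in sizes]

print(proportional([10, 127], 62))  # [4, 57]
```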
a) Global Replacement
It allows a process to select a replacement frame from the set of all frames, even if that frame is
currently allocated to some other process; that is, one process can take a frame from another.
➢ It does not hinder a process, results in greater system throughput, and is the more common
method.
➢ Problem with it is that a process cannot control its own page fault rate.
➢ The set of pages in memory for a process depends not only on the paging behaviour of that
process but also on the paging behaviour of other processes
➢ A higher priority process can select frames from low priority processes
b) Local Replacement
When a process needs a page which is not in memory, it can bring in the new page but may
allocate a frame for it only from its own set of allocated frames.
➢ In this, the set of pages in memory for a process is affected by the paging behaviour of only
that process.
➢ It can hinder a process by not making available to it other, less-used pages of memory.
Thrashing
A process that is spending more time paging than executing is said to be thrashing; i.e. this high
paging activity is called thrashing.
➢ When a process doesn't have enough frames to hold all the pages it needs for execution, it
keeps swapping pages in and out very frequently in order to keep executing. Sometimes, pages
which will be required in the near future have to be swapped out.
➢ Initially, when CPU utilization is low, the process-scheduling mechanism increases the
level of multiprogramming by loading new processes into memory at the same time.
➢ The new processes get started by taking frames from running processes if the system
implements a global replacement algorithm.
➢ It may cause more page faults and a longer queue for the paging device.
➢ As a result, CPU utilization drops even further, and the CPU scheduler tries to increase the
degree of multiprogramming.
➢ Thrashing has occurred.
➢ Page fault rate increases tremendously. So effective memory access time increases. No work
is getting done, because the processes are spending all their time paging.
➢ To increase CPU utilization and stop thrashing, we must decrease the degree of multi
programming.
➢ We can limit the effect of thrashing by using local replacement algorithm.
➢ If processes are thrashing, they will be in the queue for the paging device most of the time,
so the average service time for a page fault will increase.
➢ The effective access time will increase even for a process that is not thrashing.
➢ To prevent thrashing, we must provide a process with as many frames as it needs.
➢ The working set strategy, starts by looking at how many frames a process is actually using.
This approach defines the locality model of process execution.
Working-set model
➢ This approach defines the locality model of process execution. To prevent thrashing, a process
should be provided with as many frames as it needs.
The working-set model is one technique for determining how many frames a process is using.
➢ The locality model states that, as a process executes, it moves from locality to locality. A locality is
a set of pages that are actively used together. A program is composed of several different
localities, which may overlap.
➢ For example, when a function is called, it defines a new locality; when we exit the function, the
process leaves that locality.
➢ We can allocate enough frames to a process to accommodate its current locality. It will fault for
the pages in its locality until all these pages are in memory; then it will not fault again until it
changes localities.
➢ If we do not allocate enough frames to accommodate the size of the current locality, the process
will thrash, since it cannot keep in memory all the pages that it is actively using.
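The working set at time t can be sketched in a few lines of Python. This is only an illustration, not part of the text: the reference string and the window size delta are invented for the example; the working set is simply the set of distinct pages touched by the last delta references.

```python
def working_set(refs, t, delta):
    """Pages referenced in the most recent delta references, up to time t."""
    start = max(0, t - delta + 1)
    return set(refs[start:t + 1])

# A made-up reference string; with delta = 4 and t = 8, the window
# covers the references 4, 4, 4, 3, so the working set is {3, 4}.
refs = [1, 2, 1, 3, 4, 4, 4, 4, 3]
ws = working_set(refs, t=8, delta=4)  # -> {3, 4}
```

Choosing delta is the hard part in practice: too small a window misses a locality, too large a window spans several localities.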
Page fault frequency
• Thrashing has a high page-fault rate. When the rate is too high, we know that the process needs
more frames; if the page-fault rate is too low, the process may have too many frames.
• We can establish upper and lower bounds on the desired page-fault rate.
• If the actual page-fault rate exceeds the upper limit, we allocate the process another frame.
• If the page-fault rate falls below the lower limit, we remove a frame from the process. Thus we
can directly measure and control the page-fault rate to prevent thrashing.
• As with the working-set strategy, we may have to suspend a process: if the page-fault
rate increases and no free frames are available, we must select some process and suspend it.
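The bounds-based policy can be sketched as a tiny control step. This is a hypothetical sketch; the threshold values 0.02 and 0.10 are arbitrary illustrations, not values prescribed by any operating system:

```python
def pff_adjust(frames, fault_rate, lower=0.02, upper=0.10):
    """Adjust a process's frame allocation from its measured fault rate."""
    if fault_rate > upper:
        return frames + 1              # too many faults: grant another frame
    if fault_rate < lower:
        return max(1, frames - 1)      # too few faults: reclaim a frame
    return frames                      # within bounds: leave allocation alone
```

A real kernel would run such a step periodically per process, and suspend a process when the fault rate is high but no free frames remain.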
UNIT- 4
Disk scheduling
The disk bandwidth is the total number of bytes transferred, divided by the total time between the first
request for service and the completion of the last transfer.
• We can improve both the access time and the bandwidth by managing the order in which disk
I/O requests are serviced.
• Whenever a process needs I/O to or from the disk, it issues a system call to the OS. The request
specifies
o whether this operation is input or output
o what the disk address for the transfer is
o what the memory address for the transfer is
o what the number of sectors to be transferred is
• If the desired disk drive and controller are available, the request can be serviced immediately.
If the drive or controller is busy, any new requests for service will be placed in the queue of
pending requests for that drive.
• For a multiprogramming system with many processes the disk queue may often have several
pending requests.
• The main purpose of a disk scheduling algorithm is to select a disk request from the queue of I/O
requests and decide when this request will be processed.
• The goals of a disk scheduling algorithm are fairness, high throughput, and minimal head travel
(seek) time.
1. FCFS scheduling (First come first served)
➢ It is the simplest form of disk scheduling algorithm.
➢ It services the I/O requests in the order in which they arrive.
➢ There is no starvation: every request is serviced, so it is fair.
Disadvantages
➢ Does not optimize the seek time
➢ Does not provide fastest service
➢ May not provide best possible service.
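FCFS is easy to simulate: the total head movement is just the sum of the distances between consecutive requests. A minimal sketch in Python, using the request queue commonly seen in textbook examples with the head initially at cylinder 53:

```python
def fcfs_total_seek(requests, head):
    """Total cylinders traversed servicing requests in arrival order."""
    total = 0
    for cyl in requests:
        total += abs(cyl - head)   # move the head to the next request
        head = cyl
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
fcfs_total_seek(queue, 53)  # -> 640 cylinders of total head movement
```

The wild swings (e.g. 122 to 14 and back to 124) are exactly what the better algorithms below avoid.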
3. SCAN Scheduling
➢ In this algorithm, the disk arm starts at one end of the disk and moves toward the other end, servicing
requests as it reaches each cylinder, until it gets to the other end of the disk.
➢ At the other end, the direction of head movement is reversed, and servicing continues.
➢ The head continuously scans back and forth across the disk, so it is also called the elevator algorithm.
➢ It gives high throughput and a good average response time.
Disadvantage
Long waiting time for requests for locations just visited by disk arm.
5. Look scheduling
It is like the SCAN scheduling algorithm, except that the disk arm stops moving inwards (or
outwards) when no more requests exist in that direction.
This algorithm tries to overcome the overhead of the SCAN algorithm, which forces the disk arm to
move to the end in one direction regardless of whether any request exists in that direction or not.
6. C-Look scheduling
The C-LOOK algorithm is similar to the C-SCAN algorithm. In C-LOOK, the arm of the disk moves
outwards servicing requests until it reaches the highest request cylinder; then it jumps to the lowest
request cylinder without servicing any request, and again starts moving outwards servicing the
remaining requests.
➢ C-SCAN, by contrast, forces the disk arm to move to the last cylinder regardless of whether any
request is to be serviced on that cylinder or not.
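The difference between LOOK and C-LOOK is only in what happens after the last request in the current direction. A small sketch, assuming the arm is initially moving toward higher cylinder numbers (the queue and head position are the same illustrative values as above):

```python
def look_order(requests, head):
    """LOOK: service upward to the last request, then reverse direction."""
    lower = sorted(c for c in requests if c < head)
    upper = sorted(c for c in requests if c >= head)
    return upper + lower[::-1]          # reverse and sweep back down

def c_look_order(requests, head):
    """C-LOOK: after the highest request, jump straight to the lowest
    pending request and continue servicing in the same direction."""
    lower = sorted(c for c in requests if c < head)
    upper = sorted(c for c in requests if c >= head)
    return upper + lower                # jump, then sweep upward again

queue = [98, 183, 37, 122, 14, 124, 65, 67]
look_order(queue, 53)    # -> [65, 67, 98, 122, 124, 183, 37, 14]
c_look_order(queue, 53)  # -> [65, 67, 98, 122, 124, 183, 14, 37]
```

Note that both stop at cylinder 183, the highest pending request, rather than travelling on to the physical end of the disk as SCAN and C-SCAN would.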
RAID
➢ RAID is a family of disk organization techniques. RAID stands for Redundant Arrays of
Independent Disks.
Originally, the term RAID was defined as redundant arrays of inexpensive disks.
➢ RAID is a way of storing the same data in different places on multiple hard disks to protect
data in the case of a drive failure.
➢ RAID provides higher reliability and higher data-transfer rates. The main methods of storing
data in a RAID are mirroring and striping.
➢ If we store only one copy of the data, then each disk failure results in the loss of a significant amount
of data. The solution to this reliability problem is to introduce redundancy. The simplest way to
introduce redundancy is to duplicate every disk; this technique is called mirroring.
➢ To the OS, the array of disks can be presented as a single disk: with mirroring, two physical disks
appear as one logical disk. Every write is carried out on both disks. If one of the disks
fails, the data can be read from the other. Mirroring provides high reliability, but it is expensive.
➢ With multiple disks, we can also improve the transfer rate by striping data across the
disks.
➢ Striping means splitting the flow of data into bits or blocks of a certain size and writing them onto
multiple disks. Splitting the bits of each byte across multiple disks is called bit-level striping.
Every disk then participates in every read or write access.
➢ In block-level striping, the blocks of a file are striped across multiple disks; this is the most
common form. Striping yields parallelism and increased throughput, and reduces the response time
of large accesses, but by itself it gives less reliability.
➢ Parity is a storage technique that combines striping and checksum methods. A
parity function is calculated over the data blocks. If a drive fails, the missing block is
recalculated from the checksum, giving the RAID fault tolerance.
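The parity idea rests on the XOR operation: the parity block is the XOR of the data blocks, and any single lost block is the XOR of all the surviving blocks (including the parity). A small illustration in Python, with made-up two-byte "blocks":

```python
def xor_blocks(blocks):
    """XOR equal-sized blocks together, byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

d0, d1, d2 = b"\x0f\x00", b"\xf0\xff", b"\x55\xaa"   # three data blocks
p = xor_blocks([d0, d1, d2])                          # parity block
# If the drive holding d1 fails, XOR-ing the survivors rebuilds it:
recovered = xor_blocks([d0, d2, p])                   # equals d1
```

This works because XOR is its own inverse: d0 ^ d1 ^ d2 = p implies d1 = d0 ^ d2 ^ p.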
➢ RAID can be created using hardware or software. Software RAID is the cheapest and is part of
the OS.
RAID Level
Selecting a suitable RAID level for an application depends on the following:
Reliability: how many disk faults can the system tolerate?
Availability: what fraction of the total session time is the system up?
Performance: how good is the response time? How high is the throughput?
Capacity: how much useful capacity is available to the user?
Levels
There are different RAID levels, each optimized for a specific situation. RAID can be classified into
different levels based on its operation and the level of redundancy provided.
RAID Level 0 (Striping)
➢ Blocks are striped across disks. Instead of placing just one block on a disk at a time, we can
work with multiple disks in parallel.
➢ It does not provide any kind of redundancy: it has no mirroring and no parity.
➢ It is not fault tolerant, but provides high performance.
➢ It is easy to implement. Reliability is 0 (no disk failure can be tolerated).
➢ The entire disk space is used. The minimum number of disks is 2.
RAID Level 1 (Mirroring)
➢ It makes heavy use of mirroring: all data on one drive is duplicated to another drive.
➢ Striping and parity are not used. It provides reliability.
➢ Only half the space is used to store data; the other half is just a mirror of the already stored
data. You need at least 2 drives.
➢ It improves read speed (either copy can be read), though every write must go to both disks.
It is a simple technology.
➢ A software RAID 1 solution does not allow hot swapping of a failed drive: the failed drive can
only be replaced after powering down the computer it is attached to.
RAID Level 2
➢ It is also known as memory-style error-correcting-code (ECC) organization. Certain errors are
detected by using parity bits.
➢ ECC stores two or more extra bits and can reconstruct the data if a single bit is damaged.
➢ Bit-level striping is used. Level 2 is not used in practice.
➢ The minimum number of disks is 2.
RAID Level 3 (Bit-interleaved parity organization)
➢ An improved version of level 2.
➢ In level 2, the memory system detects the errors, whereas in level 3 the disk controllers detect
them.
➢ Only a single parity bit is used for error correction and detection, so it has reduced storage
overhead.
➢ Level 3 is less expensive, as it requires fewer extra disks.
➢ RAID level 3 supports fewer I/Os per second, since every disk has to participate in every I/O
request.
➢ Computing and writing the parity is expensive.
➢ It is best for single-user applications with long records.
➢ Data recovery is accomplished by calculating the XOR of the information on the other disks.
RAID Level 4 (Block-interleaved parity organization)
➢ It uses block-level striping instead of bit-level striping.
➢ A parity block on a separate disk protects the corresponding blocks from the N other disks.
➢ It allows recovery from at most one disk failure. If more than one disk fails, there is no way to
recover the data, so its reliability is 1.
➢ For a given set of N disks, one disk is reserved for storing the parity and (N-1) disks are available
for data storage.
➢ The data-transfer rate for each individual access is slower.
➢ Reading is much faster than writing, because reads can proceed at the combined rate of
all the disks used.
RAID Level 6 (P+Q redundancy scheme)
➢ It stores extra redundant information to guard against multiple disk failures. Double distributed
parity is used.
➢ It is a complex technology. Rebuilding an array in which one drive has failed can take a long
time.
➢ It can sustain 2 drive failures instead of 1.
➢ It uses block-level striping.
➢ The minimum number of disks is 4.
➢ Because of the overhead of parity, performance is lower for workloads with many write operations.
RAID 10
➢ It is a combination of RAID 1 and RAID 0.
➢ It combines redundancy with increased performance, and is suitable where both high
performance and safety are required.
➢ The minimum number of disks is 4. It is fault tolerant.
➢ Half of the storage capacity goes to mirroring.
Hot swapping
Hot swapping is a term used to describe the ability to replace a failed disk drive without rebooting the
machine. Hot swapping enables you to replace a component without interrupting the normal operation
of a server machine.
FILE:
• A file is a named collection of related information that is recorded on secondary storage such as
magnetic disk or tape.
• From the user's perspective, a file is the smallest allotment of logical secondary storage; that is, data
cannot be written to secondary storage unless they are within a file.
• Files represent programs and data.
• Files may be free-form, such as text files, or may be rigidly formatted.
• A file is a sequence of bits, bytes, lines or records whose meaning is defined by the file's creator and
user.
• Many different types of information may be stored in a file.
• A file has a certain defined structure, which depends on type.
• A text file is a sequence of characters organized into lines.
• A source file is a sequence of subroutines and functions, each of which is further organized as
declarations followed by executable statements.
• An object file is a sequence of bytes organized into blocks understandable by the system linker.
• An executable file is a series of code sections that the loader can bring into memory and execute.
• The information about all files kept in the directory structure, which also resides on secondary
storage.
FILE NAME:
• A file is named for the convenience of its users and is referred to by its name.
• A name is usually a string of characters.
• Some system may differentiate between uppercase, lowercase characters in names and some
systems do not.
• When a file is named, it becomes independent of the process, user and even the system that created
it.
• A name should usually begin with a letter.
FILE ATTRIBUTES:
File attributes vary from one operating system to another.
Name: the symbolic file name is the only information kept in human-readable form.
Identifier: a unique tag, usually a number, that identifies the file within the file system.
Type: needed for systems that support different types of files.
Location: a pointer to a device and to the location of the file on that device.
Size: the current size of the file.
Protection: access-control information that determines who can do reading, writing and executing.
Time, date and user identification: this information may be kept for creation, last modification and
last use. These data are useful for protection, security and usage monitoring.
FILE TYPES:
• File type refers to the ability of the operating system to distinguish different types of files, such as
text files, source files, binary files, etc.
• Many operating systems support many types of files.
• A common technique for implementing file types is to include the type as a part of a file name.
o Ex: first.java
• In this way both the user and the OS can tell from the name alone what type a file is and what
operations can be done on that file.
• Operating systems like MS-DOS and UNIX have the following types of files.
Ordinary files:
1. These are files that contain user information.
2. These may have text, databases or executable program.
3. The user can apply various operations on such files like add, modify, delete etc.
Directory files
These files contain list of file names and other information related to these files.
Special files:
1. These files are also known as device files.
2. These files represent physical device like disk, printer, terminal etc.
FILE STRUCTURE
File types can be used to indicate the internal structure of the file.
• Source and object files have structures that match the expectations of the programs that read them.
• Certain files must conform to a required structure that is understood by the OS, e.g. executable files.
• If the OS supports multiple file structures, the size of the OS is large, because it needs to contain the
code to support each of these file structures.
• Some operating systems support a minimal number of structures. All operating systems must support
at least one structure, the executable file, so that the system is able to load and run programs.
• It is useful for an operating system to support structures that will be used frequently, since that saves
the programmer effort.
• Too few structures make programming inconvenient, whereas too many overburden the operating
system and confuse the programmer.
• Internally, locating an offset within a file is done by defining a block size. All disk I/O is
performed in units of one block, and all blocks are the same size.
• The UNIX operating system defines all files to be simply streams of bytes. Each byte is
individually addressable by its offset from the beginning (or end) of the file.
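The mapping from a byte offset to a (block, offset-within-block) pair is simple integer arithmetic. A quick sketch, assuming a 512-byte block size chosen only for the illustration:

```python
BLOCK_SIZE = 512  # an assumed block size, in bytes

def locate(byte_offset):
    """Map a byte offset in a file to (logical block, offset in block)."""
    return byte_offset // BLOCK_SIZE, byte_offset % BLOCK_SIZE

locate(1300)  # -> (2, 276): byte 1300 lies 276 bytes into block 2
```

The file system reads the whole block containing the offset and then picks out the requested bytes from the in-memory copy.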
FILE OPERATIONS:
Files are used to store information for later use. There are many file operations that can
be performed by the operating system. Some of them are:
1. Creating a file:
Creating a file requires two steps: first, space must be found for the file in the file system; second,
an entry for the new file must be made in the directory.
2. Writing a file:
To write a file, we make a system call specifying both the name of the file and the information to be
written to the file. The system searches the directory to find the file's location, and keeps a write
pointer to the location in the file where the next write is to take place.
3. Reading a file:
To read a file, we use a system call that specifies the name of the file and where in memory the next
block of the file should be put.
The system searches the directory to find the file's location, and keeps a read pointer to the location
in the file where the next read is to take place.
4. Repositioning within the file:
The directory is searched for the appropriate entry, and the current-file-position pointer is repositioned
to a given value.
This need not involve any actual I/O.
This operation is also known as a file seek.
5. Deleting a file:
To delete a file, we search the directory for the named file. If it is found, we release all the file's space,
so that it can be reused by other files, and erase the directory entry.
6. Truncating a file:
This operation erases the contents of the file but keeps its attributes.
The file length is reset to zero and its file space is released.
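These operations map directly onto any ordinary file API. A short Python sketch (the file name and contents are invented for the demonstration) exercising create, write, read, repositioning (seek), truncate and delete:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

with open(path, "w") as f:    # create the file and write to it;
    f.write("hello world")    # the OS advances a write pointer as we go

with open(path, "r") as f:    # read: the OS keeps a read pointer
    f.seek(6)                 # reposition within the file (a "file seek")
    tail = f.read()           # reads from the new position: "world"

with open(path, "a") as f:
    f.truncate(5)             # truncate: contents cut, attributes kept

size = os.path.getsize(path)  # -> 5 bytes remain ("hello")
os.remove(path)               # delete: space released, directory entry erased
```

Note that the seek involves no disk I/O by itself; only the subsequent read does.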
2. Direct access or relative access:
• This method is useful for disks; it allows random access to any file block.
• The file is viewed as a numbered sequence of blocks or records.
• There are no restrictions on the order of reading or writing.
• It is useful for immediate access to large amounts of information; databases use this type of
access.
• The block number is a relative block number, i.e., an index relative to the beginning of the file.
• Thus the first relative block is 0, the next is 1, and so on.
Operations on directory:
A directory can be defined as a listing of the related files on the disk. The directory may store some or
all of the file attributes.
A directory can be viewed as a file which contains the metadata of a bunch of files.
1. Search a file:
We need to be able to search a directory structure to find the entry for a particular file
2. Create a file:
New files need to be created and added to the directory
3. Delete a file:
When a file is no longer needed, we want to be able to remove it from the directory
4. List a directory:
We need to be able to list the files in a directory and the contents of the directory entry for each file in
the list
5. Rename a file:
The name of a file represents its contents to its users.
We can change the name when the contents or use of the file changes.
Renaming a file may also allow its position within the directory structure to be changed.
6. Traverse the file system:
For reliability, it is good to save the contents and structure of the entire file system at regular intervals
by copying them to magnetic tape.
This technique provides a backup copy in case of system failure. If a file is no longer in use, it can be
copied to tape and the file's disk space released for reuse by another file.
Directory structure
There are many types of directory structure in OS. They are as follows:
1.Single level directory:
• This is the simplest directory structure.
• All files are contained in the same directory, i.e., there is only one directory.
• It is easy to support and understand.
• File names are limited in length.
• Keeping track of so many files is a daunting task.
• Since all files are in the same directory, they must have unique names.
• If two users call their data file 'test', the unique-name rule is violated.
• Even a single user may find it difficult to remember the names of all the files as the number of files
increases.
• Protection cannot be implemented for multiple users.
• There is no way to group files of the same kind.
• If the directory is big, searching for a file may take a long time.
• Shared files/directories can be implemented in several ways. One way is to create a new directory
entry called a link, i.e., a pointer to another file or subdirectory.
• Another approach to implementing shared files is to duplicate all information about them in both
sharing directories; this approach has a problem when a file is modified.
• An acyclic-graph directory is more flexible, but it is also more complex.
• A file may have multiple absolute path names; this becomes a problem when we traverse the entire
file system to find a file, or copy all files to backup storage.
• The deletion of a link need not affect the original file; only the link is removed.
• If the file entry itself is deleted, the space for the file is de-allocated, leaving dangling pointers.
• It needs garbage collection.
• Searching is expensive.
• The exact location in the VFS at which the newly mounted medium is registered is called the mount
point. When the mounting process is completed, the user can access files and directories on the
medium from there.
• The opposite process is called unmounting, in which the operating system cuts off all user
access to files and directories on the mount point, writes the remaining queue of user data to the
storage device, and refreshes the file system metadata. It then relinquishes access to the device,
making the storage safe for removal.
• Normally, when the computer is shutting down, every mounted storage device undergoes an
unmounting process.
• The basic idea behind mounting file systems is to combine multiple file systems into one large tree
structure.
• Unmounting of certain devices like CDs and DVDs is done automatically once the drive is ejected.
FILE SHARING
• File sharing is desirable for users who want to collaborate and to reduce the effort required to
achieve a computing goal.
• To implement sharing in a multiuser operating system, the system must maintain more file and
directory attributes, such as owner, user and group.
• The owner is the user who can change attributes and grant access, and who has the most control
over the file. The group defines a subset of users who can share access to the file.
• Networking allows the sharing of resources around the world.
• The first implemented method involves transferring files between machines via programs like FTP.
• FTP is used for both anonymous and authenticated access. Anonymous access allows a user to
transfer files without having an account on the remote system.
• The second major method uses a distributed file system (DFS), in which remote directories are
visible from a local machine. It involves tighter integration between the machine that is accessing
the remote files and the machine providing them.
• The third method, the World Wide Web (WWW), is a reversion to the first. A browser is needed to
gain access to the remote files, and separate operations are used to transfer files. It uses anonymous
file exchange.
Consistency semantics specify how multiple users of a system are to access a shared file
simultaneously. They deal with the consistency between the views of a shared file on a networked
system: when one user changes the file, when do the others see the change?
• In AFS, writes to an open file are not immediately visible to other users. Once a file is closed, the
changes made to it are visible only to users who open the file at a later time.
Protection
• Protection is needed to keep information stored in the system safe from physical damage (the issue
of reliability) and improper access.
• File systems can be damaged by hardware problems, power failures, head crashes, dirt, and
temperature extremes.
• Files may be deleted accidentally. Bugs in the file-system software can also cause file contents to
be lost. To overcome this problem, many systems provide duplicate copies of files, i.e.,
reliability.
• The protection mechanism provides controlled access by limiting the types of file access that can be
made. Several different types of operations may be controlled, such as read, write, execute, append,
delete, list, rename, copy and edit.
• The common approach to the protection problem is to make access dependent on the identity of the
user.
• This is done by maintaining an access-control list (ACL) specifying user names and the types of
access allowed for each user.
• Most systems recognize 3 classes of users in connection with each file: owner (the user who
created the file), group (a set of users who are sharing the file), and universe (all other users in the
system).
• Another approach to the protection problem is to associate a password with each file, but the
number of passwords that a user has to remember may become large.
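UNIX-style systems encode the owner/group/universe permissions as three octal digits of read-write-execute bits (e.g. mode 644). A small sketch decoding such a mode; the function name is invented for the illustration:

```python
R, W, X = 4, 2, 1  # read, write, execute permission bits

def mode_string(mode):
    """Render an octal mode such as 0o644 as rwx triples for
    owner, group and universe (the format shown by `ls -l`)."""
    out = []
    for shift in (6, 3, 0):             # owner, group, universe fields
        bits = (mode >> shift) & 0o7
        out.append(("r" if bits & R else "-") +
                   ("w" if bits & W else "-") +
                   ("x" if bits & X else "-"))
    return "".join(out)

mode_string(0o644)  # -> "rw-r--r--": owner may write, others only read
```

This three-class scheme is a compressed form of an ACL: instead of listing every user, it classifies each user as owner, group member, or everyone else.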
In-memory structures
The in-memory information is used for both file-system management and performance improvement
via caching. The data are loaded at mount time, updated during file-system operations, and discarded
at unmount. Several types of structures may be included:
• An in-memory mount table contains information about each mounted volume.
• An in-memory directory-structure cache holds the directory information of recently accessed
directories.
• The system-wide open-file table contains a copy of the FCB of each open file, as well as other
information.
• The per-process open-file table contains a pointer to the appropriate entry in the system-wide
open-file table, as well as other information.
• Buffers hold file-system blocks while they are being read from disk or written to disk.
PARTITIONS AND MOUNTING
• The layout of a disk can have many variations, depending on the OS. A disk can be sliced into
multiple partitions, or a volume can span multiple partitions on multiple disks.
• Each partition can be either raw, containing no file system, or cooked, containing a file system.
Raw disk is used where no file system is appropriate.
• Raw disk can hold information needed by disk RAID systems. Boot information can be stored in a
separate partition having its own format, because at boot time the system does not have the file-
system code loaded; so the boot information is loaded as an image into memory.
• The boot loader is able to find and load the kernel and start executing it.
• A disk can have multiple partitions, each containing a different type of file system and a different
OS.
• The root partition, which contains the OS kernel and sometimes other system files, is mounted at
boot time. Other volumes can be automatically mounted at boot or manually mounted later,
depending on the OS.
• As part of a successful mount operation, the OS verifies that the device contains a valid file system.
Finally, the OS notes in its in-memory mount table that the file system is mounted, along with the
type of the file system.
DIRECTORY IMPLEMENTATION
The selection of directory allocation and directory management algorithms affects the performance
of the file system.
These algorithms are classified according to the data structure they are using. There are mainly 2
algorithms:
1. Linear list:
In this algorithm, all the files in a directory are maintained as a singly linked list. Each file contains
the pointers to the data blocks which are assigned to it and to the next file in the directory.
Disadvantages
1. When a new file is created, the entire list must be checked to see whether the new file name matches
an existing file name. If it does not, the file can be created at the beginning or at the end of the list.
Searching for a unique name is therefore a big concern, because traversing the whole list takes time.
2. The list needs to be traversed for every operation (creation, deletion, updating, etc.) on the files,
so the system becomes inefficient.
2. Hash table:
• To overcome the drawbacks of the linked-list implementation of directories, there is an
alternative approach: the hash table.
• This approach uses a hash table along with the linked list.
• A key-value pair for each file in the directory is generated and stored in the hash table. The key
is determined by applying a hash function to the file name, while the value points to the
corresponding file entry in the directory.
• Searching now becomes efficient, because the entire list need not be scanned on every operation.
• Only the hash-table entry is checked using the key, and if an entry is found, the corresponding file
is fetched using the value.
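A Python dict is itself a hash table, so the scheme can be sketched directly. The entry contents here are invented; a real directory entry would hold the file's FCB or block pointers:

```python
class HashDirectory:
    """Directory implemented as a hash table: file name -> entry."""

    def __init__(self):
        self.table = {}                # dict: hashed name -> entry

    def create(self, name, entry):
        if name in self.table:         # uniqueness check without a full scan
            raise FileExistsError(name)
        self.table[name] = entry

    def search(self, name):
        return self.table.get(name)    # expected O(1) lookup

    def delete(self, name):
        self.table.pop(name, None)

d = HashDirectory()
d.create("first.java", {"start_block": 19, "length": 6})
d.search("first.java")  # -> the stored entry, without scanning any list
```

Contrast this with the linear list, where both the uniqueness check and every lookup require a traversal.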
ALLOCATION METHODS
Many files are stored on the same disk. The allocation methods define how the files are stored in the
disk blocks, so as to provide efficient disk-space utilization and fast access to the file blocks.
1. Contiguous allocation:
• In this method, each file occupies a contiguous set of blocks on the disk. If the file is n blocks long
and starts at location b, then it occupies blocks b, b+1, ..., b+n-1.
• This means that the starting block and the length of the file determine the blocks occupied by the
file.
• The directory entry for each file in this method contains:
o the address of the starting block
o the length of the space allocated for the file
E.g., the file 'mail' in the figure starts at block 19 with length 6 blocks; therefore it occupies
blocks 19, 20, 21, 22, 23 and 24.
Advantages:
• Both sequential and direct access are supported. For direct access, the address of the k-th
block of a file that starts at block b can easily be obtained as b+k.
• This is extremely fast, since the number of seeks is minimal because of the contiguous allocation
of the blocks.
Disadvantages:
• This method suffers from both internal and external fragmentation, which makes it inefficient in
terms of memory utilization.
• Increasing the file size is difficult, because it depends on the availability of contiguous memory at
a particular instant.
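The arithmetic is worth making explicit, using the 'mail' file example above (start block 19, length 6):

```python
def contiguous_blocks(start, length):
    """A file starting at block b with length n occupies b .. b+n-1."""
    return list(range(start, start + length))

def kth_block(start, k):
    """Direct access: the k-th block (counting from 0) is just b + k."""
    return start + k

contiguous_blocks(19, 6)  # -> [19, 20, 21, 22, 23, 24]
kth_block(19, 3)          # -> 22, computed with no traversal at all
```

The constant-time `kth_block` computation is exactly why contiguous allocation supports direct access so cheaply.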
2. Linked allocation:
• In this scheme, each file is a linked list of disk blocks, which need not be contiguous. The disk
blocks can be scattered anywhere on the disk. The directory entry contains pointers to the first and
last blocks of the file. Each block contains a pointer to the next block occupied by the file.
• Thus, if each block is 512 bytes in size and a disk address requires 4 bytes, then the user sees
blocks of 508 bytes.
Advantages:
• This is very flexible in terms of file size: the file size can be increased easily, since the system does
not have to look for a contiguous chunk of memory.
• This method does not suffer from external fragmentation.
• This makes it relatively better in terms of memory utilization. It supports sequential access.
Disadvantages:
• Because the file blocks are distributed randomly on the disk, a large number of seeks is needed to
access every block individually. This makes it slower.
• It does not support direct (random) access: we cannot directly access the blocks of a file. Block
k of a file can be accessed only by traversing k blocks sequentially from the starting block of the
file via the block pointers.
• The pointers require extra space. The method also suffers from reliability problems: a file is lost
if a pointer is lost or damaged because of a software or hardware failure.
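The O(k) cost of reaching block k can be seen in a sketch, where a dictionary stands in for the per-block next pointers stored on disk (the block numbers are invented; -1 marks the end of the file):

```python
def kth_block_linked(next_ptr, start, k):
    """Follow k next-pointers from the first block: one disk read per hop."""
    block = start
    for _ in range(k):
        block = next_ptr[block]
    return block

# next_ptr[b] is the block that follows b in this file; -1 means end of file.
next_ptr = {9: 16, 16: 1, 1: 10, 10: 25, 25: -1}
kth_block_linked(next_ptr, start=9, k=3)  # -> block 10, after 3 pointer hops
```

Each hop in the loop would be a separate disk read in a real file system, which is why linked allocation is slow for direct access.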
3. Indexed allocation:
• In this scheme, a special block known as the index block contains the pointers to all the blocks
occupied by a file.
• Each file has its own index block. The i-th entry in the index block contains the disk address of the
i-th file block.
• The directory entry contains the address of the index block.
Advantages:
This supports direct access to the blocks occupied by the file and therefore provides fast access to the
file blocks.
It overcomes the problem of external fragmentation.
Disadvantages:
• The pointer overhead for indexed allocation is greater than that of linked allocation.
• For very small files, say files that span only 2 or 3 blocks, indexed allocation keeps one
entire block (the index block) for the pointers, which is inefficient in terms of memory utilization.
• In linked allocation, by contrast, we lose the space of only 1 pointer per block.
• For files that are very large, a single index block may not be able to hold all the pointers.
• It is more complex.
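By contrast with linked allocation, indexed allocation reaches any block with a single lookup once the index block is in memory (the block numbers below are again invented for the illustration):

```python
def kth_block_indexed(index_block, k):
    """Direct access: entry k of the index block holds the k-th file block."""
    return index_block[k]

index_block = [9, 16, 1, 10, 25]   # all of one file's pointers in one block
kth_block_indexed(index_block, 3)  # -> block 10, with no traversal
```

One disk read fetches the index block, after which any data block of the file is one more read away.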
2. Linked list:
• In this approach, the free disk blocks are linked together, i.e., each free block contains a pointer to
the next free block.
• The block number of the very first free disk block is stored at a separate location on disk and is also
cached in memory.
Drawback:
It is not efficient to traverse the list: we must read each block, which requires substantial I/O
time.
3. Grouping:
• This approach stores the addresses of free blocks in the first free block. The first free block stores
the addresses of, say, n free blocks.
• Of these n blocks, the first (n-1) are actually free, and the last block contains the addresses
of the next n free blocks.
Advantage:
The addresses of a large number of free blocks can be found easily
4. Counting:
This approach stores the address of the first free disk block and the number n of free contiguous disk
blocks that follow it.
Every entry in the list contains:
1) the address of the first free disk block
2) a number n, i.e., the count
For example, an entry of the free-space list would be (address of block 5, 2) if 2 contiguous free
blocks follow block 5.
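Building the counting representation from a sorted list of free blocks is a one-pass scan. The free-block numbers below are invented for the illustration:

```python
def to_counting(free_blocks):
    """Collapse a sorted list of free blocks into (first block, count) runs."""
    runs = []
    for b in free_blocks:
        if runs and b == runs[-1][0] + runs[-1][1]:
            first, count = runs[-1]
            runs[-1] = (first, count + 1)  # b extends the current run
        else:
            runs.append((b, 1))            # b starts a new run
    return runs

to_counting([2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, 27])
# -> [(2, 4), (8, 6), (17, 2), (25, 3)]
```

Fifteen free blocks collapse to four entries, which is why counting works well when free space tends to be contiguous.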
UNIX OPERATING SYSTEM
The UNIX operating system has for many years formed the backbone of the Internet, especially for
large servers and most major university campuses. However, a free version of UNIX called Linux has
been making significant gains against the Macintosh and Microsoft Windows 95/98/NT environments
so often associated with personal computers. UNIX commands can often be grouped together to make
even more powerful commands, with capabilities known as I/O redirection ( < for taking input from a
file and > for sending output to a file ) and piping, using | to feed the output of one command as input
to the next. Please investigate the manuals in the lab for more examples than the few offered here.
Unix Commands
Command              Example              Description
grep <str> <files>   grep "bad word" *    Find which files contain a certain word
chmod <opt> <file>   chmod 644 *.html     Change file permissions to read-only
                     chmod 755 file.exe   Change file permissions to executable
cal <mo> <yr>        cal 9 2000           Print calendar for September 2000
Use the 'sed' command - it looks for a pattern and then you can 'delete' the line by
preventing the input line from going to the output (sed is a filter program). For example,
sed -e '/word/d' file1 file2 file3 > file.out
will remove any line containing the word 'word' from the three files by not copying it to the
output file 'file.out'
What is vi?
The default editor that comes with the UNIX operating system is called vi (visual editor). [Alternate
editors for UNIX environments include pico and emacs, a product of GNU.]
The UNIX vi editor is a full screen editor and has two modes of operation:
1. Command mode: commands which cause action to be taken on the file, and
2. Insert mode: in which entered text is inserted into the file.
In the command mode, every character typed is a command that does something to the text file being
edited; a character typed in the command mode may even cause the vi editor to enter the insert
mode. In the insert mode, every character typed is added to the text in the file; pressing the <Esc>
(Escape) key turns off the Insert mode.
While there are a number of vi commands, just a handful of these is usually sufficient for beginning
vi users. To assist such users, this section contains a sampling of basic vi commands. The most
basic and useful commands are marked with an asterisk (* or star) in the tables below. With practice,
these commands should become automatic.
NOTE: Both UNIX and vi are case-sensitive. Be sure not to use a capital letter in place of a lowercase
letter; the results will not be what you expect.
To Start vi
To use vi on a file, type in vi filename. If the file named filename exists, then the first page (or
screen) of the file will be displayed; if the file does not exist, then an empty file and screen are created
into which you may enter text.
* vi filename edit filename starting at line 1
vi -r filename recover filename that was being edited when system crashed
To Exit vi
Usually the new or modified file is saved when you leave vi. However, it is also possible to quit vi
without saving the file.
Note: The cursor moves to bottom of screen whenever a colon (:) is typed. This type of command is
completed by hitting the <Return> (or <Enter>) key.
* :x<Return>   quit vi, writing out modified file to file named in original invocation
:wq<Return>    quit vi, writing out modified file to file named in original invocation
:q<Return>     quit (or exit) vi
* :q!<Return>  quit vi even though latest changes have not been saved for this vi call
Syntax for the for … do … done loop statement:
for name [ in word... ; ]
do
command_list
done
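A small example under this syntax, iterating over a fixed word list (the list of day names is illustrative):

```shell
# $day takes each value from the word list in turn
for day in Mon Tue Wed
do
    echo "Day: $day"
done
```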
PRACTICAL QUESTIONS
Q2 Write shell program using 'case'
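No solution for Q2 is shown in these notes; a minimal sketch using case (the prompt text and menu values are illustrative) could be:

```shell
echo "Enter a number between 1 and 3"
read num
# match $num against each pattern; *) is the default (fall-through) arm
case $num in
    1) echo "You chose one" ;;
    2) echo "You chose two" ;;
    3) echo "You chose three" ;;
    *) echo "Invalid choice" ;;
esac
```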
Q3(a) Write shell program using while
# use of while loop
echo "Using while loop..."
j=1
while [ $j -le 10 ]
do
echo -n "$j "
j=$(( j + 1 )) # increase number by 1
done
echo ""

Q3(b) Write shell program using for
# use of for loop
echo "Using for loop"
for (( i=1; i<=10; i++ ))
do
echo -n "$i "
done
echo ""
4(a)write shell script that takes two integers as its arguments and compute the value of the first number
raised to the power of 2nd number
no=$1
power=$2
counter=0
ans=1
while [ $power -ne $counter ]
do
ans=`expr $ans \* $no`
counter=`expr $counter + 1`
done
echo "$no raised to the power $power is $ans"
4(b) Write a shell script that takes a command-line argument and reports on whether it is a directory, a file, or
something else.
PASSED=$1
if [ -d "${PASSED}" ] ; then
echo "$PASSED is a directory";
else
if [ -f "${PASSED}" ]; then
echo "${PASSED} is a file";
else
echo "${PASSED} is not valid";
exit 1
fi
fi
Q5 a)Write a Shell script that accepts a filename, starting and ending line numbers as
arguments and displays all the lines between the given line numbers.
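Only the sample run is shown in the notes; a minimal script producing it, using sed -n to print the requested line range (the interactive prompts mirror the sample output below), could be:

```shell
echo "enter the filename"
read fname
echo "enter the starting line number"
read start
echo "enter the ending line number"
read end
# -n suppresses sed's default output; "start,end p" prints only that range
sed -n "${start},${end}p" "$fname"
```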
output:
enter the filename
sales.dat
enter the starting line number
2
enter the ending line number
4
1 computers 9161
1 textbooks 21312
2 clothing 3252
Q 5(b) Write a Shell script that deletes all lines containing a specified word in one or more files
supplied as arguments to it.
if [ $# -eq 0 ]
then
echo "Please enter one or more filenames as argument"
exit
fi
echo "Enter the word to be searched in files"
read word
for file in $*
do
sed "/$word/d" $file | tee tmp
mv tmp $file
done
Q6 Write a Shell script that displays list of all the files in the current directory to which the user has read,
write and execute permissions.
for File in *
do
if [ -r $File -a -w $File -a -x $File ]
then
echo $File
fi
done
Q7. Write a program to simulate the UNIX commands like ls, mv, cp.
#copying
echo -n "Enter source file name : "
read src
echo -n "Enter target file name : "
read targ
if [ ! -f $src ]
then
echo "File $src does not exist"
exit 1
elif [ -f $targ ]
then
echo "File $targ exists, cannot overwrite"
exit 2
fi
# copy file
cp $src $targ
if [ $? -eq 0 ]
then
echo 'File copied successfully'
else
echo 'Problem copying file'
fi
Q8 Write a program to convert upper case to lower case letters of a given ASCII file
clear
echo "Enter the File :\c"
read f1
if [ -f $f1 ]
then
echo "Converting Upper case to Lower Case"
tr '[A-Z]' '[a-z]' < $f1
else
echo "$f1 file does not exist"
fi

output:
enter the File :HELLO
Converting Upper case to Lower Case
how r u ....
nice meeting u.
bye
Q10. Write a program to demonstrate FCFS process scheduling on the given data.
#include<iostream>
using namespace std;
int main()
{
int n,bt[20],wt[20],tat[20],avwt=0,avtat=0,i,j;
cout<<"Enter total number of processes(maximum 20):";
cin>>n;
cout<<"\nEnter process burst time\n";
for(i=0;i<n;i++) { cout<<"P["<<i+1<<"]:"; cin>>bt[i]; }
wt[0]=0; // first process waits 0 time
for(i=1;i<n;i++) // waiting time = sum of earlier burst times
for(wt[i]=0,j=0;j<i;j++) wt[i]+=bt[j];
cout<<"\nProcess\tBurst time\tWaiting time\tTurnaround time";
for(i=0;i<n;i++)
{
tat[i]=bt[i]+wt[i]; // turnaround = waiting + burst
avwt+=wt[i]; avtat+=tat[i];
cout<<"\nP["<<i+1<<"]\t"<<bt[i]<<"\t\t"<<wt[i]<<"\t\t"<<tat[i];
}
avwt/=n;
avtat/=n;
cout<<"\n\nAverage Waiting Time:"<<avwt;
cout<<"\nAverage Turnaround Time:"<<avtat;
return 0;
}
Output
Enter total number of processes(maximum 20)=3
Enter process burst time
P[1]=24
P[2]=3
P[3]=3
Process     Burst time     Waiting time     Turnaround time
P[1] 24 0 24
P[2] 3 24 27
P[3] 3 27 30
// (continuation of a variant that also reads arrival times; the input loops
// and remaining variable declarations are elided in these notes)
//array instantiations
int start[n], end[n], wait[n];
//calculations
for(i=1;i<=n;i++)
{ for(j=i+1;j<=n;j++)
{
if (i>=2 && burst[i-1]>burst[j-1])
{
temp = burst[i-1];
burst[i-1]=burst[j-1];
burst[j-1]=temp;
temp = arrival[i-1];
arrival[i-1]=arrival[j-1];
arrival[j-1]=temp;
}
}
if(i==1)
{
start[0]=0;
end[0]=start[0]+burst[0];
wait[0]=0;
}
else
{
start[i-1]=end[i-2];
end[i-1]=start[i-1]+burst[i-1];
wait[i-1]=start[i-1]-arrival[i-1];
}
//throughput
if (start[i+1] <= throughput)
tp = i+1;
}
//output
cout << "\n\nPROCESS \t BURST TIME\tARRIVAL TIME\tWAIT TIME\tSTART TIME\tEND TIME\n";
for (i=0;i<n;i++){
cout << "\nP[" << i + 1 << "]" << "\t\t" << burst[i] << "\t\t" << arrival[i] << "\t\t" << wait[i] << "\t\t" <<
start[i] << "\t\t" << end[i];
}
//avg wait time
for(i=1,tot=0;i<=n;i++){
tot+=wait[i-1];
}
avgwait=tot/n;
//avg turnaround time
for(i=1,tot=0;i<=n;i++){
tot+=end[i-1];
}
avgturnaround=tot/n;
//avg response time
for(i=1,tot=0;i<=n;i++){
tot+=start[i-1];
}
avgresponse=tot/n;
cout << "\n\nAverage Wait Time: " << avgwait;
cout << "\nAverage Response Time: " << avgresponse;
cout << "\nAverage Turnaround Time: " << avgturnaround;
cout << "\nThroughput for (" << throughput << "): " << tp << endl;
}
12.Write a program to demonstrate Priority Scheduling on the given burst time and arrival times.
#include<iostream>
using namespace std;
int main()
{
int bt[20],p[20],wt[20],tat[20],pr[20],i,j,n,total=0,pos,temp,avg_wt,avg_tat;
cout<<"Enter Total Number of Process:";
cin>>n;
cout<<"\nEnter Burst Time and Priority\n";
for(i=0;i<n;i++)
{
cout<<"\nP["<<i+1<<"]\n";
cout<<"Burst Time:";
cin>>bt[i];
cout<<"Priority:";
cin>>pr[i];
p[i]=i+1; //contains process number
}
//sorting burst time, priority and process number in ascending order using selection sort
for(i=0;i<n;i++)
{
pos=i;
for(j=i+1;j<n;j++)
{
if(pr[j]<pr[pos])
pos=j;
}
temp=pr[i];
pr[i]=pr[pos];
pr[pos]=temp;
temp=bt[i];
bt[i]=bt[pos];
bt[pos]=temp;
temp=p[i];
p[i]=p[pos];
p[pos]=temp;
}
wt[0]=0; //waiting time for first process is zero
//calculate waiting time
for(i=1;i<n;i++)
{
wt[i]=0;
for(j=0;j<i;j++)
wt[i]+=bt[j];
total+=wt[i];
}
avg_wt=total/n; //average waiting time
total=0;
cout<<"\nProcess\t Burst Time \tWaiting Time\tTurnaround Time";
for(i=0;i<n;i++)
{
tat[i]=bt[i]+wt[i]; //calculate turnaround time
total+=tat[i];
cout<<"\nP["<<p[i]<<"]\t\t "<<bt[i]<<"\t\t "<<wt[i]<<"\t\t\t"<<tat[i];
}
avg_tat=total/n; //average turnaround time
cout<<"\n\nAverage Waiting Time="<<avg_wt;
cout<<"\nAverage Turnaround Time="<<avg_tat;
return 0;
}
Output
Enter total number of processes :4
enter burst time and priority
p[1]
burst time :6
priority :3
p[2]
burst time :2
priority :2
p[3]
burst time :14
priority :1
p[4]
burst time :6
priority :4
13.Write a program to demonstrate Round Robin Scheduling on the given burst time and arrival times.
#include<iostream>
using namespace std;
// (excerpt: inside the routine that computes waiting times, copy the burst
// times into rem_bt[] so the originals are preserved)
for (int i = 0 ; i < n ; i++)
rem_bt[i] = bt[i];
// Driver code
int main()
{
// process id's
int processes[] = { 1, 2, 3};
int n = sizeof processes / sizeof processes[0];
Output:
Processes Burst time Waiting time Turn around time
1 10 13 23
2 5 10 15
3 8 13 21
Average waiting time = 12
Average turn around time = 19.6667
14. Write a program to implementing Producer and Consumer problem using Semaphores.
#include<iostream>
#include<cstdlib> // for exit()
using namespace std;
int mutex=1,full=0,empty=3,x=0;
int main()
{
int n;
void producer();
void consumer();
int wait(int);
int signal(int);
cout<<"\n1.Producer\n2.Consumer\n3.Exit";
while(1)
{
cout<<"\nEnter your choice:";
cin>>n;
switch(n)
{
case 1: if((mutex==1)&&(empty!=0))
producer();
else
cout<<"Buffer is full!!";
break;
case 2: if((mutex==1)&&(full!=0))
consumer();
else
cout<<"Buffer is empty!!";
break;
case 3:
exit(0);
break;
}
}
return 0;
}
Output
1.Producer
2.Consumer
3.Exit
Enter your choice:1
Producer produces the item 1
Enter your choice:2
Consumer consumes item 1
Enter your choice:2
Buffer is empty!!
Enter your choice:1
Producer produces the item 1
Enter your choice:1
Producer produces the item 2
Enter your choice:1
Producer produces the item 3
Enter your choice:1
Buffer is full!!
Enter your choice:3
int wait(int s)
{
return (--s);
}
int signal(int s)
{
return(++s);
}
void producer()
{
mutex=wait(mutex);
full=signal(full);
empty=wait(empty);
x++;
cout<<"\nProducer produces the item "<<x;
mutex=signal(mutex);
}
void consumer()
{
mutex=wait(mutex);
full=wait(full);
empty=signal(empty);
cout<<"\nConsumer consumes item "<<x;
x--;
mutex=signal(mutex);
}
Q15 Write a program to simulate FIFO, LRU, LFU Page replacement algorithms
#include<iostream>
using namespace std;
int n,nf;
int in[100];
int p[50];
int hit=0;
int i,j,k;
int pgfaultcnt=0;
void getData()
{
cout<<"\nEnter length of page reference sequence:";
cin>>n;
cout<<"\nEnter the page reference sequence:";
for(i=0; i<n; i++)
cin>>in[i];
cout<<"\nEnter no of frames:";
cin>>nf;
}
void initialize()
{
pgfaultcnt=0;
for(i=0; i<nf; i++)
p[i]=9999;
}
void dispPages()
{
for (k=0; k<nf; k++)
{
if(p[k]!=9999)
cout<<p[k]<<" ";
}
}
int isHit(int pg) // returns 1 if page pg is already in a frame (called below but
{                 // not defined elsewhere in these notes)
hit=0;
for(j=0; j<nf; j++)
{
if(p[j]==pg)
{
hit=1;
break;
}
}
return hit;
}
void dispPgFaultCnt()
{
cout<<"\nTotal no of page faults:"<<pgfaultcnt;
}
void fifo()
{
initialize();
for(i=0; i<n; i++)
{
cout<<"\nFor "<<in[i]<<": ";
if(isHit(in[i])==0)
{
// (truncated in the notes: the text jumps here into the LRU victim-selection
// loop, which replaces the frame with the smallest 'least' value)
min=9999;
for(j=0; j<nf; j++)
{
if(least[j]<min)
{
min=least[j];
repindex=j;
}
}
p[repindex]=in[i];
pgfaultcnt++;
dispPages();
}
else
cout<<"No page fault!";
}
dispPgFaultCnt();
}
int main()
{
int choice;
while(1)
{
cout<<"\nPage Replacement Algorithms\n1.Enter data\n2.FIFO\n3.LRU\n4.Exit\nEnter your choice:";
cin>>choice;
switch(choice)
{
case 1:
getData();
break;
case 2:
fifo();
break;
case 3:
lru(); // (the lru() routine is truncated in these notes)
break;
default:
return 0;
}
}
}