Unit 2
Chapter 3: Processes
Process Concept
Process Scheduling
Operations on Processes
Interprocess Communication
Process Concept
An operating system executes a variety of programs:
Batch system – jobs
Time-shared systems – user programs or tasks
Textbook uses the terms job and process almost interchangeably
Process – a program in execution; process execution must progress in sequential fashion
Multiple parts
The program code, also called text section
Current activity including program counter, processor registers
Stack containing temporary data
Function parameters, return addresses, local variables
Data section containing global variables
Heap containing memory dynamically allocated during run time
Process Concept (Cont.)
Program is passive entity stored on disk (executable file), process is active
Program becomes process when executable file loaded into memory
Execution of program started via GUI mouse clicks, command line entry of its name, etc
One program can be several processes
Consider multiple users executing the same program
Process in Memory
Process State
Maximize CPU use, quickly switch processes onto CPU for time sharing
Process scheduler selects among available processes for next execution on the CPU
Maintains scheduling queues of processes
Job queue – set of all processes in the system
Ready queue – set of all processes residing in main memory, ready and waiting to execute. A ready-queue header contains pointers to the first and final PCBs in the list; each PCB includes a pointer field that points to the next PCB in the ready queue.
Device queues – set of processes waiting for an I/O device
Processes migrate among the various queues
The stages a process goes through are:
• A new process is first placed in the ready queue, where it waits to be dispatched.
• The CPU is allocated to one of the processes for execution.
• If the process issues an I/O request, the OS places it in an I/O (device) queue.
• The process may create a new subprocess and wait for its termination.
• The process may be removed forcibly from the CPU as a result of an interrupt; once the interrupt is handled, the process is put back in the ready queue.
Two-State Process Model
There are two states in the two-state process model: running and not running.
1. Running: The process currently being executed on the CPU.
2. Not Running: Processes that are not executing wait in a queue until their turn to be executed arrives; each entry in the queue points to a particular process. The queue can be implemented as a linked list. A newly created process joins this queue.
If a process has completed or aborted, the OS discards it; if it is interrupted, the OS transfers it back to the queue. In either case, the dispatcher then selects a process from the queue to execute.
Schedulers in OS
A scheduler is a special type of system software that handles process
scheduling in numerous ways.
It mainly selects the jobs that are to be submitted into the system and decides whether the currently running process should keep running; if not, it decides which process should be the next one to run. A scheduler makes a decision:
When the state of the current process changes from running to waiting due to an I/O request or an unsatisfied request to the OS.
If the current process terminates.
When the scheduler needs to move a process from running to ready state as it
has already run for its allotted interval of time.
When the requested I/O operation is completed, a process moves from the
waiting state to the ready state. So, the scheduler can decide to replace the
currently-running process with a newly-ready one.
Schedulers
Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU
Sometimes the only scheduler in a system
Short-term scheduler is invoked frequently (milliseconds) (must be fast)
Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue
Long-term scheduler is invoked infrequently (seconds, minutes) (may be slow)
The long-term scheduler controls the degree of multiprogramming
Processes can be described as either:
I/O-bound process – spends more time doing I/O than computations, many short CPU bursts
CPU-bound process – spends more time doing computations; few very long CPU bursts
Long-term scheduler strives for good process mix
Medium Term Scheduler:
Medium-term scheduling removes processes from memory and is a part of swapping. It reduces the degree of multiprogramming and is in charge of handling the swapped-out processes. Swapping is necessary to improve the process mix.
S.No. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1. | A job scheduler | A CPU scheduler | A process-swapping scheduler
2. | Slowest speed | Fastest speed | Speed is between the other two
3. | Controls the degree of multiprogramming | Provides less control over the degree of multiprogramming | Reduces the degree of multiprogramming
Some operating systems do not allow a child to exist if its parent has terminated. If a process terminates, then all its children must also be terminated.
Cascading termination: all children, grandchildren, etc. are terminated.
The termination is initiated by the operating system.
The parent process may wait for termination of a child process by using the wait() system call. The call returns status information and the pid of the terminated process:
pid = wait(&status);
If no parent is waiting (did not invoke wait()), the process is a zombie.
If the parent terminated without invoking wait(), the process is an orphan.
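The parent/child relationship above can be sketched with POSIX fork() and wait(). This is a minimal sketch; the child's exit code of 42 is an arbitrary choice for illustration.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Parent forks a child and reaps it with wait().
   Returns the child's exit status (42 here), or -1 on error. */
int fork_and_reap(void) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;              /* fork failed */
    if (pid == 0)
        _exit(42);              /* child terminates immediately */
    int status;
    pid_t done = wait(&status); /* parent blocks until child exits */
    if (done == pid && WIFEXITED(status))
        return WEXITSTATUS(status);
    return -1;
}
```

Because the parent calls wait(), the child never lingers as a zombie: the OS can discard its PCB as soon as the status is collected.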
Multiprocess Architecture – Chrome Browser
Message system – processes communicate with each other without resorting to shared
variables
message next_produced;
while (true) {
   /* produce an item in next_produced */
   send(next_produced);
}

message next_consumed;
while (true) {
   receive(next_consumed);
   /* consume the item in next_consumed */
}
Sockets
Remote Procedure Calls
Pipes
Remote Method Invocation (Java)
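Of these mechanisms, pipes are the simplest to demonstrate. Below is a minimal POSIX sketch in which a child process (the producer) writes a message into an anonymous pipe and the parent (the consumer) reads it back; the message text is arbitrary.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <string.h>

/* Producer/consumer over an anonymous pipe.
   Copies the received message into buf (NUL-terminated) and
   returns the number of bytes received, or -1 on error. */
ssize_t pipe_roundtrip(char *buf, size_t len) {
    int fd[2];
    if (pipe(fd) < 0) return -1;
    pid_t pid = fork();
    if (pid < 0) return -1;
    if (pid == 0) {                       /* child: producer */
        close(fd[0]);                     /* not reading */
        const char *msg = "next_produced";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                         /* parent: consumer */
    ssize_t n = read(fd[0], buf, len - 1);
    if (n >= 0) buf[n] = '\0';
    close(fd[0]);
    wait(NULL);                           /* reap the child */
    return n;
}
```

Note that no shared variables are involved: the kernel carries the bytes between the two address spaces, exactly as in the message-system model above.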
Sockets
A socket is defined as an endpoint for communication
All ports below 1024 are well known, used for standard services
Pre-emptive vs Non-preemptive

Pre-emptive:
In this case, the OS assigns resources to a process for a predetermined period of time.
The process switches from running state to ready state or from waiting state to ready state during resource allocation.
This switching happens because the CPU may give other processes priority and substitute the currently active process with the higher-priority process.

Non-preemptive:
In this case, a process's resources cannot be taken away before the process has finished running.
Resources are switched only when a running process finishes and transitions to a waiting state.
Scheduling criteria
Maximize:
CPU utilization - It makes sure that the CPU is operating at
its peak and is busy.
Throughput - It is the number of processes that complete
their execution per unit of time.
Minimize:
Waiting time - the amount of time spent waiting in the ready queue.
Response time - the time taken to produce the first response after submission.
Turnaround time - the amount of time required to execute a specific process.
Turnaround Time = Completion time − Arrival time.
Types of scheduling algorithms
FIRST COME FIRST SERVED SCHEDULING
• The process that requests the CPU first is allocated the CPU first (FIFO queue).
• When the CPU is free, it is allocated to the process at the head of the queue.
• In case of a tie, the process with the smaller process id is executed first.
• It is always non-preemptive in nature.
• It suffers from the convoy effect.

Process | Burst time
P1 | 24
P2 | 3
P3 | 3

Gantt chart:
| P1 | P2 | P3 |
0    24   27   30
FCFS SCHEDULING
ADVANTAGES:
It is an easy algorithm to implement, since it does not involve any complex logic.
Every task is executed in arrival order, as it follows a FIFO queue.
FCFS does not give priority to any important task first, so it is fair scheduling.

DISADVANTAGES:
FCFS results in the convoy effect: if a process with a higher burst time comes first in the ready queue, the processes with lower burst times get blocked behind it and may not be able to get the CPU for a long time.
If a process with a long burst time arrives first, the shorter processes have to wait a long time, so FCFS is not well suited to time-sharing systems.
Since it is non-preemptive, it does not release the CPU before it completes its task execution completely.
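The three-process FCFS example above (bursts 24, 3, 3) can be checked with a small helper. This is an illustrative sketch that assumes all processes arrive at time 0, as in the example.

```c
/* FCFS waiting times for processes that all arrive at time 0,
   given burst times in queue order. Returns the average wait. */
double fcfs_avg_wait(const int burst[], int n) {
    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;   /* process i waits for all earlier bursts */
        elapsed += burst[i];
    }
    return (double)total_wait / n;
}
```

With bursts {24, 3, 3} the waits are 0, 24, and 27, so the average is (0 + 24 + 27) / 3 = 17, showing the convoy effect of the long first burst.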
SHORTEST JOB FIRST
This algorithm associates with each process the length of the process’s next CPU burst.
When the CPU is available, it is assigned to the process that has the smallest next CPU burst.
If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.
Process Id | Arrival time | Burst time | CT | TAT (CT − AT) | WT (TAT − BT)
P1 | 0 | 9 | | |
P2 | 1 | 4 | | |
P3 | 2 | 9 | | |
Consider the set of 5 processes whose arrival time and burst time are given
below- If the CPU scheduling policy is SJF preemptive, calculate the average
waiting time and average turn around time.
Process Id | Arrival time | Burst time | CT | TAT (CT − AT) | WT (TAT − BT)
P1 | 3 | 1 | | |
P2 | 1 | 4 | | |
P3 | 4 | 2 | | |
P4 | 0 | 6 | | |
P5 | 2 | 3 | | |
SJF
ADVANTAGES:
SJF (SRTF in its preemptive form) is optimal and guarantees the minimum average waiting time.
It provides a standard for other algorithms, since no other algorithm performs better than it.

DISADVANTAGES:
It cannot be implemented practically, since the burst times of processes cannot be known in advance.
It leads to starvation for processes with larger burst times.
Priorities cannot be set for the processes.
Processes with larger burst times have poor response time.
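The preemptive SJF (SRTF) exercise above can be checked with a unit-by-unit simulation. A sketch; the fixed array size (32 processes) and one-unit time granularity are simplifying assumptions.

```c
/* Preemptive SJF (SRTF), simulated one time unit at a time
   (assumes n <= 32). Fills wait[] with each process's waiting
   time and returns the average waiting time. */
double srtf_avg_wait(const int at[], const int bt[], int wait[], int n) {
    int rem[32], done = 0, t = 0, total = 0;
    for (int i = 0; i < n; i++) rem[i] = bt[i];
    while (done < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)       /* shortest remaining, arrived */
            if (at[i] <= t && rem[i] > 0 &&
                (pick < 0 || rem[i] < rem[pick]))
                pick = i;
        if (pick < 0) { t++; continue; }  /* CPU idle: nothing arrived */
        if (--rem[pick] == 0) {           /* run one unit; maybe finish */
            done++;
            int tat = t + 1 - at[pick];   /* TAT = completion - arrival */
            wait[pick] = tat - bt[pick];  /* WT = TAT - burst */
            total += wait[pick];
        }
        t++;
    }
    return (double)total / n;
}
```

For the five processes above (P1 3/1, P2 1/4, P3 4/2, P4 0/6, P5 2/3) the simulation gives waiting times 0, 1, 2, 10, 6 and an average of 19/5 = 3.8.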
Round Robin scheduling algorithm
Round Robin is a CPU scheduling algorithm where each process is assigned a fixed time slot
in a cyclic way. It is basically the preemptive version of First come First Serve CPU Scheduling
algorithm.
Round Robin CPU Algorithm generally focuses on Time Sharing technique.
The period of time for which a process or job is allowed to run in a pre-emptive method is
called time quantum.
Each process in the ready queue is assigned the CPU for that time quantum. If the process completes within that time, it terminates; otherwise it goes back to the end of the ready queue and waits for its next turn to complete its execution.
Characteristics of Round Robin CPU Scheduling
It is simple, easy to implement, and starvation-free, as all processes get a fair share of the CPU.
It is one of the most commonly used techniques in CPU scheduling.
It is preemptive, as processes are assigned the CPU only for a fixed slice of time at most.
Its disadvantage is the higher overhead of context switching.
Advantages of Round Robin CPU Scheduling Algorithm:
There is fairness, since every process gets an equal share of the CPU.
Newly created processes are added to the end of the ready queue.
A round-robin scheduler generally employs time-sharing, giving each job a time slot or quantum.
While performing round-robin scheduling, a particular time quantum is allotted to different jobs.
Each process gets a chance to be rescheduled after a particular quantum of time.
Disadvantages of Round Robin CPU Scheduling Algorithm:
Waiting time and response time are larger.
Throughput is lower.
There are frequent context switches.
The Gantt chart becomes very large if the time quantum is small (for example, 1 ms for a long schedule).
Scheduling is time-consuming for a small quantum.
If the CPU scheduling policy is Round Robin with time quantum = 2 unit, calculate the average waiting time and average turn around time.
Process | Arrival time | Burst time
P1 | 0 | 5
P2 | 1 | 3
P3 | 2 | 1
P4 | 3 | 2
P5 | 4 | 3

Order of execution: P1, P2, P3, P1, P4, P5, P2, P1, P5
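One way to verify such a schedule is to simulate the ready queue directly. A sketch, assuming (as is conventional) that newly arrived processes enter the queue before a preempted process is re-queued, and at most 32 processes.

```c
/* Round Robin with a fixed time quantum, simulated with a
   FIFO ready queue (assumes n <= 32). Fills wait[] and
   returns the average waiting time. */
double rr_avg_wait(const int at[], const int bt[], int wait[],
                   int n, int quantum) {
    int rem[32], q[256], in_q[32] = {0};
    int head = 0, tail = 0, t = 0, done = 0, total = 0;
    for (int i = 0; i < n; i++) rem[i] = bt[i];
    while (done < n) {
        for (int i = 0; i < n; i++)            /* admit arrivals up to t */
            if (!in_q[i] && rem[i] > 0 && at[i] <= t) {
                q[tail++] = i; in_q[i] = 1;
            }
        if (head == tail) { t++; continue; }   /* CPU idle */
        int p = q[head++];
        int slice = rem[p] < quantum ? rem[p] : quantum;
        t += slice;
        rem[p] -= slice;
        for (int i = 0; i < n; i++)            /* arrivals during the slice */
            if (!in_q[i] && rem[i] > 0 && at[i] <= t) {
                q[tail++] = i; in_q[i] = 1;
            }
        if (rem[p] > 0) {
            q[tail++] = p;                     /* preempted: back of queue */
        } else {
            done++;
            wait[p] = t - at[p] - bt[p];       /* waiting = TAT - burst */
            total += wait[p];
        }
    }
    return (double)total / n;
}
```

For the five processes above with quantum 2, the simulation gives waiting times 8, 8, 2, 4, 7 and an average waiting time of 29/5 = 5.8.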
PRIORITY SCHEDULING
Priority scheduling can be either preemptive or non-preemptive and is one of the most common scheduling algorithms in batch systems.
Each process is assigned a priority. The process with the highest priority is executed first, and so on.
Processes with same priority are executed on first come first served basis.
Priority can be decided based on memory requirements, time requirements or any other
resource requirement.
1. Title
2. Introduction
3. Methodology
4. Description
5. Background information
6. Findings
7. Conclusion
8. Reference
9. Appendix
SUBMIT BEFORE 25-09-2023
Priority Scheduling
Preemptive vs Non-Preemptive
Process | Arrival Time | Burst Time | Priority
P1 | 0 | 8 | 3
P2 | 1 | 1 | 1
P3 | 2 | 3 | 2
P4 | 3 | 2 | 3
P5 | 4 | 6 | 4
PRIORITY PREEMPTIVE SCHEDULING:
If a process with higher priority than the currently executing process arrives, the CPU is preempted and given to the higher-priority process.
The waiting time for the process with the highest priority will always be zero.
It is more expensive and difficult to implement, and a lot of time is wasted in switching.
It is useful in applications where high-priority processes cannot be kept waiting.

PRIORITY NON-PREEMPTIVE SCHEDULING:
Once resources are allocated to a process, the process holds them until it completes its burst time, even if a process with higher priority is added to the queue.
The waiting time for the process with the highest priority may not be zero.
It is cheaper to implement and faster, as less switching is required.
It can be used in various hardware applications where waiting will not cause any serious issues.
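The non-preemptive variant is easy to sketch. The helper below assumes a lower number means higher priority, breaks ties in favor of the earlier-listed process, and takes the example's columns as arrival time, burst time, and priority.

```c
/* Non-preemptive priority scheduling (lower number = higher
   priority, assumes n <= 32). Fills wait[] and returns the
   average waiting time. */
double prio_np_avg_wait(const int at[], const int bt[], const int pr[],
                        int wait[], int n) {
    int finished[32] = {0}, t = 0, done = 0, total = 0;
    while (done < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)       /* highest-priority arrived job */
            if (!finished[i] && at[i] <= t &&
                (pick < 0 || pr[i] < pr[pick]))
                pick = i;
        if (pick < 0) { t++; continue; }  /* CPU idle */
        wait[pick] = t - at[pick];        /* waited from arrival to start */
        total += wait[pick];
        t += bt[pick];                    /* runs to completion */
        finished[pick] = 1;
        done++;
    }
    return (double)total / n;
}
```

With the table above (P1 through P5), P1 is the only arrival at t = 0 and runs to completion at t = 8; the rest then run in priority order P2, P3, P4, P5, giving waits 0, 7, 7, 9, 10 and an average of 33/5 = 6.6.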
Deadlock
Every process needs some resources to complete its execution, and resources are granted in a sequential order:
The process requests a resource.
The OS grants the resource if it is available; otherwise the process waits.
The process uses the resource and releases it on completion.
A deadlock is a situation where each process waits for a resource that is assigned to another process. In this situation, none of the processes gets executed, since the resource it needs is held by some other process that is itself waiting for yet another resource to be released.
Deadlock:
Deadlock is a situation where processes block each other and no process proceeds.
The requested resource is held by another blocked process.
Deadlock happens when mutual exclusion, hold and wait, no preemption, and circular wait occur simultaneously.

Starvation:
Starvation is a situation where low-priority processes stay blocked while high-priority processes proceed.
The requested resource is continuously used by higher-priority processes.
It occurs due to uncontrolled priority and resource management.
Necessary conditions for Deadlocks
Mutual Exclusion
A resource can only be shared in a mutually exclusive manner. It implies that two processes cannot use the same resource at the same time.
Hold and Wait
A process waits for some resources while holding another resource at the same time.
No preemption
A resource cannot be forcibly taken away from a process; it is released only voluntarily by the process holding it, after that process completes its task.
Circular Wait
All the processes must be waiting for the resources in a cyclic manner so that the last process is
waiting for the resource which is being held by the first process.
MUTUAL EXCLUSION
Consider the following scenario, in which processes P1 and P2 each hold at least one resource (P1 holds a tape drive and P2 holds a printer), and both resources are non-shareable.
If process P1 requires the printer to complete its task, it must wait until the printer is released. But the printer is allocated to process P2, which is itself waiting for the tape drive to finish its job. Both processes are waiting for each other to finish, which leads to DEADLOCK.
HOLD AND WAIT
A process must hold at least one resource while waiting for other resources that are allocated to other waiting processes.
In our example, P1 holds the tape drive and is waiting for the printer, which is held by P2.
P1 can only finish if the printer is released by P2, which is not possible, as P2 is waiting for the tape drive. Here the hold-and-wait condition becomes true, and we can say the system is in a deadlock state.
If a process holds a resource but is not waiting for any other resource, then that process can complete its execution and release all its resources, which can then be assigned to other waiting processes; hence all processes would complete and no deadlock would be created.
So for deadlock to occur, both hold and wait must be true simultaneously.
NO PREEMPTION
Resources cannot be preempted; that means a resource cannot be forcefully deallocated, and it can be released only by the process voluntarily, after completing its task.
If resources could be preempted, a resource could be forcefully deallocated and given to a waiting process, and hence no deadlock would exist in the system.
CIRCULAR WAIT
There must exist a set of waiting processes (P0, P1, P2, …, Pn) such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn−1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
In our example, we can see that circular wait exists, as P1 is waiting for P2 and P2 is waiting for P1 to complete.
All four conditions must hold simultaneously for a deadlock to occur.
The circular-wait condition also implies the hold-and-wait condition, so the four conditions are not fully independent.
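A common way to break the circular-wait condition in practice is to impose a global ordering on resource acquisition. A sketch with POSIX threads, where the two mutexes stand in for the tape drive and printer of the example above: because both threads take lock_a before lock_b, a cycle can never form; if they took the locks in opposite orders, they could deadlock exactly as P1 and P2 do.

```c
#include <pthread.h>

/* Both threads acquire the locks in the same global order,
   so the circular-wait condition cannot arise. */
static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;
static int shared = 0;

static void *worker(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock_a);   /* always first */
    pthread_mutex_lock(&lock_b);   /* always second */
    shared++;                      /* use both "resources" */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

/* Runs two workers to completion; returns 2 if neither blocked. */
int run_two_workers(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return shared;
}
```

This is deadlock prevention by attacking one necessary condition: with a total order on resources, no process can ever wait for a resource held by a process "behind" it in the order.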
METHODS FOR HANDLING DEADLOCKS
UNIT 4
FILE HANDLING SYSTEM
WHAT IS A FILE?
A file is the most important and basic entity for data storage.
Going to learn:
concept of a file
different attributes of file
types of file
how a file is useful for the operating system.
Introduction to file
File is a logical storage unit.
Computers store information on different storage media such as magnetic disks, tape drives, etc.
These storage devices are usually non-volatile, which means contents persist even after a power failure.
A file is a named collection of related information that is recorded on secondary storage. From the user's point of view, it is the smallest allotment of logical secondary storage.
File may represent programs or data.
Data files may be numeric, alphabetic, alphanumeric or binary.
In general a file is a sequence of bits, bytes, lines or records whose meaning is defined by the file creator.
A file can store different information such as source program, object program, executable program, numeric data/text, database
records, Graphic Image etc.
Each file has certain structure according to its type.
A text file is represented as a sequence of characters organized into lines.
A source file is a sequence of subroutines and functions.
An object file is a sequence of bytes organized into blocks understandable by the system linker. It usually has the .obj extension.
An executable file is a series of binary code that can be executed by the CPU. It has the .exe extension.
Types of file
A text file is a sequence of characters organized into lines (and possibly pages).
A source file is a sequence of functions, each of which is further organized as declarations
followed by executable statements.
An executable file is a series of code sections that the loader can bring into memory and execute
More types of files
FILE TYPE | EXTENSIONS | FUNCTION
Executable | .exe, .com, .bin, or none | Ready-to-run machine-language program
Source code | .c, .cc, .java, .asm, .a, .pas | Source code in various languages
Library | .lib, .a, .so, .dll | Library routines for programs
Archive | .arc, .zip, .rar | Related files grouped together
Multimedia | .mpeg, .mov, .rm, .mp4, .mkv | Binary file with audio or video information
File Attributes
File attributes specify the properties of a file and define its behavior.
A file can be referred to by its name. A name is usually a string of characters, such as example.c.
Some systems treat upper- and lower-case characters in names differently, while other systems consider them equivalent.
A file's attributes vary from one operating system to another.
Common attributes
Name: Symbolic file name is the only information which is in human readable form.
Type: It indicates the type of the file such as text, binary, object etc.
Location: It is a pointer to the location of the file where a file is stored.
Size: It determines the current size of the file. (bytes, blocks)
Protection: Access-control information determines who can read, write, and execute the file.
Date, Time and User Identifier: It provides the information about the file creation date/time,
last modification and last used date/time. It is helpful for security and usage monitoring.
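Several of these attributes can be read programmatically. A sketch using the POSIX stat() call; the fields shown are from struct stat, and the file name used in the example is hypothetical.

```c
#include <sys/types.h>
#include <sys/stat.h>
#include <stdio.h>   /* only for the usage example below */

/* Returns the file's current size in bytes (the Size attribute),
   or -1 if the file does not exist. */
long file_size(const char *path) {
    struct stat st;
    if (stat(path, &st) != 0)
        return -1;
    /* Other attributes available in st:
       st.st_mode  - type and protection bits
       st.st_mtime - last modification time */
    return (long)st.st_size;
}
```

For example, after creating a file "notes.txt" containing five bytes, file_size("notes.txt") would return 5, while a nonexistent path returns -1.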
File operations
A file is an abstract data type. To define a file properly, we need to consider the operations performed on files.
The operating system can provide system calls to create, write, read, reposition within, delete, and truncate files.
Sequential Access
Most files are accessed sequentially by the operating system.
In sequential access, the OS reads the file word by word. A pointer is maintained, which initially points to the base address of the file. When the user wants to read the first word of the file, the pointer provides that word, and its value is then increased by one word. This process continues to the end of the file.
Modern systems do provide direct access and indexed access, but sequential access remains the most common method, since most files (text files, audio files, video files, etc.) need to be accessed sequentially.
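A sketch of sequential access with standard C I/O: the file-position pointer starts at the beginning and advances with every read, so bytes come back strictly in order. The file name used in the example is hypothetical.

```c
#include <stdio.h>

/* Reads the file sequentially, one byte at a time, and
   returns the total byte count, or -1 if it cannot be opened. */
long count_bytes_sequential(const char *path) {
    FILE *fp = fopen(path, "rb");
    if (!fp) return -1;
    long count = 0;
    int c;
    while ((c = fgetc(fp)) != EOF)  /* each read advances the pointer */
        count++;
    fclose(fp);
    return count;
}
```

Every byte is visited exactly once, front to back, which is exactly the access pattern described above.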
Direct Access
The Direct Access is mostly required in the case of database systems. In
most of the cases, we need filtered information from the database. The
sequential access can be very slow and inefficient in such cases.
Suppose every block of storage holds 4 records, and we know that the record we need is stored in the 10th block. Sequential access is a poor fit here, because it would traverse all the earlier blocks just to reach the needed record.
Direct access gives the required result even though the operating system has to perform some extra work, such as determining the desired block number. It is generally used in database applications.
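Direct access can be sketched with fseek(): assuming fixed-size records, record k lives at byte offset k × record size, so the needed record is reached without reading any earlier blocks. The 4-byte record size and file names here are illustrative assumptions.

```c
#include <stdio.h>

#define REC_SIZE 4   /* assumed fixed record size in bytes */

/* Jumps directly to record k and reads it into out.
   Returns 0 on success, -1 on error. */
int read_record(const char *path, int k, char out[REC_SIZE]) {
    FILE *fp = fopen(path, "rb");
    if (!fp) return -1;
    if (fseek(fp, (long)k * REC_SIZE, SEEK_SET) != 0) {  /* direct jump */
        fclose(fp);
        return -1;
    }
    size_t n = fread(out, 1, REC_SIZE, fp);
    fclose(fp);
    return n == REC_SIZE ? 0 : -1;
}
```

In a real database system the offset computation would map a record number to a block number first, but the principle is the same: compute the location, seek, read.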
DIRECTORY STRUCTURE
What is a directory?
A directory can be defined as a listing of the related files on the disk. The directory may store some or all of the file attributes.
To get the benefit of different file systems on different operating systems, a hard disk can be divided into a number of partitions of different sizes. The partitions are also called volumes or minidisks.
Each partition must have at least one directory in which all the files of the partition can be listed. A directory entry is maintained for each file in the directory, storing all the information related to that file.
Every Directory supports a number of common operations on the file:
File Creation
Search for the file
File deletion
Renaming the file
Traversing Files
Listing of files
SINGLE LEVEL DIRECTORY
The simplest method is to have one big list of all the files on the disk.
The entire system will contain only one directory which is supposed to mention all the files present in the
file system.
The directory contains one entry per each file present on the file system.
Two Level Directory
In two level directory systems, we can create a separate directory for each user.
There is one master directory which contains separate directories dedicated to each user.
For each user, there is a different directory at the second level, containing that user's files.
The system doesn't let a user enter another user's directory without permission.
File Systems
The file system is the part of the operating system responsible for file management. It provides a mechanism to store data and to access file contents, including data and programs. Some operating systems, such as Ubuntu, treat everything as a file.
The File system takes care of the following issues
File Structure
We have seen various data structures in which the file can be stored. The task of the file system is to maintain an
optimal file structure.
Recovering free space
Whenever a file is deleted from the hard disk, free space is created on the disk. There can be many such spaces, which need to be recovered in order to reallocate them to other files.
Disk space assignment to the files
A major concern is deciding where to store files on the hard disk. There are various disk scheduling algorithms, which will be covered later.
Tracking data location
A file may or may not be stored within only one block; it can be stored in non-contiguous blocks on the disk. The file system must keep track of all the blocks where a file's data are located.
Protection
Protection can be provided in many ways. For a single-user laptop system, we might provide
protection by locking the computer in a desk drawer or file cabinet. In a larger multiuser
system, however, other mechanisms are needed.
Types of Access
Read. Read from the file.
Write. Write or rewrite the file.
Execute. Load the file into memory and execute it.
Append. Write new information at the end of the file.
Delete. Delete the file and free its space for possible reuse.
List. List the name and attributes of the file.
Goals of Protection in Operating System
3. Each domain comprises a collection of objects and the operations that may be implemented on them. A domain could be made up of only one process, procedure, or user. If a domain is linked with a procedure, changing the domain would mean changing the procedure ID. Objects may share one or more common operations.
Security
What is Operating System Security?