
Chapter Two

Processes and Process Management

2.1 Process
Contents
 Process and Thread
 The concept of multi-threading
 Interprocess communication
 Race conditions
 Critical sections and mutual exclusion
 Process scheduling
 Preemptive and non-preemptive scheduling
 Scheduling policies
 Deadlock
 Deadlock prevention
 Deadlock detection
 Deadlock avoidance
Process concept
• Early systems
– One program was executed at a time, and a single program had complete control of the machine.
• Modern operating systems allow multiple programs to be loaded into memory and executed concurrently.
• This requires firm control over the execution of programs.
• The notion of a process emerged to control the execution of programs.
Process concept ...Cont’d
 Process
 The entity that can be assigned to and executed on a processor.
 An activity of some kind, which has a program, input, output, and a state.
 A program in execution; process execution must progress in sequential fashion.
 Conceptually, each process has its own virtual CPU.
 In reality, of course, the real CPU switches back and forth from process to process.
 This provides the illusion of parallelism, which is sometimes called pseudo-parallelism.
Program Vs Process
• Program
o It is a sequence of instructions defined to perform some task.
o It is a passive entity.
• Process
o It is a program in execution.
o It is an instance of a program running on a computer.
o It is an active entity.
o A processor performs the actions defined by a process.
Program Vs Process
 Real-life example: consider a computer scientist who is baking a birthday cake for her daughter and who is interrupted by her daughter’s bleeding accident.
• Sequence of actions
– Bringing ingredients, e.g. flour, sugar, eggs, etc.
– Placing the mixture into the oven
– Following the baking process
– Hearing a cry and analyzing it to be because of bleeding
– Recording the state of the baking process and switching to provide first aid
– Providing first aid
– Coming back and resuming the baking process
Analysis
            Baking cake          First aid
Processor:  Computer scientist   Computer scientist
Program:    Recipe               First aid book
Input:      Ingredients          First aid kit
Output:     Cake                 First aid service
States:     Running, Idle        Running, Idle
Process creation
o In systems designed for running only a single
application, it may be possible to have all the
processes that will ever be needed be present
when the system comes up.
o In general-purpose systems some way is needed
to create processes as needed during operation.
o There are four principal events that cause
processes to be created:
1. System initialization.
2. Execution of a process creation system
call by a running process.
3. A user request to create a new process.
4. Initiation of a batch job.
Process creation ...Cont’d
1. System initialization:
 When an operating system is booted, typically several
processes are created.
 These processes can be:
 Foreground processes : processes that interact with
(human) users and perform work for them.
 Background processes: processes which are not
associated with particular users, but instead have
some specific function.
2. Execution of a process creation system call by a running process
 A running process will often issue system calls to create one or more new processes to help it do its job.
 Creating new processes is particularly useful when the work to be done can easily be formulated in terms of several related, but otherwise independent, interacting processes.
Process creation ...Cont’d
3. A user request to create a new process.
 In interactive systems, users can start a program by typing a command or (double-)clicking an icon.
 Taking either of these actions starts a new process and runs the selected program in it.
4. Initiation of a batch job.
 Users can submit batch jobs to the system (possibly remotely).
 When the operating system decides that it has the resources to run another job, it creates a new process and runs the next job from the input queue in it.
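
A minimal sketch of event 2, assuming a POSIX system such as UNIX/Linux (this example is not part of the original slides): a running process creates a child with the fork() system call and runs a program in it with execlp().

    /* A running process creates a child and runs a new program in it. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();            /* process-creation system call */
        if (pid < 0) {                 /* fork failed */
            perror("fork");
            exit(EXIT_FAILURE);
        } else if (pid == 0) {         /* child: run the selected program */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");          /* reached only if exec fails */
            exit(EXIT_FAILURE);
        } else {                       /* parent: wait for the child */
            waitpid(pid, NULL, 0);
            printf("child %d finished\n", (int)pid);
        }
        return 0;
    }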
Process termination
o After a process has been created, it starts
running and does whatever its job is.
o However, nothing lasts forever, not even
processes.
o Sooner or later the new process will terminate,
usually due to one of the following conditions:
1. Normal exit (voluntary).
2. Error exit (voluntary).
3. Fatal error (involuntary).
4. Killed by another process (involuntary).

Process termination ...Cont’d
1. Normal exit (voluntary)
o Most processes terminate because they have done their work.
 Example: when a compiler has compiled the program, it executes a system call to tell the operating system that it is finished. This call is exit in UNIX and ExitProcess in Windows.
o Screen-oriented programs also support voluntary termination.
 Example: word processors, Internet browsers, and similar programs always have an icon or menu item that the user can click to tell the process to remove any temporary files it has open and then terminate.
2. Error exit (voluntary)
o The second reason for termination is that the process discovers an error, causing it to exit voluntarily.
 For example, if a user types the command cc foo.c to compile the program foo.c and no such file exists, the compiler simply exits.
Process termination ...Cont’d
3. Fatal error (involuntary)
o A process is terminated involuntarily when it causes a fatal error, often due to a program bug.
 Examples include executing an illegal instruction, referencing nonexistent memory, or dividing by zero.
4. Killed by another process (involuntary)
o The fourth reason a process might terminate is that some process executes a system call telling the operating system to kill some other process.
o In UNIX this call is kill. The corresponding Win32 function is TerminateProcess.
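
As a hedged illustration of conditions 1 and 4 (POSIX assumed; not from the original slides), a parent can terminate voluntarily with exit() and can kill a child involuntarily with the kill() system call:

    /* Normal exit vs. being killed by another process. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <signal.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {          /* child: sleep until killed */
            for (;;)
                pause();         /* wait for a signal */
        }
        sleep(1);                /* give the child time to start */
        kill(pid, SIGKILL);      /* killed by another process (involuntary) */
        waitpid(pid, NULL, 0);
        printf("child was killed\n");
        exit(EXIT_SUCCESS);      /* normal exit (voluntary) */
    }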
Process state
o The process state defines the current activity of the process.
o As a process executes, it changes state.
o The set of states a process may be in differs from one system to another.
o Below we see three states a process may be in:
1. Running: instructions of the program are being executed.
2. Ready: the process is waiting to be assigned to a processor.
3. Blocked: the process is waiting for some event to occur.
Process state ...Cont’d
o Four transitions are possible among these three states.
 Transition 1: The process blocks waiting for an event to occur (running → blocked).
 Transition 2: The scheduler picks another process, so this one gives up the CPU (running → ready).
 Transition 3: The scheduler picks this process, and it gets the CPU to run again (ready → running).
 Transition 4: The event occurs and the blocked process is awakened (blocked → ready).
Process implementation
o To implement the process model, the operating system maintains a table (an array of structures), called the process table, with one entry per process.
o These entries are called process control blocks (PCBs), also known as task control blocks.
 A PCB contains information associated with each process:
1. Process state: can be ready, running, waiting, etc.
2. Program counter: indicates the address of the next instruction to be executed.
3. CPU registers: includes general-purpose registers, stack pointers, index registers, and accumulators.
Process implementation
4. Memory-management information: includes the values of the base and limit registers. This information is useful for deallocating the memory when the process terminates.
5. CPU scheduling information: includes the CPU scheduling information for each process (e.g., process priorities, pointers to scheduling queues, etc.).
6. Accounting information: includes the amount of CPU and real time used, time limits, job or process numbers, account numbers, etc.
7. I/O status information: includes the list of opened files.
8. Event information: for a process in the blocked (wait) state, this field contains information concerning the event for which the process is waiting.
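
To make the PCB fields concrete, here is a schematic C structure; the field names, types, and sizes are illustrative assumptions, not the layout of any particular operating system.

    /* Schematic process control block (illustrative only). */
    typedef enum { NEW, READY, RUNNING, BLOCKED, TERMINATED } proc_state_t;

    typedef struct pcb {
        int           pid;              /* process identifier             */
        proc_state_t  state;            /* 1. process state               */
        unsigned long program_counter;  /* 2. address of next instruction */
        unsigned long registers[16];    /* 3. saved CPU registers         */
        unsigned long base, limit;      /* 4. memory-management info      */
        int           priority;         /* 5. CPU scheduling info         */
        unsigned long cpu_time_used;    /* 6. accounting info             */
        int           open_files[16];   /* 7. I/O status info             */
        int           wait_event;       /* 8. event the process awaits    */
        struct pcb   *next;             /* link in a scheduling queue     */
    } pcb_t;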
2.2 Thread

Thread concept
• The process model is based on two independent concepts: resource grouping and execution.
• One way of looking at a process is that it is a way to group related resources together.
• A process has an address space containing program text and data, as well as other resources. These resources may include open files, child processes, pending alarms, signal handlers, accounting information, and more.
• By putting them together in the form of a process, they can be managed more easily.
• The other concept a process has is a thread of execution, usually shortened to just thread.
• The thread has a program counter that keeps track of which instruction to execute next.
Thread concept (con’t..)
• It has registers, which hold its current working variables.
• It has a stack, which contains the execution history, with one frame for each procedure called but not yet returned from.
• Processes are used to group resources together; threads are the entities scheduled for execution on the CPU.
• The term multithreading is also used to describe the situation of allowing multiple threads in the same process.
Thread concept(con’t..)
o A thread consists of:
• thread id
• program counter
• register set
• stack
o Threads belonging to the same process share:
• its code
• its data section
• other OS resources

Processes and Threads
Similarities
• Both share the CPU: only one thread/process is active (running) at a time.
• Like processes, threads within a process execute sequentially.
• Like processes, a thread can create children.
• Like a process, if one thread is blocked, another thread can run.
Differences
• Unlike processes, threads are not independent of one another.
• Unlike processes, all threads can access every address in the task.
• Unlike processes, threads are designed to assist one another.
Thread usage
o There are several reasons for having multiple threads:
o In many applications, multiple activities are going on at once.
 By decomposing such an application into multiple sequential threads that run in quasi-parallel, the programming model becomes simpler.
o Threads are lighter weight than processes, so they are easier (i.e., faster) to create and destroy than processes.
o Having multiple threads within an application can yield higher performance.
• If there is substantial computing and also substantial I/O, having threads allows these activities to overlap, thus speeding up the application.
o Threads are useful on systems with multiple CPUs.
Thread library
o Thread libraries provide programmers an API to create and manage threads.
There are three basic libraries used:
POSIX Pthreads
• They may be provided as either a user- or kernel-level library, as an extension to the POSIX standard.
• Systems like Solaris, Linux, and Mac OS X implement the Pthreads specification.
WIN32 threads
• These are provided as a kernel-level library on Windows systems.
Java threads
» Since Java generally runs on a Java Virtual Machine, the implementation of threads is based upon whatever OS and hardware the JVM is running on, i.e. either Pthreads or Win32 threads depending on the system.
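
A minimal Pthreads sketch (POSIX assumed; compile with -lpthread; not part of the original slides): two threads are created in the same process, share its data section, and are joined by main.

    #include <stdio.h>
    #include <pthread.h>

    int shared = 0;                  /* data section shared by all threads */

    void *worker(void *arg) {
        int id = *(int *)arg;
        shared += id;                /* unsynchronized; see race conditions later */
        printf("thread %d running\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        int id1 = 1, id2 = 2;
        pthread_create(&t1, NULL, worker, &id1);
        pthread_create(&t2, NULL, worker, &id2);
        pthread_join(t1, NULL);      /* wait for both threads to finish */
        pthread_join(t2, NULL);
        printf("shared = %d\n", shared);
        return 0;
    }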

Thread implementation
o There are two main ways to implement a threads package: in user space and in the kernel.
Implementing Threads in User Space
 All code and data structures reside in user space.
 Invoking a function in the library results in a local procedure call in user space, not a system call.
 The kernel is not aware of the existence of threads.
Advantages:
 To do thread switching, the thread calls a run-time system procedure, which is at least an order of magnitude (maybe more) faster than trapping to the kernel.
 They allow each process to have its own customized scheduling algorithm.
Disadvantages:
 The problem of how blocking system calls are implemented.
 The problem of page faults.
 No other thread in that process will ever run unless the first thread voluntarily gives up the CPU.
Thread implementation
Implementing Threads in Kernel Space
o All code and data structures reside in kernel space.
o Invoking a function in the library results in a system call.
o The kernel is aware of the existence of threads.
Advantages:
 All calls that might block a thread are implemented as system calls.
 If one thread in a process causes a page fault, the kernel can easily check to see if the process has any other runnable threads, and if so, run one of them while waiting for the required page to be brought in from the disk.
o While kernel threads solve some problems, they do not solve all problems.
o For example, what happens when a multithreaded process forks?
 In many cases, the best choice depends on what the process is planning to do next.
Implementing Threads in Kernel Space (con’t..)
o When a signal comes in, which thread should handle it?
 Possibly threads could register their interest in certain signals, but two or more threads might register for the same signal.
Hybrid Implementations
o Use kernel-level threads and then multiplex user-level threads onto some or all of the kernel threads.
2.3 Inter Process Communication

Process communication
o The processes executing in a multiprogrammed system can be independent or cooperating processes.
o An independent process cannot affect or be affected by the execution of another process.
o A cooperating process can affect or be affected by the execution of another process.
o Processes that need to cooperate require a facility through which they can communicate and synchronize their actions.
o Advantages of process cooperation
 Information sharing
 Computation speed-up
 Break a task into several subtasks and run them in parallel
 Modularity
 Constructing the system in a modular fashion
 Convenience
 A user will have many tasks to work on in parallel (editing, compiling, printing)
Process communication (con’t….)
o The IPC facility provides a mechanism to allow processes to communicate and synchronize their actions.
o Processes can communicate through shared memory or message passing.
Both schemes may exist in an OS.
o The shared-memory method requires communicating processes to share some variables.
o The responsibility for providing communication rests with the programmer.
The OS only provides the shared memory.
Example: the producer-consumer problem.
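
A minimal message-passing sketch, assuming a UNIX-like system (a pipe stands in for the send/receive link; a full producer-consumer solution would add buffering and synchronization):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];
        pipe(fd);                      /* fd[0] = read end, fd[1] = write end */
        if (fork() == 0) {             /* child: the consumer */
            char buf[64];
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof buf - 1);  /* receive(message) */
            if (n > 0) {
                buf[n] = '\0';
                printf("consumer got: %s\n", buf);
            }
            return 0;
        }
        close(fd[0]);                  /* parent: the producer */
        write(fd[1], "hello", strlen("hello"));            /* send(message) */
        close(fd[1]);
        wait(NULL);
        return 0;
    }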

Communication model
[Figure: the two IPC models - message passing (through the kernel) and shared memory.]
Process communication (con’t…)
o Message system: processes communicate with each other without resorting to shared variables.
o If P and Q want to communicate, a communication link must exist between them.
o The OS provides this facility.
o The IPC facility provides two operations:
 send(message) – message size fixed or variable
 receive(message)
o If P and Q wish to communicate, they need to:
 establish a communication link between them
 exchange messages via send/receive
o Implementation of the communication link
 physical (e.g., shared memory, hardware bus)
 logical (e.g., logical properties)
Message passing systems
o Direct or Indirect communication

o Synchronous or asynchronous
communication

o Automatic or explicit buffering

Direct Communication
o Processes must name each other explicitly:
• send (P, message) – send a message to process P
• receive(Q, message) – receive a message from process Q
o Properties of communication link
• Links are established automatically.
• A link is associated with exactly one pair of communicating
processes.
• Between each pair there exists exactly one link.
• The link may be unidirectional, but is usually bi-directional.
o This exhibits both symmetry and asymmetry in addressing
• Symmetry: Both the sender and the receiver processes must name
the other to communicate.
• Asymmetry: Only sender names the recipient, the recipient is not
required to name the sender.
 The send and receive primitives in the asymmetric scheme are as follows:
 send(P, message) – send a message to process P.
 receive(id, message) – receive a message from any process; id is set to the name of the sender.
Indirect Communication
o The messages are sent to and received from mailboxes (also referred to as ports).
o A mailbox is an object into which:
Processes can place messages
Processes can remove messages
o Two processes can communicate only if they have a shared mailbox.
o Operations
create a new mailbox
send and receive messages through the mailbox
destroy a mailbox
o Primitives are defined as:
 send(A, message) – send a message to mailbox A
 receive(A, message) – receive a message from mailbox A
Indirect Communication (con’t…)
• Mailbox sharing
• P1, P2, and P3 share mailbox A.
• P1 sends; P2 and P3 receive.
• Who gets the message?
• Properties of a link:
• A link is established if the processes have a shared mailbox.
• A link may be associated with more than two processes.
• Between a pair of processes, there may be a number of links.
• A link may be either unidirectional or bi-directional.
• The OS provides a facility
• To create a mailbox
• To send and receive messages through the mailbox
• To destroy a mailbox
• The process that creates a mailbox is the owner of that mailbox.
• The ownership and send/receive privileges can be passed to other processes through system calls.
Synchronous or asynchronous
o Message passing may be either blocking or non-blocking.
o Blocking is considered synchronous
o Non-blocking is considered asynchronous
o send and receive primitives may be either blocking or
non-blocking.
 Blocking send: The sending process is blocked until
the message is received by the receiving process or
by the mailbox.
 Non-blocking send: The sending process sends the
message and resumes operation.
 Blocking receive: The receiver blocks until a message
is available.
 Non-blocking receive: The receiver receives either a
valid message or a null.

Automatic and explicit buffering
 A link has some capacity that determines the number of messages that can reside in it temporarily.
 A queue of messages is attached to the link; it is implemented in one of three ways:
1. Zero capacity – 0 messages; the sender must wait for the receiver (rendezvous).
2. Bounded capacity – finite length of n messages; the sender must wait if the link is full.
3. Unbounded capacity – infinite length; the sender never waits.
 In the non-zero-capacity cases, a process does not know whether a message has arrived after the send operation.
 The sender must communicate explicitly with the receiver to find out whether the latter received the message.
 Example: suppose P sends a message to Q and continues only after the message has arrived.
 Process P:
 send(Q, message) : send message to process Q
 receive(Q, message) : receive acknowledgement from process Q
 Process Q:
 receive(P, message)
 send(P, "ack")
Process Synchronization
 Concurrent processes may have access to shared data and resources.
 If there is no controlled access to shared data, some processes will obtain an inconsistent view of the shared data.
 Consider two processes P1 and P2 accessing shared data. While P1 is updating the data, it is preempted (because of a timeout, for example) so that P2 can run. Then P2 tries to read the data, which are partly modified.
 This results in data inconsistency.
In such cases, the outcome of the actions performed by concurrent processes depends on the order in which their execution is interleaved.
Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.
 Process Synchronization
o Mechanisms to ensure the orderly execution of cooperating processes that share a logical address space, so that data consistency is maintained.
o The mechanisms of process synchronization are discussed below.
Race condition
 Race condition: the situation where several processes access and manipulate shared data concurrently and the final value of the shared data depends upon which process finishes last.
 The key to preventing trouble here, and in many other situations involving shared memory, shared files, and shared everything else, is to find some way to prohibit more than one process from reading and writing the shared data at the same time.
 To prevent race conditions, concurrent processes must coordinate or be synchronized.
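
A short Pthreads demonstration of a race condition (an added sketch, not from the original slides): two threads increment a shared counter without synchronization; because "counter++" is a non-atomic read-modify-write, the final value is usually less than the expected 2000000.

    #include <stdio.h>
    #include <pthread.h>

    long counter = 0;                /* shared data */

    void *inc(void *arg) {
        for (int i = 0; i < 1000000; i++)
            counter++;               /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, inc, NULL);
        pthread_create(&b, NULL, inc, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld (expected 2000000)\n", counter);
        return 0;
    }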

Critical-Section
 A critical section is a piece of code in which a process or thread accesses a common shared resource.
The important feature of the system is to ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section; i.e., no two processes execute in their critical sections at the same time.
 When a process executes code that manipulates shared data (or a shared resource), we say that the process is in its critical section (for that shared data).
 The execution of critical sections must be mutually exclusive: at any time, only one process is allowed to execute in its critical section (even with multiple processors).
The Critical-Section Problem
 Each process must first request permission to enter its critical section.
 The section of code implementing this request is called the Entry Section (ES).
 The critical section (CS) might be followed by a Leave/Exit Section (LS).
 The remaining code is the Remainder Section (RS).
 The critical-section problem is to design a protocol that the processes can use so that their actions will not depend on the order in which their execution is interleaved (possibly on many processors).
General structure of process Pi (other process Pj):
do {
   entry section
      critical section
   leave/exit section
      remainder section
} while (1);
Solution to Critical-Section Problem
A solution to a critical –section problem must
satisfy the following four requirements.
1. No two processes may be simultaneously inside their
critical regions.
2. No assumptions may be made about speed or the
number of CPUs.
3. No process running outside its critical region may
block other processes.
4. No process should have to wait forever to enter its
critical region.

Mutual Exclusion with busy waiting
Mutual exclusion is a property of process synchronization that states that “no two processes can exist in the critical section at any given point of time”.
1. Disabling interrupts: here each process disables all interrupts just after entering its critical region and re-enables them just before leaving it.
 With interrupts disabled, no clock interrupts can occur.
 The CPU is only switched from process to process as a result of clock or other interrupts, after all, and with interrupts turned off the CPU will not be switched to another process.
 Thus, once a process has disabled interrupts, it can examine and update the shared memory without fear that any other process will intervene.
Process Pi:
repeat
   disable interrupts
      critical section
   enable interrupts
      remainder section
forever
Mutual Exclusion with busy waiting (cont…)
 Drawbacks of disabling interrupts:
1. If a user process disabled interrupts and never turned them back on, that could be the end of the system.
2. If the system is a multiprocessor, with two or more CPUs, disabling interrupts affects only the CPU that executed the disable instruction.
 The other ones will continue running and can access the shared memory. That is, the critical section is now atomic but not mutually exclusive (interrupts are not disabled on the other processors).
 In general, disabling interrupts is often a useful technique within the operating system itself but is not appropriate as a general mutual exclusion mechanism for user processes.
Mutual Exclusion with busy waiting (cont…)
2. Lock Variables: a software solution that uses a single, shared (lock) variable, initially 0.
 When a process wants to enter its critical region, it first tests the lock.
 If the lock is 0, the process sets it to 1 and enters the critical region.
 If the lock is already 1, the process just waits until it becomes 0. Thus, a 0 means that no process is in its critical region, and a 1 means that some process is in its critical region.
 Unfortunately, this idea contains exactly the same fatal flaw that we saw in the spooler directory. Suppose that one process reads the lock and sees that it is 0. Before it can set the lock to 1, another process is scheduled, runs, and sets the lock to 1. When the first process runs again, it will also set the lock to 1, and two processes will be in their critical regions at the same time.
Mutual Exclusion with busy waiting (cont…)
Lock variable
do {
   acquire lock
      critical section
   release lock
      remainder section
} while (TRUE);
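
The fatal flaw described above disappears if the test and the set happen as one indivisible hardware step. A sketch using C11 atomics (an assumption for illustration; the slides describe the lock only abstractly):

    #include <stdatomic.h>

    atomic_flag lock = ATOMIC_FLAG_INIT;       /* clear = no process in its CS */

    void enter_region(void) {
        while (atomic_flag_test_and_set(&lock))
            ;                                  /* busy wait until the lock was free */
    }

    void leave_region(void) {
        atomic_flag_clear(&lock);              /* mark the lock free again */
    }

Because atomic_flag_test_and_set reads the old value and writes 1 in a single atomic operation, no second process can slip in between the test and the set.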

Mutual Exclusion with busy waiting (cont…)
3. Strict Alternation: the integer variable turn, initially 0, keeps track of whose turn it is to enter the critical region and examine or update the shared memory.
 Initially, process 0 inspects turn, finds it to be 0, and enters its critical region.
 Process 1 also finds it to be 0 and therefore sits in a tight loop continually testing turn to see when it becomes 1.
 Continuously testing a variable until some value appears is called busy waiting.
 It should usually be avoided, since it wastes CPU time. Busy waiting is used only when there is a reasonable expectation that the wait will be short.
 A lock that uses busy waiting is called a spin lock. A sketch is given below.
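
A C sketch of strict alternation for two processes, assuming the variable turn lives in memory shared by both (in real code it would need to be volatile or atomic; the critical and noncritical regions are left as placeholders):

    int turn = 0;                    /* whose turn it is to enter; shared */

    void process(int i) {            /* i is 0 or 1 */
        for (;;) {
            while (turn != i)
                ;                    /* busy wait (spin) until it is our turn */
            /* critical_region(); */
            turn = 1 - i;            /* hand the turn to the other process */
            /* noncritical_region(); */
        }
    }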
2.4 CPU Scheduling

Introduction to scheduling
 When a computer is multiprogrammed, it frequently has multiple processes competing for the CPU at the same time.
 When more processes are in the ready state than there are available CPUs, the operating system must decide which process to run first.
 The part of the operating system that makes the choice is called the scheduler, and the algorithm it uses is called the scheduling algorithm.
 CPU scheduling dedicates the CPU to one process while the other processes are kept on hold; process scheduling shares the CPU among multiple processes using time multiplexing.
Process scheduling queues
o The objective of multiprogramming:
– To have some process running at all times.
o Timesharing: switch the CPU so frequently that users can interact with each program while it is running.
o If there are many processes, the rest have to wait until the CPU is free.
o Scheduling is deciding which process to execute and when.
o Scheduling queues: several queues are used for scheduling:
a) Job queue – the set of all processes in the system.
b) Ready queue – the set of all processes residing in main memory, ready and waiting to execute.
c) Device queues – the set of processes waiting for an I/O device.
• Each device has its own queue.
o A process migrates between the various queues during its lifetime.
Schedulers
o A process in the job queue is selected in some fashion and assigned to memory/the CPU.
o The selection is carried out by a scheduler. Schedulers are of three types:
1. Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue from the job queue (it determines the degree of multi-programming).
2. Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU.
3. Medium-term (or emergency) scheduler – swaps a process out of memory (the ready queue) to be swapped in again later (it decreases the degree of multiprogramming).
[Figure: passive programs wait in the job (input) queue on disk; the long-term scheduler selects a process from the job queue and brings it into the ready queue in memory; the short-term scheduler assigns the CPU to a process from the ready queue; the medium-term scheduler swaps a process from the ready queue back to the job queue.]
Degree of multi-programming is the number of processes that are placed in the ready queue waiting for execution by the CPU.
[Figure: five processes in memory make up the degree of multi-programming.]


 Since the long-term scheduler selects which processes are brought into the ready queue, it increases the degree of multiprogramming.
[Figure: the long-term scheduler moves processes from the job queue on disk into memory.]

Since the medium-term scheduler picks some processes from the ready queue and swaps them out of memory, it decreases the degree of multiprogramming.
[Figure: the medium-term scheduler moves processes from memory back to the job queue on disk.]


Categories of Scheduling Algorithms
 For different environments, different scheduling algorithms are needed.
 This situation arises because different application areas (and different kinds of operating systems) have different goals.
 Three environments worth distinguishing are:
Batch.
Interactive.
Real time.
 In batch systems, there are no users impatiently waiting at their terminals for a quick response, so non-preemptive algorithms (or preemptive algorithms with long time periods for each process) are often acceptable.
 This approach reduces process switches and thus improves performance.
Categories of Scheduling Algorithms(con’t..)
o In an environment with interactive users, preemption
(temporary interruption of a task without its cooperation with
the intention of resuming it at later time) is essential to keep
one process from hogging the CPU and denying service to the
others.
o Even if no process intentionally ran forever, due to a
program bug, one process might shut out all the others
indefinitely.
o Preemption is needed to prevent this behavior.
o In systems with real-time constraints, preemption is, oddly
enough, sometimes not needed because the processes know
that they may not run for long periods of time and usually
do their work and block quickly.
o The difference with interactive systems is that real-time
systems run only programs that are intended to further the
application at hand. Interactive systems are general purpose
and may run arbitrary programs that are not cooperative or
even malicious.
Categories of Scheduling Algorithms (con’t..)
o Scheduling algorithms can be divided into two categories with respect to how they deal with clock interrupts.
Preemptive scheduling: the currently executing process can be released from the CPU when another process (with a higher priority) arrives and needs execution.
Non-preemptive scheduling: once the CPU has been allocated to a process, the process keeps the CPU until it releases it.
[Figure: in preemptive scheduling a running process can be forced off the CPU; in non-preemptive scheduling it keeps the CPU until it gives it up.]
CPU scheduling
CPU scheduling is the method of selecting a process from the ready queue to be executed by the CPU whenever the CPU becomes idle.
o CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
Scheduling Criteria
CPU Utilization:
The percentage of time the CPU is busy out of the total time (time busy + time idle). Hence, it measures the benefit obtained from the CPU.
To maximize utilization, keep the CPU as busy as possible.
CPU utilization ranges from 40% (for lightly loaded systems) to 90% (for heavily loaded systems). (Explain why CPU utilization cannot reach 100%: because of the context switches between active processes.)

CPU Utilization = (Time CPU busy / Total time) * 100
System Throughput:
 The number of processes that are completed per time unit (e.g., per hour).

Turnaround time:
 For a particular process, it is the total time needed for process execution (from the time of submission to the time of completion).
 It is the sum of the process execution time and its waiting times (to get memory, perform I/O, …).

Waiting time:
 The waiting time for a specific process is the sum of all periods it spends waiting in the ready queue.

Response time:
 It is the time from the submission of a process until the first response is produced (the time the process takes to start responding).

It is desirable to:

Maximize:
CPU utilization.
System throughput.

Minimize:
Turnaround time.
Waiting time.
Response time.

Scheduling Algorithms
First Come First Served (FCFS) algorithm
 The process that comes first will be executed first.
 Non-preemptive (the first job is allowed to run as long as it wants).
 It is easy to understand and equally easy to program.
 With this algorithm, a single linked list keeps track of all ready processes.
Weaknesses
 A single process may selfishly monopolize the CPU.
 It is not good for time-sharing tasks.
 FCFS discriminates against short jobs, since any short job arriving after a long job will have a long waiting time.
First Come First Served (FCFS) algorithm (con’t..)
[Figure: processes join the ready queue in arrival order and are dispatched to the CPU first come, first served.]


Consider the following set of processes, with the length of the CPU burst (execution) time given in milliseconds:

Process  Burst Time
P1       24
P2       3
P3       3

The processes arrive in the order P1, P2, P3, all at time 0.
 Gantt chart: | P1 (0-24) | P2 (24-27) | P3 (27-30) |
 The waiting times and turnaround times for each process are:

Process                P1  P2  P3
Waiting Time (WT)       0  24  27
Turnaround Time (TAT)  24  27  30

(TAT = WT + execution time.)
 Hence, average waiting time = (0+24+27)/3 = 17 milliseconds
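
The same arithmetic can be checked with a short C sketch (assuming all processes arrive at time 0 and are served in submission order):

    #include <stdio.h>

    int main(void) {
        int burst[] = {24, 3, 3};                 /* P1, P2, P3 */
        int n = 3, wait = 0, total_wait = 0;
        for (int i = 0; i < n; i++) {
            printf("P%d: WT=%2d TAT=%2d\n", i + 1, wait, wait + burst[i]);
            total_wait += wait;
            wait += burst[i];                     /* the next process waits longer */
        }
        printf("average WT = %.2f ms\n", (double)total_wait / n);
        return 0;
    }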
Repeat the previous example, assuming that the processes arrive in the order P2, P3, P1, all at time 0.

Process  Burst Time
P1       24
P2       3
P3       3

 Gantt chart: | P2 (0-3) | P3 (3-6) | P1 (6-30) |
 The waiting times and turnaround times for each process are:

Process                P1  P2  P3
Waiting Time (WT)       6   0   3
Turnaround Time (TAT)  30   3   6

 Hence, average waiting time = (6+0+3)/3 = 3 milliseconds
Shortest-Job-First (SJF) scheduling
 When the CPU is available, it is assigned to the process with the smallest CPU burst (non-preemptive).
 If two processes have the same next CPU burst, FCFS is used.
 Shortest job first is provably optimal when all the jobs are available simultaneously.
 Mainly used in the long-term scheduler.
[Figure: SJF reorders the ready queue by execution time, e.g., processes with bursts 18, 10, 7, 5 are served shortest first; the numbers indicate process execution times.]
Consider the following set of processes, with the length of the CPU burst time given in milliseconds:

Process  Burst Time
P1       6
P2       8
P3       7
P4       3

The processes arrive in the order P1, P2, P3, P4, all at time 0.
1. Using FCFS
 Gantt chart: | P1 (0-6) | P2 (6-14) | P3 (14-21) | P4 (21-24) |
 The waiting times and turnaround times for each process are:

Process                P1  P2  P3  P4
Waiting Time (WT)       0   6  14  21
Turnaround Time (TAT)   6  14  21  24

 Hence, average waiting time = (0+6+14+21)/4 = 10.25 milliseconds
2. Using SJF

Process  Burst Time
P1       6
P2       8
P3       7
P4       3

 Gantt chart: | P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |
 The waiting times and turnaround times for each process are:

Process                P1  P2  P3  P4
Waiting Time (WT)       3  16   9   0
Turnaround Time (TAT)   9  24  16   3

 Hence, average waiting time = (0+3+9+16)/4 = 7 milliseconds
Shortest-Remaining-Time-First (SRTF)
 It is a preemptive version of Shortest Job First.
 It allows a new process to gain the processor if its execution time is less than the remaining time of the currently executing one.
 When a new job arrives, its total time is compared to the current process’s remaining time.
 If the new job needs less time to finish than the current process, the current process is suspended and the new job is started.
[Figure: under SRTF, an arriving process with a shorter remaining time preempts the process on the CPU.]
Consider the following set of processes, with the length of the CPU burst time given in milliseconds:

Process  Burst Time  Arrival Time
P1       7           0
P2       4           2
P3       1           4
P4       4           5

The processes arrive in the order P1, P2, P3, P4, as shown in the table.
1. Using SJF
 Gantt chart: | P1 (0-7) | P3 (7-8) | P2 (8-12) | P4 (12-16) |
 The waiting times and turnaround times for each process are:

Process                P1  P2  P3  P4
Waiting Time (WT)       0   6   3   7
Turnaround Time (TAT)   7  10   4  11

 Hence, average waiting time = (0+6+3+7)/4 = 4 milliseconds
2. Using SRTF

Process  Burst Time  Arrival Time
P1       7           0
P2       4           2
P3       1           4
P4       4           5

 Gantt chart: | P1 (0-2) | P2 (2-4) | P3 (4-5) | P2 (5-7) | P4 (7-11) | P1 (11-16) |
 The waiting times and turnaround times for each process are:

Process                P1  P2  P3  P4
Waiting Time (WT)       9   1   0   2
Turnaround Time (TAT)  16   5   1   6

 Hence, average waiting time = (9+1+0+2)/4 = 3 milliseconds
Round Robin scheduling
 Is one of the oldest, simplest, fairest, and most widely
used algorithms.
 Allocate the CPU for one Quantum time (also called time
slice) Q to each process in the ready queue.
 If the process has blocked or finished before the quantum
has elapsed, the CPU switching is done when the process
blocks, of course.
 This scheme is repeated until all processes are finished.
 A new process is added to the end of the ready queue.
 setting the quantum too short causes too many process
switches and lowers the CPU efficiency, but setting it too
long may cause poor response to short interactive requests.

Round Robin scheduling (con’t..)
• A quantum of around 20–50 msec is often a reasonable compromise.
• RR treats all jobs equally (giving them equal bursts of CPU time), so short jobs will be able to leave the system faster, since they will finish first.
[Figure: round-robin scheduling; each process in the ready queue receives the CPU for one quantum Q in turn. A simulation sketch follows.]
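
A small C sketch that simulates round robin for processes arriving at time 0 (cycling over an array approximates the ready queue; the quantum and bursts match the example that follows):

    #include <stdio.h>

    int main(void) {
        int remaining[] = {24, 3, 3};     /* P1, P2, P3, all arrive at t=0 */
        int n = 3, q = 4, t = 0, done = 0;
        while (done < n) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0) continue;
                int slice = remaining[i] < q ? remaining[i] : q;
                t += slice;               /* run for one quantum (or less) */
                remaining[i] -= slice;
                if (remaining[i] == 0) {  /* process finished */
                    printf("P%d finishes at t=%d\n", i + 1, t);
                    done++;
                }
            }
        }
        return 0;
    }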
Consider the following set of processes, with the length of the CPU burst time given in milliseconds:

Process  Burst Time
P1       24
P2       3
P3       3

The processes arrive in the order P1, P2, P3, all at time 0. Use RR scheduling with Q=2 and Q=4.

RR with Q=4
 Gantt chart: | P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-30) |
 The waiting times and turnaround times for each process are:

Process                P1  P2  P3
Waiting Time (WT)       6   4   7
Turnaround Time (TAT)  30   7  10

Hence, average waiting time = (6+4+7)/3 = 5.66 milliseconds
RR with Q=2

Process  Burst Time
P1       24
P2       3
P3       3

 Gantt chart: | P1 (0-2) | P2 (2-4) | P3 (4-6) | P1 (6-8) | P2 (8-9) | P3 (9-10) | P1 (10-30) |
 The waiting times and turnaround times for each process are:

Process                P1  P2  P3
Waiting Time (WT)       6   6   7
Turnaround Time (TAT)  30   9  10

 Hence, average waiting time = (6+6+7)/3 = 6.33 milliseconds
Explain why decreasing the quantum time slows down the execution of the processes.

Sol:
Because decreasing the quantum time increases the number of context switches (the time needed by the processor to switch between the processes in the ready queue), which increases the time needed to finish the execution of the active processes; hence, this slows down the system.
Priority scheduling
 A priority number (integer) is associated with each process.
 The CPU is allocated to the process with the highest priority (smallest integer).
 It is often convenient to group processes into priority classes and use priority scheduling among the classes but round-robin scheduling within each class.
 There are two types:
 Preemptive
 Non-preemptive
[Figure: priority scheduling reorders the ready queue by priority number; the numbers indicate process priorities.]
Problems with Priority scheduling
Problem: Starvation (indefinite blocking) – low-priority processes may never execute.
Solution: Aging – as time progresses, increase the priority of the process.
[Figure: a very low-priority process starves behind higher-priority ones; with aging, its priority gradually rises until it is scheduled.]
Consider the following set of processes, with the length of the CPU burst time given in milliseconds:

Process  Burst Time  Priority
P1       10          3
P2       1           1
P3       2           4
P4       1           5
P5       5           2

The processes arrive in the order P1, P2, P3, P4, P5, all at time 0.
1. Using priority scheduling
 Gantt chart: | P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |
 The waiting times and turnaround times for each process are:

Process                P1  P2  P3  P4  P5
Waiting Time (WT)       6   0  16  18   1
Turnaround Time (TAT)  16   1  18  19   6

Hence, average waiting time = (6+0+16+18+1)/5 = 8.2 milliseconds
Multi-level queuing scheduling
• The ready queue is partitioned into separate queues:
• foreground (interactive)
• background (batch)
• Each queue has its own scheduling algorithm:
• foreground – RR
• background – FCFS
• Scheduling must also be done between the queues:
• Fixed-priority scheduling (i.e., serve all from foreground, then from background); possibility of starvation.
• Time slice: each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR, 20% to background in FCFS.
There are two types:
Without feedback: processes cannot move between queues.
With feedback: processes can move between queues.
Multi-level queuing without feedback:
• Divide the ready queue into several queues.
• Each queue has a specific priority and its own scheduling algorithm (FCFS, …).
[Figure: several ready queues ranked from high priority to low priority.]
Multi-level queuing with feedback:
 Divide the ready queue into several queues.
 Each queue has a specific quantum time.
 Allow processes to move between queues.
[Figure: three queues (Queue 0, Queue 1, Queue 2), each with its own quantum; processes can move between them.]
Multiple-Processor Scheduling
• CPU scheduling is more complex when multiple CPUs are available.
• Symmetric multiprocessor systems: all CPUs can perform scheduling independently (a complex task).
• Asymmetric multiprocessor systems: only one processor (the master CPU) handles all the scheduling tasks.
• Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing.
• Load sharing: the load must be fairly distributed among processors to maximize processor use. Load balancing is especially important when each processor has its own private queue.
• Two general approaches:
• Push migration: keeping the load balanced by pushing processes from an overloaded processor to an idle one.
• Pull migration: an idle processor pulls processes from an overloaded one.
Thread scheduling
• Recall that there are two types of threads: user-level threads and kernel-level threads.
• On OSes supporting them, it is kernel-level threads - not processes - that are scheduled by the operating system.
• User-level threads are managed by the thread library, and the kernel is unaware of them.
• To run on a CPU, user-level threads must be mapped to an associated kernel-level thread.
• On systems implementing the many-to-one and many-to-many models, the thread library schedules user-level threads on the available resources; this scheme is called process contention scope (PCS), since threads of the same process compete for the CPU.
• To decide which kernel thread to schedule onto a CPU, the kernel uses system contention scope (SCS). Competition for the CPU with SCS takes place among all threads in the system. Systems using the one-to-one model (such as Windows XP, Solaris 9, Linux) use only SCS.
2.5 Deadlock

Introduction
• A set of processes is deadlocked if each process in the set is waiting for an event that only another process in the set can cause.
• Because all the processes are waiting, none of them will ever cause any of the events that could wake up any of the other members of the set, and all the processes continue to wait forever.
• Each member of the set of deadlocked processes is waiting for a resource that is owned by a deadlocked process.
• None of the processes can run, none of them can release any resources, and none of them can be awakened.
• This kind of deadlock is called a resource deadlock. It is probably the most common kind, but it is not the only kind.
Analogy
[Figure: a real-world analogy of deadlock leading to system breakdown.]
Resource deadlock
Deadlock: a set of blocked processes, each
1. holding a resource, and
2. waiting to use a resource held by another process in the set.
[Figure: process A holds a resource and asks for the one held by process B, while B asks for A’s resource first.]
Hence, the blocked processes will never change state. (Explain why: because the resource each has requested is held by another waiting process.)
Resource
o A major class of deadlocks involves resources.
o Deadlocks can occur when processes have been granted exclusive access to devices, data records, files, and so forth.
o In general, the objects granted to a process are referred to as resources.
o A resource can be a hardware device (e.g., a tape drive) or a piece of information (e.g., a locked record in a database).
o A process must request a resource before using it and release it after making use of it. Each process utilizes a resource as follows:
 Request: a process requests an instance of a resource type. If the resource is free, the request will be granted; otherwise the process should wait until it acquires the resource.
 Use: the process uses the resource for its operations.
 Release: the process releases the resource.
Resource Types
Preemptable resources: a resource that can be taken away from the process owning it with no ill effects.
 Ex: memory is a preemptable resource, because it can be taken away from its owner (by swapping the process out) and given back later without causing the computation to fail.
Non-preemptable resources: a resource that cannot be taken away from its current owner without causing the computation to fail.
 Ex: a CD recorder is a non-preemptable resource, because if a process has begun to burn a CD-ROM, suddenly taking the CD recorder away from it and giving it to another process will result in a bad CD.
 Assume a system with 32 KB of memory.
 5 KB are used for the OS and 10 KB for a low-priority process, so 17 KB are available.
 A higher-priority process arrives and needs 20 KB.
[Figure: the low-priority process is swapped out to disk, leaving 27 KB available; the high-priority 20 KB process is swapped in and runs to completion; the low-priority process is then swapped back in to resume execution. Memory is thus preempted with no ill effects.]
Deadlock characterization
Deadlock can arise if four conditions hold simultaneously in a system:
1. Mutual exclusion: only one process can use a resource at a time.
2. Hold and wait: a process holding at least one resource is waiting for additional resources held by other processes.
3. No preemption: a resource is released only by the process holding it, after it has completed its task.
4. Circular wait: a set of processes each waits for another one in a circular fashion.
Note: all four conditions must occur to have a deadlock. If one condition is absent, a deadlock cannot occur.
Circular wait
There exists a set {P0, P1, P2, …, Pn} of waiting processes such that:
 P0 waits for a resource held by P1,
 P1 waits for a resource held by P2,
 P2 waits for a resource held by P3,
 …
 Pn waits for a resource held by P0.
[Figure: P0 → P1 → P2 → … → Pn → P0, forming a cycle.]
Deadlock handling mechanism
Deadlock problems can be handled in one of the following 4
ways:
1. Using a protocol avoids deadlock by ensuring that a system
will never enter a deadlock state (deadlock avoidance)
2. structurally negating one of the four required conditions
(deadlock prevention).
3. Allow the system to enter a deadlock state and then
recover(deadlock detection and recovery)
4. Ignore the problem and pretend that deadlocks never occur
in the system(ostrich algorithm); used by most operating
systems, including UNIX

Deadlock prevention
1. Mutual Exclusion – this is not required for sharable resources; however, to prevent a system from deadlock, the mutual-exclusion condition must hold for non-sharable resources.
2. Hold and Wait – to prevent the occurrence of this condition in a system, we must guarantee that whenever a process requests a resource, it does not hold any other resources. Two protocols are used to implement this:
1. Require a process to request and be allocated all its resources before it begins execution, or
2. Allow a process to request resources only when the process holds none.
Deadlock prevention (con’t…)
 Both protocols have two main disadvantages:
o Since resources may be allocated but not used for a long period, resource utilization will be low.
o A process that needs several popular resources may have to wait indefinitely, because one of the resources it needs is always allocated to another process. Hence starvation is possible.
3. No Preemption
• If a process holding certain resources is denied a further request, that process must release the resources originally allocated to it.
Deadlock prevention (con’t…)
• If a process requests a resource allocated to another process that is waiting for some additional resources, and the requested resource is not being used, then the resource is preempted from the waiting process and allocated to the requesting process.
» Preempted resources are added to the list of resources for which the process is waiting.
» A process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting.
» This approach is practical for resources whose state can easily be saved and restored.
Deadlock prevention (con’t…)
4. Circular Wait
• A linear ordering of all resource types is defined, and each process requests resources in increasing order of enumeration.
• So, if a process is initially allocated instances of resource type R, it can subsequently request only instances of resource types following R in the ordering.
Deadlock detection and recovery
o The system does not attempt to prevent deadlocks from
occurring.
o It lets them occur, tries to detect when this happens,
and then takes some action to recover after the fact.
o In this mechanism, the system must provide :
 A deadlock detection algorithm that examines the
state of the system if there is an occurrence of
deadlock
 An algorithm to recover from the deadlock

Deadlock Detection with One Resource of Each Type
o only one resource of each type exists.
o If the resource allocation graph contains one or more
cycles, a deadlock exists.
o Any process that is part of a cycle is deadlocked. If no
cycles exist, the system is not deadlocked.
o Many algorithms for detecting cycles in directed graphs
are known.
o Below we will give a simple one that inspects a graph
and terminates either when it has found a cycle or when
it has shown that none exists.

Deadlock Detection with One Resource of Each Type
The algorithm operates by carrying out the following
steps as specified:
1. For each node, N in the graph, perform the following
five steps with N as the starting node.
2. Initialize L to the empty list, and designate all the arcs
as unmarked.
3. Add the current node to the end of L and check to see
if the node now appears in L two times. If it does, the
graph contains a cycle(listed in L) and the algorithm
terminates.
4. From the given node, see if there are any unmarked
outgoing arcs. If so, go to step 5 ; if not, go to step 6.

Deadlock Detection with One Resource of Each Type
5. Pick an unmarked outgoing arc at random and mark it.
Then follow it to the new current node and go to step
3.
6. If this node is the initial node, the graph does not
contain any cycles and the algorithm terminates.
Otherwise, we have now reached a dead end. Remove
it and go back to the previous node, that is, the one
that was current just before this one, make that one the
current node, and go to step 3.
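
The steps above amount to a depth-first search that looks for a node appearing twice on the current path. A compact C sketch over an adjacency matrix (the example graph is an assumption for illustration):

    #include <stdio.h>
    #include <stdbool.h>

    #define N 4
    int arc[N][N];               /* arc[i][j] = 1: edge from node i to node j */
    bool on_path[N];             /* nodes currently on the list L             */

    bool dfs(int node) {
        if (on_path[node]) return true;      /* node seen twice: cycle found */
        on_path[node] = true;                /* add the node to L            */
        for (int next = 0; next < N; next++)
            if (arc[node][next] && dfs(next))
                return true;
        on_path[node] = false;               /* dead end: back up (step 6)   */
        return false;
    }

    int main(void) {
        arc[0][1] = arc[1][2] = arc[2][0] = 1;   /* cycle 0 -> 1 -> 2 -> 0 */
        for (int start = 0; start < N; start++)  /* step 1: try every node */
            if (dfs(start)) {
                printf("deadlock: cycle reachable from node %d\n", start);
                return 0;
            }
        printf("no cycle: the system is not deadlocked\n");
        return 0;
    }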

Deadlock Detection with Multiple Resources of Each Type
o Multiple copies of some of the resources exist.
o A matrix-based algorithm for detecting deadlock among n processes, P1 through Pn, is used.
o The algorithm uses several data structures similar to the ones in the banker’s algorithm:
 Existence: a vector of length m indicates the total number of existing instances of each resource type.
 Available: a vector of length m indicates the number of available resources of each type.
 Allocation: an n x m matrix defines the number of resources of each type currently allocated to each process.
 Request: an n x m matrix indicates the current request of each process. If Request[i][j] = k, then process Pi is requesting k more instances of resource type Rj.
Recovery from Deadlock
o Once a deadlock has been detected, a recovery strategy is needed. There are two possible recovery approaches:
 Process termination
 Resource preemption
Process Termination
 Abort all deadlocked processes.
 Abort one process at a time until the deadlock cycle is eliminated.
 In which order should we choose a process to abort? Choose the process with the:
 Least amount of processor time consumed so far
 Least amount of output produced so far
 Most estimated time remaining
 Least total resources allocated so far
 Lowest priority
Recovery from Deadlock
Resource Preemption
o In this recovery strategy, we successively preempt resources and allocate them to other processes until the deadlock is broken.
o While implementing this strategy, there are three issues to be considered:
 Selecting a victim – which resources and process should be selected to minimize cost, just as in process termination. The cost factors may include parameters like the number of resources a deadlocked process is holding and the number of resources it has used so far.
 Rollback – if a resource is preempted from a process, the process cannot continue its normal execution.
• The process must be rolled back to some safe state and restarted from that state.
 Starvation – the same process may be picked as a victim several times; as a result, starvation may occur. The best solution to this problem is to allow a process to be picked as a victim only a limited, finite number of times.
Deadlock avoidance
• Deadlock avoidance scheme requires each process to
declare the maximum number of resources of each type
that it may need in advance
• The deadlock-avoidance algorithm dynamically examines
the resource-allocation state to ensure that there can
never be a circular-wait condition.
• Resource-allocation state is defined by the number of
available and allocated resources, and the maximum
demands of the processes.
• Simplest and most useful model requires that each
process declare the maximum number of resources of
each type that it may need.

Safe and Unsafe States
• A state is said to be safe if there is some scheduling order in which every process can run to completion, even if all of them suddenly request their maximum number of resources immediately.
• A state is said to be unsafe if no such scheduling order can be guaranteed.
• If a system is in a safe state, then there are no deadlocks.
• If a system is in an unsafe state, then there is a possibility of deadlock.
• The deadlock avoidance method ensures that a system will never enter an unsafe state.
Deadlock Avoidance Algorithms
• Based on the concept of a safe state, we can define algorithms that ensure the system will never deadlock.
• If there is a single instance of each resource type:
 Use a resource-allocation graph.
• If there are multiple instances of a resource type:
 Use Dijkstra’s banker’s algorithm.
Deadlock Avoidance Algorithms (contd.)
Banker’s Algorithm
• This algorithm is used when there are multiple instances of resources.
• When a process enters the system, it must declare the maximum number of instances of each resource type it may need.
– The number, however, may not exceed the total number of resources in the system.
• When a process requests a resource, it may have to wait.
• When a process gets all its resources, it must return them in a finite amount of time.
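
A sketch of the safety check at the heart of the banker's algorithm (the resource numbers are illustrative assumptions): the state is safe if some order exists in which every process can obtain its maximum claim and finish.

    #include <stdio.h>
    #include <stdbool.h>

    #define NPROC 3
    #define NRES  2

    int available[NRES]         = {3, 4};
    int max_need[NPROC][NRES]   = {{7, 5}, {3, 2}, {4, 2}};
    int allocation[NPROC][NRES] = {{0, 1}, {2, 0}, {3, 0}};

    bool is_safe(void) {
        int work[NRES];
        bool finished[NPROC] = {false};
        for (int r = 0; r < NRES; r++) work[r] = available[r];
        for (int done = 0; done < NPROC; ) {
            bool progress = false;
            for (int p = 0; p < NPROC; p++) {
                if (finished[p]) continue;
                bool can_run = true;             /* need <= work ?           */
                for (int r = 0; r < NRES; r++)
                    if (max_need[p][r] - allocation[p][r] > work[r])
                        can_run = false;
                if (can_run) {                   /* p finishes and releases  */
                    for (int r = 0; r < NRES; r++)
                        work[r] += allocation[p][r];
                    finished[p] = true;
                    progress = true;
                    done++;
                }
            }
            if (!progress) return false;         /* nobody can finish: unsafe */
        }
        return true;                             /* every process can finish  */
    }

    int main(void) {
        printf("state is %s\n", is_safe() ? "safe" : "unsafe");
        return 0;
    }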

Communication deadlock
o In resource deadlock, a process wants something that another process has and must wait until the first one gives it up.
o Another kind of deadlock can occur in communication systems (e.g., networks), in which two or more processes communicate by sending messages.
o A common arrangement is that process A sends a request message to process B and then blocks until B sends back a reply message.
o Communication deadlocks cannot be prevented by ordering the resources (since there are none) or avoided by careful scheduling (since there are no moments when a request could be postponed).
o The technique that can usually be employed to break communication deadlocks is timeouts.
o In most network communication systems, whenever a message is sent to which a reply is expected, a timer is also started.
o If the timer goes off before the reply arrives, the sender of the message assumes that the message has been lost and sends it again (and again and again if needed).
Livelock
o In some situations, polling (busy waiting) is used to enter a critical region or access a resource.
o Consider a pair of processes (process A and process B) using two resources.
o Process A uses resource 1 and requests resource 2, while process B uses resource 2 and requests resource 1.
o If the processes wait for the required resource by polling rather than blocking, this situation is called livelock.
o Thus we do not have a deadlock (because no process is blocked), but we have something functionally equivalent to deadlock: the processes will make no further progress.