Unit 2 Part 2 - Process Synchronization and CPU Scheduling


Process Synchronization

• A cooperating process is one that can affect or be affected by other processes executing in the system.
• Cooperating processes can either directly share a
logical address space (that is, both code and data)
or be allowed to share data only through files or
messages.
• Concurrent access to shared data may result in
data inconsistency.
• Various mechanisms ensure the orderly execution of cooperating processes that share a logical address space, so that data consistency is maintained. This is called synchronization.

G ARUL SELVAN AP CSE EGSPEC 2


• A situation where several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place, is called a race condition.
• To guard against race conditions, we must ensure that only one process at a time can manipulate the shared data (for example, a shared counter variable).



The Critical-Section Problem
• Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on.
• When one process is executing in its critical section, no other process is allowed to execute in its critical section. That is, no two processes are executing in their critical sections at the same time.
• Each process must request permission to enter its critical section.
• The section of code implementing this request is the entry section.
• The critical section may be followed by an exit section. The remaining code is the remainder section.
A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion.
If process Pi is executing in its critical section,
then no other processes can be executing in their
critical sections.
2. Progress.
If no process is executing in its critical section and some processes wish to enter
their critical sections, then only those processes that are not executing in their
remainder sections can participate in deciding which will enter its critical section
next, and this selection cannot be postponed indefinitely.
3. Bounded waiting.
There exists a bound, or limit, on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter
its critical section and before that request is granted.

Two general approaches are used to handle critical sections in operating systems:
preemptive kernels and nonpreemptive kernels.



• Mutex locks are software tools to solve the critical-section problem.
• A mutex lock protects critical regions and thus prevents race conditions.
• A process must acquire the lock before entering a critical section; it releases the lock when it exits the critical section.
• The acquire() function acquires the lock:
acquire() {
   while (!available)
      ; /* busy wait */
   available = false;
}
• The release() function releases the lock:
release() {
   available = true;
}
• Calls to acquire() and release() must be performed atomically.
• Disadvantage: busy waiting. While a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the call to acquire().
• This type of mutex lock is also called a spinlock because the process “spins” while waiting for the lock to become available.
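The acquire()/release() pseudocode above can be mirrored in a short Python sketch. This is an illustration only: a real spinlock relies on an atomic hardware test-and-set instruction, which plain Python does not expose.

```python
class SpinLock:
    """Sketch of the slide's mutex lock built on an `available` flag."""

    def __init__(self):
        self.available = True   # the lock starts out free

    def acquire(self):
        while not self.available:
            pass                # busy wait ("spin") until the lock is free
        # on real hardware the test above and the store below must be a
        # single atomic operation (e.g., test_and_set)
        self.available = False

    def release(self):
        self.available = True   # mark the lock free again
```

A single-threaded walkthrough: after `acquire()` the flag is false, so any other process calling `acquire()` would spin until `release()` sets it true again.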



• Semaphores are a synchronization tool. A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal().
• The wait() operation was originally termed P (from the Dutch proberen, “to test”); signal() was originally called V (from verhogen, “to increment”).
The definition of wait() is as follows:
wait(S) {
   while (S <= 0)
      ; // busy wait
   S--;
}
The definition of signal() is as follows:
signal(S) {
   S++;
}
• When one process modifies the semaphore value, no other process can simultaneously modify that same semaphore value.
Types of Semaphores:
• The value of a counting semaphore can range over an unrestricted domain.
• The value of a binary semaphore can range only between 0 and 1 (similar to a mutex lock).
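The wait()/signal() definitions translate directly into a Python sketch of a counting semaphore. As in the slide, this version busy-waits; production semaphores put the waiting process to sleep instead.

```python
class CountingSemaphore:
    """Busy-waiting semaphore mirroring the slide's wait()/signal()."""

    def __init__(self, value):
        self.S = value          # a binary semaphore would start at 0 or 1

    def wait(self):             # P, from proberen ("to test")
        while self.S <= 0:
            pass                # busy wait until the value is positive
        self.S -= 1             # must be atomic with the test above

    def signal(self):           # V, from verhogen ("to increment")
        self.S += 1
```

With an initial value of 2, two wait() calls succeed immediately and drive S to 0; a signal() raises it back to 1, letting one more waiter through.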
Classic Problems of Synchronization
• The Bounded-Buffer Problem
• The Readers–Writers Problem
• The Dining-Philosophers Problem

The Bounded-Buffer Problem (the producer and consumer)

• n buffers, each can hold one item.
• The processes share the following data structures:
int n;                    // number of buffers
semaphore mutex = 1;
semaphore empty = n;
semaphore full = 0;
Semaphore mutex: provides mutual exclusion, protecting the buffer.
• Only one process can enter the critical section at a time.
• Initial value = 1 (no process in the critical section). By executing the wait() operation, a process enters the critical section, decrementing mutex to 0.

Semaphore full: specifies how many buffers are full.
• Initial value of full = 0; initially all the buffers are empty.

Semaphore empty: specifies how many buffers are empty.
• Initially all the buffers are empty, so the initial value of empty is n.

• We can interpret this code as the producer producing full buffers for the consumer, or as the consumer producing empty buffers for the producer.
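The scheme above can be sketched with Python's standard threading primitives standing in for the slide's semaphores. The buffer size n = 5 and the 20 transferred items are arbitrary demo values.

```python
import threading
from collections import deque

n = 5                                  # number of buffer slots (demo value)
buffer = deque()
mutex = threading.Lock()               # mutual exclusion on the buffer
empty = threading.Semaphore(n)         # counts empty slots, starts at n
full = threading.Semaphore(0)          # counts full slots, starts at 0
consumed = []

def producer(items):
    for item in items:
        empty.acquire()                # wait(empty): block if no empty slot
        with mutex:                    # wait(mutex) ... signal(mutex)
            buffer.append(item)
        full.release()                 # signal(full): one more full slot

def consumer(count):
    for _ in range(count):
        full.acquire()                 # wait(full): block until an item exists
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                # signal(empty): one more empty slot

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start()
p.join(); c.join()
```

Because there is a single producer and a single consumer and the buffer is FIFO, the consumer always receives the items in production order, regardless of how the threads interleave.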
The code for the producer process

The code for the consumer process


The Dining-Philosophers Problem
• Consider five philosophers who spend their lives thinking and eating.
• The philosophers share a circular table surrounded by five chairs, each belonging to
one philosopher.
• In the center of the table is a bowl of rice, and the table is laid with five single chopsticks.
• When a philosopher thinks, she does not interact with her colleagues. From time to time, a philosopher gets hungry and tries to pick up the two chopsticks that are closest to her (the chopsticks that are between her and her left and right neighbours).
• A philosopher may pick up only one chopstick at a time.
• She cannot pick up a chopstick that is already in the hand of a neighbour.
• When a hungry philosopher has both her chopsticks at the same time, she eats without releasing the chopsticks.
• When she is finished eating, she puts down both chopsticks and starts thinking again.



• It is a simple representation of the need to allocate several resources among
several processes in a deadlock-free and starvation-free manner.
• One simple solution is to represent each chopstick with a semaphore.
• A philosopher tries to grab a chopstick by executing a wait() operation on that
semaphore.
• She releases her chopsticks by executing the signal() operation on the appropriate
semaphores.
• The shared data are: semaphore chopstick[5];
• where all the elements of chopstick are initialized to 1.



• The solution guarantees that no two neighbours are eating simultaneously.
• Drawback: it could create a deadlock.

• Suppose that all five philosophers become hungry at the same time and each grabs
her left chopstick. All the elements of chopstick will now be equal to 0.
• When each philosopher tries to grab her right chopstick, she will be delayed
forever.

• Several possible remedies to the deadlock problem are the following:
• Allow at most four philosophers to be sitting simultaneously at the table.
• Allow a philosopher to pick up her chopsticks only if both chopsticks are
available (to do this, she must pick them up in a critical section).
• Use an asymmetric solution: an odd-numbered philosopher picks up first her left chopstick and then her right chopstick, whereas an even-numbered philosopher picks up her right chopstick and then her left chopstick.
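The asymmetric remedy can be checked with a small Python sketch using one semaphore per chopstick. The 5 philosophers and 10 meals each are demo values; because neighbours pick up chopsticks in opposite orders, the circular wait that causes deadlock cannot form, so every thread runs to completion.

```python
import threading

N, MEALS = 5, 10                    # demo values
chopstick = [threading.Semaphore(1) for _ in range(N)]  # all initialized to 1
meals = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    # asymmetric solution: alternate philosophers reverse the pickup order,
    # which breaks the circular wait and so prevents deadlock
    first, second = (left, right) if i % 2 == 0 else (right, left)
    for _ in range(MEALS):
        chopstick[first].acquire()      # wait() on the first chopstick
        chopstick[second].acquire()     # wait() on the second chopstick
        meals[i] += 1                   # eat
        chopstick[second].release()     # signal()
        chopstick[first].release()      # signal()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

If every philosopher instead grabbed her left chopstick first, all five could each hold one chopstick and wait forever; the mixed pickup order makes that cycle impossible.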



The Readers–Writers Problem
• A database (file) is to be shared among several concurrent processes.
• Some of these processes may want only to read the database (readers), whereas others may want to update it (writers), that is, to read and write the database.
• If two readers access the shared data simultaneously, no problem arises.
• If a writer and some other process (either a reader or a writer) access the database simultaneously, problems may arise.
• To solve this, writers must have exclusive access to the shared database while writing to it.
• No reader should wait for other readers to finish simply because a writer is waiting.
• The reader processes share the following data structures:
semaphore rw_mutex = 1;
semaphore mutex = 1;
int read_count = 0;
• The semaphore rw_mutex is common to both reader and writer processes.
• The mutex semaphore is used to ensure mutual exclusion when the variable read_count is updated.
• The read_count variable keeps track of how many processes are currently reading the object.
• The semaphore rw_mutex functions as a mutual exclusion semaphore for the writers.
• It is also used by the first or last reader that enters or exits the critical section. It is not used by readers who enter or exit while other readers are in their critical sections.
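The first-reader/last-reader logic can be exercised with Python threads standing in for the semaphores. One writer doing 100 increments and four readers are demo values; rw_mutex and mutex play exactly the roles described above.

```python
import threading

rw_mutex = threading.Semaphore(1)   # exclusive access for writers
mutex = threading.Semaphore(1)      # protects read_count
read_count = 0
shared = {"value": 0}
seen = []                           # values observed by readers

def reader():
    global read_count
    mutex.acquire()
    read_count += 1
    if read_count == 1:             # first reader locks writers out
        rw_mutex.acquire()
    mutex.release()
    seen.append(shared["value"])    # reading is performed (critical section)
    mutex.acquire()
    read_count -= 1
    if read_count == 0:             # last reader lets writers back in
        rw_mutex.release()
    mutex.release()

def writer():
    for _ in range(100):
        rw_mutex.acquire()
        shared["value"] += 1        # writing is performed (critical section)
        rw_mutex.release()

threads = [threading.Thread(target=writer)] + \
          [threading.Thread(target=reader) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After all threads join, the writer's updates are intact, and every reader observed some consistent intermediate value between 0 and 100.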
• Writer Process

• Reader Process


Solution: Reader–Writer Locks
• Acquiring a reader–writer lock requires specifying the mode of the lock: either read or write access.
• A process wishing only to read shared data requests the reader–writer lock in read mode.
• A process wishing to modify the shared data must request the lock in write mode.
• Multiple processes are permitted to concurrently acquire a
reader–writer lock in read mode, but only one process may
acquire the lock for writing, as exclusive access is required
for writers.
• Reader–writer locks are most useful in the following
situations:
• In applications where it is easy to identify which processes
only read shared data and which processes only write
shared data.
• In applications that have more readers than writers.
CPU Scheduling

CPU scheduling is the basis of multiprogrammed operating systems.
• The objective of multiprogramming is to have some
process running at all times, to maximize CPU utilization.
• Several processes are kept in memory at one time. When
one process has to wait, the operating system takes the
CPU away from that process and gives the CPU to
another process. This pattern continues.
• Process execution consists of a cycle of CPU execution and I/O wait. Processes alternate between these two states. Process execution begins with a CPU burst, which is followed by an I/O burst, then another CPU burst, another I/O burst, and so on.
• Eventually, the final CPU burst ends with a system request to terminate execution.
CPU Scheduler
• Whenever the CPU becomes idle, the operating
system must select one of the processes in the
ready queue to be executed. The selection process
is carried out by the short-term scheduler, or CPU
scheduler.
• The ready queue is not necessarily a first-in, first-out (FIFO) queue.
• It can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list.
• The records in the queues are generally process
control blocks (PCBs) of the processes.
CPU scheduling can be preemptive or nonpreemptive.
• Under nonpreemptive (or cooperative) scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state.
• Preemptive scheduling may take place when a process switches from the running state to the ready state or from the waiting state to the ready state.

Dispatcher
• The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler.
• This function involves the following:
• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to restart that program
• The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.
Scheduling Criteria
• CPU utilization. Keep the CPU as busy as possible. Conceptually,
CPU utilization can range from 0 to 100 percent. In a real system, it
should range from 40 percent (for a lightly loaded system) to 90
percent (for a heavily loaded system).
• Throughput. If the CPU is busy executing processes, then work is
being done. One measure of work is the number of processes that
are completed per time unit, called throughput.
• Turnaround time. The important criterion is how long it takes to
execute that process. The interval from the time of submission of a
process to the time of completion is the turnaround time.
• Turnaround time is the sum of the periods spent waiting to get into
memory, waiting in the ready queue, executing on the CPU, and
doing I/O.
• Waiting time. Waiting time is the sum of the periods spent waiting
in the ready queue.
CPU Scheduling Algorithms
• CPU scheduling deals with the problem of deciding which of the
processes in the ready queue is to be allocated the CPU.
• First-Come, First-Served Scheduling (FCFS)
• Shortest Job First Scheduling (SJF).
• Priority Scheduling.
• Round-Robin Scheduling
• Multilevel Queue Scheduling

First-Come, First-Served Scheduling
• The process that requests the CPU first is allocated the CPU first.
• When a process enters the ready queue, its PCB is linked onto the tail of
the queue.
• When the CPU is free, it is allocated to the process at the head of the
queue. The running process is then removed from the queue.
• Disadvantage: the average waiting time under the FCFS policy is often
quite long.
• Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds: P1 = 24, P2 = 3, P3 = 3.
• If the processes arrive in the order P1, P2, P3, and
are served in FCFS order.

• Gantt chart, which is a bar chart that illustrates a particular schedule, including
the start and finish times of each of the participating processes:

• The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27 milliseconds for process P3.
• Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds.
Turnaround Time
• Process P1: 24 ms, P2: 27 ms, P3: 30 ms.
• Average turnaround time: (24 + 27 + 30)/3 = 27 ms.
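The FCFS arithmetic above generalizes to a few lines of Python; the burst lengths 24, 3, 3 reproduce the slide's example.

```python
def fcfs_times(bursts):
    """Waiting and turnaround times for processes arriving at time 0,
    served strictly in list order (FCFS)."""
    waits, t = [], 0
    for b in bursts:
        waits.append(t)     # a process waits for everything queued before it
        t += b
    turnaround = [w + b for w, b in zip(waits, bursts)]
    return waits, turnaround

waits, tat = fcfs_times([24, 3, 3])   # P1, P2, P3 from the example
```

This yields waits [0, 24, 27] (average 17 ms) and turnaround times [24, 27, 30] (average 27 ms), matching the figures above.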



Round-Robin Scheduling
• Designed especially for time-sharing systems. It is similar to FCFS scheduling, but preemption is added to enable the system to switch between processes.
• A small unit of time, called a time quantum or time slice, is defined. A time quantum is generally from 10 to 100 milliseconds in length.
• The ready queue is treated as a circular queue.
• The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time
interval of up to 1 time quantum.
• The ready queue is kept as a FIFO queue of processes.
• The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after 1
time quantum, and dispatches the process.
• One of two things will then happen.
• The process may have a CPU burst of less than 1 time quantum. In this case, the process itself
will release the CPU voluntarily.
• The scheduler will then proceed to the next process in the ready queue.

• If the CPU burst of the currently running process is longer than 1 time quantum, the timer will
go off and will cause an interrupt to the operating system.
• A context switch will be executed, and the process will be put at the tail of the ready queue.
• The CPU scheduler will then select the next process in the ready queue.

The average waiting time under the RR policy is often long.
Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds: P1 = 24, P2 = 3, P3 = 3.
• The time quantum is 4 milliseconds.
• Process P1 gets the first 4 milliseconds.
• Since it requires another 20 milliseconds, it is
preempted after the first time quantum.
• the CPU is given to the next process in the queue, process P2. Process P2 does
not need 4 milliseconds, so it quits before its time quantum expires.
• The CPU is then given to the next process, process P3. Process P3 does not need
4 milliseconds, so it quits before its time quantum expires.
• Once each process has received 1 time quantum, the CPU is returned to process
P1 for an additional time quantum.
• Gantt Chart:

• The average waiting time for this schedule. P1 waits for 6 milliseconds (10 - 4),
P2 waits for 4 milliseconds, and P3 waits for 7 milliseconds.
• Thus, the average waiting time is 17/3 = 5.66 milliseconds.
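The round-robin schedule can be replayed with a small simulation; quantum 4 and bursts 24, 3, 3 are the example's values, with all processes arriving at time 0.

```python
from collections import deque

def rr_waiting(bursts, quantum):
    """Waiting times under round-robin for processes arriving at time 0."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))     # ready queue: FIFO of process indices
    time = 0
    completion = [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])  # run one quantum, or to completion
        time += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)               # preempted: back to the tail
        else:
            completion[i] = time
    # waiting time = turnaround time - burst time
    return [completion[i] - bursts[i] for i in range(len(bursts))]

w = rr_waiting([24, 3, 3], 4)
```

This reproduces the waits 6, 4, and 7 ms for P1, P2, and P3, hence the 17/3 ≈ 5.66 ms average.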
Shortest Job First Scheduling
• This algorithm associates with each process the length of the
process’s next CPU burst.
• When the CPU is available, it is assigned to the process that
has the smallest next CPU burst.
• If the next CPU bursts of two processes are the same, FCFS
scheduling is used to break the tie.
• It is also called shortest-next-CPU-burst algorithm, because
scheduling depends on the length of the next CPU burst of a
process, rather than its total length.
• The SJF scheduling algorithm is provably optimal, in that it
gives the minimum average waiting time for a given set of
processes.
• Moving a short process before a long one decreases the
waiting time of the short process more than it increases the
waiting time of the long process.
• Consequently, the average waiting time decreases.
Consider the following set of processes, with the length of the CPU burst given in milliseconds: P1 = 6, P2 = 8, P3 = 7, P4 = 3.
Gantt chart:

• The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 milliseconds for process P3, and 0 milliseconds for process P4.
• Thus, the average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds.
• By comparison, if we were using the FCFS scheduling scheme, the average waiting
time would be 10.25 milliseconds.
• Disadvantage:
• The real difficulty with the SJF algorithm is knowing the length of the next CPU
request.
• SJF scheduling is used frequently in long-term scheduling.
• It cannot be implemented at the level of short-term CPU scheduling. With short-term scheduling, there is no way to know the length of the next CPU burst.
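For processes that all arrive together, nonpreemptive SJF reduces to sorting by burst length; the bursts 6, 8, 7, 3 reproduce the example, and ties fall back to FCFS order.

```python
def sjf_waiting(bursts):
    """Waiting times under nonpreemptive SJF, all processes at time 0.
    Ties on burst length are broken by list (arrival) order, i.e. FCFS."""
    order = sorted(range(len(bursts)), key=lambda i: (bursts[i], i))
    waits = [0] * len(bursts)
    t = 0
    for i in order:
        waits[i] = t        # everything scheduled earlier is waiting time
        t += bursts[i]
    return waits

w = sjf_waiting([6, 8, 7, 3])   # P1..P4 from the example
```

This gives waits of 3, 16, 9, and 0 ms, so the average is 7 ms, versus 10.25 ms for FCFS on the same bursts.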



The SJF algorithm can be either preemptive or nonpreemptive.
• A preemptive SJF algorithm will preempt the currently executing process, whereas a
nonpreemptive SJF algorithm will allow the currently running process to finish its CPU burst.

• Preemptive SJF scheduling is sometimes called shortest-remaining-time-first (SRTF) scheduling.
• Consider the following four processes, with the length of the CPU burst given in milliseconds: P1: arrival 0, burst 8; P2: arrival 1, burst 4; P3: arrival 2, burst 9; P4: arrival 3, burst 5.
• Process P1 is started at time 0, since it is the only process in the queue.

Gantt chart

• Process P2 arrives at time 1. The remaining time for process P1 (7 milliseconds) is larger than the time required by process P2 (4 milliseconds), so process P1 is preempted, and process P2 is scheduled.
• The average waiting time for this example is
[(10 − 1) + (1 − 1) + (17 − 2) + (5 − 3)]/4 = 26/4 = 6.5 milliseconds.

• Nonpreemptive SJF scheduling would result in an average waiting time of 7.75 milliseconds.
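Preemptive SJF (SRTF) can be simulated one millisecond at a time, always running the ready process with the least remaining work; arrival times 0, 1, 2, 3 with bursts 8, 4, 9, 5 reproduce the example above.

```python
def srtf_waiting(arrival, burst):
    """Waiting times under shortest-remaining-time-first scheduling."""
    n = len(burst)
    remaining = list(burst)
    completion = [0] * n
    t, done = 0, 0
    while done < n:
        ready = [i for i in range(n) if arrival[i] <= t and remaining[i] > 0]
        if not ready:
            t += 1                      # CPU idle until the next arrival
            continue
        i = min(ready, key=lambda j: (remaining[j], arrival[j]))
        remaining[i] -= 1               # run the chosen process for 1 ms
        t += 1
        if remaining[i] == 0:
            completion[i] = t
            done += 1
    # waiting time = completion - arrival - burst
    return [completion[i] - arrival[i] - burst[i] for i in range(n)]

w = srtf_waiting([0, 1, 2, 3], [8, 4, 9, 5])
```

The simulation yields waits of 9, 0, 15, and 2 ms, matching the 6.5 ms average computed above.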



Priority Scheduling
• A priority is associated with each process, and the CPU is allocated
to the process with the highest priority.
• Equal-priority processes are scheduled in FCFS order.
• Scheduling is discussed in terms of high priority and low priority; here, low numbers represent high priority.

• Priority scheduling can be either preemptive or nonpreemptive.
• When a process arrives at the ready queue, its priority is compared with the priority of the currently running process.
• A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
• A nonpreemptive priority scheduling algorithm will simply put the new process at the head of the ready queue.



Consider the following set of processes, assumed to have arrived at time 0 in the order P1, P2, · · ·, P5, with the length of the CPU burst given in milliseconds: P1: burst 10, priority 3; P2: burst 1, priority 1; P3: burst 2, priority 4; P4: burst 1, priority 5; P5: burst 5, priority 2.
• Gantt chart:

• The average waiting time is 8.2 milliseconds.
Disadvantage:
• A major problem with priority scheduling algorithms is indefinite blocking, or
starvation.
• A process that is ready to run but waiting for the CPU can be considered
blocked.
• A priority scheduling algorithm can leave some low-priority processes waiting indefinitely.
Solution to Indefinite Blocking or Starvation
• Aging: gradually increasing the priority of processes that wait in the system for a long time.
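Nonpreemptive priority scheduling for simultaneous arrivals is again a sort, this time on the priority number. The bursts and priorities below are the standard textbook values that yield the 8.2 ms average (assumed here, since the process table did not survive extraction); a lower number means a higher priority.

```python
def priority_waiting(bursts, priorities):
    """Waiting times under nonpreemptive priority scheduling; all processes
    arrive at time 0 and a lower number means a higher priority."""
    order = sorted(range(len(bursts)), key=lambda i: (priorities[i], i))
    waits = [0] * len(bursts)
    t = 0
    for i in order:
        waits[i] = t
        t += bursts[i]
    return waits

# P1..P5: bursts 10, 1, 2, 1, 5 with priorities 3, 1, 4, 5, 2 (assumed
# example values, matching the 8.2 ms average quoted above)
w = priority_waiting([10, 1, 2, 1, 5], [3, 1, 4, 5, 2])
```

The resulting order is P2, P5, P1, P3, P4 with waits 0, 1, 6, 16, 18 ms, averaging 8.2 ms. Note P4, the lowest-priority process, waits longest: with a steady stream of higher-priority arrivals it could starve, which is what aging guards against.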
Multilevel Queue Scheduling
• A multilevel queue scheduling algorithm partitions the ready queue into
several separate queues
• The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type.
• Each queue has its own scheduling algorithm.

• An example of a multilevel queue scheduling algorithm with five queues, listed below in order of priority:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes



• Each queue has absolute priority over lower-priority queues. No process in the batch queue, for example, could run unless all higher-priority queues were empty.
• Another possibility is to time-slice among the queues. Here, each queue gets a certain portion of the CPU time, which it can then schedule among its various processes.

Multilevel Feedback Queue Scheduling
• The multilevel feedback queue scheduling algorithm allows a process to move between queues.
• The idea is to separate processes according to the characteristics of their
CPU bursts.
• If a process uses too much CPU time, it will be moved to a lower-priority
queue. This scheme leaves I/O-bound and interactive processes in the
higher-priority queues.
• A process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.



• A multilevel feedback queue scheduler is defined by the following parameters:
• The number of queues
• The scheduling algorithm for each queue
• The method used to determine when to upgrade a process to a higher-priority queue
• The method used to determine when to demote a process to a lower-priority queue
• The method used to determine which queue a process will enter when that process needs service
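These parameters can be made concrete in a toy sketch: two queues with quanta of 4 and 8 ms, demotion whenever a process uses its whole quantum, no mid-run arrivals, and no aging. All names and numbers here are illustrative, not the algorithm from any particular system.

```python
from collections import deque

def mlfq_finish_levels(bursts, quanta):
    """Track which queue level each process finishes in, demoting to the
    next lower queue whenever a process uses its whole time quantum.
    The last queue simply runs processes to completion (FCFS)."""
    queues = [deque() for _ in quanta]
    for pid, b in enumerate(bursts):
        queues[0].append((pid, b))      # every process enters the top queue
    finish = {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest nonempty
        pid, rem = queues[level].popleft()
        if rem <= quanta[level] or level == len(quanta) - 1:
            finish[pid] = level         # burst fits, or bottom queue: done
        else:
            queues[level + 1].append((pid, rem - quanta[level]))  # demote
    return finish

levels = mlfq_finish_levels([3, 20], [4, 8])
```

A 3 ms burst finishes in the top queue, while a 20 ms CPU-bound burst is demoted and completes in the lower queue, which is exactly how I/O-bound and interactive processes end up favored over long CPU-bound ones.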

