Assignment 1 --Operating System


OPERATING SYSTEMS

Q1. Discuss process scheduling, including pre-emptive and non-pre-emptive scheduling.

Process Schedulers in Operating System




In computing, a process is an instance of a computer program that is being executed by one or more threads. Scheduling is important in many different computing environments, and one of the most important scheduling decisions is which program will run on the CPU. This task is handled by the computer's Operating System (OS), and there are many different ways in which scheduling can be configured.
What is Process Scheduling?
Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process based on a particular
strategy.
Process scheduling is an essential part of a Multiprogramming operating system. Such
operating systems allow more than one process to be loaded into the executable memory
at a time and the loaded process shares the CPU using time multiplexing.
Process scheduler

Categories of Scheduling
Scheduling falls into one of two categories:
 Non-preemptive: A process's resources cannot be taken away before the process has finished running. Resources are switched only when the running process finishes and transitions to a waiting state.
 Preemptive: The OS assigns resources to a process for a predetermined period. The process switches from the running state to the ready state, or from the waiting state to the ready state, during resource allocation. This switching happens because the CPU may give priority to other processes and replace the currently active process with a higher-priority one.
Types of Process Schedulers
There are three types of process schedulers:
1. Long Term or Job Scheduler
It brings new processes to the 'Ready State'. It controls the Degree of Multiprogramming, i.e., the number of processes present in the ready state at any point in time. It is important that the long-term scheduler make a careful selection of both I/O-bound and CPU-bound processes: I/O-bound processes spend most of their time in input and output operations, while CPU-bound processes spend most of their time on the CPU. The job scheduler increases efficiency by maintaining a balance between the two. It operates at a high level and is typically used in batch-processing systems.
2. Short-Term or CPU Scheduler
It is responsible for selecting one process from the ready state and scheduling it into the running state. Note: the short-term scheduler only selects the process; it does not load the process into the running state. This is where all the scheduling algorithms are used. The CPU scheduler must also ensure that processes with long burst times do not cause starvation.
Short Term Scheduler

The dispatcher is responsible for loading the process selected by the short-term scheduler onto the CPU (Ready to Running state). Context switching is done by the dispatcher only.
A dispatcher does the following:
 Switching context.
 Switching to user mode.
 Jumping to the proper location in the newly loaded program.
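The three dispatcher duties above can be sketched in Python. This is only an illustrative model: the `Process` class, the `cpu` dict, and the field names are assumptions standing in for real hardware state, not an actual OS interface.

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    pid: int
    # Saved CPU context (program counter, mode, etc.), simplified to a dict.
    context: dict = field(default_factory=dict)

def dispatch(current: Process, next_proc: Process, cpu: dict) -> None:
    """Hypothetical dispatcher: switch context, then hand the CPU to next_proc."""
    # 1. Switching context: save the old process's state, load the new one's.
    current.context = dict(cpu)
    cpu.clear()
    cpu.update(next_proc.context)
    # 2. Switching to user mode and 3. jumping to the saved program counter
    #    are hardware steps; here we just model the mode flag and PC hand-off.
    cpu["mode"] = "user"

# Usage: move the CPU from P1 to P2.
cpu_state = {"pc": 100, "mode": "kernel"}
p1 = Process(1)
p2 = Process(2, context={"pc": 500})
dispatch(p1, p2, cpu_state)
print(cpu_state["pc"], p1.context["pc"])  # 500 100
```

The key point the sketch shows is ordering: the outgoing context must be saved before the incoming one overwrites the CPU state.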
3. Medium-Term Scheduler
It is responsible for suspending and resuming processes. It mainly does swapping (moving processes from main memory to disk and vice versa). Swapping may be necessary to improve the process mix, or because a change in memory requirements has overcommitted available memory, requiring memory to be freed up. It helps maintain a balance between I/O-bound and CPU-bound processes, and it reduces the degree of multiprogramming.
Medium Term Scheduler

Some Other Schedulers


 I/O schedulers: I/O schedulers are in charge of managing the execution of I/O
operations such as reading and writing to discs or networks. They can use various
algorithms to determine the order in which I/O operations are executed, such
as FCFS (First-Come, First-Served) or RR (Round Robin).
 Real-time schedulers: In real-time systems, real-time schedulers ensure that critical
tasks are completed within a specified time frame. They can prioritize and schedule
tasks using various algorithms such as EDF (Earliest Deadline First) or RM (Rate
Monotonic).
Comparison Among Schedulers
1. Nature: The long-term scheduler is a job scheduler; the short-term scheduler is a CPU scheduler; the medium-term scheduler is a process-swapping scheduler.
2. Speed: The long-term scheduler is slower than the short-term scheduler; the short-term scheduler is the fastest of the three; the medium-term scheduler's speed lies in between.
3. Degree of multiprogramming: The long-term scheduler controls it; the short-term scheduler gives less control over how much multiprogramming is done; the medium-term scheduler reduces it.
4. Time-sharing systems: The long-term scheduler is barely present or nonexistent; the short-term scheduler is minimal; the medium-term scheduler is a component of such systems.
5. Role: The long-term scheduler selects processes from the pool and loads them into memory for execution; the short-term scheduler selects those processes which are ready to execute; the medium-term scheduler can re-introduce a process into memory so that its execution can be continued.

Two-State Process Model

The two-state process model describes processes as being in one of two states, "running" or "not running":
1. Running: A newly created process joins the system in the running state.
2. Not running: Processes that are not currently running are kept in a queue, awaiting execution. Each entry in the queue is a pointer to a particular process, and the queue is implemented as a linked list. The dispatcher works as follows: when a process is interrupted, it is moved to the back of the waiting queue; if it has completed or been aborted, it is discarded. In either case, the dispatcher then chooses a process to run from the queue.
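The queue mechanics of the two-state model can be sketched with a `deque` standing in for the linked-list queue. This is a toy model: the process names and the single `running` slot are illustrative assumptions.

```python
from collections import deque

# Hypothetical two-state model: one running slot plus a FIFO of
# not-running processes, implemented with a linked-list-like deque.
not_running = deque(["P1", "P2", "P3"])
running = None

def dispatch_next():
    """Pick the next process from the front of the queue."""
    global running
    running = not_running.popleft() if not_running else None

def interrupt():
    """An interrupted process goes to the back of the waiting queue."""
    global running
    if running is not None:
        not_running.append(running)
        running = None

dispatch_next()          # P1 runs
interrupt()              # P1 moves to the back of the queue
dispatch_next()          # P2 runs next
print(running, list(not_running))  # P2 ['P3', 'P1']
```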
Context Switching
Context switching is the mechanism that stores and restores the state, or context, of a CPU in the Process Control Block so that a process's execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to another, the state of the currently running process is saved into its process control block. The state for the process that will run next is then loaded from its own PCB and used to set the program counter, registers, etc. At that point, the second process can start executing.
When a process is switched out, the following information is stored in its PCB for later use:
 Program Counter
 Scheduling information
 The base and limit register value
 Currently used register
 Changed State
 I/O State information
 Accounting information
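The saved context listed above can be modeled as a simple PCB structure. The field names here are illustrative assumptions, not a real kernel layout, and `context_switch` is a hypothetical helper.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative Process Control Block holding the saved context."""
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)   # currently used registers
    base_register: int = 0                          # base and limit values
    limit_register: int = 0
    state: str = "ready"                            # changed state
    io_state: str = "idle"                          # I/O state information
    cpu_time_used: float = 0.0                      # accounting information
    priority: int = 0                               # scheduling information

def context_switch(old: PCB, new: PCB, cpu: dict) -> dict:
    """Save the running process's context into its PCB, load the next one's."""
    old.program_counter = cpu["pc"]
    old.registers = dict(cpu["regs"])
    old.state = "ready"
    new.state = "running"
    return {"pc": new.program_counter, "regs": dict(new.registers)}

p_old = PCB(pid=1)
p_new = PCB(pid=2, program_counter=400, registers={"ax": 7})
cpu = context_switch(p_old, p_new, {"pc": 120, "regs": {"ax": 3}})
print(cpu, p_old.program_counter)  # {'pc': 400, 'regs': {'ax': 7}} 120
```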

OR
Operating System - Process Scheduling

Definition
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of a multiprogramming operating system. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
Categories of Scheduling
There are two categories of scheduling:

1. Non-preemptive: Here the resource can't be taken from a process until the process completes execution. The switching of resources occurs when the running process terminates and moves to a waiting state.
2. Preemptive: Here the OS allocates the resources to a process for a fixed amount of time. During resource allocation, the process switches from the running state to the ready state, or from the waiting state to the ready state. This switching occurs because the CPU may give priority to other processes and replace the running process with a higher-priority process.

Process Scheduling Queues


The OS maintains all Process Control Blocks (PCBs) in Process
Scheduling Queues. The OS maintains a separate queue for each of
the process states and PCBs of all processes in the same execution
state are placed in the same queue. When the state of a process is
changed, its PCB is unlinked from its current queue and moved to
its new state queue.

The Operating System maintains the following important process scheduling queues −

 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing
in main memory, ready and waiting to execute. A new process
is always put in this queue.
 Device queues − The processes which are blocked due to
unavailability of an I/O device constitute this queue.
The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The OS scheduler determines how to move processes between the ready and run queues; the run queue can have only one entry per processor core on the system and, in the original diagram, is merged with the CPU.
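The per-state queues described above can be sketched as a dict of deques. This is a toy model under stated assumptions: PCBs are reduced to process names and the state names are illustrative.

```python
from collections import deque

# Hypothetical PCB queues, one per process state, as the text describes.
queues = {
    "job": deque(),     # every process in the system
    "ready": deque(),   # in main memory, ready and waiting to execute
    "device": deque(),  # blocked on an unavailable I/O device
}

def change_state(pid: str, src: str, dst: str) -> None:
    """Unlink a PCB from its current queue and move it to its new state queue."""
    queues[src].remove(pid)
    queues[dst].append(pid)

# A new process is always put in the ready queue (and stays in the job queue).
for p in ("P1", "P2"):
    queues["job"].append(p)
    queues["ready"].append(p)

change_state("P1", "ready", "device")   # P1 blocks on I/O
print(list(queues["ready"]), list(queues["device"]))  # ['P2'] ['P1']
```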

Two-State Process Model


Two-state process model refers to running and non-running states
which are described below −

1. Running: When a new process is created, it enters the system in the running state.

2. Not Running: Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a particular process, and the queue is implemented using a linked list. The dispatcher works as follows: when a process is interrupted, it is transferred to the waiting queue; if the process has completed or aborted, it is discarded. In either case, the dispatcher then selects a process from the queue to execute.

Schedulers
Schedulers are special system software which handle process
scheduling in various ways. Their main task is to select the jobs to
be submitted into the system and to decide which process to run.
Schedulers are of three types −

 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler

Long Term Scheduler


It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the queue and loads them into memory for execution, where they become available for CPU scheduling.

The primary objective of the job scheduler is to provide a balanced


mix of jobs, such as I/O bound and processor bound. It also
controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process
creation must be equal to the average departure rate of processes
leaving the system.

On some systems, the long-term scheduler may be unavailable or minimal; time-sharing operating systems have no long-term scheduler. The long-term scheduler is used when a process changes state from new to ready.

Short Term Scheduler


It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It carries out the transition of a process from the ready state to the running state: the CPU scheduler selects one process from among the processes that are ready to execute and allocates the CPU to it.

Short-term schedulers, also known as dispatchers, make the decision of which process to execute next. Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler
Medium-term scheduling is a part of swapping. It removes processes from memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling swapped-out processes.

A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.

Comparison among Schedulers

1. The long-term scheduler is a job scheduler; the short-term scheduler is a CPU scheduler; the medium-term scheduler is a process-swapping scheduler.
2. The long-term scheduler's speed is lesser than the short-term scheduler's; the short-term scheduler is the fastest of the three; the medium-term scheduler's speed lies in between.
3. The long-term scheduler controls the degree of multiprogramming; the short-term scheduler provides less control over it; the medium-term scheduler reduces it.
4. The long-term scheduler is almost absent or minimal in time-sharing systems; the short-term scheduler is also minimal there; the medium-term scheduler is a part of time-sharing systems.
5. The long-term scheduler selects processes from the pool and loads them into memory for execution; the short-term scheduler selects those processes which are ready to execute; the medium-term scheduler can re-introduce a process into memory so that its execution can be continued.

Context Switching
Context switching is the mechanism to store and restore the state, or context, of a CPU in the Process Control Block so that a process's execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.

When the scheduler switches the CPU from executing one process to another, the state of the currently running process is stored into its process control block. After this, the state for the process to run next is loaded from its own PCB and used to set the PC, registers, etc. At that point, the second process can start executing.

Context switches are computationally intensive, since register and memory state must be saved and restored. To reduce context-switching time, some hardware systems employ two or more sets of processor registers. When a process is switched out, the following information is stored for later use:

 Program Counter
 Scheduling information
 Base and limit register value
 Currently used register
 Changed State
 I/O State information
 Accounting information

Preemptive and Non-Preemptive Scheduling




Prerequisite – CPU Scheduling


You will discover the distinction between preemptive and non-preemptive scheduling in
this article. But first, you need to understand preemptive and non-preemptive scheduling
before going over the differences.
Preemptive Scheduling
Preemptive scheduling is used when a process switches from the running state to the
ready state or from the waiting state to the ready state. The resources (mainly CPU
cycles) are allocated to the process for a limited amount of time and then taken away, and
the process is again placed back in the ready queue if that process still has CPU burst
time remaining. That process stays in the ready queue till it gets its next chance to
execute.
Algorithms based on preemptive scheduling are Round Robin (RR), Shortest Remaining
Time First (SRTF), Priority (preemptive version), etc.
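As a sketch of how preemption works in one of the algorithms just named, here is a minimal Round Robin simulation. The process names, burst times, and time quantum are illustrative assumptions, and all processes are assumed to arrive at time 0.

```python
from collections import deque

def round_robin(bursts: dict, quantum: int) -> list:
    """Minimal preemptive Round Robin: each process runs for at most
    `quantum` time units, then is placed back in the ready queue."""
    ready = deque(bursts)          # all processes assumed ready at t=0
    remaining = dict(bursts)
    order = []                     # (pid, start, end) slices of CPU time
    t = 0
    while ready:
        pid = ready.popleft()
        run = min(quantum, remaining[pid])
        order.append((pid, t, t + run))
        t += run
        remaining[pid] -= run
        if remaining[pid] > 0:     # preempted: back to the ready queue
            ready.append(pid)
    return order

print(round_robin({"A": 5, "B": 3}, quantum=2))
# [('A', 0, 2), ('B', 2, 4), ('A', 4, 6), ('B', 6, 7), ('A', 7, 8)]
```

Note how process A is interrupted twice before finishing: that forced hand-back of the CPU is exactly what makes the algorithm preemptive.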

Preemptive scheduling has a number of advantages and disadvantages:
Advantages
1. It is a more reliable method, because a process cannot monopolize the processor.
2. Each event can interrupt ongoing tasks, so higher-priority work is attended to promptly.
3. The average response time is improved.
4. This method is more advantageous in a multiprogramming environment.
5. The operating system makes sure that every process gets a fair share of CPU time.
Disadvantages
1. It consumes limited computational resources.
2. Suspending the running process, changing the context, and dispatching the new incoming process all take extra time.
3. A low-priority process would have to wait if multiple high-priority processes arrive at the same time.
Non-Preemptive Scheduling
Non-preemptive scheduling is used when a process terminates, or when a process switches from the running state to the waiting state. In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU till it terminates or reaches a waiting state. Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of its execution. Instead, it waits till the process completes its CPU burst time, and only then can it allocate the CPU to another process.
Algorithms based on non-preemptive scheduling are Shortest Job First (SJF, in its basic non-preemptive form), Priority (non-preemptive version), etc.
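A minimal sketch of non-preemptive SJF, assuming all processes are ready at time 0; the process names and burst times are illustrative.

```python
def sjf_nonpreemptive(bursts: dict) -> list:
    """Non-preemptive SJF: all processes assumed ready at t=0; once a
    process gets the CPU it runs to completion, shortest burst first."""
    schedule, t = [], 0
    for pid in sorted(bursts, key=bursts.get):  # shortest burst first
        schedule.append((pid, t, t + bursts[pid]))
        t += bursts[pid]
    return schedule

print(sjf_nonpreemptive({"A": 6, "B": 2, "C": 4}))
# [('B', 0, 2), ('C', 2, 6), ('A', 6, 12)]
```

Unlike the Round Robin case, each tuple here spans a process's whole burst: no process is ever interrupted mid-execution.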
Non-preemptive scheduling has both advantages and disadvantages. The following are
non-preemptive scheduling’s benefits and drawbacks:
Advantages
1. It has a minimal scheduling burden.
2. It is a very easy procedure.
3. Less computational resources are used.
4. It has a high throughput rate.
Disadvantages
1. Its response time to the process is poor.
2. Bugs can cause a computer to freeze up.

Key Differences Between Preemptive and Non-Preemptive


Scheduling
1. In preemptive scheduling, the CPU is allocated to the processes for a limited time
whereas, in Non-preemptive scheduling, the CPU is allocated to the process till it
terminates or switches to the waiting state.
2. The executing process in preemptive scheduling is interrupted in the middle of execution when a higher-priority process arrives, whereas the executing process in non-preemptive scheduling is not interrupted in the middle of execution and runs till its execution completes.
3. Preemptive scheduling has the overhead of switching processes between the ready and running states (and vice versa) and of maintaining the ready queue, whereas non-preemptive scheduling has no overhead of switching a process from the running state to the ready state.
4. In preemptive scheduling, if a high-priority process frequently arrives in the ready queue, then a low-priority process has to wait a long time and may starve. In non-preemptive scheduling, if the CPU is allocated to a process with a large burst time, then processes with small burst times may starve.
5. Preemptive scheduling attains flexibility by allowing the critical processes to access
the CPU as they arrive in the ready queue, no matter what process is executing
currently. Non-preemptive scheduling is called rigid as even if a critical process enters
the ready queue the process running CPU is not disturbed.
6. Preemptive scheduling has to maintain the integrity of shared data, which is why it has an associated cost; this is not the case with non-preemptive scheduling.
Comparison Chart
1. Basic: In preemptive scheduling, resources (CPU cycles) are allocated to a process for a limited time. In non-preemptive scheduling, once resources are allocated to a process, the process holds them till it completes its burst time or switches to the waiting state.
2. Interrupt: A preemptively scheduled process can be interrupted in between; a non-preemptively scheduled process cannot be interrupted until it terminates itself or its time is up.
3. Starvation: In preemptive scheduling, if a high-priority process frequently arrives in the ready queue, a low-priority process may starve. In non-preemptive scheduling, if a process with a long burst time is running on the CPU, a later process with a shorter CPU burst time may starve.
4. Overhead: Preemptive scheduling has the overhead of scheduling processes; non-preemptive scheduling does not.
5. Flexibility: Preemptive scheduling is flexible; non-preemptive scheduling is rigid.
6. Cost: Preemptive scheduling has an associated cost; non-preemptive scheduling does not.
7. CPU utilization: High in preemptive scheduling; low in non-preemptive scheduling.
8. Waiting time: Less in preemptive scheduling; higher in non-preemptive scheduling.
9. Response time: Less in preemptive scheduling; higher in non-preemptive scheduling.
10. Decision making: In preemptive scheduling, decisions are made by the scheduler based on priority and time-slice allocation; in non-preemptive scheduling, decisions are made by the process itself and the OS just follows the process's instructions.
11. Process control: The OS has greater control over the scheduling of processes under preemption, and less control without it.
12. Context-switch overhead: Higher under preemptive scheduling due to frequent context switching; lower under non-preemptive scheduling since context switching is less frequent.
13. Examples: Round Robin and Shortest Remaining Time First are preemptive; First Come First Serve and Shortest Job First are non-preemptive.

Conclusion
Preemptive scheduling is not better than non-preemptive scheduling, and vice versa. It all
depends on how a scheduling algorithm increases CPU utilization while decreasing
average process waiting time.

Frequently Asked Questions


Q.1: How is priority determined in preemptive scheduling?
Answer:
Preemptive scheduling systems often assign priority levels to tasks or processes. The
priority can be determined based on factors like the nature of the task, its importance, or
its deadline. Higher-priority tasks are given precedence and are allowed to execute
before lower-priority tasks.
Q.2: What happens in non-preemptive scheduling if a task does not yield the
CPU?
Answer:
In non-preemptive scheduling, if a task does not voluntarily yield the CPU, it can lead to starvation or even deadlock, where other tasks are unable to execute.
To avoid such scenarios, it’s important to ensure that tasks have mechanisms to release
the CPU when necessary, such as waiting for I/O operations or setting maximum
execution times.
OR

Difference between Preemptive and


Non-Preemptive Scheduling
In this article, you will learn the difference between preemptive and non-preemptive
scheduling. But before discussing the differences, you need to know about preemptive
and non-preemptive scheduling.

What is Preemptive Scheduling?


Preemptive scheduling is a method that may be used when a process switches from a running state to a ready state or from a waiting state to a ready state. The resources are assigned to the process for a particular time and then taken away. If the process still has remaining CPU burst time, it is placed back in the ready queue, where it remains until it is given a chance to execute again.

When a high-priority process enters the ready queue, it doesn't have to wait for the running process to finish its burst time. Instead, the running process is interrupted in the middle of its execution and placed in the ready queue until the high-priority process is done with the resources. As a result, each process gets some CPU time in the ready queue. This increases the overhead of switching processes between the running and ready states, but it also increases the flexibility of preemptive scheduling. SJF and Priority scheduling may or may not be implemented preemptively.

For example:


Let us take an example of preemptive scheduling with four processes: P0, P1, P2, and P3.

Process | Arrival Time | CPU Burst Time (ms)
P0      | 3            | 2
P1      | 2            | 4
P2      | 0            | 6
P3      | 1            | 4


o Firstly, process P2 arrives at time 0, so the CPU is assigned to P2.
o While P2 is running, process P3 arrives at time 1. The remaining time of P2 (5 ms) is greater than the time needed by P3 (4 ms), so the processor is reassigned to P3.
o While P3 is running, process P1 arrives at time 2. The remaining time of P3 (3 ms) is less than the time needed by P1 (4 ms) or P2 (5 ms), so P3 continues execution.
o While P3 continues, process P0 arrives at time 3. P3's remaining time (2 ms) is equal to P0's required time (2 ms), so P3 continues execution.
o When P3 finishes, the CPU is assigned to P0, which has a shorter burst time than the other remaining processes.
o After P0 completes, the CPU is assigned to P1 and then to P2.
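The schedule above can be reproduced with a small Shortest Remaining Time First simulation. This is a sketch: the tie at time 3 is assumed to be broken in favour of the currently running process, as in the walkthrough.

```python
def srtf(procs: dict) -> list:
    """Shortest Remaining Time First. procs maps pid -> (arrival, burst);
    returns the CPU owner for each unit of time."""
    remaining = {p: burst for p, (arrival, burst) in procs.items()}
    timeline, t, last = [], 0, None
    while any(remaining.values()):
        ready = [p for p in procs if procs[p][0] <= t and remaining[p] > 0]
        # Shortest remaining time; a tie favours the currently running process.
        p = min(ready, key=lambda q: (remaining[q], q != last))
        timeline.append(p)
        remaining[p] -= 1
        t, last = t + 1, p
    return timeline

procs = {"P0": (3, 2), "P1": (2, 4), "P2": (0, 6), "P3": (1, 4)}
tl = srtf(procs)

# Condense the per-unit timeline into (pid, start, end) slices.
slices, start = [], 0
for i in range(1, len(tl) + 1):
    if i == len(tl) or tl[i] != tl[i - 1]:
        slices.append((tl[i - 1], start, i))
        start = i
print(slices)
# [('P2', 0, 1), ('P3', 1, 5), ('P0', 5, 7), ('P1', 7, 11), ('P2', 11, 16)]
```

The output matches the walkthrough: P2 runs for 1 ms, is preempted by P3, and only gets the CPU back at the very end.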
Advantages and disadvantages of Preemptive
Scheduling
There are various advantages and disadvantages of preemptive scheduling, which are as follows:

Advantages

1. It is a more robust method because a process may not monopolize the processor.
2. Each event causes an interruption in the execution of ongoing tasks.
3. It improves the average response time.
4. It is more beneficial when you use this method in a multi-programming
environment.
5. The operating system ensures that all running processes use the same amount of
CPU.

Disadvantages

1. It requires the use of limited computational resources.
2. It takes more time to suspend the executing process, switch the context, and dispatch the new incoming process.
3. If several high-priority processes arrive at the same time, a low-priority process would have to wait longer.

What is Non-Preemptive Scheduling?


Non-preemptive scheduling is a method that may be used when a process terminates or
switches from a running to a waiting state. When processors are assigned to a process,
they keep the process until it is eliminated or reaches a waiting state. When the
processor starts the process execution, it must complete it before executing the other
process, and it may not be interrupted in the middle.

When a non-preemptive process with a high CPU burst time is running, the other
process would have to wait for a long time, and that increases the process average
waiting time in the ready queue. However, there is no overhead in transferring processes
from the ready queue to the CPU under non-preemptive scheduling. The scheduling is
strict because the execution process is not even preempted for a higher priority process.

For example:

Let's take the above preemptive scheduling example and solve it in a non-preemptive manner.

o Process P2 arrives at time 0, so the processor is assigned to P2, which takes 6 ms to execute.
o The other processes, P0, P1, and P3, arrive in the ready queue in the meantime, but they all wait till P2 finishes its CPU burst time.
o After that, the process that arrived after P2, i.e., P3, is assigned to the CPU until it finishes its burst time, followed by P1.
o When P1 completes its execution, the CPU is given to P0.
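Assuming the waiting processes are served in arrival order once P2 releases the CPU (as the steps above describe), a minimal non-preemptive First Come First Serve sketch reproduces the schedule.

```python
def fcfs(procs: dict) -> list:
    """Non-preemptive FCFS: processes run to completion in arrival order.
    procs maps pid -> (arrival, burst); returns (pid, start, end) slices."""
    schedule, t = [], 0
    for pid, (arrival, burst) in sorted(procs.items(), key=lambda kv: kv[1][0]):
        t = max(t, arrival)        # CPU may sit idle until the process arrives
        schedule.append((pid, t, t + burst))
        t += burst
    return schedule

procs = {"P0": (3, 2), "P1": (2, 4), "P2": (0, 6), "P3": (1, 4)}
print(fcfs(procs))
# [('P2', 0, 6), ('P3', 6, 10), ('P1', 10, 14), ('P0', 14, 16)]
```

Comparing with the preemptive run of the same workload shows the trade-off: no process is ever interrupted here, but the short P0 burst now finishes last.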

Advantages and disadvantages of Non-preemptive


Scheduling
There are various advantages and disadvantages of non-preemptive scheduling. The
advantages and disadvantages of non-preemptive scheduling are as follows:

Advantages

1. It provides a low scheduling overhead.
2. It is a very simple method.
3. It uses fewer computational resources.
4. It offers high throughput.

Disadvantages
1. It has a poor response time for the process.
2. A machine can freeze up due to bugs.

Main Differences between the Preemptive


and Non-Preemptive Scheduling
Here, you will learn the main differences between Preemptive and Non-Preemptive
Scheduling. Various differences between the Preemptive and Non-Preemptive
Scheduling are as follows:

1. In preemptive scheduling, the CPU is assigned to a process for a particular time period. In contrast, in non-preemptive scheduling, the CPU is assigned to a process until that process terminates or switches to the waiting state.
2. When a process with a high priority appears in the ready queue frequently in
preemptive scheduling, the process with a low priority must wait for a long
period and can starve. In contrast, when the CPU is assigned to the process with
the high burst time, the processes with the shorter burst time can starve in non-
preemptive scheduling.
3. When a higher-priority process arrives, the running process in preemptive scheduling is halted in the middle of its execution. On the other hand, the running process in non-preemptive scheduling isn't interrupted in the middle of its execution and runs until it is completed.
4. Preemptive scheduling is flexible in processing. On the other side, non-
preemptive is strict.
5. Preemptive scheduling is quite flexible because critical processes are allowed to access the CPU as soon as they enter the ready queue, no matter which process is currently running. Non-preemptive scheduling is rigid because even if a critical process enters the ready queue, the running process is not interrupted.
6. In preemptive scheduling, CPU utilization is more effective than in non-preemptive scheduling.
7. Preemptive scheduling has a cost associated with it because it must ensure the integrity of shared data. In contrast, non-preemptive scheduling does not.

Head-to-head Comparison between the


Preemptive and Non-Preemptive Scheduling
Here, you will learn the head-to-head comparison between preemptive and non-
preemptive scheduling. The main differences between preemptive and non-preemptive
scheduling are as follows:


1. In preemptive scheduling, resources are assigned to a process for a limited time period. In non-preemptive scheduling, once resources are assigned to a process, they are held until it completes its burst period or changes to the waiting state.
2. A preemptively scheduled process may be paused in the middle of its execution. In non-preemptive scheduling, once the processor starts executing a process, it must complete it before executing another; the process may not be interrupted in the middle.
3. In preemptive scheduling, when a high-priority process continuously arrives in the ready queue, a low-priority process can starve. In non-preemptive scheduling, when a process with a high burst time is using the CPU, another process with a shorter burst time can starve.
4. Preemptive scheduling is flexible; non-preemptive scheduling is rigid.
5. Preemptive scheduling has an associated cost; non-preemptive scheduling does not.
6. Preemptive scheduling has overheads associated with process scheduling; non-preemptive scheduling does not.
7. Preemptive scheduling affects the design of the operating system kernel; non-preemptive scheduling does not.
8. CPU utilization is very high under preemptive scheduling and very low under non-preemptive scheduling.
9. Examples: Round Robin and Shortest Remaining Time First (preemptive); FCFS and SJF (non-preemptive).
ADVERTISEMENT

Conclusion
It's not a case of preemptive scheduling being superior to non-preemptive scheduling
or vice versa. It all depends on how a scheduling algorithm reduces average process waiting time while increasing CPU utilization.

OR
In Short
Difference between Preemptive and Non-
Preemptive Scheduling

What is Preemptive Scheduling?


Preemptive scheduling is a technique where tasks are assigned according to their priorities. In preemptive scheduling, it can be necessary to execute a higher-priority task even while a lower-priority task is still running: the lower-priority task is put on hold for some time and resumes execution when the higher-priority task completes.

What is Non-Preemptive Scheduling?


Non-preemptive scheduling is a CPU scheduling method in which a process takes the resources and holds them till it terminates or changes to the waiting state.

Difference between Preemptive and Non-Preemptive Scheduling

1. In preemptive scheduling, resources are allotted to a process for a fixed time. In non-preemptive scheduling, once resources are allotted to a process, the process holds them until it completes or shifts to the waiting state.
2. In preemptive scheduling, interruption can occur between processes. In non-preemptive scheduling, a process cannot be interrupted.
3. Preemptive scheduling includes the overhead of organising the processes; non-preemptive scheduling does not.
4. Preemptive scheduling is adaptable in nature; non-preemptive scheduling is not flexible.
5. Preemptive scheduling is cost-oriented; non-preemptive scheduling is not.
6. CPU utilisation is high under preemptive scheduling and low under non-preemptive scheduling.

OR
Preemptive and Non-Preemptive
Scheduling
Key Differences between Preemptive and Non-
Preemptive Scheduling
 In Preemptive Scheduling, the CPU is allocated to the processes
for a specific time period, and the non-preemptive scheduling
CPU is allocated to the process until it terminates.
 In Preemptive Scheduling, tasks are switched based on priority,
while in non-preemptive Scheduling, no switching takes place.
 The preemptive algorithm has the overhead of switching the
process from the ready state to the running state, while Non-
preemptive Scheduling has no such overhead of switching.
 Preemptive Scheduling is flexible, while Non-preemptive
Scheduling is rigid.


What is Preemptive Scheduling?


Preemptive scheduling is a scheduling method in which tasks are assigned priorities. Sometimes it is necessary to run a task with a higher priority before a lower-priority task, even if the lower-priority task is still running.

At that point, the lower-priority task is suspended for some time and resumes when the higher-priority task finishes its execution.

What is Non-Preemptive Scheduling?


In this type of scheduling method, the CPU is allocated to a specific process. The process that keeps the CPU busy releases it only by switching context or terminating.

It is a method that can be used on virtually any hardware platform, because it does not need specialized hardware (for example, a timer) the way preemptive scheduling does.

Non-Preemptive Scheduling occurs when a process voluntarily enters


the wait state or terminates.
Preemptive vs Non-Preemptive Scheduling:
Comparison Table
Here is a head-to-head comparison of preemptive vs non-preemptive scheduling. The main differences between preemptive and non-preemptive scheduling in an OS are as follows:
Preemptive Scheduling vs. Non-preemptive Scheduling

 Preemptive: a processor can be preempted to execute a different process in the middle of any current process's execution. Non-preemptive: once the processor starts executing a process, it must finish it before executing another; it cannot be paused in the middle.
 Preemptive: CPU utilization is more efficient. Non-preemptive: CPU utilization is less efficient.
 Preemptive: waiting and response times are lower. Non-preemptive: waiting and response times are higher.
 Preemptive: scheduling is prioritized; the highest-priority runnable process is the one currently executing. Non-preemptive: when a process enters the running state, it is not removed from the scheduler until it finishes its job.
 Preemptive: flexible. Non-preemptive: rigid.
 Preemptive examples: Shortest Remaining Time First, Round Robin, etc. Non-preemptive examples: First Come First Serve, Shortest Job First, Priority Scheduling, etc.
 Preemptive: a running process can be preempted, i.e., rescheduled later. Non-preemptive: a running process cannot be rescheduled until it completes.
 Preemptive: the CPU is allocated to processes for a specific time period. Non-preemptive: the CPU is allocated to a process until it terminates or switches to the waiting state.
 Preemptive: has the overhead of switching processes between the ready and running states and vice versa. Non-preemptive: has no such switching overhead.

Advantages of Preemptive Scheduling


Here, are pros/benefits of Preemptive Scheduling method:

 It is a more robust approach: one process cannot monopolize the CPU.
 The choice of running task is reconsidered after each interruption.
 Each event causes an interruption of running tasks.
 The OS ensures that CPU usage is shared equally among all running processes.
 It also improves the average response time.
 Preemptive scheduling is beneficial in a multiprogramming environment.

Advantages of Non-preemptive Scheduling


Here, are pros/benefits of Non-preemptive Scheduling method:

 Offers low scheduling overhead
 Tends to offer high throughput
 It is a conceptually very simple method
 Fewer computational resources are needed for scheduling

Disadvantages of Preemptive Scheduling


Following are the drawbacks of preemptive scheduling:

 It needs additional computational resources for scheduling
 The scheduler takes more time to suspend the running task, switch the context, and dispatch the new incoming task
 A low-priority process may need to wait a long time if high-priority processes arrive continuously

Disadvantages of Non-Preemptive Scheduling


Here are the cons/drawbacks of the non-preemptive scheduling method:

 It can lead to starvation, especially for real-time tasks
 Bugs can cause a machine to freeze up
 It can make real-time and priority scheduling difficult
 Poor response time for processes

Example of Non-Preemptive Scheduling


In non-preemptive SJF scheduling, once the CPU cycle is allocated to a process, the process holds it until it reaches a waiting state or terminates.

Consider the following five processes each having its own unique
burst time and arrival time.
Process Queue Burst time Arrival time
P1 6 2
P2 2 5
P3 8 1
P4 3 0
P5 4 4
Step 0) At time=0, P4 arrives and starts execution.

Step 1) At time = 1, process P3 arrives. But P4 still needs 2 execution units to complete, so it continues execution.
Step 2) At time =2, process P1 arrives and is added to the waiting
queue. P4 will continue execution.

Step 3) At time = 3, process P4 will finish its execution. The burst time
of P3 and P1 is compared. Process P1 is executed because its burst
time is less compared to P3.

Step 4) At time = 4, process P5 arrives and is added to the waiting queue. P1 will continue execution.
Step 5) At time = 5, process P2 arrives and is added to the waiting
queue. P1 will continue execution.

Step 6) At time = 9, process P1 will finish its execution. The burst time
of P3, P5, and P2 is compared. Process P2 is executed because its
burst time is the lowest.
Step 7) At time=10, P2 is executing, and P3 and P5 are in the waiting
queue.

Step 8) At time = 11, process P2 will finish its execution. The burst
time of P3 and P5 is compared. Process P5 is executed because its
burst time is lower.

Step 9) At time = 15, process P5 will finish its execution.


Step 10) At time = 23, process P3 will finish its execution.

Step 11) Let’s calculate the average waiting time for above example.
Wait time
P4= 0-0=0
P1= 3-2=1
P2= 9-5=4
P5= 11-4=7
P3= 15-1=14
Average Waiting Time = (0 + 1 + 4 + 7 + 14)/5 = 26/5 = 5.2

Example of Pre-emptive Scheduling


Consider the following three processes scheduled with Round Robin (time quantum = 2):
Process Queue Burst time
P1 4
P2 3
P3 5
Step 1) The execution begins with process P1, which has burst time
4. Here, every process executes for 2 seconds. P2 and P3 are still in
the waiting queue.
Step 2) At time =2, P1 is added to the end of the Queue and P2 starts
executing

Step 3) At time = 4, P2 is preempted and added to the end of the queue. P3 starts executing.

Step 4) At time = 6, P3 is preempted and added to the end of the queue. P1 starts executing.

Step 5) At time = 8, P1, which has a burst time of 4, has completed its execution. P2 starts execution.

Step 6) P2 has a burst time of 3 and has already executed for 2 time units. At time = 9, P2 completes execution. Then P3 starts execution and runs until it completes.

Step 7) Let’s calculate the average waiting time for the above example.
Wait time
P1 = 0 + 4 = 4
P2 = 2 + 4 = 6
P3 = 4 + 3 = 7
Average Waiting Time = (4 + 6 + 7)/3 = 17/3 ≈ 5.67

Q2. Define thread scheduling, justify


how a thread scheduler is different from
process scheduler.
Thread Scheduling



The thread scheduler is the component that decides which thread should execute or get a resource in the operating system.
Scheduling of threads involves two levels of boundary scheduling:
1. Scheduling of user-level threads (ULT) to kernel-level threads (KLT) via lightweight
process (LWP) by the application developer.
2. Scheduling of kernel-level threads by the system scheduler to perform different
unique OS functions.
Lightweight Process (LWP)
Lightweight processes are threads in the user space that act as an interface for the ULTs to access the physical CPU resources. The thread library schedules which thread of a process runs on which LWP and for how long. The number of LWPs created by the thread library
depends on the type of application. In the case of an I/O bound application, the number of
LWPs depends on the number of user-level threads. This is because when an LWP is
blocked on an I/O operation, then to invoke the other ULT the thread library needs to
create and schedule another LWP. Thus, in an I/O bound application, the number of LWP
is equal to the number of the ULT. In the case of a CPU-bound application, it depends
only on the application. Each LWP is attached to a separate kernel-level thread.

In practice, the first boundary of thread scheduling goes beyond specifying the scheduling policy and the priority. It requires two controls to be specified for user-level threads: contention scope and allocation domain. These are explained below.
Contention Scope
The word contention here refers to the competition or fight among the User level threads
to access the kernel resources. Thus, this control defines the extent to which contention
takes place. It is defined by the application developer using the thread library.
Depending upon the extent of contention it is classified as-
 Process Contention Scope (PCS) :
The contention takes place among threads within a same process. The thread library
schedules the high-prioritized PCS thread to access the resources via available LWPs
(priority as specified by the application developer during thread creation).
 System Contention Scope (SCS) :
The contention takes place among all threads in the system. In this case, every SCS
thread is associated to each LWP by the thread library and are scheduled by the
system scheduler to access the kernel resources.
In Linux and UNIX operating systems, the POSIX Pthread library provides the function pthread_attr_setscope to define the type of contention scope for a thread during its creation.
int pthread_attr_setscope(pthread_attr_t *attr, int scope)
The first parameter is a pointer to the thread attributes object used when creating the thread.
The second parameter defines the scope of contention for that thread. It takes two values:

PTHREAD_SCOPE_SYSTEM
PTHREAD_SCOPE_PROCESS
If the scope value specified is not supported by the system, then the function
returns ENOTSUP.
Allocation Domain
The allocation domain is a set of one or more resources for which a thread is
competing. In a multicore system, there may be one or more allocation domains where
each consists of one or more cores. One ULT can be a part of one or more allocation
domain. Due to this high complexity in dealing with hardware and software architectural
interfaces, this control is not specified. But by default, the multicore system will have an
interface that affects the allocation domain of a thread.
Consider a scenario: an operating system with three processes P1, P2, P3 and 10 user-level threads (T1 to T10) with a single allocation domain. 100% of CPU resources will be
distributed among all the three processes. The amount of CPU resources allocated to each
process and to each thread depends on the contention scope, scheduling policy and
priority of each thread defined by the application developer using thread library and also
depends on the system scheduler. These user-level threads have different contention scopes.
In this case, the contention for allocation domain takes place as follows:
Process P1
All PCS threads T1, T2, T3 of process P1 will compete among themselves. The PCS threads of the same process can share one or more LWPs: T1 and T2 share an LWP, while T3 is allocated a separate LWP. Between T1 and T2, allocation of kernel resources via an LWP is based on preemptive priority scheduling by the thread library. A thread with a higher priority will preempt lower-priority threads of the same process. However, a PCS thread of process P1 cannot preempt a PCS thread of another process (for example, T6 of process P3), even if its priority is higher. If priorities are equal, then the allocation of ULTs to the available LWPs is based on the scheduling policy of the system scheduler (not the thread library, in this case).

Process P2
Both SCS threads T4 and T5 of process P2 will compete with processes P1 as a whole
and with SCS threads T8, T9, T10 of process P3. The system scheduler will schedule the
kernel resources among P1, T4, T5, T8, T9, T10, and PCS threads (T6, T7) of process P3
considering each as a separate process. Here, the Thread library has no control of
scheduling the ULT to the kernel resources.

Process P3
Combination of PCS and SCS threads. Consider if the system scheduler allocates 50% of
CPU resources to process P3, then 25% of resources is for process scoped threads and the
remaining 25% for system scoped threads. The PCS threads T6 and T7 will be allocated
to access the 25% resources based on the priority by the thread library. The SCS threads
T8, T9, T10 will divide the 25% resources among themselves and access the kernel
resources via separate LWP and KLT. The SCS scheduling is by the system scheduler.

Note:
For every system call to access the kernel resources, a kernel-level thread is created and associated with a separate LWP by the system scheduler.
Number of Kernel Level Threads = Total Number of LWPs
Total Number of LWPs = Number of LWPs for SCS + Number of LWPs for PCS
Number of LWPs for SCS = Number of SCS threads
Number of LWPs for PCS = Depends on the application developer
Here,
Number of SCS threads = 5
Number of LWPs for SCS = 5
Number of LWPs for PCS = 3
Total Number of LWPs = 8 (= 5 + 3)
Number of Kernel Level Threads = 8
Advantages of PCS over SCS
 If all threads are PCS, then context switching, synchronization, scheduling everything
takes place within the userspace. This reduces system calls and achieves better
performance.
 PCS is cheaper than SCS.
 PCS threads share one or more available LWPs. For every SCS thread, a separate LWP is associated. For every system call, a separate KLT is created.
 The number of KLTs and LWPs created depends strongly on the number of SCS threads created. This increases the kernel's complexity in handling scheduling and synchronization, which results in a practical limitation on SCS thread creation: the number of SCS threads should be smaller than the number of PCS threads.
 If the system has more than one allocation domain, then scheduling and synchronization of resources become more tedious. Issues arise when an SCS thread is part of more than one allocation domain, as the system then has to handle multiple interfaces.
The second boundary of thread scheduling involves CPU scheduling by the system
scheduler. The scheduler considers each kernel-level thread as a separate process and
provides access to the kernel resources.

OR
Scheduling threads
Last updated: 2023-03-24

Threads can be scheduled, and the threads library provides several facilities to handle and control
the scheduling of threads.

It also provides facilities to control the scheduling of threads during synchronization operations
such as locking a mutex. Each thread has its own set of scheduling parameters. These parameters
can be set using the thread attributes object before the thread is created. The parameters can also
be dynamically set during the thread's execution.

Controlling the scheduling of a thread can be a complicated task. Because the scheduler handles
all threads system wide, the scheduling parameters of a thread interact with those of all other
threads in the process and in the other processes. The following facilities are the first to be used
if you want to control the scheduling of a thread.

The threads library allows the programmer to control the execution scheduling of the threads in
the following ways:

 By setting scheduling attributes when creating a thread


 By dynamically changing the scheduling attributes of a created thread
 By defining the effect of a mutex on the thread's scheduling when creating a mutex
(known as synchronization scheduling)
 By dynamically changing the scheduling of a thread during synchronization operations
(known as synchronization scheduling)

Scheduling parameters

A thread has the following scheduling parameters:

Parameter: Description

scope: The contention scope of a thread is defined by the thread model used in the threads library.
policy: The scheduling policy of a thread defines how the scheduler treats the thread after it gains control of the CPU.
priority: The scheduling priority of a thread defines the relative importance of the work being done by each thread.

The scheduling parameters can be set before the thread's creation or during the thread's
execution. In general, controlling the scheduling parameters of threads is important only for
threads that are CPU-intensive. Thus, the threads library provides default values that are
sufficient for most cases.

OR
Scheduling threads
Article, 09/15/2021

Every thread has a thread priority assigned to it. Threads created within the common
language runtime are initially assigned the priority of ThreadPriority.Normal. Threads
created outside the runtime retain the priority they had before they entered the
managed environment. You can get or set the priority of any thread with
the Thread.Priority property.

Threads are scheduled for execution based on their priority. Even though threads are
executing within the runtime, all threads are assigned processor time slices by the
operating system. The details of the scheduling algorithm used to determine the order
in which threads are executed varies with each operating system. Under some operating
systems, the thread with the highest priority (of those threads that can be executed) is
always scheduled to run first. If multiple threads with the same priority are all available,
the scheduler cycles through the threads at that priority, giving each thread a fixed time
slice in which to execute. As long as a thread with a higher priority is available to run,
lower priority threads do not get to execute. When there are no more runnable threads
at a given priority, the scheduler moves to the next lower priority and schedules the
threads at that priority for execution. If a higher priority thread becomes runnable, the
lower priority thread is preempted and the higher priority thread is allowed to execute
once again. On top of all that, the operating system can also adjust thread priorities
dynamically as an application's user interface is moved between foreground and
background. Other operating systems might choose to use a different scheduling
algorithm.

Difference between Process and Thread




Process: Processes are basically the programs that are dispatched from the ready state and are scheduled on the CPU for execution. A process is represented by a PCB (Process Control Block). A process can create other processes, which are known as child processes. A process takes more time to terminate, and it is isolated, meaning it does not share memory with any other process.
A process can be in the following states: new, ready, running, waiting, terminated, and suspended.
Thread: A thread is a segment of a process, which means a process can have multiple threads, and these multiple threads are contained within the process. A thread has three states: running, ready, and blocked.
A thread takes less time to terminate compared to a process, but unlike processes, threads are not isolated from one another.

Process vs Thread

Difference between Process and Thread:


S.No. Process vs. Thread

1. Process: any program in execution. Thread: a segment of a process.

2. Process: takes more time to terminate. Thread: takes less time to terminate.

3. Process: takes more time for creation. Thread: takes less time for creation.

4. Process: takes more time for context switching. Thread: takes less time for context switching.

5. Process: less efficient in terms of communication. Thread: more efficient in terms of communication.

6. Process: multiprogramming holds the concept of multi-process. Thread: multiple programs are not needed for multiple threads, because a single process consists of multiple threads.

7. Process: isolated. Thread: threads share memory.

8. Process: called the heavyweight process. Thread: lightweight, as each thread in a process shares code, data, and resources.

9. Process: process switching uses an interface in the operating system. Thread: thread switching does not require calling the operating system or interrupting the kernel.

10. Process: if one process is blocked, it does not affect the execution of other processes. Thread: if a user-level thread is blocked, then all other user-level threads are blocked.

11. Process: has its own Process Control Block, stack, and address space. Thread: has the parent's PCB, its own Thread Control Block, its own stack, and a common address space.

12. Process: changes to the parent process do not affect child processes. Thread: since all threads of the same process share the address space and other resources, any change to the main thread may affect the behavior of the other threads of the process.

13. Process: a system call is involved in its creation. Thread: no system call is involved; it is created using APIs.

14. Process: processes do not share data with each other. Thread: threads share data with each other.

Note: In cases where a thread is processing a bigger workload than a process's workload, the thread may take more time to terminate, but this is an extremely rare situation.
OR

How do you compare thread scheduling and process scheduling?
1. What are threads and processes?
A thread is a basic unit of execution that can run a sequence of
instructions within a program. A process is a collection of one or more
threads that share the same address space, resources, and context.
Each thread has its own stack, registers, and program counter, but can
access the shared memory and files of the process. A process can
create multiple threads to perform different tasks in parallel, such as
user interface, network communication, computation, and so on.

2. How are threads and processes scheduled?
The operating system is responsible for managing the allocation of
CPU time to threads. This is called scheduling, and it involves deciding
which thread should run next, for how long, and on which CPU core.
There are different algorithms and policies for scheduling, such as
priority-based, round-robin, and shortest job first. The goal of
scheduling is to maximize the CPU utilization, throughput,
responsiveness, and fairness of the system.

3. What are the benefits of thread scheduling?
Thread scheduling has several advantages over process scheduling.
Thread switching is faster and cheaper than process switching because
it does not require changing the address space, resources, or context
of the process. It can also improve the performance and scalability of a
program by exploiting the parallelism of multiple CPU cores and
reducing the blocking time of threads. And thread scheduling can
enhance the responsiveness of a program by allowing threads to
communicate and synchronize with each other more easily than
processes.

4. What are the drawbacks of thread scheduling?
Thread scheduling also has some disadvantages compared to process
scheduling. It can introduce more complexity and overhead to a
program because it requires careful design, implementation, and
testing of the multithreading logic, data structures, and
synchronization mechanisms. And thread scheduling can increase the
risk of errors and bugs, such as deadlock, race condition, and memory
leak. Also, it can affect the portability and security of a program
because it depends on the operating system's support and protection
for threads.
5. How can you optimize thread scheduling?
You can optimize thread scheduling for multithreading applications by
following some general guidelines and best practices. To start, choose
the number and type of threads based on the workload, resources,
and performance goals. Additionally, use high-level abstractions and
libraries for multithreading to help avoid creating and destroying
threads often. And minimize the contention and synchronization
among threads with lock-free or wait-free data structures if possible.
Balance the load and affinity of threads across CPU cores as well, so as
to not overload or starve any core. Finally, monitor the performance
and overhead of thread scheduling with tools like profilers, debuggers,
and analyzers to identify and resolve any issues.

6. What are the trade-offs of thread scheduling vs process scheduling?
Thread scheduling and process scheduling are both useful and
powerful techniques for multithreading, but they have different trade-
offs in terms of performance and overhead. Thread scheduling can
offer faster and more efficient execution of multiple tasks within a
program, but it can also introduce more complexity and overhead to
the program's logic, data, and synchronization. Process scheduling can
offer more isolation and protection of multiple programs or tasks, but
it can also incur more cost and time for switching between them.
Therefore, choose the right approach for your multithreading scenario
based on the requirements, constraints, and trade-offs of your
application.
Q3. Explain in detail Peterson’s
solution for various race conditions.
What is a race condition?
A race condition is an undesirable situation that occurs when a device or
system attempts to perform two or more operations at the same time, but
because of the nature of the device or system, the operations must be done in
the proper sequence to be done correctly.

Race conditions are most commonly associated with computer science and
programming. They occur when two computer program processes, or threads,
attempt to access the same resource at the same time and cause problems in
the system.

Race conditions are considered a common issue for multithreaded


applications.

Peterson's Solution
By Hari Hara Sankar, last updated 14 Oct 2023

Overview
In operating systems, there may be a need for more than one process to access a shared resource
such as memory or CPU. In shared memory, if more than one process is accessing a variable,
then the value of that variable is determined by the last process to modify it, and the last
modified value overwrites the first modified value. This may result in losing important
information written by the first process. The region of code where these shared accesses occur is called the critical section. Information loss is prevented by ensuring that two processes are never in the same critical region, or updating the same variable, at the same time. This problem is called the critical-section problem, and one of the solutions to this problem is Peterson's solution.
What is Peterson's Solution in OS?
Peterson's solution is a classic solution to the critical-section problem, which requires that no two processes change or modify a shared resource's value simultaneously.

For example, let int a = 5, and let two processes p1 and p2 modify the value of a: p1 adds 2 (a = a + 2) and p2 multiplies a by 2 (a = a * 2). If both processes modify a at around the same time, the final value of a depends on the order of execution of the processes. If p1 executes first, a will be 14; if p2 executes first, a will be 12. This dependence of the result on simultaneous access by two processes is the cause of the critical-section problem.

The section in which the values are being modified is called the critical section. Besides the critical section, there are three other sections: the entry section, the exit section, and the remainder section.

 A process entering the critical region must first pass through the entry section, in which it requests entry to the critical section.
 A process exiting the critical section must pass through the exit section.
 The remaining code left after execution belongs to the remainder section.

Peterson's solution provides a solution to the following problems,

 It ensures that if a process is in the critical section, no other process is allowed to enter it. This property is termed mutual exclusion.
 If no process is in the critical section and more than one process wants to enter, the decision of which process enters next cannot be postponed indefinitely. This is termed progress.
 There is a bound on the number of times other processes may enter the critical section after a process has requested entry and before that request is granted. This is termed bounded waiting.
 It provides platform neutrality as this solution is developed to run in user mode, which
doesn't require any permission from the kernel.
Pseudocode for Peterson's Algorithm
The algorithm used for implementing Peterson's algorithm can be written in pseudocode as
follows,

bool flag[2] = {false, false}; // initially, neither process wants to enter

// represents whose turn it is to enter the critical region
int turn;

void Entry_Section(int i) // i represents this process (0 or 1)
{
    int j = 1 - i;        // j represents the other process
    flag[i] = true;       // process i wants to enter the region
    turn = j;             // give the other process the turn first
    // busy-wait while the other process wants to enter and it is its turn
    while (flag[j] == true && turn == j);
}

void Exit_Section(int i)
{
    // allow the next process to enter
    flag[i] = false;
}

Explanation of Peterson’s Algorithm in OS


Let us see how the code above works in detail:

 There are two processes (0 and 1), each with a flag variable set to false on initialization. This flag indicates whether a process is ready to enter the critical region.
 The turn variable denotes which process's turn it is to enter the critical region.
 In the entry section, a process i computes j = 1 - i, the index of the other process.
 Process i declares that it is ready to enter the critical section with flag[i] = true, and then hands the turn to the other process with turn = j.
 It then checks whether the other process is trying to enter the critical region using the condition flag[j] == true && turn == j. While this condition holds, the while loop spins, stalling process i until process j exits the critical region.
 A process exiting the critical region marks this with flag[i] = false, where i denotes the exiting process, allowing the other process to proceed.

Implementation of Peterson's Algorithm using


Programming Language
Typically the C programming language is used to implement Peterson's algorithm in OS, as the
basic OS programs were written in C or C++.
Before getting into the example, we must understand how to create a shared region in memory
that will act as a critical region. The shared memory can be created with the shmget() function
with syntax,

int shmget(key_t identifier, size_t storage_space, int flags);

The identifier is associated with the shared memory segment, and storage_space is the amount of space needed for storage. The flags specify the permissions and type for the shared memory. The IPC_CREAT | 0660 flag creates a new shared memory segment if it does not exist and gives read and write permissions.

We can access a shared memory using the shmat() function.

syntax:

void *shmat(int id, const void *address, int flags);

The id is the value returned from the shmget() function, and the address is used to identify an address within the segment; a NULL value gives the first address in the shared memory. The flags are for permissions.

The srand() and rand() functions are used to generate random numbers. The srand() function
seeds the generator (typically with the current time), and the rand() function then produces
pseudo-random numbers from that seed.

Syntax:

void srand(unsigned seed);
int rand(); // returns the next pseudo-random number

The time is obtained using the gettimeofday() function, with syntax,

int gettimeofday(struct timeval *time, struct timezone *time_zone);

time_t seconds = time.tv_sec;

The seconds are then read from the tv_sec member of the timeval structure, time.tv_sec.

The implementation of Peterson's Algorithm using the C programming language to solve the
producer-consumer problem is as follows,

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h> //for constants and data types
#include <time.h> // data types to represent time
#include <sys/types.h> //has data types
#include <sys/ipc.h> // data types for shared memory
#include <sys/shm.h> // for shmat() and shmdt()
#include <stdbool.h> // for using boolean
#include <sys/time.h> // gettimeofday() function
#include <sys/wait.h> // wait() function

#define buffer_size 8

int shmid1, shmid2;
int shmid3, shmid4;
bool* flag_memory;
int* turn_memory;
int* buffer_memory;

int getrandom(int n)
{
time_t t;
// set the range of random number based on time.
srand((unsigned)time(&t));
// create random number and return it
return (rand() % n + 1);
}
time_t gettime()
{
// timer to check for busy waiting
struct timeval t;
gettimeofday(&t, NULL);
time_t time= t.tv_sec;
return time;
}
int main()
{
// creating shared memory (critical section)
// shared memory for flag variable
shmid1 = shmget(5354, sizeof(bool) * 2, IPC_CREAT | 0660);
// shared memory for turn
shmid2 = shmget(1232, sizeof(int) * 1, IPC_CREAT | 0660);
// shared memory for buffer
shmid3 = shmget(4232, sizeof(int) * buffer_size, IPC_CREAT | 0660);
//shared memory used by the timer
shmid4 = shmget( 5633, sizeof(int) * 1, IPC_CREAT | 0660);
// checking if the critical section is created successfully
if (shmid1 < 0 || shmid2 < 0 || shmid3 < 0 || shmid4 < 0) {
perror("Creation failed: ");
exit(1);
}
// getting time
time_t t1, t2;
t1 = gettime();

// initializing an empty array to store products
buffer_memory = (int*)shmat(shmid3, NULL, 0);
int num = 0;
while (num < buffer_size)
buffer_memory[num++] = 0;

// get data from the critical section
int* current_state = (int*)shmat(shmid4, NULL, 0);
*current_state = 1;
int wait_time;
int i = 0;
int j = 1;

// creating producer process with fork()
pid_t a = fork(); // creating two processes with fork()
if (a<0){
perror("Creating producer and consumer failed");
exit(1);
}
if (a>0) // producer process
{
// fetching values from critical section
flag_memory = (bool*)shmat(shmid1, NULL, 0);
turn_memory = (int*)shmat(shmid2, NULL, 0);
buffer_memory = (int*)shmat(shmid3, NULL, 0);
if (flag_memory == (bool*)-1 || turn_memory == (int*)-1 ||
buffer_memory == (int*)-1) {
perror("Producer can't be created: ");
exit(1);
}

bool* flag = flag_memory;
int* turn = turn_memory;
int* buf = buffer_memory;
int index = 0;
// implementing Peterson's Algorithm
while (*current_state == 1) {
flag[j] = true;
printf("Producer is ready now.\n\n");
*turn = i;
while (flag[i] == true && *turn == i);
// creating a product with random numbers
index = 0;
while (index < buffer_size) {
if (buf[index] == 0) {
int temp = getrandom(buffer_size * 3);
printf("The product %d has been produced and is ready to be consumed\n", temp);
buf[index] = temp;
break;
}
index++;
}
// checking if the array is full
if (index == buffer_size)
printf("The producer has produced products to maximum capacity\n");
printf("Products: ");
index = 0;
while (index < buffer_size)
printf("%d ", buf[index++]);
printf("\n");

// exiting section
flag[j] = false;
if (*current_state == 0)
break;
wait_time = getrandom(2);
printf("Producer will wait for %d seconds\n\n",
wait_time);
sleep(wait_time);
}
exit(0);
}
else // consumer process
{
// getting data from critical region
flag_memory = (bool*)shmat(shmid1, NULL, 0);
turn_memory = (int*)shmat(shmid2, NULL, 0);
buffer_memory = (int*)shmat(shmid3, NULL, 0);
if (flag_memory == (bool*)-1 || turn_memory == (int*)-1 ||
buffer_memory == (int*)-1) {
perror("Consumer shared memory error");
exit(1);
}

bool* flag = flag_memory;
int* turn = turn_memory;
int* buf = buffer_memory;
int index = 0;
flag[i] = false;
sleep(5);
// implementing Peterson's Algorithm
while (*current_state == 1) {
flag[i] = true;
printf("Consumer can consume products.\n\n");
*turn = j;
while (flag[j] == true && *turn == j)
;

// checking if products are available for consumption


if (buf[0] != 0) {
printf("Job %d has been consumed\n", buf[0]);
buf[0] = 0;
index = 1;
while (index < buffer_size)
{
buf[index - 1] = buf[index];
index++;
}
buf[index - 1] = 0;
} else
printf("No products available to the consumer to be consumed\n");
printf("Buffer: ");
index = 0;
while (index < buffer_size)
printf("%d ", buf[index++]);
printf("\n");

//exit section
flag[i] = false;
if (*current_state == 0)
break;
wait_time = getrandom(15);
// time consumer need to wait to get new products
printf("Consumer will need to wait for %d seconds to get products to be consumed.\n\n", wait_time);
sleep(wait_time);
}
exit(0);
}
//busy waiting
while (1) {
t2 = gettime();
// set current_state to 0 if process waits longer than 10 seconds.
if (t2 - t1 > 10)
{
*current_state = 0;
break;
}
}
// exit after too much time on busy waiting.
wait(NULL);
wait(NULL);
printf("Too much time has passed.\n");
return 0;
}

 A shared memory accessed by both processes is the critical section.


 The entry section is the state before producing or consuming products.
 The exit section is the state after producing or consuming the products.
 While the producer is inside the critical region, the consumer is stalled in the while (flag[j]
== true && *turn == j); loop.
 The index identifies the positions of the products in the array and is used to check whether
the number of products is less than the array's capacity.
 The variable current_state is used to check if the process waits too long. A
value 0 represents they spend too much time waiting.
 The fork() system call creates the producer and consumer processes. It has the syntax,
 Syntax:

pid_t fork();

It returns 0 in the child process, the child's PID (a positive number) in the parent, and a
negative number when it fails to create the process.

 A buf array is used to hold all the products, and computations are performed when a new
product is produced or consumed.

Example of Peterson’s Solution


Peterson's solution finds applications and examples of different problems in Operating Systems.
 The producer-consumer problem can be solved using Peterson's solution, which maintains
synchronization between the producer and the consumer.
 The logic used in a semaphore is similar to Peterson's solution. The semaphores are used
to solve many problems in OS.
 The most suited example is the usage of Peterson's solution in the critical section
problem.

Advantages of Peterson’s Solution


 Peterson's solution allows two processes to share and access a resource without
conflict.
 Every process gets a chance of execution.
 It is simple to implement and uses simple and powerful logic.
 It can be used on any hardware, as it is a purely software-based solution executed in
user mode.
 Prevents the possibility of a deadlock.

Disadvantages of Peterson’s Solution


 A process may wait a long time for the other process to come out of the critical
region; this is termed busy waiting.
 The algorithm may not work correctly on systems with multiple CPUs, since modern
processors may reorder memory operations.
 Peterson's solution is restricted to only two processes at a time.

Conclusion
 Peterson's solution is one of the classical solutions to solve the critical-section problem in
OS.
 It follows a simple algorithm and is limited to two processes simultaneously.
 We can implement Peterson's solution in any programming language, and it can be used
to solve other problems like the producer-consumer problem and reader-writer problem.
OR

What is Peterson's Solution


It is used to solve process synchronisation problems that arise when two processes access
shared resources.

Peterson’s Solution is based on two main ideas:


1) Willingness of a process to acquire the lock on the critical section.
2) Turn of a process to acquire lock.

It provides shared access to memory between the co-operating processes. It ensures mutual
exclusion among processes sharing resources. The Producer-Consumer problem is a classical one
which can be solved using Peterson's Solution. To synchronise any two processes,
Peterson’s solution uses two variables:
1) Flag: a boolean array of size 2
2) Turn: integer variable

Initially both the flags in the array are set to false. Peterson's Solution is a software-based
solution to race conditions. Although it is rarely used in modern-day computing, it provides an
algorithm-level way to solve the critical section problem. Peterson's Solution works for two
cooperating processes; handling more than two processes requires a generalisation such as the
filter algorithm.

Working of Peterson's Solution


Whenever a process wants to execute in its critical section, it sets its flag to true. Also the turn
variable is set to the index of the second cooperating process so as to allow the second process to
enter the critical section (if required). The process enters into busy waiting until the other process
has completely executed in its critical section.

Then, the current process starts executing in its critical section and utilises the shared
resources. Once the current process has executed completely, it sets its own flag to false to
indicate that it does not want to execute any further. In the demonstration programs below, each
process runs for a fixed amount of time before the whole program exits.

OR

Peterson’s Algorithm in Process Synchronization
Prerequisite – Synchronization, Critical Section


The producer-consumer problem (or bounded buffer problem) describes two processes,
the producer and the consumer, which share a common, fixed-size buffer used as a queue.
Producers produce an item and put it into the buffer. If the buffer is already full then the
producer will have to wait for an empty block in the buffer. Consumers consume an item
from the buffer. If the buffer is already empty then the consumer will have to wait for an
item in the buffer. Implement Peterson’s Algorithm for the two processes using shared
memory such that there is mutual exclusion between them. The solution should be free
from synchronization problems.
Peterson’s algorithm –
C++
#include <iostream>
#include <thread>
#include <vector>

const int N = 2; // Number of threads (producer and consumer)

std::vector<bool> flag(N, false); // Flags to indicate readiness (real code should use std::atomic)
int turn = 0; // Variable to indicate turn

void producer(int j) {
do {
flag[j] = true; // Producer j is ready to produce
turn = 1 - j; // Allow consumer to consume
while (flag[1 - j] && turn == 1 - j) {
// Wait for consumer to finish
// Producer waits if consumer is ready and it's consumer's turn
}

// Critical Section: Producer produces an item and puts it into the buffer

flag[j] = false; // Producer is out of the critical section

// Remainder Section: Additional actions after critical section


} while (true); // Continue indefinitely
}

void consumer(int i) {
do {
flag[i] = true; // Consumer i is ready to consume
turn = 1 - i; // Give priority to the producer
while (flag[1 - i] && turn == 1 - i) {
// Wait for producer to finish
// Consumer waits if producer is ready and it's producer's turn
}

// Critical Section: Consumer consumes an item from the buffer

flag[i] = false; // Consumer is out of the critical section

// Remainder Section: Additional actions after critical section


} while (true); // Continue indefinitely
}

int main() {
std::thread producerThread(producer, 0); // Create producer thread
std::thread consumerThread(consumer, 1); // Create consumer thread

producerThread.join(); // Wait for producer thread to finish


consumerThread.join(); // Wait for consumer thread to finish

return 0;
}
C
// code for producer (j)

// producer j is ready
// to produce an item
flag[j] = true;

// but consumer (i) can consume an item


turn = i;

// if consumer is ready to consume an item


// and if its consumer's turn
while (flag[i] == true && turn == i)

{ /* then producer will wait*/ }

// otherwise producer will produce


// an item and put it into buffer (critical Section)

// Now, producer is out of critical section


flag[j] = false;
// end of code for producer

//--------------------------------------------------------
// code for consumer i

// consumer i is ready
// to consume an item
flag[i] = true;

// but producer (j) can produce an item


turn = j;

// if producer is ready to produce an item


// and if its producer's turn
while (flag[j] == true && turn == j)

{ /* then consumer will wait */ }

// otherwise consumer will consume


// an item from buffer (critical Section)

// Now, consumer is out of critical section


flag[i] = false;
// end of code for consumer
Java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Main {


static final int N = 2; // Number of threads (producer and consumer)
static final Lock lock = new ReentrantLock();
static final Condition[] readyToProduce = {lock.newCondition(),
lock.newCondition()};
static volatile int turn = 0; // Variable to indicate turn

static void producer(int j) {


do {
lock.lock();
try {
while (turn != j) {
readyToProduce[j].await();
}

// Critical Section: Producer produces an item and puts it into the buffer
System.out.println("Producer " + j + " produces an item.");

turn = 1 - j; // Allow consumer to consume


readyToProduce[1 - j].signal();
} catch (InterruptedException e) {
e.printStackTrace();
} finally {
lock.unlock();
}

// Remainder Section: Additional actions after critical section


} while (true); // Continue indefinitely
}

static void consumer(int i) {


do {
lock.lock();
try {
while (turn != i) {
readyToProduce[i].await();
}

// Critical Section: Consumer consumes an item from the buffer
System.out.println("Consumer " + i + " consumes an item.");

turn = 1 - i; // Allow producer to produce


readyToProduce[1 - i].signal();
} catch (InterruptedException e) {
e.printStackTrace();
} finally {
lock.unlock();
}

// Remainder Section: Additional actions after critical section


} while (true); // Continue indefinitely
}

public static void main(String[] args) {


Thread producerThread = new Thread(() -> producer(0)); // Create producer thread
Thread consumerThread = new Thread(() -> consumer(1)); // Create consumer thread

producerThread.start(); // Start producer thread


consumerThread.start(); // Start consumer thread

try {
Thread.sleep(1000); // Run for 1 second
} catch (InterruptedException e) {
e.printStackTrace();
} finally {
producerThread.interrupt(); // Interrupt producer thread
consumerThread.interrupt(); // Interrupt consumer thread
}
}
}
C#
using System;
using System.Threading;
using System.Collections.Generic;

class GFG
{
const int N = 2; // Number of threads
static List<bool> flag = new List<bool>(new bool[N]);
static int turn = 0; // Variable to indicate turn
// Producer method
static void Producer(object obj)
{
int j = (int)obj;
do
{
flag[j] = true;
turn = 1 - j;
// Wait for consumer to finish
// Producer waits if consumer is ready and it's consumer's turn
while (flag[1 - j] && turn == 1 - j)
{
// Wait
}
// Critical Section: Producer produces an item and
// puts it into the buffer
Console.WriteLine($"Producer {j} produced an item");
flag[j] = false;
// Remainder Section: Additional actions after critical section
Thread.Sleep(1000);

} while (true);
}
// Consumer method
static void Consumer(object obj)
{
int i = (int)obj;
do
{
flag[i] = true;
turn = 1 - i;
// Wait for producer to finish
// Consumer waits if producer is ready and it's producer's turn
while (flag[1 - i] && turn == 1 - i)
{
// Wait
}
// Critical Section: Consumer consumes an item from buffer
Console.WriteLine($"Consumer {i} consumed an item");
flag[i] = false;
// Remainder Section: Additional actions after critical section
Thread.Sleep(1000);
} while (true);
}
static void Main(string[] args)
{
Thread producerThread = new Thread(Producer); // Create producer thread
Thread consumerThread = new Thread(Consumer); // Create consumer thread
producerThread.Start(0); // Start producer thread with index 0
consumerThread.Start(1); // Start consumer thread with index 1
producerThread.Join(); // Wait for producer thread to finish
consumerThread.Join(); // Wait for consumer thread to finish
}
}
Javascript
const N = 2; // Number of threads (producer and consumer)
const lockObject = {}; // Lock object for synchronization

async function producer(j) {


while (true) {
await new Promise((resolve) => {
lock(lockObject, () => {
// Critical Section: Producer produces an item and puts it into the buffer
console.log(`Producer ${j} produces an item`);
// Remainder Section: Additional actions after the critical section
});
resolve();
});
await sleep(100); // Simulate some work before the next iteration
}
}

async function consumer(i) {


while (true) {
await new Promise((resolve) => {
lock(lockObject, () => {
// Critical Section: Consumer consumes an item from the buffer
console.log(`Consumer ${i} consumes an item`);
// Remainder Section: Additional actions after the critical section
});
resolve();
});
await sleep(100); // Simulate some work before the next iteration
}
}

function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}

function lock(obj, callback) {


if (!obj.__lock__) {
obj.__lock__ = true;
try {
callback();
} finally {
delete obj.__lock__;
}
}
}

// Start producer and consumer threads


producer(0); // Start producer 0
producer(1); // Start producer 1
consumer(0); // Start consumer 0
consumer(1); // Start consumer 1

// Run for 1 second


setTimeout(() => {
process.exit(); // Terminate the program after 1 second
}, 1000);
Python3
import threading

N = 2 # Number of threads (producer and consumer)


flag = [False] * N # Flags to indicate readiness
turn = 0 # Variable to indicate turn

# Function for producer thread
def producer(j):
    global turn
    while True:
        flag[j] = True  # Producer j is ready to produce
        turn = 1 - j  # Allow consumer to consume
        while flag[1 - j] and turn == 1 - j:
            # Wait for consumer to finish
            # Producer waits if consumer is ready and it's consumer's turn
            pass

        # Critical Section: Producer produces an item and puts it into the buffer

        flag[j] = False  # Producer is out of the critical section

        # Remainder Section: Additional actions after critical section

# Function for consumer thread
def consumer(i):
    global turn
    while True:
        flag[i] = True  # Consumer i is ready to consume
        turn = 1 - i  # Give priority to the producer
        while flag[1 - i] and turn == 1 - i:
            # Wait for producer to finish
            # Consumer waits if producer is ready and it's producer's turn
            pass

        # Critical Section: Consumer consumes an item from the buffer

        flag[i] = False  # Consumer is out of the critical section

        # Remainder Section: Additional actions after critical section

# Create producer and consumer threads


producer_thread = threading.Thread(target=producer, args=(0,))
consumer_thread = threading.Thread(target=consumer, args=(1,))

# Start the threads


producer_thread.start()
consumer_thread.start()

# Wait for the threads to finish


producer_thread.join()
consumer_thread.join()
Explanation of Peterson’s Algorithm
Peterson’s Algorithm is used to synchronize two processes. It uses two variables, a bool
array flag of size 2 and an int variable turn, to accomplish this. In the solution, i represents
the Consumer and j represents the Producer. Initially, the flags are false. When a process
wants to execute its critical section, it sets its flag to true and sets turn to the index of the
other process. This means that the process wants to execute, but it will allow the other
process to run first. The process performs busy waiting until the other process has
finished its own critical section. After this, the current process enters its critical section
and adds or removes a random number from the shared buffer. After completing the
critical section, it sets its own flag to false, indicating it does not wish to execute
any more. The program runs for a fixed amount of time before exiting. This time can be
changed by changing the value of the macro RT.

C++
#include <iostream>
#include <vector>
#include <cstdio>  // printf, perror
#include <cstdlib> // exit, rand, srand
#include <ctime>   // time

#ifdef _WIN32
#include <windows.h>
#else
#include <unistd.h>
#endif

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/time.h>
#include <sys/wait.h> // wait()

#define BSIZE 8
#define PWT 2
#define CWT 10
#define RT 10

int shmid1, shmid2, shmid3, shmid4;


key_t k1 = 5491, k2 = 5812, k3 = 4327, k4 = 3213;

bool* SHM1;
int* SHM2;
int* SHM3;

void initializeBuffer(int* buf, int size) {


for (int i = 0; i < size; ++i) {
buf[i] = 0;
}
}

int myrand(int n) {
static int initialized = 0;
if (!initialized) {
srand(static_cast<unsigned>(time(nullptr)));
initialized = 1;
}
return (rand() % n + 1);
}

void cleanup() {
shmdt(SHM1);
shmdt(SHM2);
shmdt(SHM3);
shmctl(shmid1, IPC_RMID, nullptr);
shmctl(shmid2, IPC_RMID, nullptr);
shmctl(shmid3, IPC_RMID, nullptr);
shmctl(shmid4, IPC_RMID, nullptr);
}

int main() {
shmid1 = shmget(k1, sizeof(bool) * 2, IPC_CREAT | 0660);
shmid2 = shmget(k2, sizeof(int) * 1, IPC_CREAT | 0660);
shmid3 = shmget(k3, sizeof(int) * BSIZE, IPC_CREAT | 0660);
shmid4 = shmget(k4, sizeof(int) * 1, IPC_CREAT | 0660);

if (shmid1 < 0 || shmid2 < 0 || shmid3 < 0 || shmid4 < 0) {


perror("Main shmget error: ");
exit(1);
}

SHM3 = static_cast<int*>(shmat(shmid3, nullptr, 0));


initializeBuffer(SHM3, BSIZE);

struct timeval t;
gettimeofday(&t, nullptr);
time_t t1 = t.tv_sec;

int* state = static_cast<int*>(shmat(shmid4, nullptr, 0));


*state = 1;
int wait_time;

int i = 0; // Consumer
int j = 1; // Producer

if (fork() == 0) // Producer code


{
SHM1 = static_cast<bool*>(shmat(shmid1, nullptr, 0));
SHM2 = static_cast<int*>(shmat(shmid2, nullptr, 0));
SHM3 = static_cast<int*>(shmat(shmid3, nullptr, 0));

if (SHM1 == nullptr || SHM2 == nullptr || SHM3 == nullptr) {


perror("Producer shmat error: ");
exit(1);
}

bool* flag = SHM1;


int* turn = SHM2;
int* buf = SHM3;
int index = 0;

while (*state == 1) {
flag[j] = true;
printf("Producer is ready now.\n\n");
*turn = i;

while (flag[i] == true && *turn == i);

// Critical Section Begin


index = 0;
while (index < BSIZE) {
if (buf[index] == 0) {
int tempo = myrand(BSIZE * 3);
printf("Job %d has been produced\n", tempo);
buf[index] = tempo;
break;
}
index++;
}

if (index == BSIZE)
printf("Buffer is full, nothing can be produced!!!\n");

printf("Buffer: ");
index = 0;
while (index < BSIZE)
printf("%d ", buf[index++]);
printf("\n");
// Critical Section End

flag[j] = false;
if (*state == 0)
break;

wait_time = myrand(PWT);
printf("Producer will wait for %d seconds\n\n", wait_time);

#ifdef _WIN32
Sleep(wait_time * 1000);
#else
usleep(wait_time * 1000000);
#endif
}

exit(0);
}

if (fork() == 0) // Consumer code


{
SHM1 = static_cast<bool*>(shmat(shmid1, nullptr, 0));
SHM2 = static_cast<int*>(shmat(shmid2, nullptr, 0));
SHM3 = static_cast<int*>(shmat(shmid3, nullptr, 0));

if (SHM1 == nullptr || SHM2 == nullptr || SHM3 == nullptr) {


perror("Consumer shmat error:");
exit(1);
}

bool* flag = SHM1;


int* turn = SHM2;
int* buf = SHM3;
int index = 0;
flag[i] = false;

while (*state == 1) {
flag[i] = true;
printf("Consumer is ready now.\n\n");
*turn = j;

while (flag[j] == true && *turn == j);

// Critical Section Begin


if (buf[0] != 0) {
printf("Job %d has been consumed\n", buf[0]);
buf[0] = 0;
index = 1;
while (index < BSIZE) {
buf[index - 1] = buf[index];
index++;
}
buf[index - 1] = 0;
} else
printf("Buffer is empty, nothing can be consumed!!!\n");

printf("Buffer: ");
index = 0;
while (index < BSIZE)
printf("%d ", buf[index++]);
printf("\n");
// Critical Section End

flag[i] = false;
if (*state == 0)
break;

wait_time = myrand(CWT);
printf("Consumer will sleep for %d seconds\n\n", wait_time);

#ifdef _WIN32
Sleep(wait_time * 1000);
#else
usleep(wait_time * 1000000);
#endif
}

exit(0);
}

// Parent process will now wait for RT seconds before causing the children to terminate
while (1) {
gettimeofday(&t, nullptr);
time_t t2 = t.tv_sec;
if (t2 - t1 > RT) {
*state = 0;
break;
}
}

// Waiting for both processes to exit


wait(nullptr);
wait(nullptr);

cleanup();
printf("The clock ran out.\n");

return 0;
}
C
// C program to implement Peterson’s Algorithm
// for producer-consumer problem.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <time.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdbool.h>
#include <sys/wait.h> // for wait()
#define _BSD_SOURCE
#include <sys/time.h>

#define BSIZE 8 // Buffer size


#define PWT 2 // Producer wait time limit
#define CWT 10 // Consumer wait time limit
#define RT 10 // Program run-time in seconds

int shmid1, shmid2, shmid3, shmid4;


key_t k1 = 5491, k2 = 5812, k3 = 4327, k4 = 3213;
bool* SHM1;
int* SHM2;
int* SHM3;

int myrand(int n) // Returns a random number between 1 and n


{
time_t t;
srand((unsigned)time(&t));
return (rand() % n + 1);
}

int main()
{
shmid1 = shmget(k1, sizeof(bool) * 2, IPC_CREAT | 0660); // flag
shmid2 = shmget(k2, sizeof(int) * 1, IPC_CREAT | 0660); // turn
shmid3 = shmget(k3, sizeof(int) * BSIZE, IPC_CREAT | 0660); // buffer
shmid4 = shmget(k4, sizeof(int) * 1, IPC_CREAT | 0660); // time stamp

if (shmid1 < 0 || shmid2 < 0 || shmid3 < 0 || shmid4 < 0) {
perror("Main shmget error: ");
exit(1);
}
SHM3 = (int*)shmat(shmid3, NULL, 0);
int ix = 0;
while (ix < BSIZE) // Initializing buffer
SHM3[ix++] = 0;

struct timeval t;
time_t t1, t2;
gettimeofday(&t, NULL);
t1 = t.tv_sec;

int* state = (int*)shmat(shmid4, NULL, 0);


*state = 1;
int wait_time;
int i = 0; // Consumer
int j = 1; // Producer

if (fork() == 0) // Producer code


{
SHM1 = (bool*)shmat(shmid1, NULL, 0);
SHM2 = (int*)shmat(shmid2, NULL, 0);
SHM3 = (int*)shmat(shmid3, NULL, 0);
if (SHM1 == (bool*)-1 || SHM2 == (int*)-1 || SHM3 == (int*)-1) {
perror("Producer shmat error: ");
exit(1);
}

bool* flag = SHM1;


int* turn = SHM2;
int* buf = SHM3;
int index = 0;

while (*state == 1) {
flag[j] = true;
printf("Producer is ready now.\n\n");
*turn = i;
while (flag[i] == true && *turn == i)
;

// Critical Section Begin


index = 0;
while (index < BSIZE) {
if (buf[index] == 0) {
int tempo = myrand(BSIZE * 3);
printf("Job %d has been produced\n", tempo);
buf[index] = tempo;
break;
}
index++;
}
if (index == BSIZE)
printf("Buffer is full, nothing can be produced!!!\n");
printf("Buffer: ");
index = 0;
while (index < BSIZE)
printf("%d ", buf[index++]);
printf("\n");
// Critical Section End

flag[j] = false;
if (*state == 0)
break;
wait_time = myrand(PWT);
printf(&quot;Producer will wait for %d seconds\n\n&quot;,
wait_time);
sleep(wait_time);
}
exit(0);
}

if (fork() == 0) // Consumer code


{
SHM1 = (bool*)shmat(shmid1, NULL, 0);
SHM2 = (int*)shmat(shmid2, NULL, 0);
SHM3 = (int*)shmat(shmid3, NULL, 0);
if (SHM1 == (bool*)-1 || SHM2 == (int*)-1 || SHM3 == (int*)-1) {
perror("Consumer shmat error:");
exit(1);
}

bool* flag = SHM1;


int* turn = SHM2;
int* buf = SHM3;
int index = 0;
flag[i] = false;
sleep(5);
while (*state == 1) {
flag[i] = true;
printf("Consumer is ready now.\n\n");
*turn = j;
while (flag[j] == true && *turn == j)
;

// Critical Section Begin


if (buf[0] != 0) {
printf("Job %d has been consumed\n", buf[0]);
buf[0] = 0;
index = 1;
while (index < BSIZE) // Shifting remaining jobs forward
{
buf[index - 1] = buf[index];
index++;
}
buf[index - 1] = 0;
} else
printf("Buffer is empty, nothing can be consumed!!!\n");
printf("Buffer: ");
index = 0;
while (index < BSIZE)
printf("%d ", buf[index++]);
printf("\n");
// Critical Section End

flag[i] = false;
if (*state == 0)
break;
wait_time = myrand(CWT);
printf("Consumer will sleep for %d seconds\n\n",
wait_time);
sleep(wait_time);
}
exit(0);
}
// Parent process will now wait for RT seconds before causing the children to terminate
while (1) {
gettimeofday(&t, NULL);
t2 = t.tv_sec;
if (t2 - t1 > RT) // Program will exit after RT seconds
{
*state = 0;
break;
}
}
// Waiting for both processes to exit
wait(NULL);
wait(NULL);
printf("The clock ran out.\n");
return 0;
}
C#
using System;
using System.Threading.Tasks;

class Program
{
const int BSIZE = 8;
const int PWT = 2;
const int CWT = 10;
const int RT = 10;

static bool[] SHM1 = new bool[2];


static int[] SHM2 = new int[1];
static int[] SHM3 = new int[BSIZE];
static Random rand = new Random();

static void InitializeBuffer(int[] buf, int size)


{
Array.Fill(buf, 0);
}

static int MyRand(int n) => rand.Next(1, n + 1);

static async Task Producer()


{
bool[] flag = SHM1;
int[] turn = SHM2;
int[] buf = SHM3;
int index = 0;

while (SHM2[0] == 1)
{
flag[1] = true;
Console.WriteLine("Producer is ready now.\n");

turn[0] = 0;
while (flag[0] && turn[0] == 0) ;

// Critical Section Begin


index = 0;
while (index < BSIZE)
{
if (buf[index] == 0)
{
int tempo = MyRand(BSIZE * 3);
Console.WriteLine($"Job {tempo} has been produced");
buf[index] = tempo;
break;
}
index++;
}

if (index == BSIZE)
Console.WriteLine("Buffer is full, nothing can be produced!!!\n");

Console.Write("Buffer: ");
index = 0;
while (index < BSIZE)
Console.Write($"{buf[index++]} ");
Console.WriteLine("\n");
// Critical Section End

flag[1] = false;

if (SHM2[0] == 0)
break;

int waitTime = MyRand(PWT);


Console.WriteLine($"Producer will wait for {waitTime} seconds\n");

await Task.Delay(waitTime * 1000);


}
}

static async Task Consumer()


{
bool[] flag = SHM1;
int[] turn = SHM2;
int[] buf = SHM3;
int index = 0;
flag[0] = false;

while (SHM2[0] == 1)
{
flag[0] = true;
Console.WriteLine("Consumer is ready now.\n");

turn[0] = 1;
while (flag[1] && turn[0] == 1) ;

// Critical Section Begin


if (buf[0] != 0)
{
Console.WriteLine($"Job {buf[0]} has been consumed");
buf[0] = 0;
index = 1;
while (index < BSIZE)
{
buf[index - 1] = buf[index];
index++;
}
buf[index - 1] = 0;
}
else
Console.WriteLine("Buffer is empty, nothing can be consumed!!!\n");

Console.Write("Buffer: ");
index = 0;
while (index < BSIZE)
Console.Write($"{buf[index++]} ");
Console.WriteLine("\n");
// Critical Section End

flag[0] = false;

if (SHM2[0] == 0)
break;

int waitTime = MyRand(CWT);


Console.WriteLine($"Consumer will sleep for {waitTime} seconds\n");

await Task.Delay(waitTime * 1000);


}
}

static async Task Main()


{
InitializeBuffer(SHM3, BSIZE);

DateTime startTime = DateTime.Now;

SHM2[0] = 1; // Initializing the state

var producerTask = Producer();


var consumerTask = Consumer();

// Parent process will now wait for RT seconds before stopping the producer and consumer
while ((DateTime.Now - startTime).TotalSeconds <= RT) ;

SHM2[0] = 0;

// Waiting for both tasks to finish


await Task.WhenAll(producerTask, consumerTask);

Console.WriteLine("The clock ran out.\n");


}
}
Output:
Producer is ready now.
Job 9 has been produced
Buffer: 9 0 0 0 0 0 0 0
Producer will wait for 1 seconds
Producer is ready now.
Job 8 has been produced
Buffer: 9 8 0 0 0 0 0 0
Producer will wait for 2 seconds
Producer is ready now.
Job 13 has been produced
Buffer: 9 8 13 0 0 0 0 0
Producer will wait for 1 seconds
Producer is ready now.
Job 23 has been produced
Buffer: 9 8 13 23 0 0 0 0
Producer will wait for 1 seconds
Consumer is ready now.
Job 9 has been consumed
Buffer: 8 13 23 0 0 0 0 0
Consumer will sleep for 9 seconds
Producer is ready now.
Job 15 has been produced
Buffer: 8 13 23 15 0 0 0 0
Producer will wait for 1 seconds
Producer is ready now.
Job 13 has been produced
Buffer: 8 13 23 15 13 0 0 0
Producer will wait for 1 seconds
Producer is ready now.
Job 11 has been produced
Buffer: 8 13 23 15 13 11 0 0
Producer will wait for 1 seconds
Producer is ready now.
Job 22 has been produced
Buffer: 8 13 23 15 13 11 22 0
Producer will wait for 2 seconds
Producer is ready now.
Job 23 has been produced
Buffer: 8 13 23 15 13 11 22 23
Producer will wait for 1 seconds
The clock ran out.
C++
// g++ -pthread /path/to/your/Solution.cpp -o your_program_name
// In environments where threading is not supported by default, you need to explicitly link against the pthread library.
#include <iostream>
#include <vector>
#include <chrono>
#include <cstdlib>
#include <ctime>
#include <thread>

const int BSIZE = 8;


const int PWT = 1000;
const int CWT = 4000;
const int RT = 30000;

// volatile keeps the spin loops below from being optimized away by the
// compiler (a fuller fix would use std::atomic)
volatile bool shmid1 = false;
volatile int shmid2 = 0;
std::vector<int> shmid3(BSIZE, 0);
int shmid4 = 0;

volatile int state = 1;

int myrand(int n) {
return rand() % n + 1;
}

void producer() {
while (state == 1) {
shmid1 = true;
std::cout << "Producer is ready now.\n";
std::this_thread::sleep_for(std::chrono::milliseconds(500));

shmid2 = 0;
while (shmid1 && shmid2 == 0) {}

// Critical Section Begin


int index = 0;
while (index < BSIZE) {
if (shmid3[index] == 0) {
const int tempo = myrand(BSIZE * 3);
std::cout << "Job " << tempo << " has been produced\n";
shmid3[index] = tempo;
break;
}
index++;
}
if (index == BSIZE) {
std::cout << "Buffer is full, nothing can be produced!!!\n";
}
std::cout << "Buffer: ";
for (int val : shmid3) {
std::cout << val << " ";
}
std::cout << "\n";
// Critical Section End
shmid1 = false;
if (state == 0) break;
const int wait_time = myrand(PWT);
std::cout << "Producer will wait for " << wait_time / 1000.0 << " seconds\n";
std::this_thread::sleep_for(std::chrono::milliseconds(wait_time));
}
}

void consumer() {
shmid1 = false;
std::this_thread::sleep_for(std::chrono::milliseconds(5000));
while (state == 1) {
shmid1 = true;
std::cout << "Consumer is ready now.\n";
std::this_thread::sleep_for(std::chrono::milliseconds(500));

shmid2 = 1;
while (shmid1 && shmid2 == 1) {}

// Critical Section Begin


if (shmid3[0] != 0) {
std::cout << "Job " << shmid3[0] << " has been consumed\n";
shmid3[0] = 0;
int index = 1;
while (index < BSIZE) {
shmid3[index - 1] = shmid3[index];
index++;
}
shmid3[index - 1] = 0;
} else {
std::cout << "Buffer is empty, nothing can be consumed!!!\n";
}
std::cout << "Buffer: ";
for (int val : shmid3) {
std::cout << val << " ";
}
std::cout << "\n";
// Critical Section End

shmid1 = false;
if (state == 0) break;
const int wait_time = myrand(CWT);
std::cout << "Consumer will sleep for " << wait_time / 1000.0 << " seconds\n";
std::this_thread::sleep_for(std::chrono::milliseconds(wait_time));
}
}

int main() {
srand(time(nullptr));

// Start producer and consumer in separate threads (simulated)


std::thread producer_thread(producer);
std::thread consumer_thread(consumer);

// Simulate program run for RT milliseconds


for (int elapsed_time = 0; elapsed_time < RT; elapsed_time += 100) {
std::this_thread::sleep_for(std::chrono::milliseconds(100));
}

// Set state to 0 to stop producer and consumer


state = 0;

// Join threads
producer_thread.join();
consumer_thread.join();

std::cout << "The clock ran out.\n";

return 0;
}
Java
import java.util.Random;

public class ProducerConsumer {


static final int BSIZE = 8; // Buffer size
static final int PWT = 2; // Producer wait time limit
static final int CWT = 10; // Consumer wait time limit
static final int RT = 10; // Program run-time in seconds
static volatile boolean state = true;
static volatile boolean[] flag = new boolean[2];
static volatile int turn = 1;
static volatile int[] buf = new int[BSIZE];

static int myrand(int n) {


Random rand = new Random();
return rand.nextInt(n) + 1;
}

public static void main(String[] args) {


Thread producer = new Thread(() -> {
int index;
while (state) {
flag[1] = true;
System.out.println("Producer is ready now.\n");

turn = 0;
while (flag[0] && turn == 0) ;

synchronized (buf) {
index = 0;
while (index < BSIZE) {
if (buf[index] == 0) {
int tempo = myrand(BSIZE * 3);
System.out.println("Job " + tempo + " has been produced");
buf[index] = tempo;
break;
}
index++;
}
if (index == BSIZE)
System.out.println("Buffer is full, nothing can be produced!!!\n");
System.out.print("Buffer: ");
for (int value : buf) {
System.out.print(value + " ");
}
System.out.println("\n");
}

flag[1] = false;
if (!state) break;
int wait_time = myrand(PWT);
System.out.println("Producer will wait for " + wait_time + " seconds\n");
try {
Thread.sleep(wait_time * 1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
});

Thread consumer = new Thread(() -> {


int index;
flag[0] = false;
try {
Thread.sleep(5000);
} catch (InterruptedException e) {
e.printStackTrace();
}
while (state) {
flag[0] = true;
System.out.println("Consumer is ready now.\n");

turn = 1;
while (flag[1] && turn == 1) ;

synchronized (buf) {
if (buf[0] != 0) {
System.out.println("Job " + buf[0] + " has been consumed");
buf[0] = 0;
index = 1;
while (index < BSIZE) {
buf[index - 1] = buf[index];
index++;
}
buf[index - 1] = 0;
} else
System.out.println("Buffer is empty, nothing can be consumed!!!\n");

System.out.print("Buffer: ");
for (int value : buf) {
System.out.print(value + " ");
}
System.out.println("\n");
}

flag[0] = false;
if (!state) break;
int wait_time = myrand(CWT);
System.out.println("Consumer will sleep for " + wait_time + " seconds\n");
try {
Thread.sleep(wait_time * 1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
});

producer.start();
consumer.start();

try {
Thread.sleep(RT * 1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
state = false;

try {
producer.join();
consumer.join();
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("The clock ran out.\n");
}
}
C#
using System;
using System.Threading;
using System.Collections.Generic;

public class ProducerConsumer


{
static readonly int BSIZE = 8; // Buffer size
static readonly int PWT = 2; // Producer wait time limit
static readonly int CWT = 10; // Consumer wait time limit
static readonly int RT = 10; // Program run-time in seconds
static volatile bool state = true;
static volatile bool[] flag = new bool[2];
static volatile int turn = 1;
static volatile int[] buf = new int[BSIZE];

static Random rand = new Random();

static int MyRand(int n)


{
return rand.Next(1, n + 1);
}

static void Main(string[] args)


{
Thread producerThread = new Thread(() =>
{
int index;
while (state)
{
flag[1] = true;
Console.WriteLine("Producer is ready now.\n");
turn = 0;
while (flag[0] && turn == 0) ;

lock (buf)
{
index = 0;
while (index < BSIZE)
{
if (buf[index] == 0)
{
int tempo = MyRand(BSIZE * 3);
Console.WriteLine($"Job {tempo} has been produced");
buf[index] = tempo;
break;
}
index++;
}
if (index == BSIZE)
Console.WriteLine("Buffer is full, nothing can be produced!!!\n");
Console.Write("Buffer: ");
foreach (int value in buf)
{
Console.Write($"{value} ");
}
Console.WriteLine("\n");
}

flag[1] = false;
if (!state) break;
int waitTime = MyRand(PWT);
Console.WriteLine($"Producer will wait for {waitTime} seconds\n");
Thread.Sleep(waitTime * 1000);
}
});

Thread consumerThread = new Thread(() =>


{
int index;
flag[0] = false;
Thread.Sleep(5000);

while (state)
{
flag[0] = true;
Console.WriteLine("Consumer is ready now.\n");
turn = 1;
while (flag[1] && turn == 1) ;

lock (buf)
{
if (buf[0] != 0)
{
Console.WriteLine($"Job {buf[0]} has been consumed");
buf[0] = 0;
index = 1;
while (index < BSIZE)
{
buf[index - 1] = buf[index];
index++;
}
buf[index - 1] = 0;
}
else
Console.WriteLine("Buffer is empty, nothing can be consumed!!!\n");

Console.Write("Buffer: ");
foreach (int value in buf)
{
Console.Write($"{value} ");
}
Console.WriteLine("\n");
}

flag[0] = false;
if (!state) break;
int waitTime = MyRand(CWT);
Console.WriteLine($"Consumer will sleep for {waitTime} seconds\n");
Thread.Sleep(waitTime * 1000);
}
});

producerThread.Start();
consumerThread.Start();

try
{
Thread.Sleep(RT * 1000);
}
catch (ThreadInterruptedException e)
{
Console.WriteLine(e.StackTrace);
}
state = false;

try
{
producerThread.Join();
consumerThread.Join();
}
catch (ThreadInterruptedException e)
{
Console.WriteLine(e.StackTrace);
}

Console.WriteLine("The clock ran out.\n");


}
}
Javascript
const BSIZE = 8; // Buffer size
const PWT = 1000; // Producer wait time limit in milliseconds
const CWT = 4000; // Consumer wait time limit in milliseconds
const RT = 30000; // Program run-time in milliseconds

let shmid1 = false;


let shmid2 = false;
let shmid3 = new Array(BSIZE).fill(0);
let shmid4 = 0;

let state = 1;

function myrand(n) {
return Math.floor(Math.random() * n) + 1;
}

async function producer() {
    while (state === 1) {
        shmid1 = true;
        console.log("Producer is ready now.");
        // Simulate some processing time
        await awaitTimeout(500);

        shmid2 = 0;
        // Yield while spinning; a bare busy-wait would block the
        // single-threaded event loop forever
        while (shmid1 && shmid2 === 0) { await awaitTimeout(10); }

        // Critical Section Begin
        let index = 0;
        while (index < BSIZE) {
            if (shmid3[index] === 0) {
                const tempo = myrand(BSIZE * 3);
                console.log(`Job ${tempo} has been produced`);
                shmid3[index] = tempo;
                break;
            }
            index++;
        }
        if (index === BSIZE) {
            console.log("Buffer is full, nothing can be produced!!!");
        }
        console.log("Buffer:", shmid3.join(" "));
        // Critical Section End

        shmid1 = false;
        if (state === 0) break;
        const wait_time = myrand(PWT);
        console.log(`Producer will wait for ${wait_time / 1000} seconds`);
        await awaitTimeout(wait_time);
    }
}

async function consumer() {
    shmid1 = false;
    await awaitTimeout(5000);
    while (state === 1) {
        shmid1 = true;
        console.log("Consumer is ready now.");
        // Simulate some processing time
        await awaitTimeout(500);

        shmid2 = 1;
        // Yield while spinning so the producer can make progress
        while (shmid1 && shmid2 === 1) { await awaitTimeout(10); }

        // Critical Section Begin
        if (shmid3[0] !== 0) {
            console.log(`Job ${shmid3[0]} has been consumed`);
            shmid3[0] = 0;
            let index = 1;
            while (index < BSIZE) {
                shmid3[index - 1] = shmid3[index];
                index++;
            }
            shmid3[index - 1] = 0;
        } else {
            console.log("Buffer is empty, nothing can be consumed!!!");
        }
        console.log("Buffer:", shmid3.join(" "));
        // Critical Section End
        shmid1 = false;
        if (state === 0) break;
        const wait_time = myrand(CWT);
        console.log(`Consumer will sleep for ${wait_time / 1000} seconds`);
        await awaitTimeout(wait_time);
    }
}

async function awaitTimeout(ms) {


return new Promise((resolve) => {
setTimeout(resolve, ms);
});
}

(async () => {
producer();
consumer();

await awaitTimeout(RT);
state = 0;

console.log("The clock ran out.");


})();
Python3
import multiprocessing
import random
import time

BSIZE = 8 # Buffer size


PWT = 1 # Producer wait time limit
CWT = 4 # Consumer wait time limit
RT = 30 # Program run-time in seconds

shmid1 = multiprocessing.Value('i', 0)
shmid2 = multiprocessing.Value('i', 0)
shmid3 = multiprocessing.Array('i', [0] * BSIZE)
shmid4 = multiprocessing.Value('i', 0)

state = multiprocessing.Value('i', 1)

def myrand(n):
return random.randint(1, n)

def producer():
global state
while state.value == 1:
shmid1.value = True
print("Producer is ready now.")
time.sleep(0.5) # Simulate some processing time

shmid2.value = 0
while shmid1.value == True and shmid2.value == 0:
pass

with shmid1.get_lock(), shmid3.get_lock():


# Critical Section Begin
index = 0
while index < BSIZE:
if shmid3[index] == 0:
tempo = myrand(BSIZE * 3)
print(f"Job {tempo} has been produced")
shmid3[index] = tempo
break
index += 1
if index == BSIZE:
print("Buffer is full, nothing can be produced!!!")
print("Buffer:", ' '.join(map(str, shmid3)))
# Critical Section End

shmid1.value = False
if state.value == 0:
break
wait_time = myrand(PWT)
print(f"Producer will wait for {wait_time} seconds")
time.sleep(wait_time)

def consumer():
global state
shmid1.value = False
time.sleep(5)
while state.value == 1:
shmid1.value = True
print("Consumer is ready now.")
time.sleep(0.5) # Simulate some processing time

shmid2.value = 1
while shmid1.value == True and shmid2.value == 1:
pass

with shmid1.get_lock(), shmid3.get_lock():


# Critical Section Begin
if shmid3[0] != 0:
print(f"Job {shmid3[0]} has been consumed")
shmid3[0] = 0
index = 1
while index < BSIZE:
shmid3[index - 1] = shmid3[index]
index += 1
shmid3[index - 1] = 0
else:
print("Buffer is empty, nothing can be consumed!!!")
print("Buffer:", ' '.join(map(str, shmid3)))
# Critical Section End

shmid1.value = False
if state.value == 0:
break
wait_time = myrand(CWT)
print(f"Consumer will sleep for {wait_time} seconds")
time.sleep(wait_time)

if __name__ == "__main__":
producer_process = multiprocessing.Process(target=producer)
consumer_process = multiprocessing.Process(target=consumer)

producer_process.start()
consumer_process.start()

time.sleep(RT)
state.value = 0

producer_process.join()
consumer_process.join()

print("The clock ran out.")


Output:
Producer is ready now.
Job 13 has been produced
Buffer: 13 0 0 0 0 0 0 0
Producer will wait for 1 seconds
Producer is ready now.
Job 19 has been produced
Buffer: 13 19 0 0 0 0 0 0
Producer will wait for 1 seconds
Producer is ready now.
Job 24 has been produced
Buffer: 13 19 24 0 0 0 0 0
Producer will wait for 1 seconds
Producer is ready now.
Job 14 has been produced
Buffer: 13 19 24 14 0 0 0 0
Producer will wait for 1 seconds
Producer is ready now.
Job 22 has been produced
Buffer: 13 19 24 14 22 0 0 0
Producer will wait for 2 seconds
Consumer is ready now.
Job 13 has been consumed
Buffer: 19 24 14 22 0 0 0 0
Consumer will sleep for 4 seconds
Producer is ready now.
Job 21 has been produced
Buffer: 19 24 14 22 21 0 0 0
Producer will wait for 2 seconds
Producer is ready now.
Job 24 has been produced
Buffer: 19 24 14 22 21 24 0 0
Producer will wait for 2 seconds
Consumer is ready now.
Job 19 has been consumed
Buffer: 24 14 22 21 24 0 0 0
Consumer will sleep for 7 seconds
The clock ran out.

Advantages of Peterson’s Solution


1. With Peterson’s solution, multiple processes can access and share a resource without
causing any resource conflicts.
2. Every process gets a chance to execute (neither of the two processes starves).
3. It uses straightforward logic and is easy to put into practice.
4. Since it is entirely software based and operates in user mode, it can be used on any
hardware.
5. It eliminates the chance of a deadlock.
Disadvantages of Peterson’s Solution
1. A process may spin for a long time waiting for the other process to exit the critical
region. We call this busy waiting, and it wastes CPU cycles.
2. On systems with multiple CPUs and reordered memory operations, the algorithm might not
function correctly without memory barriers.
3. Peterson’s solution is limited to two processes.

Q4. Discuss classical Inter process


communication problems and explain
producer consumer and reader’s writer’s
problem.
Inter-Process Communication Problems
In this section we describe a few classic inter process communication problems. If you
have the time and the inclination you might like to try and write a program which
solves these problems.

The Producer-Consumer Problem

Assume there is a producer (which produces goods) and a consumer (which consumes
goods). The producer produces goods and places them in a fixed-size buffer. The
consumer takes the goods from the buffer.

The buffer has a finite capacity so that if it is full, the producer must stop producing.

Similarly, if the buffer is empty, the consumer must stop consuming.

This problem is also referred to as the bounded buffer problem.

The type of situations we must cater for are when the buffer is full, so the producer
cannot place new items into it. Another potential problem is when the buffer is empty,
so the consumer cannot take from the buffer.

The Dining Philosophers Problem

This problem was posed by (Dijkstra, 1965).

Five philosophers sit around a circular table. In front of each of them is a bowl of
food. Between each pair of adjacent bowls there is a fork; that is, there are five forks in all.

Philosophers spend their time either eating or thinking. When they are thinking they
are not using a fork.

When they want to eat they need to use two forks. They must pick up one of the forks
to their right or left. Once they have acquired one fork they must acquire the other
one. They may acquire the forks in any order.
Once a philosopher has two forks they can eat. When finished eating they return both
forks to the table.

The question is: can a program be written, for each philosopher, that never gets stuck,
that is, in which no philosopher waits for a fork forever?
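The scenario above can be sketched in runnable form. The following is a minimal illustration using Python's threading module; it avoids deadlock with the resource-ordering remedy (always acquire the lower-numbered fork first), which is one of several known fixes rather than part of the original problem statement, and names such as `meals_eaten` are illustrative.

```python
import threading

N = 5        # philosophers (and forks)
MEALS = 3    # meals each philosopher must eat
forks = [threading.Lock() for _ in range(N)]
meals_eaten = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    # Always pick up the lower-numbered fork first. This global ordering
    # breaks the circular-wait condition, so deadlock is impossible.
    first, second = min(left, right), max(left, right)
    for _ in range(MEALS):
        with forks[first]:
            with forks[second]:
                meals_eaten[i] += 1    # eating with both forks held

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals_eaten)  # every philosopher finished all MEALS meals
```

Because the fork ordering rules out a circular wait, the program always terminates with every philosopher having eaten.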

The Readers Writers Problem

This problem was devised by (Courtois et al., 1971). It models a number of processes
requiring access to a database. Any number of processes may read the database but
only one can write to the database.

The problem is to write a program that ensures this happens.

The Sleeping Barber Problem

A barber shop consists of a waiting room with n chairs. There is another room that
contains the barber's chair. The following situations can happen.

· If there are no customers the barber goes to sleep.


· If a customer enters and the barber is asleep (indicating there are no other customers
waiting) he wakes the barber and has a haircut.
· If a customer enters and the barber is busy and there are spare chairs in the waiting
room, the customer sits in one of the chairs.
· If a customer enters and all the chairs in the waiting room are occupied, the customer
leaves.

The problem is to program the barber and the customers without getting into race
conditions.
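The four situations above can be sketched with counting semaphores. The following is a minimal Python illustration, under the simplifying assumption that there are enough chairs for every arriving customer (so the "customer leaves" branch is written but not exercised here); the variable names are illustrative.

```python
import threading

N_CUSTOMERS = 5
CHAIRS = 5                            # enough chairs, so no one is turned away here
customers = threading.Semaphore(0)    # barber "sleeps" on this while it is 0
barber_done = threading.Semaphore(0)  # customer waits on this for the haircut
mutex = threading.Lock()              # protects the waiting count
waiting = 0
haircuts = 0

def barber():
    global waiting, haircuts
    for _ in range(N_CUSTOMERS):
        customers.acquire()           # sleep until a customer arrives
        with mutex:
            waiting -= 1              # take a customer out of the waiting room
        haircuts += 1                 # cut hair
        barber_done.release()         # tell a waiting customer we are done

def customer():
    global waiting
    with mutex:
        if waiting < CHAIRS:
            waiting += 1
            seated = True
        else:
            seated = False            # all chairs occupied: the customer leaves
    if seated:
        customers.release()           # wake the barber if he is asleep
        barber_done.acquire()         # wait for the haircut to finish

b = threading.Thread(target=barber)
b.start()
cs = [threading.Thread(target=customer) for _ in range(N_CUSTOMERS)]
for c in cs:
    c.start()
for c in cs:
    c.join()
b.join()
print(haircuts)
```

The semaphores remove the race conditions: a customer can never wake a barber who is not asleep, and the barber can never serve a customer who is not waiting.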

OR
Reader Writer Problem in OS

PrepBytes | March 15, 2023

Last Updated on December 11, 2023 by Ankit Kochar


The Reader-Writer Problem is a classic synchronization issue in operating systems
(OS) and concurrent programming. It revolves around the challenge of managing
shared resources, specifically a data structure or a section of code, that is accessed by
multiple threads. The problem arises when balancing the need for simultaneous access
by multiple readers against exclusive access for a single writer to ensure data
consistency and integrity. Various solutions such as locks, semaphores, and other
synchronization mechanisms have been proposed to tackle this issue efficiently.

How Is the Reader-Writer Problem Handled by the OS?


To handle the problem, it must be ensured that concurrent processes cause no form of data
inconsistency in the operating system. The reader-writer problem in OS can be stated as
follows:

There are multiple processes, each of which is either a reader or a writer, sharing a
common resource, say a file or a database. A problem arises when two processes try to
access the resource at the same instant. Although it does not matter how many readers
access it simultaneously, it must be kept in mind that only one writer can write to it at
a time.
Several algorithms have been designed to solve this problem; below, we use one of them to
solve the reader-writer problem in OS.

Various Cases of Reader-Writer Problem


There are certain cases to consider in order to understand the reader-writer problem in
OS and how it leads to the data inconsistency that must be avoided.

Case One
Two processes cannot be allowed to write to the shared data in parallel; each must wait
for its turn to write.

Case Two
If one process is writing to the data while another is reading it, they cannot both be
allowed access to the shared resource, because the reader might read an incomplete or
partially updated value.

Case Three
The similar scenario, where one process is reading from the shared resource while another
is writing to it, also cannot be allowed, because the writer may update data that the
reader has not yet seen. The solution is to let the writer complete successfully before
granting access to readers.
Case Four
If both processes are only reading the data, they may share the resource simultaneously.
This case is free from any such anomaly, because reading does not modify the pre-existing
data.

The Solution to the Problem


To solve the problem, we maintain three variables: a mutex m, a semaphore w, and a counter
readCount.

The mutex makes a process acquire a lock before updating readCount and release it after
the update is done. The writer waits on the semaphore w until it is its turn to write, and
signals it afterwards so that other processes can proceed.

The idea is to hold the w semaphore from the moment the first reader enters the critical
section until the last reader exits it, so that no writer can enter in between and cause
data inconsistency. The semaphore thus prevents writers from accessing the resource while
one or more readers are accessing it.

Code:

The given code below is the code for the writer’s side.

while (TRUE) {

// Wait on the "w" semaphore to acquire access to the resource

wait(w);

// Perform the necessary write operation(s) on the resource

// ...

// Signal the "w" semaphore to allow other writer processes to access the resource

signal(w);
}

Explanation:
The above code implements a simple solution for the writer process where the writer
waits for the "w" semaphore to become available and then performs the write
operation on the resource. The writer then signals the "w" semaphore to allow other
writer processes to access the resource. Note that the code is in an infinite loop, so the
writer process will continuously wait for the "w" semaphore to become available and
then perform the write operation.

Given below is the code for readers side:-

while (TRUE) { // Loop indefinitely

    // Acquire lock
    wait(m);        // Wait for the mutex semaphore to be available
    readCount++;    // Increment the number of processes doing the read operation
    if (readCount == 1) {
        wait(w);    // If this is the first reader, wait for the writer semaphore
    }
    // Release lock
    signal(m);      // Allow other processes to update readCount

    /* Perform the reading operation */
    // Here we assume that the necessary code to read from the resource has been added

    // Acquire lock
    wait(m);        // Wait for the mutex semaphore to be available
    readCount--;    // Decrement the number of processes doing the read operation
    if (readCount == 0) {
        signal(w);  // If this is the last reader, allow writer processes to enter
    }
    // Release lock
    signal(m);      // Allow other processes to update readCount
}

Explanation:
The code uses three variables: "mutex" to ensure mutual exclusion while updating the
"readCount" variable, "w" semaphore to ensure that no writer can access the critical
section when a reader is accessing it, and "readCount" to keep track of the number of
processes performing the read operation. In the code, each process first acquires the
"mutex" lock before updating the "readCount" variable. If this is the first reader
process to access the resource, it waits for the "w" semaphore to become available,
which means that no writer process is currently accessing the resource.

After performing the read operation, the process again acquires the "mutex" lock and
decrements the "readCount" variable. If this is the last reader process accessing the
resource, it signals the "w" semaphore to allow writer processes to access the critical
section.
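The writer and reader pseudocode above can be combined into a small runnable sketch. The following is an illustrative Python translation using `threading.Semaphore` for `m` and `w`; the shared resource being a simple counter is an assumption made just for demonstration.

```python
import threading

m = threading.Semaphore(1)   # the "mutex": protects readCount
w = threading.Semaphore(1)   # the "w" semaphore: held by a writer, or by the reader group
readCount = 0
shared = {"value": 0}        # the shared resource (a counter, for demonstration)
observed = []

def reader(n_reads):
    global readCount
    for _ in range(n_reads):
        m.acquire()
        readCount += 1
        if readCount == 1:
            w.acquire()          # first reader locks out writers
        m.release()

        observed.append(shared["value"])   # perform the reading operation

        m.acquire()
        readCount -= 1
        if readCount == 0:
            w.release()          # last reader lets writers back in
        m.release()

def writer(n_writes):
    for _ in range(n_writes):
        w.acquire()              # exclusive access to the resource
        shared["value"] += 1     # perform the write operation
        w.release()

threads = [threading.Thread(target=reader, args=(100,)),
           threading.Thread(target=reader, args=(100,)),
           threading.Thread(target=writer, args=(100,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared["value"])   # 100: all writes completed, none lost
```

Every value a reader observes lies between 0 and 100, because readers never see a write in progress; note that this classic scheme favours readers and can starve the writer under a continuous stream of readers.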

Conclusion
Addressing the Reader-Writer Problem is crucial in ensuring efficient and safe
concurrent access to shared resources in operating systems. Understanding the
nuances and challenges posed by multiple readers and writers accessing the same data
concurrently is essential for designing robust and scalable systems. Employing
synchronization techniques and algorithms tailored to this problem can significantly
enhance system performance while preventing data corruption and inconsistencies.

OR

Producer Consumer Problem in OS


By Rohan Kumar singh

Last updated: 29 Apr 2022

Overview
The Producer-Consumer problem is a classical synchronization problem in the operating system.
Synchronization problems arise when more than one process competes for limited resources in
the system; if one resource is shared between more than one process at the same time, data
inconsistency can result. In the producer-consumer problem, the producer produces an item
and the consumer consumes the item produced by the producer.

What is Producer Consumer Problem?


Before knowing what is Producer-Consumer Problem we have to know what are Producer and
Consumer.

 In an operating system, a Producer is a process that is able to produce data/items.


 A Consumer is a process that is able to consume the data/items produced by the Producer.
 Both Producer and Consumer share a common memory buffer. This buffer is a space of a
certain size in the system's memory that is used for storage. The producer writes data into
the buffer and the consumer reads data from the buffer.
So, what are the Producer-Consumer Problems?

1. Producer Process should not produce any data when the shared buffer is full.
2. Consumer Process should not consume any data when the shared buffer is empty.
3. Access to the shared buffer must be mutually exclusive, i.e., at any time only one process
should be able to access the shared buffer and make changes to it.

For consistent data synchronization between Producer and Consumer, the above problem should
be resolved.

Solution For Producer Consumer Problem


To solve the Producer-Consumer problem, three semaphore variables are used:

Semaphores are variables used to indicate the number of units of a resource available in the
system at a particular time. Semaphore variables are used to achieve process synchronization.

Full

The full variable is used to track the space filled in the buffer by the Producer process. It is
initialized to 0 initially as initially no space is filled by the Producer process.

Empty

The Empty variable is used to track the empty space in the buffer. The Empty variable is initially
initialized to the BUFFER-SIZE as initially, the whole buffer is empty.

Mutex

Mutex is used to achieve mutual exclusion. mutex ensures that at any particular time only the
producer or the consumer is accessing the buffer.

Mutex - mutex is a binary semaphore variable that has a value of 0 or 1.


We will use the Signal() and wait() operation in the above-mentioned semaphores to arrive at a
solution to the Producer-Consumer problem.

Signal() - The signal operation increases the semaphore value by 1.
Wait() - The wait operation decreases the semaphore value by 1.

Let's look at the code of Producer-Consumer Process

The code for Producer Process is as follows :

void Producer(){
while(true){
// producer produces an item/data
wait(Empty);
wait(mutex);
add();
signal(mutex);
signal(Full);
}
}

Let's understand the above Producer process code :

 wait(Empty) - Before producing items, the producer process checks for the empty space in the
buffer. If the buffer is full producer process waits for the consumer process to consume items
from the buffer. so, the producer process executes wait(Empty) before producing any item.
 wait(mutex) - Only one process can access the buffer at a time. So, once the producer process
enters into the critical section of the code it decreases the value of mutex by executing
wait(mutex) so that no other process can access the buffer at the same time.
 add() - This method adds the item to the buffer produced by the Producer process. once the
Producer process reaches add function in the code, it is guaranteed that no other process will be
able to access the shared buffer concurrently which helps in data consistency.
 signal(mutex) - Now, once the Producer process added the item into the buffer it increases the
mutex value by 1 so that other processes which were in a busy-waiting state can access the
critical section.
 signal(Full) - when the producer process adds an item into the buffer spaces is filled by one item
so it increases the Full semaphore so that it indicates the filled spaces in the buffer correctly.

The code for the Consumer Process is as follows :

void Consumer() {
while(true){
// consumer consumes an item
wait(Full);
wait(mutex);
consume();
signal(mutex);
signal(Empty);
}
}
Let's understand the above Consumer process code :

 wait(Full) - Before the consumer process starts consuming any item from the buffer, it
checks whether the buffer is empty or has some item in it. Executing wait(Full) decreases
the value of the Full variable by one, reserving one filled slot for this consumer (its
consumption will create one more empty space in the buffer). If the Full variable is
already zero, i.e., the buffer is empty, the consumer process cannot consume any item from
the buffer and goes into the busy-waiting state.
 wait(mutex) - It does the same as explained in the producer process. It decreases the mutex by
1 and restricts another process to enter the critical section until the consumer process increases
the value of mutex by 1.
 consume() - This function consumes an item from the buffer. when code reaches the consuming
() function it will not allow any other process to access the critical section which maintains the
data consistency.
 signal(mutex) - After consuming the item it increases the mutex value by 1 so that other
processes which are in a busy-waiting state can access the critical section now.
 signal(Empty) - when a consumer process consumes an item it increases the value of the Empty
variable indicating that the empty space in the buffer is increased by 1.
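The wait/signal steps explained above can be seen working in a short runnable sketch. The following is an illustrative Python version using `threading.Semaphore` for the Full and Empty semaphores and a `Lock` for the mutex; the buffer size, item count, and names like `add`/`consume` (realized here as `append`/`popleft`) are assumptions made for demonstration.

```python
import threading
from collections import deque

BUFFER_SIZE = 4
N_ITEMS = 20

empty = threading.Semaphore(BUFFER_SIZE)  # counts empty slots; starts at BUFFER_SIZE
full = threading.Semaphore(0)             # counts filled slots; starts at 0
mutex = threading.Lock()                  # mutual exclusion on the buffer itself
buffer = deque()
consumed = []

def producer():
    for item in range(N_ITEMS):
        empty.acquire()                       # wait(Empty): block while the buffer is full
        with mutex:                           # wait(mutex) ... signal(mutex)
            buffer.append(item)               # add(): put the item into the buffer
        full.release()                        # signal(Full): one more filled slot

def consumer():
    for _ in range(N_ITEMS):
        full.acquire()                        # wait(Full): block while the buffer is empty
        with mutex:
            consumed.append(buffer.popleft()) # consume(): take the oldest item
        empty.release()                       # signal(Empty): one more empty slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(N_ITEMS)))  # True: items arrive in FIFO order, none lost
```

The producer can never overrun the 4-slot buffer and the consumer can never take from an empty one, because each blocks on its semaphore instead of touching the buffer.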

Why Can a Mutex Solve the Producer-Consumer Problem?


A mutex helps solve the producer-consumer problem because it enforces mutual exclusion: it
prevents more than one process from entering the critical section at a time. A mutex is
binary, taking only the values 0 and 1, so whenever a process tries to enter the critical
section, it first checks the mutex value by using the wait operation.

wait(mutex);

wait(mutex) decreases the value of mutex by 1. so, suppose a process P1 tries to enter the critical
section when mutex value is 1. P1 executes wait(mutex) and decreases the value of mutex. Now,
the value of mutex becomes 0 when P1 enters the critical section of the code.

Now, suppose process P2 tries to enter the critical section. It will again try to decrease
the value of mutex, but the mutex value is already 0, so wait(mutex) blocks and P2 keeps
waiting for P1 to come out of the critical section.

Now, suppose P1 comes out of the critical section by executing signal(mutex).

signal(mutex)

signal(mutex) increases the value of mutex by 1, so the mutex value becomes 1 again. Now
process P2, which was in a busy-waiting state, can enter the critical section by executing
wait(mutex).

So, mutex helps in the mutual exclusion of the processes.


In both the producer process code and the consumer process code above, the wait and signal
operations on mutex provide mutual exclusion, and together with the Full and Empty
semaphores they solve the producer-consumer problem.


Conclusion
 Producer Process produces data item and consumer process consumes data item.
 Both producer and consumer processes share a common memory buffer.
 Producer should not produce any item if the buffer is full.
 Consumer should not consume any item if the buffer is empty.
 Not more than one process should access the buffer at a time i.e mutual exclusion should be
there.
 Full, Empty and mutex semaphore help to solve Producer-consumer problem.
 The Full semaphore tracks the number of filled slots in the buffer (filled by the producer).
 The Empty semaphore tracks the number of empty slots in the buffer.
 The mutex ensures mutual exclusion.
OR
Producer Consumer Problem in C


Concurrency is an important topic in systems programming, since understanding it is
essential to understanding how operating systems work. Among the several challenges faced
by practitioners working with concurrent systems, a major synchronization issue is the
producer-consumer problem. In this article, we will discuss this problem and look at
possible solutions based on C programming.
What is the Producer-Consumer Problem?
The producer-consumer problem is an example of a multi-process
synchronization problem. The problem describes two processes, the producer and the
consumer that share a common fixed-size buffer and use it as a queue.
 The producer’s job is to generate data, put it into the buffer, and start again.
 At the same time, the consumer is consuming the data (i.e., removing it from the
buffer), one piece at a time.
What is the Actual Problem?
Given the common fixed-size buffer, the task is to make sure that the producer can't add
data into the buffer when it is full and the consumer can't remove data from an empty
buffer. The producer and the consumer should also not access the memory buffer at
the same time.
Producer Consumer Problem

Solution of Producer-Consumer Problem


The producer should either go to sleep or discard data if the buffer is full. The next time the
consumer removes an item from the buffer, it notifies the producer, who starts to fill the
buffer again. In the same manner, the consumer can go to sleep if it finds the buffer to be
empty. The next time the producer transfers data into the buffer, it wakes up the sleeping
consumer.

Note: An inadequate solution could result in a deadlock where both processes are waiting
to be awakened.
Approach: The idea is to use the concept of parallel programming and Critical Section to
implement the Producer-Consumer problem in C language using OpenMP.
Below is the implementation of the above approach:
C
// C program for the above approach
#include <stdio.h>
#include <stdlib.h>

// Initialize a mutex to 1
int mutex = 1;

// Number of full slots as 0
int full = 0;

// Number of empty slots as size of buffer
int empty = 10, x = 0;

// Function to produce an item and add it to the buffer
void producer()
{
    // Decrease mutex value by 1
    --mutex;

    // Increase the number of full slots by 1
    ++full;

    // Decrease the number of empty slots by 1
    --empty;

    // Item produced
    x++;
    printf("\nProducer produces item %d", x);

    // Increase mutex value by 1
    ++mutex;
}

// Function to consume an item and remove it from the buffer
void consumer()
{
    // Decrease mutex value by 1
    --mutex;

    // Decrease the number of full slots by 1
    --full;

    // Increase the number of empty slots by 1
    ++empty;
    printf("\nConsumer consumes item %d", x);
    x--;

    // Increase mutex value by 1
    ++mutex;
}

// Driver Code
int main()
{
    int n;
    printf("\n1. Press 1 for Producer"
           "\n2. Press 2 for Consumer"
           "\n3. Press 3 for Exit");

    // Using '#pragma omp parallel for' can give wrong values due to
    // synchronization issues. 'critical' specifies that the code is
    // executed by only one thread at a time, i.e., only one thread
    // enters the critical section at a given time. (Note: the pragma
    // only takes effect inside an OpenMP parallel region.)
    #pragma omp critical

    while (1) {
        printf("\nEnter your choice: ");
        scanf("%d", &n);

        // Switch Cases
        switch (n) {
        case 1:
            // If mutex is 1 and empty is non-zero,
            // then it is possible to produce
            if ((mutex == 1) && (empty != 0)) {
                producer();
            }
            // Otherwise, print buffer is full
            else {
                printf("Buffer is full!");
            }
            break;

        case 2:
            // If mutex is 1 and full is non-zero,
            // then it is possible to consume
            if ((mutex == 1) && (full != 0)) {
                consumer();
            }
            // Otherwise, print buffer is empty
            else {
                printf("Buffer is empty!");
            }
            break;

        // Exit Condition
        case 3:
            exit(0);
        }
    }
}
Problem for Practice


Question 1: Processes P1 and P2 have a producer-consumer relationship, communicating
by the use of a set of shared buffers.
P1: repeat
Obtain an empty buffer
Fill it
Return a full buffer
forever
P2: repeat
Obtain a full buffer
Empty it
Return an empty buffer
forever
Increasing the number of buffers is likely to do which of the following? [ISRO CS 2018]
I. Increase the rate at which requests are satisfied (throughput).
II. Decrease the likelihood of deadlock .
III. Increase the ease of achieving a correct implementation.
(A) III only
(B) II only
(C) I only
(D) II and III only
Solution: Increasing the number of buffers does not affect the likelihood of deadlock and does
not affect the ease of achieving a correct implementation. It can, however, increase the rate
at which requests are satisfied (throughput): the larger the buffer, the larger the throughput.
Therefore the only correct statement is I. Hence option (C) is correct.
OR
The producer-consumer problem and the reader-writer problem are both related to inter-process
communication in concurrent programming.

In the producer-consumer problem, there are two types of processes: producers, which produce
data, and consumers, which consume the data. The challenge is to ensure that the producers do
not produce data if the buffer is full, and that the consumers do not consume data if the buffer is
empty. This requires synchronization and coordination between the producers and consumers.

On the other hand, the reader-writer problem involves multiple processes accessing a shared
resource, where any number of readers may read it concurrently but a writer requires exclusive
access.

Answer by Cristiano Cavo, Polytechnic University of Turin.

Originally Answered: What is the difference between a producer-consumer and reader-writer problem?

The two problems may seem similar, but they are not. Both problems are about
resource sharing.

The first (the producer-consumer problem, or "PCP") involves the sharing of
many resources between a single producer and a single consumer (this condition is not
mandatory), which use a queue (circular buffer). The producer fills the queue at the tail and
the consumer empties it from the head. The inter-process communication is made by two
semaphores: "emptyS" and "fullS", in which "S" stands for "semaphore".
Key concepts for the PCP:
- one Producer;
- one Consumer;
- many resources stored in a common queue.

I dare to say that the other problem (the writer-reader problem, or "WRP") can be
considered the opposite problem. In fact, there is only one resource (which I call RES) shared by
one or more writers and by one or more readers. There are some small but fundamental
rules:
- two or more readers can access RES concurrently (not in mutual exclusion) between
them, but a reader can never access RES together with a writer (there is mutual
exclusion between readers and writers);
- two or more writers cannot access RES concurrently. Only one at a time can
access RES, so there is a relation of total mutual exclusion whenever a writer uses
RES.
There are two ways to deal with the WRP:
- Way One. Give precedence to the writers. This could put the readers in starvation (but not
the system in deadlock!).
- Way Two. Give precedence to the readers. Vice versa, this could put the writers in
starvation.

However, there exist solutions more complex and elegant than the ones I proposed, but
historically the WRP and PCP are these.

OR
Producer-Consumer problem
The Producer-Consumer problem is a classical multi-process synchronization problem,
that is we are trying to achieve synchronization between more than one process.

There is one producer in the producer-consumer problem; the producer is producing some
items, whereas there is one consumer that is consuming the items produced by the
producer. Both the producer and the consumer share the same fixed-size memory buffer.

The task of the Producer is to produce the item, put it into the memory buffer, and
again start producing items. Whereas the task of the Consumer is to consume the item
from the memory buffer.

Let's understand the problem:

Below are a few points that describe the problems that occur in producer-consumer:

o The producer should produce data only when the buffer is not full. In case it is found
that the buffer is full, the producer is not allowed to store any data into the memory
buffer.
o Data can only be consumed by the consumer if and only if the memory buffer is not
empty. In case it is found that the buffer is empty, the consumer is not allowed to use
any data from the memory buffer.
o The producer and the consumer should not access the memory buffer at the same
time.

Let's see the code for the above problem:

Producer Code (reconstructed from the walkthrough below; the original figure is missing):

1. void producer(void)
2. {
3.   while(1)
4.   {
5.     Produce_item(itemP);
6.     while(count == n);        // busy-wait while the buffer is full
7.     Buffer[in] = itemP;
8.     in = (in + 1) mod n;
9.     count = count + 1;
10. }
11. }

Consumer Code (reconstructed likewise):

1. void consumer(void)
2. {
3.   while(1)
4.   {
5.     while(count == 0);        // busy-wait while the buffer is empty
6.     itemC = Buffer[out];
7.     out = (out + 1) mod n;
8.     count = count - 1;
9.   }
10. }

Let's understand the above producer and consumer code.

Before starting an explanation of the code, first understand a few terms used in the
above code:

1. "in" used in the producer code represents the next empty buffer slot
2. "out" used in the consumer code represents the first filled buffer slot
3. count keeps the number of elements in the buffer
4. count = count + 1 (and count = count - 1) is further divided into 3 lines of code,
represented in the block in both the producer and consumer code.

If we talk about Producer code first:

--Rp is a register which keeps the value of m[count]

--Rp is incremented (As element has been added to buffer)

--an Incremented value of Rp is stored back to m[count]

Similarly, if we talk about Consumer code next:

--Rc is a register which keeps the value of m[count]

--Rc is decremented (As element has been removed out of buffer)

--the decremented value of Rc is stored back to m[count].

[Figure: Buffer]

As we can see from Fig: Buffer has total 8 spaces out of which the first 5 are filled, in =
5(pointing next empty position) and out = 0(pointing first filled position).
Let's start with the producer, which wants to produce an element " F ". According to
the code, it will enter the producer() function; while(1) will always be true; itemP = F will
be attempted to be inserted into the buffer, but before that while(count == n); will evaluate to False.


Note: The semicolon after the while loop will not let the code go ahead if the condition turns
out to be True (i.e., infinite loop / buffer is full)

Buffer[in] = itemP → Buffer[5] = F. ( F is inserted now)

in = (in + 1) mod n → (5 + 1)mod 8→ 6, therefore in = 6; (next empty buffer)

After insertion of F, Buffer looks like this


Where out = 0, but in = 6

Since count = count + 1; is divided into three parts:


Load Rp, m[count] → will copy count value which is 5 to register Rp.

Increment Rp → will increment Rp to 6.


Suppose just after the increment and before the execution of the third line (store m[count],
Rp), a context switch occurs and the code jumps to the consumer code...

Consumer Code:

Now starting with the consumer, which wants to consume the first element " A ". According to
the code, it will enter the consumer() function; while(1) will always be true, and while(count
== 0); will evaluate to False (since count is still 5, which is not equal to 0).

Note: The semicolon after the while loop will not let the code go ahead if the condition turns
out to be True (i.e., infinite loop / no element in buffer)

itemC = Buffer[out]→ itemC = A ( since out is 0)


out = (out + 1) mod n → (0 + 1)mod 8→ 1, therefore out = 1( first filled position)

A is removed now

After removal of A, Buffer look like this

Where out = 1, and in = 6

Since count = count - 1; is divided into three parts:

Load Rc, m[count] → will copy count value which is 5 to register Rc.

Decrement Rc → will decrement Rc to 4.


store m[count], Rc → count = 4.

Now the current value of count is 4

Suppose after this a context switch occurs back to the leftover part of the producer code...

Since the context switch in the producer code occurred after the increment and before the
execution of the third line (store m[count], Rp),

we resume from there, with Rp holding the incremented value 6.

Hence store m[count], Rp → count = 6


Now the current value of count is 6, which is wrong, as the buffer has only 5 elements.
This inconsistency is known as a race condition, and this is the producer-consumer
problem.

The solution of Producer-Consumer Problem using


Semaphore
The above producer-consumer problem, which occurred due to an ill-timed context switch
and produced an inconsistent result, can be solved with the help of semaphores.

To solve the problem occurred above of race condition, we are going to use Binary
Semaphore and Counting Semaphore

Binary Semaphore: In a binary semaphore, only two processes can compete to enter
its CRITICAL SECTION at any point in time; apart from this, the condition of mutual
exclusion is also preserved.

Counting Semaphore: In a counting semaphore, more than two processes can compete
to enter its CRITICAL SECTION at any point in time; apart from this, the condition of
mutual exclusion is also preserved.

Semaphore: A semaphore is an integer variable S that, apart from initialization, is
accessed by only two standard atomic operations, wait and signal, whose definitions
are as follows:

1. wait( S )
2. {
3.   while( S <= 0 );
4.   S--;
5. }

1. signal( S )
2. {
3.   S++;
4. }

From the above definitions of wait, it is clear that if the value of S <= 0 then it will enter
into an infinite loop (because of the semicolon; after while loop). Whereas the job of the
signal is to increment the value of S.

Let's see the code as a solution of producer and consumer problem using semaphore
( Both Binary and Counting Semaphore):

Producer Code- solution

1. void producer( void )
2. {
3.   wait( empty );
4.   wait( S );
5.   Produce_item(itemP);
6.   buffer[ in ] = itemP;
7.   in = (in + 1) mod n;
8.   signal( S );
9.   signal( full );
10. }

Consumer Code- solution

1. void consumer(void)
2. {
3.   wait( full );
4.   wait( S );
5.   itemC = buffer[ out ];
6.   out = ( out + 1 ) mod n;
7.   signal( S );
8.   signal( empty );
9. }

Let's understand the above solution of the producer and consumer code.

Before starting an explanation of the code, first understand a few terms used in the
above code:

1. "in" used in the producer code represents the next empty buffer slot
2. "out" used in the consumer code represents the first filled buffer slot
3. "empty" is a counting semaphore which keeps a count of the number of empty buffer slots
4. "full" is a counting semaphore which keeps a count of the number of full buffer slots
5. "S" is a binary semaphore

If we see the current situation of the buffer:

S = 1 (initial value of the binary semaphore)

in = 5 (next empty buffer slot)

out = 0 (first filled buffer slot)

As we can see from the figure, the buffer has a total of 8 spaces, of which the first 5 are
filled; in = 5 (pointing to the next empty position) and out = 0 (pointing to the first filled
position).

Semaphores used in Producer Code:

6. wait(empty) will decrease the value of the counting semaphore variable empty by 1,
that is when the producer produces some element then the value of the space gets
automatically decreased by one in the buffer. In case the buffer is full, that is the value
of the counting semaphore variable "empty" is 0, then wait(empty); will trap the process
(as per definition of wait) and does not allow to go further.

7. wait(S) decreases the binary semaphore variable S to 0 so that no other process


which is willing to enter into its critical section is allowed.

8. signal(S) increases the binary semaphore variable S to 1 so that other processes that
are willing to enter their critical section can now be allowed.

9. signal(full) increases the counting semaphore variable full by 1, as on adding the


item into the buffer, one space is occupied in the buffer and the variable full must be
updated.

Semaphores used in Consumer Code:

10. wait(full) will decrease the value of the counting semaphore variable full by 1, that
is, when the consumer consumes some element, the number of full slots in the buffer
automatically decreases by one. In case the buffer is empty, that is, the value of the
counting semaphore variable full is 0, then wait(full); will trap the process (as per the
definition of wait) and not allow it to go further.
11. wait(S) decreases the binary semaphore variable S to 0 so that no other process
which is willing to enter into its critical section is allowed.

12. signal(S) increases the binary semaphore variable S to 1 so that other processes
who are willing to enter into its critical section can now be allowed.

13. signal(empty) increases the counting semaphore variable empty by 1, as on


removing an item from the buffer, one space is vacant in the buffer and the variable
empty must be updated accordingly.

Producer Code:

Let's start with producer() who wanted to produce an element " F ", according to code it
will enter into the producer() function.

wait(empty); will decrease the value of empty by one, i.e. empty = 2

Suppose just after this context switch occurs and jumps to consumer code.

Consumer Code:

Now starting consumer who wanted to consume first element " A ", according to code it
will enter into consumer() function,

wait(full); will decrease the value of full by one, i.e. full = 4

wait (S); will decrease the value of S to 0

itemC = Buffer[out]; → itemC = A ( since out is 0)

A is removed now

out = (out + 1) mod n → (0 + 1)mod 8 → 1, therefore out = 1( first filled position)


S = 0(Value of Binary semaphore)

in = 5( next empty buffer)

out = 1(first filled buffer)

Suppose just after this a context switch occurs back to the producer code.

Since the next instruction of producer() is wait(S);, this will trap the producer process, as
the current value of S is 0 and waiting on 0 is an infinite loop as per the definition of wait;
hence the producer cannot move further.

Therefore, we move back to the consumer process next instruction.

signal(S); will now increment the value of S to 1.

signal(empty); will increment empty by 1, i.e. empty = 3

Now moving back to producer() code;

The next instruction of producer(), wait(S);, will now successfully execute, as S is now 1,
and it will decrease the value of S by 1, i.e., S = 0

Buffer[in] = itemP; → Buffer[5] = F. ( F is inserted now)

in = (in + 1) mod n → (5 + 1)mod 8 → 6, therefore in = 6; (next empty buffer)

signal(S); will increment S by 1,

signal(full); will increment full by 1, i.e. full = 5


Now add the current values of full and empty: full + empty = 5 + 3 = 8, which is
absolutely fine. No inconsistent result is generated even after so many context switches,
whereas in the earlier producer and consumer code without semaphores we saw an
inconsistent result whenever a context switch occurred.

This is the solution to the Producer consumer problem.

Q5. Explain semaphores and monitors.


Difference between Semaphore and
Monitor
In this article, you will learn the difference between the semaphore and monitor. But
before discussing the differences, you will need to know about the semaphore and
monitor.
What is Semaphore?
A semaphore is an integer variable that allows many processes in a parallel system to
manage access to a common resource like a multitasking OS. It is an integer variable
(S), and it is initialized with the number of resources in the system.
The wait() and signal() methods are the only methods that may modify the semaphore
(S) value. When one process modifies the semaphore value, other processes can't
modify the semaphore value simultaneously.

Furthermore, the operating system categorizes semaphores into two types:

1. Counting Semaphore
2. Binary Semaphore

Counting Semaphore
In Counting Semaphore, the value of semaphore S is initialized to the number of
resources in the system. When a process needs to access shared resources, it calls
the wait() method on the semaphore, decreasing its value by one. When the shared
resource is released, it calls the signal() method, increasing the value by 1.


When the semaphore count reaches 0, it implies that the processes have used all
resources. Suppose a process needs to utilize a resource when the semaphore count is
0. In that case, it performs the wait() method, and it is blocked until another process
using the shared resources releases it, and the value of the semaphore increases to 1.

Binary Semaphore
In a binary semaphore, the semaphore has a value of either 0 or 1. It is comparable to a mutex
lock, except that a mutex is a locking mechanism while a semaphore is a signalling mechanism.
When a process needs to access a binary semaphore resource, it uses the wait() method
to decrement the semaphore's value from 1 to 0.

When the process releases the resource, it uses the signal() method to increase the
semaphore value to 1. When the semaphore value is 0, and a process needs to use the
resource, it uses the wait() method to block until the current process that is using the
resource releases it.
Syntax:
The syntax of the semaphore may be used as:

1. // Wait Operation
2. wait(Semaphore S) {
3. while (S<=0);
4. S--;
5. }
6. // Signal Operation
7. signal(Semaphore S) {
8. S++;
9. }

Advantages and Disadvantages of Semaphore


Various advantages and disadvantages of the semaphore are as follows:

Advantages:

1. They don't allow multiple processes to enter the critical section simultaneously.
Mutual exclusion is achieved in this manner, making it much more efficient than
some other synchronization techniques.
2. Process time and resources are not wasted in busy waiting, because processes
are only allowed to access the critical section if a certain condition is
satisfied.
3. They enable flexible resource management.
4. They are machine-independent because they execute in the microkernel's
machine-independent code.

Disadvantages

1. There could be a situation of priority inversion, where processes with low
priority get access to the critical section before those with higher priority.
2. Semaphore programming is complex, and there is a risk that mutual exclusion
will not be achieved.
3. The wait() and signal() methods must be conducted correctly to avoid
deadlocks.

What is Monitor?
It is a synchronization technique that provides threads with mutual exclusion and
the ability to wait() for a given condition to become true. It is an abstract data type. It has
shared variables and a collection of procedures executing on the shared variables. A
process may not directly access the shared data variables; instead, the procedures allow
several processes to access the shared data variables safely, one at a time.

At any particular time, only one process may be active in a monitor. Other processes
that require access to the shared variables must queue and are only granted access after
the previous process releases the shared variables.

Syntax:
The syntax of the monitor may be used as:

1. monitor {
2.
3. //shared variable declarations
4. data variables;
5. Procedure P1() { ... }
6. Procedure P2() { ... }
7. .
8. .
9. .
10. Procedure Pn() { ... }
11. Initialization Code() { ... }
12. }

Advantages and Disadvantages of Monitor


Various advantages and disadvantages of the monitor are as follows:

Advantages
1. Mutual exclusion is automatic in monitors.
2. Monitors are less difficult to implement than semaphores.
3. Monitors may overcome the timing errors that occur when semaphores are used.
4. Monitors are a collection of procedures and condition variables that are
combined in a special type of module.

Disadvantages

1. Monitors must be implemented into the programming language.


2. The compiler should generate code for them.
3. It gives the compiler the additional burden of knowing what operating system
features are available for controlling access to critical sections in concurrent
processes.

Main Differences between the Semaphore


and Monitor
Here, you will learn the main differences between the semaphore and monitor. Some of
the main differences are as follows:

1. A semaphore is an integer variable that allows many processes in a parallel
system to manage access to a common resource like a multitasking OS. On the
other hand, a monitor is a synchronization technique that provides threads with
mutual exclusion and the ability to wait() for a given condition to become true.
2. When a process uses shared resources in semaphore, it calls the wait() method
and blocks the resources. When it wants to release the resources, it executes
the signal() method. In contrast, when a process uses shared resources in the monitor, it
has to access them via procedures.
3. Semaphore is an integer variable, whereas monitor is an abstract data type.
4. In semaphore, an integer variable shows the number of resources available in the
system. In contrast, a monitor is an abstract data type that permits only one process
to execute in the critical section at a time.
5. Semaphores have no concept of condition variables, while monitors have condition
variables.
6. A semaphore's value can only be changed using the wait() and signal() methods. In
contrast, the monitor has the shared variables and the tools that enable the
processes to access them.

Head-to-head comparison between the


Semaphore and Monitor
Various head-to-head comparisons between the semaphore and monitor are as follows:

Definition
  Semaphore: A semaphore is an integer variable that allows many processes in a parallel
  system to manage access to a common resource like a multitasking OS.
  Monitor: It is a synchronization technique that provides threads with mutual exclusion
  and the ability to wait() for a given condition to become true.

Syntax
  Semaphore:
    // Wait Operation
    wait(Semaphore S) {
        while (S <= 0);
        S--;
    }
    // Signal Operation
    signal(Semaphore S) {
        S++;
    }
  Monitor:
    monitor {
        // shared variable declarations
        data variables;
        Procedure P1() { ... }
        Procedure P2() { ... }
        .
        .
        .
        Procedure Pn() { ... }
    }

Basic
  Semaphore: Integer variable.
  Monitor: Abstract data type.

Access
  Semaphore: When a process uses shared resources, it calls the wait() method on S, and
  when it releases them, it uses the signal() method on S.
  Monitor: When a process uses shared resources in the monitor, it has to access them via
  procedures.

Action
  Semaphore: The semaphore's value shows the number of shared resources available in
  the system.
  Monitor: The monitor type includes shared variables as well as a set of procedures that
  operate on them.

Condition Variable
  Semaphore: No condition variables.
  Monitor: It has condition variables.

Conclusion
In summary, semaphore and monitor are two synchronization mechanisms. A
semaphore is an integer variable that performs the wait() and signal() methods. In
contrast, the monitor is an abstract data type that enables only a process to use a
shared resource at a time. Monitors are simpler to implement than semaphores, and
there are fewer chances of making a mistake in monitors than with semaphores.

OR
Difference Between Semaphore and Monitor
in OS

Both semaphore and monitor are types of process synchronization tools in
operating systems. Semaphores and monitors allow different processes to
utilize shared resources in mutual exclusion; however, they are different
from each other. The basic difference between a semaphore and a monitor is
that a semaphore is an integer variable, whereas a monitor is an abstract data
type.

Read this article to find out more about semaphores and monitors and how they
are different from each other.

What is Semaphore?
A semaphore is a process synchronizing tool. It is basically an integer variable,
denoted by "S". The initialization of this variable "S" is done by assigning a
number equal to the number of resources present in the system.
There are two functions, wait() and signal(), which are used to modify the value of
semaphore "S". The wait() and signal() functions indivisibly change the value of
the semaphore "S". That means, when one process is changing the value of the
semaphore "S", another process cannot change the value of the semaphore at
the same time.
In operating systems, semaphores are grouped into two categories− counting
semaphore and binary semaphore. In a counting semaphore, the value of the
semaphore is initialized to the number of resources present in the system. On
the other hand, in a binary semaphore, the semaphore "S" has the value "0" or
"1".

What is Monitor?
Monitor is also a process synchronization tool. It is an abstract data type that is
used for high-level synchronization of processes. It was developed to
overcome the timing errors that occur while using semaphores for
process synchronization. Since the monitor is an abstract data type, it contains
the shared data variables. These data variables are to be shared by all the
processes. Hence, this allows the processes to execute in mutual exclusion.

There can be only one process active at a time within a monitor. If any other
process tries to access the shared variable in the monitor, it will be blocked and
lined up in the queue to get the access to the data. This is done by a
"conditional variable" in the monitor. The conditional variable is used for
providing additional synchronization mechanism.

Difference between Semaphore and Monitor in OS


The following table highlights all the important differences between semaphore
and monitor in operating systems −

1.
  Semaphore: It is an integer variable.
  Monitor: It is an abstract data type.

2.
  Semaphore: The value of this integer variable tells about the number of shared
  resources that are available in the system.
  Monitor: It contains shared variables.

3.
  Semaphore: When any process gets access to the shared resources, it performs the
  'wait' operation (using the wait method) on the semaphore.
  Monitor: It also contains a set of procedures that operate upon the shared variables.

4.
  Semaphore: When a process releases the shared resources, it performs the 'signal'
  operation (using the signal method) on the semaphore.
  Monitor: When a process wishes to access the shared variables in the monitor, it has
  to do so using procedures.

5.
  Semaphore: It doesn't have condition variables.
  Monitor: It has condition variables.
Conclusion
Both semaphores and monitors are types of process synchronization tools in
operating systems; however, they are quite different from each other, as
described in the above table. The most significant difference to note
here is that a semaphore is an integer variable that indicates the number of
resources available in the system, whereas a monitor is an abstract data type
that allows only one process to be active inside it at a time.

OR
Implementing Monitor using Semaphore


When multiple processes run at the same time and share system resources, the results
may be different. This is called a critical section problem. To overcome this problem we
can implement many solutions. The monitor mechanism is the compiler-type solution to
avoid critical section problems. In this section, we will see how to implement a monitor
using semaphore.
Primary Terminologies
 Operating System: The operating system acts as an interface or intermediary
between the user and the computer hardware.
 Process: The program in the execution state is called a process.
 Process synchronization: Process synchronization is a mechanism that controls the
execution of processes running concurrently to ensure that consistent results are
produced.
 Semaphore: A semaphore is an operating system-type solution to the critical
section problem. It is a variable that is used to provide synchronization among
multiple processes running concurrently.
 Critical Section: It is the section of the program where a process accesses the shared
resource during its execution.

[Figure: Critical section]
Implementing Monitor using Semaphore
Let’s implement a a monitor mechanism using semaphores.
Following is a step-by-step implementation:
Step 1: Initialize a semaphore mutex to 1.
Step 2: Provide a semaphore mutex for each monitor.

Step 3: A process must execute wait (mutex) before entering the monitor and must
execute signal (mutex) after leaving the monitor.
Step 4: Since a signaling process must wait until the resumed process either leaves or
waits, introduce an additional semaphore, S, and initialize it to 0.
Step 5: The signaling processes can use S to suspend themselves. An integer variable
S_count is also provided to count the number of suspended processes. Thus, each
external function Fun is replaced by:
wait(mutex);
   ...body of Fun...
if (S_count > 0)
   signal(S);
else
   signal(mutex);
Mutual exclusion within a monitor is ensured.
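The entry/exit scheme above can be sketched in Python, using threading.Semaphore for the semaphore mutex. This simplified sketch has no condition variables, so S_count is always 0 and leaving the monitor is just signal(mutex); the Account class and its field names are illustrative, not part of the original pseudocode.

```python
import threading

class Account:
    """Each external function is bracketed by wait(mutex)/signal(mutex),
    so at most one process is active inside the 'monitor' at a time."""
    def __init__(self):
        self.mutex = threading.Semaphore(1)  # Step 1: initialize mutex to 1
        self.balance = 0                     # shared data variable

    def deposit(self, amount):               # an external function Fun
        self.mutex.acquire()                 # wait(mutex) before entering
        self.balance += amount               # body of Fun
        self.mutex.release()                 # signal(mutex) after leaving

acct = Account()
threads = [threading.Thread(target=lambda: [acct.deposit(1) for _ in range(10_000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(acct.balance)  # -> 40000: no increments are lost
```

Without the semaphore, the four threads could interleave their read-modify-write updates and lose increments; with it, every deposit runs under mutual exclusion.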
Let’s see how condition variables are implemented.
Step 1: For each condition variable x:
Step 2: Introduce a semaphore x_num and an integer variable x_count.
Step 3: Initialize x_num to 0 and x_count to 0.
The operation x.wait() is now implemented as:
x_count++;
if (S_count > 0)
   signal(S);
else
   signal(mutex);
wait(x_num);
x_count--;
The operation x.signal() can be implemented as:
if (x_count > 0) {
   S_count++;
   signal(x_num);
   wait(S);
   S_count--;
}
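The full scheme can be translated into runnable Python, with threading.Semaphore standing in for each semaphore. This is a sketch of the Hoare-style construction above; the class names (Monitor, CondVar, Mailbox) and the one-slot mailbox demo are illustrative additions, not part of the original pseudocode.

```python
import threading

class Monitor:
    def __init__(self):
        self.mutex = threading.Semaphore(1)  # monitor entry, initialized to 1
        self.S = threading.Semaphore(0)      # suspends signallers, initialized to 0
        self.S_count = 0                     # number of suspended signallers

    def enter(self):
        self.mutex.acquire()                 # wait(mutex)

    def leave(self):
        if self.S_count > 0:
            self.S.release()                 # signal(S): resume a signaller
        else:
            self.mutex.release()             # signal(mutex): open the monitor

class CondVar:
    def __init__(self, mon):
        self.mon = mon
        self.x_num = threading.Semaphore(0)  # initialized to 0
        self.x_count = 0

    def wait(self):                          # x.wait()
        self.x_count += 1
        if self.mon.S_count > 0:
            self.mon.S.release()
        else:
            self.mon.mutex.release()
        self.x_num.acquire()                 # wait(x_num): suspend this process
        self.x_count -= 1

    def signal(self):                        # x.signal()
        if self.x_count > 0:
            self.mon.S_count += 1
            self.x_num.release()             # wake one waiter
            self.mon.S.acquire()             # wait(S): signaller suspends itself
            self.mon.S_count -= 1

# Demo: a one-slot mailbox where get() waits until put() supplies a value.
class Mailbox:
    def __init__(self):
        self.mon = Monitor()
        self.nonempty = CondVar(self.mon)
        self.item = None

    def put(self, v):
        self.mon.enter()
        self.item = v
        self.nonempty.signal()
        self.mon.leave()

    def get(self):
        self.mon.enter()
        while self.item is None:
            self.nonempty.wait()
        v, self.item = self.item, None
        self.mon.leave()
        return v

box = Mailbox()
result = []
t = threading.Thread(target=lambda: result.append(box.get()))
t.start()
box.put(42)
t.join()
print(result[0])  # -> 42
```

Note that x_count and S_count are plain integers: they are always read and written while the monitor lock is held (the signaller hands the lock directly to the waiter it wakes), so no extra protection is needed.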
Conclusion
In conclusion, a monitor mechanism can be implemented using a semaphore mutex, with
condition variables implemented using wait and signal operations. This allows processes
to access their critical sections in a synchronized manner, avoiding inconsistent
results, and mutual exclusion is ensured within the monitor.
FAQs on Implementing Monitor Using Semaphore
Q.1: How is a monitor different from a semaphore?
Answer:
Monitors and semaphores are both solutions to the critical section problem. The
monitor mechanism is a compiler-level solution: a higher-level synchronization
construct that makes process synchronization easier by offering a high-level
abstraction for accessing and synchronizing data. A semaphore is an
operating-system-level solution: a variable that is used to provide synchronization
among multiple processes running concurrently.
Q.2: Why do we use condition variables?
Answer:
A condition variable allows a process to wait within a monitor. It supports two
operations, wait and signal.
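As a concrete illustration, Python's threading.Condition bundles the monitor lock with wait()/notify() (the wait/signal operations). This minimal sketch shows a thread waiting inside the "monitor" until a predicate becomes true; the names (waiter, ready) are illustrative.

```python
import threading

cond = threading.Condition()
ready = False

def waiter(out):
    with cond:                 # enter the monitor (acquire the lock)
        while not ready:       # re-check the predicate after each wakeup
            cond.wait()        # x.wait(): release the lock and sleep
        out.append("done")

out = []
t = threading.Thread(target=waiter, args=(out,))
t.start()
with cond:
    ready = True
    cond.notify()              # x.signal(): wake one waiting thread
t.join()
print(out[0])  # -> done
```

The while loop around cond.wait() matters: a woken thread must re-check the condition, because another thread may have changed the shared state between the signal and the wakeup.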

OR
Monitors vs Semaphores

Monitors and semaphores are used for process synchronization and allow
processes to access shared resources with mutual exclusion. However,
monitors and semaphores differ in several ways. Details about both of these
are given as follows −

Monitors
Monitors are a synchronization construct created to overcome the
problems caused by semaphores, such as timing errors.

Monitors are abstract data types that contain shared data variables and
procedures. The shared data variables cannot be accessed directly by a
process; the procedures allow only a single process at a time to access the
shared data variables.

This is demonstrated as follows:

monitor monitorName
{
   data variables;

   procedure P1(....)
   {
      ...
   }

   procedure P2(....)
   {
      ...
   }

   procedure Pn(....)
   {
      ...
   }

   initialization code(....)
   {
      ...
   }
}

Only one process can be active in a monitor at a time. Other processes that
need to access the shared variables in a monitor have to wait in a queue and
are granted access only when the previous process releases the shared
variables.

Semaphores
A semaphore is a signalling mechanism, and a thread that is waiting on a
semaphore can be signalled by another thread. This differs from a mutex,
which can be signalled only by the thread that called the wait function.
A semaphore uses two atomic operations, wait and signal for process
synchronization.

The wait operation decrements the value of its argument S if it is positive. If S
is zero, the operation blocks (busy-waits in the sketch below) until S becomes
positive.

wait(S)
{
   while (S <= 0)
      ; // busy wait
   S--;
}

The signal operation increments the value of its argument S.

signal(S)
{
   S++;
}

There are mainly two types of semaphores i.e. counting semaphores and binary
semaphores.

Counting semaphores are integer-valued semaphores with an unrestricted
value domain. These semaphores are used to coordinate resource access,
where the semaphore count is the number of available resources.

Binary semaphores are like counting semaphores, but their value is
restricted to 0 and 1. The wait operation succeeds only when the semaphore is 1,
and the signal operation succeeds when the semaphore is 0.
