
Process Management

BY
N.Madhuri
UNIT-II:
• Process Management – Process concept, The process, Process State Diagram ,
Process control block, Process Scheduling- Scheduling Queues, Schedulers,
Operations on Processes, Interprocess Communication, Threading Issues,
Scheduling - Basic Concepts, Scheduling Criteria, Scheduling Algorithms.
Process Management
Process Concept:
• A process is a program in execution. A process is the unit of work in a modern
time-sharing system.
• A batch system executes jobs, whereas a time-shared system has user programs, or
tasks.
• Even on a single-user system, a user may be able to run several programs at one time:
a word processor, a Web browser, and an e-mail package. And even if a user can
execute only one program at a time, such as on an embedded device that does not
support multitasking, the operating system may need to support its own internal
programmed activities, such as memory management. In many respects, all these
activities are similar, so we call all of them processes.
• The process concept can be described under four headings:
• i) the process ii) process states
• iii) the process control block (PCB) iv) threads
i) The process:
• A process is a program in execution. A process is more than the program code, which is
sometimes known as the text section.
• It also includes the current activity, as represented by the value of the program counter and
the contents of the processor’s registers.
• A process generally also includes the process stack, which contains temporary data (such as
function parameters, return addresses, and local variables), and a data section, which
contains global variables.
• A process may also include a heap, which is memory that is dynamically allocated during
process run time. The structure of a process in memory is shown
A program by itself is not a process. A program is a
passive entity, such as a file containing a list of
instructions stored on disk (often called an executable
file).

In contrast, a process is an active entity, with a
program counter specifying the next instruction to
execute and a set of associated resources.

A program becomes a process when an executable file
is loaded into memory.
ii) Process State:
• As a process executes, it changes state. The state of a process is defined in part by the
current activity of that process. A process may be in one of the following states:
•New. The process is being created.

• Running. Instructions are being executed.

• Waiting. The process is waiting for some event to occur (such as an I/O
completion or reception of a signal).

• Ready. The process is waiting to be assigned to a processor.

• Terminated. The process has finished execution.


iii) Process Control Block
• Each process is represented in the operating system by a process control block (PCB),
also called a task control block. A PCB contains many pieces of information associated
with a specific process, including these:
Process state : The state may be new, ready, running, waiting, halted, and so on.

Program counter : The counter indicates the address of the next instruction
to be executed for this process.

CPU registers : The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers (used to modify operand
addresses at run time), stack pointers, and general-purpose registers (which hold
temporary data during an operation), plus any condition-code information. Along with
the program counter, this state information must be saved when an interrupt occurs, to
allow the process to be continued correctly afterward.
CPU-scheduling information. This information includes a process priority,pointers to
scheduling queues, and any other scheduling parameters.

Memory-management information. This information may include such items as the value
of the base and limit registers and the page tables, or the segment tables, depending on the
memory system used by the operating system.

Accounting information. This information includes the amount of CPU and real time used,
time limits, account numbers, job or process numbers,and so on.

I/O status information. This information includes the list of I/O devices
allocated to the process, a list of open files, and so on.
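• To make the PCB concrete, here is a minimal sketch of one as a C structure. The field
names and sizes are hypothetical, chosen only to mirror the list above; a real kernel's
PCB (such as Linux's task_struct) is far larger.

    /* Hypothetical PCB layout mirroring the fields listed above. */
    struct pcb {
        int pid;                          /* process identifier */
        enum { NEW, READY, RUNNING, WAITING, TERMINATED } state;
        unsigned long program_counter;    /* address of the next instruction */
        unsigned long registers[16];      /* saved CPU registers */
        int priority;                     /* CPU-scheduling information */
        unsigned long base, limit;        /* memory-management registers */
        unsigned long cpu_time_used;      /* accounting information */
        int open_files[16];               /* I/O status: open file descriptors */
        struct pcb *next;                 /* link to the next PCB in a queue */
    };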
iv) Threads:
• A thread is a basic unit of CPU utilization; it comprises a thread ID, a program
counter, a register set, and a stack.
• It shares with other threads belonging to the same process its code section,
data section, and other operating-system resources, such as open files and
signals.
• A traditional (or heavyweight) process has a single thread of control. If a
process has multiple threads of control, it can perform more than one task at
a time.
Process Scheduling

• The objective of multiprogramming is to have some process running at all times, to
maximize CPU utilization.
• The objective of time sharing is to switch the CPU among processes so frequently that
users can interact with each program while it is running.
• To meet these objectives, the process scheduler selects an available process (possibly from
a set of several available processes) for program execution on the CPU.
• For a single-processor system, there will never be more than one running process. If there
are more processes, the rest will have to wait until the CPU is free and can be rescheduled.
Process scheduling involves three topics:

• i) scheduling queues
• ii) schedulers
• iii) context switching
i) scheduling queues
• As processes enter the system, they are put into a job queue, which consists of
all processes in the system.
• The processes that are residing in main memory and are ready and waiting to
execute are kept on a list called the ready queue. This queue is generally stored
as a linked list.
• A ready-queue header contains pointers to the first and final PCBs in the list.
Each PCB includes a pointer field that points to the next PCB in the ready queue.
• The system also includes other queues. When a process is allocated the CPU, it
executes for a while and eventually quits, is interrupted, or waits for the
occurrence of a particular event, such as the completion of an I/O request.

• Suppose the process makes an I/O request to a shared device, such as a
disk. Since there are many processes in the system, the disk may be busy
with the I/O request of some other process. The process therefore may have
to wait for the disk. The list of processes waiting for a particular I/O device
is called a device queue.
• The OS maintains 3 types of process scheduling queues
• i)job queue ii) ready queue iii)device queue
• job queue − This queue keeps all the processes in the system.
• Ready queue − This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
• Device queues − The processes which are blocked due to unavailability of an I/O
device constitute this queue.
A common representation of process scheduling is a queueing diagram,
such as that in Figure 3.6. Each rectangular box represents a queue. Two
types of queues are present: the ready queue and a set of device queues.
The circles represent the resources that serve the queues, and the arrows
indicate the flow of processes in the system.
• A new process is initially put in the ready queue. It waits there until it is
selected for execution. Once the process is allocated the CPU and is executing,
one of several events could occur:
• The process could issue an I/O request and then be placed in an I/O
queue.
• The process could create a new child process and wait for the child’s
termination.
• The process could be removed forcibly from the CPU, as a result of an
interrupt, and be put back in the ready queue.
• In the first two cases, the process eventually switches from the waiting state to
the ready state and is then put back in the ready queue. A process continues
this cycle until it terminates, at which time it is removed from all queues and
has its PCB and resources deallocated.
ii) Schedulers
• A process migrates among the various scheduling queues throughout its
lifetime. The operating system must select, for scheduling purposes, processes
from these queues in some fashion. The selection process is carried out by the
appropriate scheduler.

• There are three types of schedulers available:
• Long Term Scheduler
• Short Term Scheduler
• Medium Term Scheduler
Long Term Scheduler:
• The long-term scheduler is also called the job scheduler.
• The long-term scheduler runs less frequently and may not be present in all
systems.
• The long-term scheduler decides which programs are admitted to the job
queue. From the job queue, the job scheduler selects processes and
loads them into memory for execution.
• The primary aim of the job scheduler is to maintain a good degree of
multiprogramming (the number of processes in memory).
• The degree of multiprogramming is stable (at an optimal level) when the
average rate of process creation is equal to the average departure rate of
processes from memory.
Short Term Scheduler:

• This is also known as CPU Scheduler and runs very frequently. The
primary aim of this scheduler is to enhance CPU performance and
increase process execution rate.
• CPU scheduler selects a process among the processes that are ready to
execute and allocates CPU to one of them.
• The short-term scheduler decides which process to execute next; the actual
hand-over of the CPU to that process is performed by the dispatcher.
• Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler

• This scheduler removes the processes from memory (and from active
contention for the CPU), and thus reduces the degree of multiprogramming.
• At some later time, the process can be reintroduced into memory and its
execution can be continued where it left off. This scheme is
called swapping.
• The process is swapped out, and is later swapped in, by the medium term
scheduler.
Comparison of schedulers

1. The long-term scheduler is a job scheduler; the short-term scheduler is a CPU
scheduler; the medium-term scheduler is a process-swapping scheduler.
2. The long-term scheduler is slower than the short-term scheduler; the short-term
scheduler is the fastest of the three; the medium-term scheduler's speed lies between
the other two.
3. The long-term scheduler controls the degree of multiprogramming; the short-term
scheduler provides lesser control over the degree of multiprogramming; the
medium-term scheduler reduces the degree of multiprogramming.
4. The long-term scheduler is almost absent or minimal in time-sharing systems; the
short-term scheduler is also minimal in time-sharing systems; the medium-term
scheduler is a part of time-sharing systems.
5. The long-term scheduler selects processes from the pool and loads them into memory
for execution; the short-term scheduler selects those processes which are ready to
execute; the medium-term scheduler can re-introduce a process into memory so that
its execution can be continued.
iii) Context Switch

• Switching the CPU to another process requires saving the state of the old process
and loading the saved state for the new process. This task is known as a Context Switch.
• The context of a process is represented in the Process Control Block(PCB) of a process; it
includes the value of the CPU registers, the process state and memory-management
information.
• When a context switch occurs, the Kernel saves the context of the old process in its PCB and
loads the saved context of the new process scheduled to run.
• Context switch time is pure overhead, because the system does no useful work while
switching.
• Its speed varies from machine to machine, depending on the memory speed, the number of
registers that must be copied, and the existence of special instructions(such as a single
instruction to load or store all registers). Typical speeds range from 1 to 1000 microseconds.
• Context switching has become such a performance bottleneck that
programmers use new structures (threads) to avoid it whenever and
wherever possible.
Operations on Processes
• The processes in most systems can execute concurrently, and they may be created and deleted
dynamically. Thus, these systems must provide a mechanism for process creation and
termination.
• The main operations on processes are
• i) process creation
• ii) process termination
• Process Creation
• A process is created through an appropriate system call, such as fork(); a process
may in turn create other processes. The process which creates another process is
termed the parent, while the created sub-process is termed its child.
• Each process is given an integer identifier, termed as process identifier, or PID. The parent PID (PPID) is also
stored for each process.
• A child process may receive some amount of shared resources with its parent depending on
system implementation.

• There are two options for the parent process after creating the child :

• Wait for the child process to terminate before proceeding. The parent makes
a wait() system call, for either a specific child process or for any child
process, which causes the parent process to block until the wait() returns. UNIX shells
normally wait for their children to complete before issuing a new prompt.

• Run concurrently with the child, continuing to process without waiting. It is also
possible for the parent to run for a while, and then wait for the child later, which might
occur in a sort of a parallel processing operation.
• There are also two possibilities in terms of the address space of the new
process:
• The child process is a duplicate of the parent process.
• The child process has a program loaded into it.
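• The following is a minimal C sketch of these ideas on UNIX: the parent forks a child,
the child loads a new program with exec (the second address-space option above), and
the parent waits. Error handling is kept to a minimum.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();              /* create a child process */

        if (pid < 0) {                   /* fork failed */
            perror("fork");
            exit(1);
        } else if (pid == 0) {           /* child: load a new program into it */
            execlp("/bin/ls", "ls", (char *)NULL);
            exit(1);                     /* reached only if exec fails */
        } else {                         /* parent: wait for the child */
            wait(NULL);
            printf("child complete\n");
        }
        return 0;
    }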
Process Termination
• A process may request its own termination by making the exit() system call, typically
returning an int status. This status is passed to the parent if it is executing a wait(), and
is typically zero on successful completion and some non-zero code in the event of a
problem.

• A parent may terminate the execution of one of its children for a variety of reasons, such as
these:
 The inability of the system to deliver the necessary system resources.
 In response to a KILL command or other unhandled process interrupts.
 A parent may kill its children if the task assigned to them is no longer needed i.e. if the need of having a
child terminates.
 If the parent exits, the system may or may not allow the child to continue without a parent (In UNIX
systems, orphaned processes are generally inherited by init, which then proceeds to kill them.)
Interprocess Communication
• Processes executing concurrently in the operating system may be either independent
processes or cooperating processes.

• A process is independent if it cannot affect or be affected by the other processes executing
in the system. Any process that does not share data with any other process is independent.

• A process is cooperating if it can affect or be affected by the other processes executing in
the system. Clearly, any process that shares data with other processes is a cooperating process.
There are several reasons for providing an environment that allows process
cooperation:

• Information sharing: Since several users may be interested in the same piece of information
(for instance, a shared file), we must provide an environment to allow concurrent access to
such information.
• Computation speedup: If we want a particular task to run faster, we must break it into
subtasks, each of which will be executing in parallel with the others. Notice that such a
speedup can be achieved only if the computer has multiple processing cores.
• Modularity: We may want to construct the system in a modular fashion, dividing the
system functions into separate processes or threads.
• Convenience: Even an individual user may work on many tasks at the same time. For
instance, a user may be editing, listening to music, and compiling in parallel.
• Cooperating processes require an inter process communication(IPC)
mechanism that will allow them to exchange data and information.

• There are two fundamental models of interprocess communication:
i) shared memory and
ii) message passing
i. Shared memory systems: an efficient means of passing data between programs

• In the shared-memory model, a region of memory that is shared by cooperating
processes is established. Processes can then exchange information by reading and
writing data to the shared region.
• Inter process communication using shared memory requires communicating processes to
establish a region of shared memory.
• Typically, a shared-memory region resides in the address space of the process creating the
shared-memory segment. Other processes that wish to communicate using this shared-
memory segment must attach it to their address space
• Recall that, normally, the operating system tries to prevent one process from accessing
another process’s memory. Shared memory requires that two or more processes agree to
remove this restriction. They can then exchange information by reading and writing data
in the shared areas.
• Shared memory can be faster than message passing,since message-passing
systems are typically implemented using system calls and thus require the more
time-consuming task of kernel intervention.
• In shared-memory systems, system calls are required only to establish shared
memory regions. Once shared memory is established, all accesses are treated as
routine memory accesses, and no assistance from the kernel is required.
• shmat() and shmdt() are used to attach and detach shared memory segments.
• shmctl() is used to alter the permissions
• shmget() is used to obtain access to a shared memory segment
• For example, a compiler may produce assembly code that is consumed by an assembler.
The assembler, in turn, may produce object modules that are consumed by the
loader.
• The form of the data and the location are determined by these processes and
are not under the operating system’s control. The processes are also
responsible for ensuring that they are not writing to the same location
simultaneously.
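• As a minimal sketch of the System V calls named above (kept to a single process for
brevity; real IPC would share the segment id or a common key between two processes):

    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        /* shmget(): obtain a new 4 KB shared-memory segment */
        int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0666);

        /* shmat(): attach the segment to our address space */
        char *shm = shmat(shmid, NULL, 0);

        /* once attached, reads and writes are routine memory accesses */
        strcpy(shm, "hello through shared memory");
        printf("%s\n", shm);

        shmdt(shm);                        /* shmdt(): detach the segment */
        shmctl(shmid, IPC_RMID, NULL);     /* shmctl(): remove the segment */
        return 0;
    }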
Producer-Consumer Example
• a very simple example of two cooperating processes. The problem is called the
producer-consumer problem and it uses two processes called producer and
consumer.
• Producer Process: it generates information that will be consumed by the
consumer.
• Consumer Process: it consumes information created by the producer.
• Both processes run in parallel. If the consumer has nothing to consume, it waits. There
are two versions of the problem.

• In version one, the producer can produce an unlimited amount of items. This is
called the Unbounded Buffer Producer-Consumer Problem.

• In the other version, there is a predetermined limit to the buffer size. When
the buffer is full, the producer must wait until there is some space in the buffer
before it can produce a new item. This is called the Bounded-Buffer Producer-
Consumer Problem.
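• A sketch of the classic shared-memory bounded buffer follows, in the style of the
textbook: in and out index a circular array, and the buffer can hold at most
BUFFER_SIZE - 1 items. The busy-wait loops stand in for proper synchronization,
which is a later topic.

    #define BUFFER_SIZE 10

    typedef struct { int data; } item;

    item buffer[BUFFER_SIZE];   /* circular buffer in the shared region */
    int in = 0;                 /* next free slot, advanced by the producer */
    int out = 0;                /* next full slot, advanced by the consumer */

    /* producer: waits while the buffer is full, then inserts an item */
    void produce(item next)
    {
        while (((in + 1) % BUFFER_SIZE) == out)
            ;                               /* buffer full: do nothing */
        buffer[in] = next;
        in = (in + 1) % BUFFER_SIZE;
    }

    /* consumer: waits while the buffer is empty, then removes an item */
    item consume(void)
    {
        while (in == out)
            ;                               /* buffer empty: do nothing */
        item next = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        return next;
    }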
ii. Message Passing Model
•In the message-passing model, the cooperating processes communicate by
exchanging messages with one another. This model is useful for exchanging a
smaller amount of data. It can also be implemented more easily than a shared
memory model.

• It is typically implemented using system calls and is more time-consuming. The
message-passing model is typically useful in a distributed environment. The
processes may reside on different computers connected by a network.
• The chat program used on the internet is an example of the message passing
system. Users can communicate by exchanging messages with one another. A
message-passing model provides at least two operations:
Send (message)
Receive (message)
• Two processes can communicate with each other by sending and receiving
messages. This communication can take place by establishing a communication
link between these processes.
• This link can be implemented in a variety of ways. We are concerned here not
with the link’s physical implementation (such as shared memory, hardware bus,
or network) but rather with its logical implementation. Here are several methods
for logically implementing a link and the send()/receive() operations:
•a) Direct or indirect communication
• b)Synchronous or asynchronous communication
• c)Automatic or explicit buffering
• Processes that want to communicate must have a way to refer to each other.
They can use either direct or indirect communication.
a. Direct Communication
•In Direct communication, each process that wants to communicate must name
the sender or recipient of the communication. There are two schemes of direct
communication known as symmetric communication and asymmetric
communication.
•The symmetric communication requires that both the sender and receiver
process must name each other to communicate
•Send () and Receive () functions in symmetric communication are defined as
follows:
• Send (P, msg) it is used to send the message msg to the process P.
• Receive (Q, msg) it is used to receive the message msg by the process Q.
•  In asymmetric communication, only the sender process must name the
recipient. The recipient process does not need to name the sender process.
• Send () and Receive () functions in asymmetric communication are defined as
follows:
• Send (P, msg) it is used to send the message msg to the process P.
• Receive (id, msg) it is used to receive message msg from any process. The variable id refers to
the name of the process with which the communication has taken place.
• The properties of the communication link in direct communication are as
follows:
• A link is automatically established between the pair of processes that want to communicate.
The processes only need to know the identity of each other.
• A link is established between exactly two processes.
• Only one link is established between one pair of processes.
Indirect Communication
•In indirect communication, messages are sent to and received from mailboxes, or
ports. A mailbox is an object used by the processes to store and remove messages.
Each mailbox has a unique identification. Two processes can communicate using
this method only if they have a shared mailbox. The send() and receive()
primitives are defined as follows:
• send(A, message)—Send a message to mailbox A.
• receive(A, message)—Receive a message from mailbox A.
•In this scheme, a communication link has the following properties:
• A link is established between a pair of processes only if both members of the pair have a
shared mailbox.
• A link may be associated with more than two processes.
• Between each pair of communicating processes, a number of different links may exist,
with each link corresponding to one mailbox.
• A mailbox can be owned by a process or by the operating system. If the mailbox
is owned by a process, then it disappears when that process is terminated.
• The Operating System must provide a mechanism for the following:
• Create a new mailbox
• Send and receive messages via mailbox
• Delete a mailbox
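• On POSIX systems, message queues provide exactly these mailbox operations. A
minimal sketch follows; the mailbox name /demo_mb and the sizes are made up for
illustration (on Linux, link with -lrt).

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>          /* for the O_* open flags */
    #include <mqueue.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
        char buf[64];

        /* create (or open) the mailbox named /demo_mb */
        mqd_t mb = mq_open("/demo_mb", O_CREAT | O_RDWR, 0666, &attr);

        mq_send(mb, "hello", strlen("hello") + 1, 0);   /* send(A, message) */
        mq_receive(mb, buf, sizeof(buf), NULL);         /* receive(A, message) */
        printf("received: %s\n", buf);

        mq_close(mb);
        mq_unlink("/demo_mb");                          /* delete the mailbox */
        return 0;
    }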
• There are some problems with the mailbox system that occur when more than
two processes share a mailbox. Suppose that process 1 sends a message to a
mailbox that is shared with process 2 and process 3. Process 2 and process 3
both try to receive the message. Who will get it?
• Some solutions to the above problem are :
• To allow a link to be established with at most two processes
•  Allow only one process at a time to execute a receive operation
• To allow the system to arbitrarily select the receiver. The sender must be notified who
received the message.
b. Synchronization
•Communication between processes takes place through calls to the send() and
receive() functions. There are different design options for implementing each
function: message passing may be blocking or nonblocking, also known as
synchronous or asynchronous.
•Blocking send. The sending process is blocked until the message is received by
the receiving process .
• Nonblocking send. The sending process sends the message and resumes
operation.
• Blocking receive. The receiver blocks until a message is available.
• Nonblocking receive. The receiver retrieves either a valid message or a null.
c. Buffering
•Whether communication is direct or indirect, messages exchanged by
communicating processes reside in a temporary queue. Basically, such queues
can be implemented in three ways:

• Zero capacity. The queue has a maximum length of zero; thus, the link cannot
have any messages waiting in it. In this case, the sender must block until the
recipient receives the message.
• Bounded capacity. The queue has finite length n; thus, at most n messages can
reside in it. If the queue is not full when a new message is sent, the message is
placed in the queue (either the message is copied or a pointer to the message is
kept), and the sender can continue execution without waiting. The link’s
capacity is finite, however. If the link is full, the sender must block until space
is available in the queue.

• Unbounded capacity. The queue's length is potentially infinite; thus, any
number of messages can wait in it. The sender never blocks.

• The zero-capacity case is sometimes referred to as a message system with no
buffering. The other cases are referred to as systems with automatic buffering.
Communications in Client-Server Systems

• Sockets
• Remote Procedure Calls
• Pipes
• Remote Method Invocation (Java)
Sockets
• A socket is defined as an endpoint for communication.

• A socket is identified by the concatenation of an IP address and a port, a number
included at the start of a message packet to differentiate the network services on a host.

• The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8.

• Communication takes place between a pair of sockets.

• All ports below 1024 are well known and are used for standard services.

• The special IP address 127.0.0.1 (loopback) refers to the system on which the process
is running.
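• A minimal client-side sketch in C, assuming (hypothetically) that some server is
already listening on the loopback address at port 1625 as in the example above:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        /* create a TCP socket: our endpoint of the communication */
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(1625);                      /* port number */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* loopback host */

        /* connect this socket to the server's socket 127.0.0.1:1625 */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            char buf[128];
            ssize_t n = read(fd, buf, sizeof(buf) - 1);   /* read a reply */
            if (n > 0) { buf[n] = '\0'; printf("%s\n", buf); }
        }
        close(fd);
        return 0;
    }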
Socket Communication
Remote Procedure Calls
• Remote procedure call (RPC) abstracts procedure calls between processes
on networked systems
• Again uses ports for service differentiation
• Stubs – client-side proxy for the actual procedure on the server
• The client-side stub locates the server and marshalls the parameters
• The server-side stub receives this message, unpacks the marshalled
parameters, and performs the procedure on the server
• On Windows, stub code is compiled from a specification written in the Microsoft
Interface Definition Language (MIDL)
Remote Procedure Calls (Cont.)
• Data representation is handled via the External Data Representation (XDR)
format to account for different architectures
• Big-endian and little-endian
• Remote communication has more failure scenarios than local
• Messages can be delivered exactly once rather than at most once
• OS typically provides a rendezvous (or matchmaker) service to
connect client and server
Execution of RPC
Pipes
• Acts as a conduit allowing two processes to communicate
• Issues:
• Is communication unidirectional or bidirectional?
• In the case of two-way communication, is it half or full-duplex?
• Must there exist a relationship (i.e., parent-child) between the communicating processes?
• Can the pipes be used over a network?
• Ordinary pipes – cannot be accessed from outside the process that created them.
Typically, a parent process creates a pipe and uses it to communicate with a child
process that it created.
• Named pipes – can be accessed without a parent-child relationship.
Ordinary Pipes
 Ordinary pipes allow communication in standard producer-consumer style
 Producer writes to one end (the write-end of the pipe)
 Consumer reads from the other end (the read-end of the pipe)
 Ordinary pipes are therefore unidirectional
 Require parent-child relationship between communicating processes
 Windows calls these anonymous pipes
 See Unix and Windows code samples in textbook
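 Alongside the textbook samples, here is a minimal UNIX sketch: the parent writes a
string into the write-end and the child reads it from the read-end, with each side
closing the end it does not use.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];                    /* fd[0] = read end, fd[1] = write end */
        pipe(fd);                     /* create the pipe before forking */

        if (fork() == 0) {            /* child: the consumer */
            char buf[32];
            close(fd[1]);             /* close the unused write end */
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
            close(fd[0]);
        } else {                      /* parent: the producer */
            close(fd[0]);             /* close the unused read end */
            write(fd[1], "greetings", 10);   /* 9 chars + trailing NUL */
            close(fd[1]);
            wait(NULL);               /* wait for the child to finish */
        }
        return 0;
    }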
Named Pipes
• Named Pipes are more powerful than ordinary pipes
• Communication is bidirectional
• No parent-child relationship is necessary between the
communicating processes
• Several processes can use the named pipe for communication
• Provided on both UNIX and Windows systems
Threads
• A thread is an execution unit that has its own program counter, a stack and a set
of registers that reside in a process.
•  A thread is a flow of execution through the process code.
Why Threads:
• A thread is also known as a lightweight process. The idea is to achieve parallelism
by dividing a process into multiple threads.
• On a single CPU, only one thread is executed at a time, but the CPU
switches rapidly between the threads to give an illusion (a false appearance) that
the threads are running in parallel.
• Threads can’t exist outside any process. Also, each thread belongs to exactly one
process. The information like code segment, files, and data segment can be
shared by the different threads.
Example:
•Suppose a user is typing text in a word processor. Typing the text is one thread;
automatically formatting the text is another thread; automatically flagging
spelling mistakes is another thread; and automatically saving the file to
disk is yet another thread.
• The diagram above shows the single-threaded process and the multi-threaded process.
A single-threaded process is a process with a single thread.
• A multi-threaded process is a process with multiple threads. As the diagram shows,
each of the threads has its own registers, stack, and program counter, but they share
the code and data segments.
Multicore Programming
• Types of parallelism
• Data parallelism – distributes subsets of the same data across multiple cores, same
operation on each
• Task parallelism – distributing threads across cores, each thread performing unique
operation
Data and Task Parallelism
Differences between process and thread
Types of Thread
In the operating system, there are two types of threads.
•Kernel level thread.
•User-level thread.
User-level thread :
•The operating system does not recognize the user-level thread.
• User threads can be easily implemented and it is implemented by the user.
•If a user performs a user-level thread blocking operation, the whole process is
blocked.
• The kernel knows nothing about user-level threads; it manages them as if they
were single-threaded processes. Examples: Java threads, POSIX threads, etc.
Kernel Level Thread
• There is a thread control block and process control
block in the system for each thread and process in the
kernel-level thread.
• The kernel-level thread is implemented by the operating
system.
• The kernel-level thread offers a system call to create
and manage the threads from user-space. The
implementation of kernel threads is more difficult than
the user thread.
• Context switch time is longer in the kernel thread.
• If a kernel thread performs a blocking operation, another thread's execution
can continue. Examples: Windows, Solaris.
Types of threads
(figure: user-level threads vs. kernel-level threads)
Advantages of User-level threads –
Can be implemented on an OS that doesn't support multithreading.
Simple representation, since a thread has only a program counter, register set, and stack space.
Simple to create, since no intervention of the kernel is required.
Thread switching is fast, since no OS calls need to be made.

Disadvantages of User-level threads –
No or less coordination between the threads and the kernel.
If one thread causes a page fault, the entire process blocks.

Advantages of Kernel-level threads –
Since the kernel has full knowledge about the threads in the system, the scheduler may
decide to give more time to processes having a large number of threads.
Good for applications that frequently block.

Disadvantages of Kernel-level threads –
Slow and inefficient compared to user-level threads.
Each thread requires a thread control block, which is an overhead.
Multi threading and its models
• As we know, each process has its own address space which
contains code, data, files, registers, stack and a single thread of control.
• The process with a single thread of control is a traditional or classical process.
• Its drawback is that if the thread of the process gets blocked, the whole process will get
blocked.
• So the solution is to divide the single thread of control into multiple threads.
• The process with multiple threads can execute several tasks at a time.
• The multiple threads of the same process share the code, data and files of the process
and each thread maintains its own register and stack. This is called multithreading and
the process with multiple threads is called multithreaded process.
The benefits of multithreaded programming can be broken down into four major
categories:
•Resource Sharing All the threads of a process share its resources such as memory, data, files
etc. A single application can have different threads within the same address space using
resource sharing.
•Responsiveness: Multithreading an interactive application may allow a program to continue
running even if part of it is blocked or is performing a lengthy operation, thereby increasing
responsiveness to the user.
•Scalability(Utilization of Multiprocessor Architecture) In a multiprocessor architecture,
each thread can run on a different processor in parallel using multithreading. This increases
concurrency of the system. This is in direct contrast to a single processor system, where only
one process or thread can run on a processor at a time.
•Economy It is more economical to use threads as they share the process resources.
Comparatively, it is more expensive and time-consuming to create processes as they require
more memory and resources. The overhead for process creation and management is much
higher than thread creation and management.
• Ultimately, a relationship must exist between user threads and kernel threads.
In this section, we look at three common ways of establishing such a
relationship: the many-to-one model, the one-to-one model, and the many-to-
many model.
• User programs can implement their own schedulers too. However, these aren't kernel
threads; a user thread can't actually run on its own, and for this it needs a
kernel-level thread.
• So, for a user-level thread to make progress, the user program's scheduler must
take the user-level thread and run it on a kernel-level thread, and the models below
are different mappings for achieving this.
• The different multi threading models are:
• i) Many-to-One Model
• ii) One-to-One Model
• iii) Many-to-Many Model
1. Many to One Multithreading Model
• Many to one multithreading model maps many user threads to a single kernel
thread. The thread library present at the user space is responsible for thread
management at the user level. The user threads do not require any kernel
support.
• In the many-to-one model, only a single user thread can access the kernel at a time,
so multiple threads can't run in parallel. In this scenario, if a thread makes a
blocking system call, the entire process can get blocked, since only one thread
has access to the kernel at a time.
• This model is implemented by the Green Threads library of Solaris Operating
system. This type of model is implemented by the operating systems that do
not create kernel threads.
2. One to One Multithreading Model

•One to one multithreading model maps one user thread to one kernel thread. So, for each
user thread, there is a corresponding kernel thread present in the kernel.
• In this model, thread management is done at the kernel level and the user threads are fully
supported by the kernel.
•Here, if one thread makes the blocking system call, the other threads still run. So, when
compared with many to one model, the one to one model provides more concurrency.
•The drawback of this model is that creating a user thread requires creating the
corresponding kernel thread.
•Implementing a kernel thread corresponding to each user thread increases the overhead of
the kernel, resulting in slow thread management.
•This model restricts the number of threads that can be supported by the system.
• The operating system, Linux and Windows family implement one to one
multithreading model.
• 3. Many to Many Multithreading Model
• Many to many multithreading models in operating system maps many user threads to a lesser
or equal number of kernel threads.
• This model does not suffer from any of the drawbacks of many to one model and one to one
model.
• The many-to-many model does not restrict the number of user threads created; the
developer can create as many threads as required. Nor does the process block when a
thread makes a blocking system call, because the kernel can schedule another thread
for execution.
Two-level Model
• Similar to M:M, except that it allows a user thread to be bound to kernel
thread
• Examples
• IRIX
• HP-UX
• Tru64 UNIX
• Solaris 8 and earlier
Thread Libraries
• Thread library provides programmer with API for creating and managing
threads
• Two primary ways of implementing
• Library entirely in user space
• Kernel-level library supported by the OS
There are 3 main thread libraries:
1. Pthreads
2. Windows threads
3. Java threads
PThreads
• May be provided either as user-level or kernel-level
• A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization
• Specification, not implementation
• The API specifies the behavior of the thread library; the implementation is up to the
developers of the library
• Common in UNIX operating systems (Solaris, Linux, Mac OS X)
Pthreads Examples
(The original slides here show figures: a Pthreads example program, Pthreads code for
joining 10 threads, and a Windows multithreaded C program. See the textbook.)
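In their place, here is a minimal sketch in the spirit of the textbook's Pthreads
example: main() creates one thread that sums the integers from 1 to an upper bound,
then joins it. Compile with -pthread.

    #include <stdio.h>
    #include <pthread.h>

    int sum;                              /* data shared with the thread */

    void *runner(void *param)             /* the new thread starts here */
    {
        int upper = *(int *)param;
        sum = 0;
        for (int i = 1; i <= upper; i++)
            sum += i;
        pthread_exit(0);
    }

    int main(void)
    {
        int upper = 10;
        pthread_t tid;                    /* thread identifier */

        pthread_create(&tid, NULL, runner, &upper);  /* create the thread */
        pthread_join(tid, NULL);                     /* wait for it to exit */
        printf("sum = %d\n", sum);
        return 0;
    }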
Java Threads
• Java threads are managed by the JVM
• Typically implemented using the threads model provided by underlying OS
• Java threads may be created by:

• Extending Thread class


• Implementing the Runnable interface
Java Multithreaded Program
(The original slides here show a Java multithreaded program as a figure; see the textbook.)
Threading Issues :
• The main purpose of multithreading is to provide simultaneous execution of
two or more parts of a program, to maximize utilization of CPU time.
A multithreaded program contains two or more parts that can run
concurrently. Each such part of a program is called a thread.
• we discuss some of the issues to consider in designing multithreaded programs.
• Following threading issues are:
• The fork() and exec() system call
• Signal handling
• Thread cancelation
• Thread local storage
• Scheduler activation
The fork() and exec() system call
•Recall how the fork() system call is used to create a separate, duplicate (child)
process. The semantics of the fork() and exec() system calls change in a
multithreaded program.

•(Q) If one thread in a program calls fork(), does the new process duplicate
all threads, or is the new process single-threaded?
•(A) Some UNIX systems have chosen to have two versions of fork(): one that
duplicates all threads and another that duplicates only the thread that invoked the
fork() system call.
• The exec system call is used to execute a file which is residing in an active
process. When exec is called the previous executable file is replaced and new
file is executed. More precisely, we can say that using exec system call will
replace the old file or program from the process with a new file or program.
• The exec() system call typically works in the same way as before. That is, if a thread
invokes the exec() system call, the program specified in the parameter to exec()
will replace the entire process, including all threads.
• Signal Handling( interrupts might be signals sent explicitly by another process, or might
represent external actions such as a user typing a Control-C.)
• A signal is used in UNIX systems to notify a process that a particular event has
occurred. A signal may be received either synchronously or asynchronously,
depending on the source of and the reason for the event being signaled. All
signals, whether synchronous or asynchronous, follow the same pattern:
• 1. A signal is generated by the occurrence of a particular event.
• 2. The signal is delivered to a process.
• 3. Once delivered, the signal must be handled.
• Examples of synchronous signal include illegal memory access and division
by 0. If a running program performs either of these actions, a signal is generated.
Synchronous signals are delivered to the same process that performed the
operation that caused the signal (that is the reason they are considered
synchronous).
• Examples of asynchronous signals include terminating a process with specific
keystrokes (such as <control><C>) and having a timer expire. Typically, an
asynchronous signal is sent to another process. When a signal is generated by
an event external to a running process, that process receives the signal
asynchronously.
• A signal may be handled by one of two possible handlers:
• 1. A default signal handler
• 2. A user-defined signal handler
• Every signal has a default signal handler that the kernel runs when handling
that signal. This default action can be overridden by a user-defined signal
handler that is called to handle the signal. Signals are handled in different
ways. Some signals (such as changing the size of a window) are simply
ignored; others (such as an illegal memory access) are handled by terminating
the program.
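• As a minimal sketch of overriding a default handler, the following C program installs
a user-defined handler for SIGINT (the signal sent by Control-C); write() is used inside
the handler because it is async-signal-safe, unlike printf().

    #include <stdio.h>
    #include <signal.h>
    #include <unistd.h>

    /* user-defined handler that overrides the default action for SIGINT */
    void handle_sigint(int sig)
    {
        write(STDOUT_FILENO, "caught SIGINT\n", 14);
    }

    int main(void)
    {
        signal(SIGINT, handle_sigint);   /* install the user-defined handler */
        printf("press Control-C...\n");
        pause();                         /* block until a signal is delivered */
        return 0;
    }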
Thread Cancellation
•Threads that are no longer required can be cancelled by another thread using one of
two techniques:
•Asynchronous cancellation
•Deferred cancellation
Asynchronous Cancellation
•One thread immediately terminates the target thread (a thread that is to be
cancelled is often referred to as the target thread). Reclaiming allocated resources
and handling in-flight inter-thread data transfers can be challenging with
asynchronous cancellation.
Deferred Cancellation
•In this method, a flag is set indicating that the thread should cancel itself when
it is feasible. It is up to the target thread to check this flag periodically and
exit cleanly when it sees the flag set.
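•A minimal Pthreads sketch of deferred cancellation (the default mode): pthread_cancel()
only flags the target thread, which honours the request at its next cancellation point,
here an explicit pthread_testcancel().

    #include <stdio.h>
    #include <unistd.h>
    #include <pthread.h>

    void *worker(void *arg)
    {
        for (;;) {
            /* ... perform one unit of work ... */
            sleep(1);
            pthread_testcancel();   /* cancellation point: exit here if flagged */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
        sleep(2);
        pthread_cancel(tid);        /* flag the target thread for cancellation */
        pthread_join(tid, NULL);    /* wait for it to exit cleanly */
        puts("worker cancelled");
        return 0;
    }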
Thread Local Storage :
•Threads belonging to a process share the data of the process. Indeed, this data
sharing provides one of the benefits of multithreaded programming. However, in
some circumstances, each thread might need its own copy of certain data. We will
call such data thread-local storage (or TLS).
•It is easy to confuse TLS with local variables. However, local variables are
visible only during a single function invocation, whereas TLS data are visible
across function invocations.
• In some ways, TLS is similar to static data. The difference is that TLS data are
unique to each thread. Most thread libraries—including Windows and Pthreads—
provide some form of support for thread-local storage; Java provides support as
well.
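•A minimal sketch, assuming a C11 compiler: the _Thread_local qualifier gives each
thread its own copy of counter, so each thread's increment is invisible to the other.

    #include <stdio.h>
    #include <pthread.h>

    _Thread_local int counter = 0;   /* one independent copy per thread */

    void *work(void *arg)
    {
        counter++;                   /* touches only this thread's copy */
        printf("counter = %d\n", counter);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, work, NULL);
        pthread_create(&t2, NULL, work, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;                    /* both threads print counter = 1 */
    }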
Scheduler Activation
•A final issue to be considered with multithreaded programs concerns
communication between the kernel and the thread library, which may be
required by the many-to-many and two-level models.
•Many systems implementing either the many-to-many or the two-level model
place an intermediate data structure between the user and kernel threads. This
data structure—typically known as a lightweight process, or LWP—is shown in
Figure 4.13.
• To the user-thread library, the LWP appears to be a virtual processor on which
the application can schedule a user thread to run. Each LWP is attached to a
kernel thread, and it is kernel threads that the operating system schedules to run
on physical processors. If a kernel thread blocks (such as while waiting for an
I/O operation to complete), the LWP blocks as well. Up the chain, the user-level
thread attached to the LWP also blocks.
• An application may require any number of LWPs to run efficiently.
• One scheme for communication between the user-thread library and the kernel is
known as scheduler activation. It works as follows: The kernel provides an
application with a set of virtual processors (LWPs), and the application can
schedule user threads onto an available virtual processor.
• Furthermore, the kernel must inform an application about certain events. This
procedure is known as an upcall. Upcalls are handled by the thread library with
an upcall handler, and upcall handlers must run on a virtual processor.
CPU Scheduling
Basic Concepts
•In a single-processor system, only one process can run at a time. Others must wait until the
CPU is free and can be rescheduled.
• The objective of multiprogramming is to have some process running at all times, to
maximize CPU utilization. The idea is relatively simple.
•A process is executed until it must wait, typically for the completion of some I/O request.
•In a simple computer system, the CPU then just sits idle. All this waiting time is wasted; no
useful work is accomplished. With multiprogramming, we try to use this time productively.
Several processes are kept in memory at one time.
•When one process has to wait, the operating system takes the CPU away from that
process and gives the CPU to another process. This pattern continues: every time one
process has to wait, another process can take over use of the CPU.
• Scheduling of this kind is a fundamental operating-system function. Almost all
computer resources are scheduled before use.
• The CPU is, of course, one of the primary computer resources. Thus, its
scheduling is central to operating-system design.
• The basic concept of scheduling includes the followings:
• i) CPU–I/O Burst Cycle
• ii) CPU Scheduler
• iii) Preemptive Scheduling
• iv)Dispatcher
i) CPU–I/O Burst Cycle:

The success of CPU scheduling depends on an observed property of processes:
• Process execution consists of a cycle of CPU execution and I/O wait.
• Processes alternate between these two states.
• Process execution begins with a CPU burst. That is followed by an I/O burst, which is
followed by another CPU burst, then another I/O burst, and so on.
• Eventually, the final CPU burst ends with a system request to terminate execution.
• The durations of CPU bursts have been measured extensively; they vary greatly
from process to process and from computer to computer.
• CPU Burst: This is the time that a process spends on the cpu executing some
code. CPU Burst deals with the RUNNING state of the process.

• I/O Burst: This is the time that a process spends waiting for the completion
of an I/O request.
• I/O burst deals with the WAITING state of the process.
ii) CPU Scheduler
•Whenever the CPU becomes idle, the operating system must select one of the
processes in the ready queue to be executed. The selection process is carried out
by the short-term scheduler, or CPU scheduler.
•The scheduler selects a process from the processes in memory that are ready to
execute and allocates the CPU to that process.
•Note that the ready queue is not necessarily a first-in, first-out (FIFO) queue.
•When we consider the various scheduling algorithms, a ready queue can be
implemented as a FIFO queue, a priority queue, a tree, or simply an unordered
linked list.
iii) Preemptive and non preemptive scheduling :
•CPU-scheduling decisions may take place under the following four
circumstances:

1. When a process switches from the running state to the waiting state (for example, as the
result of an I/O request or an invocation of wait() for the termination of a child process)
2. When a process switches from the running state to the ready state (for example, when an
interrupt occurs)
3. When a process switches from the waiting state to the ready state (for example, at
completion of I/O)
4. When a process terminates
• Scheduling under 1 and 4 is non-preemptive (once the CPU has been allocated
to a process, the process keeps the CPU until it releases the CPU, either by
terminating or by switching to the waiting state).
• All other are pre-emptive scheduling(Preemptive Scheduling is a scheduling
method where the tasks are mostly assigned with their priorities. Sometimes it is
important to run a task with a higher priority before another lower priority task,
even if the lower priority task is still running. At that time, the lower priority
task holds for some time and resumes when the higher priority task finishes its
execution.)
• Examples of preemptive algorithms: RR, SRTF (the preemptive mode of SJF), Priority.
• Examples of non-preemptive algorithms: FCFS, SJF, Priority.
Differences between preemptive and non-preemptive scheduling

1. In preemptive scheduling, a process is allocated the CPU for a limited period. In
non-preemptive scheduling, a process is allocated the CPU and holds it until it
completes its execution or changes its state from running to waiting.
2. In preemptive scheduling, we can interrupt a process in the middle of its execution.
In non-preemptive scheduling, we cannot interrupt a process until it completes its
execution or switches its state.
3. Preemptive scheduling is flexible, as each process in the ready queue gets some time
to run on the CPU. Non-preemptive scheduling is rigid.
4. Preemptive scheduling has the overhead of scheduling the processes; non-preemptive
scheduling has no such overhead.
5. There is a cost associated with preemptive scheduling; there is no such cost with
non-preemptive scheduling.
6. In preemptive scheduling, if processes with high priority keep arriving in the ready
queue, a low-priority process may starve. In non-preemptive scheduling, if a process
with a long burst time is executing, processes with shorter burst times may starve.
7. Examples: preemptive – RR, SRTF (SJF), Priority; non-preemptive – FCFS, SJF.
iv) Dispatcher
•Another component involved in the CPU-scheduling function is the dispatcher.
The dispatcher is the module that gives control of the CPU to the process selected
by the short-term scheduler. This function involves the following:
• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to restart that program

•The dispatcher should be as fast as possible, since it is invoked during every
process switch. The time it takes for the dispatcher to stop one process and start
another running is known as the dispatch latency.
v) Scheduling Criteria
•Many criteria have been suggested for comparing CPU-scheduling
algorithms. Which characteristics are used for comparison can make a substantial
difference in which algorithm is judged to be best. The criteria include the
following:
• CPU utilization
• Throughput.
• Turnaround time
• Waiting time
• Response time.
• CPU utilization. We want to keep the CPU as busy as possible.
Conceptually,CPU utilization can range from 0 to 100 percent. In a real system, it
should range from 40 percent (for a lightly loaded system) to 90 percent (for a
heavily loaded system).
• Throughput. If the CPU is busy executing processes, then work is being done.
One measure of work is the number of processes that are completed per time
unit, called throughput. For long processes, this rate may be one process per
hour; for short transactions, it may be ten processes per second.
• Turnaround time. From the point of view of a particular process, the important
criterion is how long it takes to execute that process. The interval from the time
of submission of a process to the time of completion is the turnaround time.
Turnaround time is the sum of the periods spent waiting to get into memory,
waiting in the ready queue, executing on the CPU, and doing I/O.
• Turnaround time = Burst time + Waiting time
• Turnaround time = Exit time - Arrival time
• Waiting time. The CPU-scheduling algorithm does not affect the amount of time
during which a process executes or does I/O. It affects only the amount of time
that a process spends waiting in the ready queue. Waiting time is the sum of the
periods spent waiting in the ready queue.
• Waiting time = Turnaround time - Burst time
• Response time: In an interactive system, a process can produce some output
early and continue computing new results while previous results are being
displayed to the user. Thus, another measure is the time from the submission of a
request until the first response is produced, called the response time. (The
turnaround time, by contrast, is generally limited by the speed of the output device.)
• Response time = Time at which the process gets the CPU for the first time - Arrival time
• Arrival time: Arrival time is the time when a process enters into the ready state
and is ready for its execution.
• Burst time: the total time taken by the process for its execution on the CPU.
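• A quick worked example with made-up numbers: suppose a process arrives at time 0,
first gets the CPU at time 4, and exits at time 12 after a total CPU burst of 5. Then
turnaround time = 12 - 0 = 12, waiting time = 12 - 5 = 7, and response time = 4 - 0 = 4.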
Scheduling Algorithms
• CPU scheduling deals with the problem of deciding which of the processes in
the ready queue is to be allocated the CPU. There are many different CPU-
scheduling algorithms.
• First-Come, First-Served Scheduling(FCFS)
• Shortest-Job-First Scheduling(SJF)
• Priority Scheduling(Priority)
• Round-Robin Scheduling(RR)
• Multilevel Queue Scheduling
• Multilevel Feedback Queue Scheduling
First-Come, First-Served Scheduling(FCFS)
• In the "First come first serve" scheduling algorithm, as the name suggests, the
process which arrives first, gets executed first, or we can say that the process
which requests the CPU first, gets the CPU allocated first.
• First Come First Serve, is just like FIFO(First in First out) Queue data structure,
where the data element which is added to the queue first, is the one who leaves
the queue first.
• This is used in Batch Systems.
• It's easy to understand and implement programmatically, using a Queue data
structure, where a new process enters through the tail of the queue, and the
scheduler selects process from the head of the queue.
• A perfect real-life example of FCFS scheduling is buying tickets at a ticket
counter.
Problems with FCFS Scheduling
•Below we have a few shortcomings or problems with the FCFS scheduling
algorithm:
• It is a non-preemptive algorithm, which means process priority doesn't matter.
Suppose a process with very low priority is being executed, such as a long-running
daily routine backup process, and all of a sudden some high-priority process arrives,
like an interrupt needed to avoid a system crash. The high-priority process will have
to wait, and hence, in this case, the system may crash just because of improper
process scheduling.
• Not optimal Average Waiting Time.
• Resources utilization in parallel is not possible, which leads to Convoy Effect, and hence
poor resource(CPU, I/O etc) utilization.
• Convoy Effect: a situation where many processes that need a resource for a short
time are blocked by one process holding that resource for a long time. This essentially
leads to poor utilization of resources and hence poor performance.
• Example:

  Process   Burst time
  P1        24
  P2        3
  P3        3

• Abbreviations: P = process, A.T = arrival time, B.T = burst time, C.T = completion time,
TAT = turnaround time (C.T - A.T), W.T = waiting time (TAT - B.T).
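• Worked solution for the example above (assuming all three processes arrive at time 0
in the order P1, P2, P3): the Gantt order is P1 (0-24), P2 (24-27), P3 (27-30). The
waiting times are 0, 24, and 27, so the average waiting time is (0 + 24 + 27)/3 = 17.
Had the arrival order been P2, P3, P1, the average waiting time would drop to
(0 + 3 + 6)/3 = 3, which illustrates how FCFS penalizes short jobs stuck behind a
long one (the convoy effect).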
• EXAMPLE:

  Exercise 1:
  Process   Arrival time   Burst time (execution time)
  P1        2              2
  P2        0              1
  P3        2              3
  P4        3              5
  P5        4              4

  Exercise 2:
  Process   Arrival time   Burst time
  P0        0              2
  P1        1              6
  P2        2              4
  P3        3              9
  P4        4              12
SHORTEST JOB FIRST (SJF):
•Shortest Job First can be a preemptive or a non-preemptive algorithm. In the
shortest-job-first algorithm, the job having the shortest burst time gets the CPU first.
It is the best approach to minimize waiting time. It is simple to implement in
batch operating systems, where CPU time is known in advance, but it is
not used in interactive systems, because there the CPU time is not
known.
•Characteristics of Shortest Job First Scheduling
• SJF algorithm is helpful in batch operating where the waiting time for job completion is
not critical.
• SJF improves the throughput of the process by ensuring that the shorter jobs are
executed first, thus the possibility of less turnaround time.
• SJF enhances the output of the job by executing the process, which is having the shortest
burst time.
• Advantages of Shortest Job First (SJF) Scheduling
• The advantages of Shortest Job First scheduling are:
• SJF is basically used for Long Term Scheduling.
• The average waiting time of Shortest Job First (SJF) is less than the FCFS (First-Come, First
Serve) algorithm.
• For a particular set of processes, SJF provides the lowest average waiting time.
• In terms of the average turnaround time, it is optimal.
• Disadvantages of Shortest Job First (SJF) Scheduling
• In SJF, process completion time needs to be known in advance, although prediction
is difficult.
• Sometimes the problem of starvation occurs in SJF.
• SJF needs to know how long a process will run, and it is not easy to know the length
of the upcoming CPU request.
• In SJF, it is necessary to record elapsed time, resulting in more overhead on the
processor.
Types of Shortest Job First (SJF) Scheduling
•There are two types of Shortest Job First Scheduling.
• Non-Preemptive SJF
• Preemptive SJF(Shortest Remaining Time First(SRTF))

Non-Preemptive SJF: In non-preemptive scheduling, once the CPU is allocated to a
process, the process holds the CPU until it enters the waiting state or terminates.

Preemptive SJF Scheduling: In this, jobs are moved into the ready queue as they
arrive. The process with the least burst time begins execution first. When a
process with a shorter burst time arrives, the current process stops execution, and the
process having the shorter burst time is allocated the CPU.

•If a question does not specify preemptive or non-preemptive SJF, by
default we follow non-preemptive SJF.
Example (non-preemptive SJF)

Example A:
Process   Burst time
P1        6
P2        8
P3        7
P4        3

Example B:
Process   Arrival time   Burst time
P1        2              1
P2        1              5
P3        4              1
P4        0              6
P5        2              3
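The following is a minimal sketch, using the data of Example B above (the function name sjf_nonpreemptive is our own): at every decision point it picks, among the processes that have already arrived, the one with the smallest burst time and runs it to completion.

    # Non-preemptive SJF: at each decision point pick the arrived process
    # with the smallest burst time and run it to completion.
    def sjf_nonpreemptive(processes):
        """processes: list of (name, arrival_time, burst_time)."""
        remaining = sorted(processes, key=lambda p: p[1])  # by arrival
        time, results = 0, []
        while remaining:
            ready = [p for p in remaining if p[1] <= time]
            if not ready:                     # CPU idle until the next arrival
                time = min(p[1] for p in remaining)
                continue
            name, at, bt = min(ready, key=lambda p: p[2])  # shortest burst first
            time += bt
            results.append((name, time, time - at, time - at - bt))  # CT, TAT, WT
            remaining.remove((name, at, bt))
        return results

    # Example B above: P1(2,1), P2(1,5), P3(4,1), P4(0,6), P5(2,3)
    print(sjf_nonpreemptive([("P1", 2, 1), ("P2", 1, 5), ("P3", 4, 1), ("P4", 0, 6), ("P5", 2, 3)]))

For Example B this gives the execution order P4, P1, P3, P5, P2 and an average waiting time of (0 + 4 + 3 + 6 + 10) / 5 = 4.6.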
Example (preemptive SJF)

Example A:
Process   Arrival time   Burst time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

Example B:
Process   Arrival time   Burst time
P1        2              1
P2        1              5
P3        4              1
P4        0              6
P5        2              3
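For the preemptive variant (SRTF), a unit-time simulation is the simplest sketch (again our own illustration, using Example A above): at every time unit the scheduler re-checks which arrived process has the least remaining time.

    # Preemptive SJF (SRTF): at every time unit, the arrived process with
    # the least remaining burst time runs next.
    def srtf(processes):
        """processes: list of (name, arrival_time, burst_time)."""
        arrival = {n: at for n, at, bt in processes}
        burst = {n: bt for n, at, bt in processes}
        remaining = dict(burst)
        time, done, results = 0, 0, {}
        while done < len(processes):
            ready = [n for n in remaining if arrival[n] <= time and remaining[n] > 0]
            if not ready:
                time += 1                      # CPU idle, advance the clock
                continue
            name = min(ready, key=lambda n: remaining[n])
            remaining[name] -= 1               # run one time unit, then re-check
            time += 1
            if remaining[name] == 0:
                done += 1
                tat = time - arrival[name]
                results[name] = (time, tat, tat - burst[name])  # CT, TAT, WT
        return results

    # Example A above: P1(0,8), P2(1,4), P3(2,9), P4(3,5)
    print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]))

For Example A this yields waiting times P1 = 9, P2 = 0, P3 = 15, P4 = 2, i.e. an average of 6.5.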
Priority Scheduling
• A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order. An SJF algorithm is simply a priority algorithm in which the priority (p) is the inverse of the (predicted) next CPU burst: the larger the CPU burst, the lower the priority, and vice versa.
• Some systems use low numbers to represent low priority; others use low numbers for high priority.
• Priorities can be defined either internally or externally.
• Internally defined priorities use some measurable quantity or quantities to
compute the priority of a process. For example, time limits, memory
requirements, the number of open files, and the ratio of average I/O burst to
average CPU burst have been used in computing priorities.
• External priorities are set by criteria outside the operating system, such as the
importance of the process, the type and amount of funds being paid for
computer use, the department sponsoring the work, and other, often political,
factors.
• Priority scheduling can be either preemptive or nonpreemptive. When a process
arrives at the ready queue, its priority is compared with the priority of the
currently running process.
• A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
• A nonpreemptive priority scheduling algorithm will simply put the new process
at the head of the ready queue.
• A major problem with priority scheduling algorithms is indefinite blocking, or starvation.
• A solution to the problem of indefinite blockage of low-priority processes is aging. Aging involves gradually increasing the priority of processes that wait in the system for a long time.
• Example: priority non-preemptive

Example A:
Process   Priority   Burst time
P1        3          10
P2        1          1
P3        4          2
P4        5          1
P5        2          5

Example B:
Process   Priority   Arrival time   Burst time
P1        3          0              8
P2        4          1              2
P3        4          3              4
P4        5          4              1
P5        2          5              6
P6        6          6              5
P7        1          10             1
• Example: preemptive priority

Example A:
Process   Arrival time   Burst time   Priority
P1        0              4            3
P2        1              2            2
P3        2              3            4
P4        4              2            1

Example B:
Process   Priority   Arrival time   Burst time
P1        3          0              8
P2        4          1              2
P3        4          3              4
P4        5          4              1
P5        2          5              6
P6        6          6              5
P7        1          10             1
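A hedged sketch of non-preemptive priority scheduling follows (the function name is our own). It uses Example A of the non-preemptive case above and, since that table lists no arrival times, assumes every process arrives at time 0; it also assumes the common convention that a lower number means higher priority, which, as noted earlier, varies between systems.

    # Non-preemptive priority scheduling; a LOWER number is assumed to
    # mean HIGHER priority (the slides note that conventions differ).
    def priority_nonpreemptive(processes):
        """processes: list of (name, priority, arrival_time, burst_time)."""
        remaining = list(processes)
        time, order = 0, []
        while remaining:
            ready = [p for p in remaining if p[2] <= time]
            if not ready:                          # CPU idle until next arrival
                time = min(p[2] for p in remaining)
                continue
            job = min(ready, key=lambda p: p[1])   # highest priority first
            name, prio, at, bt = job
            time += bt
            order.append((name, time, time - at, time - at - bt))  # CT, TAT, WT
            remaining.remove(job)
        return order

    # Example A above, all arrivals assumed 0:
    print(priority_nonpreemptive([("P1", 3, 0, 10), ("P2", 1, 0, 1), ("P3", 4, 0, 2),
                                  ("P4", 5, 0, 1), ("P5", 2, 0, 5)]))

Under these assumptions the execution order is P2, P5, P1, P3, P4, with an average waiting time of (0 + 1 + 6 + 16 + 18) / 5 = 8.2.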
Round Robin Scheduling Algorithm
• The Round Robin scheduling algorithm is one of the most popular scheduling algorithms and is actually implemented in most operating systems.
• It is the preemptive version of First Come First Serve scheduling.
• The algorithm focuses on time sharing.
• In this algorithm, every process is executed in a cyclic way. A certain time slice, called the time quantum, is defined in the system.
• Each process in the ready queue is assigned the CPU for that time quantum. If the process completes execution within that time, it terminates; otherwise it goes back to the ready queue and waits for its next turn to finish its execution.
Advantages
• It is practically implementable because it does not depend on knowing burst times in advance.
• It does not suffer from starvation or the convoy effect.
• All jobs get a fair allocation of CPU time.
Disadvantages
• The higher the time quantum, the higher the response time in the system.
• The lower the time quantum, the higher the context-switching overhead in the system.
• Deciding on a good time quantum is a genuinely difficult task.
Example (preemptive RR): calculate the average turnaround time (ATAT) and average waiting time (AWT) for the following using RR.

Example A:
Process   Burst time
P1        24
P2        3
P3        3

Example B:
Process   Arrival time   Burst time
P1        0              5
P2        1              3
P3        2              1
P4        3              2
P5        4              3
Calculate the average turnaround time and average waiting time using the RR algorithm with time quantum (TQ) = 3.

Process   Arrival time   Burst time
P1        0              8
P2        5              2
P3        1              7
P4        6              3
P5        8              5
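Below is a minimal Round Robin simulation in the same spirit (names and structure are our own). One convention to note: when a process's quantum expires at the same instant another process arrives, this sketch places the new arrival in the queue ahead of the preempted process, which is the usual textbook assumption.

    from collections import deque

    # Round Robin simulation: each process runs for at most one time
    # quantum per turn, then rejoins the tail of the ready queue.
    def round_robin(processes, quantum):
        """processes: list of (name, arrival_time, burst_time)."""
        arrival = {n: at for n, at, bt in processes}
        burst = {n: bt for n, at, bt in processes}
        remaining = dict(burst)
        pending = deque(sorted(processes, key=lambda p: p[1]))  # not yet arrived
        ready, time, results = deque(), 0, {}

        def admit():
            while pending and pending[0][1] <= time:
                ready.append(pending.popleft()[0])

        admit()
        while ready or pending:
            if not ready:              # CPU idle until the next arrival
                time = pending[0][1]
                admit()
                continue
            name = ready.popleft()
            run = min(quantum, remaining[name])
            time += run
            remaining[name] -= run
            admit()                    # arrivals during the slice queue up first
            if remaining[name] > 0:
                ready.append(name)     # preempted: back to the tail
            else:
                tat = time - arrival[name]
                results[name] = (time, tat, tat - burst[name])  # CT, TAT, WT
        return results

    # TQ = 3 exercise above:
    print(round_robin([("P1", 0, 8), ("P2", 5, 2), ("P3", 1, 7), ("P4", 6, 3), ("P5", 8, 5)], 3))

For the TQ = 3 exercise this reports an average waiting time of 10 and an average turnaround time of 15 (under the stated tie-breaking convention).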
Multilevel Queue Scheduling
•Another class of scheduling algorithms has been created for situations in which
processes are easily classified into different groups.
•For example: A common division is made between foreground(or interactive)
processes and background (or batch) processes. These two types of processes
have different response-time requirements, and so might have different
scheduling needs. In addition, foreground processes may have priority over
background processes.
• A multi-level queue scheduling algorithm partitions the ready queue into several
separate queues. The processes are permanently assigned to one queue, generally
based on some property of the process, such as memory size, process priority, or
process type. Each queue has its own scheduling algorithm.
• For example: separate queues might be used for foreground and background
processes. The foreground queue might be scheduled by Round Robin
algorithm, while the background queue is scheduled by an FCFS algorithm.
• In addition, there must be scheduling among the queues, which is commonly
implemented as fixed-priority preemptive scheduling. For example: The
foreground queue may have absolute priority over the background queue.
• Let us consider an example of a multilevel queue-scheduling algorithm with five
queues:
• System Processes
• Interactive Processes
• Interactive Editing Processes
• Batch Processes
• Student Processes
• Each queue has absolute priority over lower-priority queues. No process in the
batch queue, for example, could run unless the queues for system processes,
interactive processes, and interactive editing processes were all empty. If an
interactive editing process entered the ready queue while a batch process was
running, the batch process would be preempted.
Multilevel Feedback-Queue Scheduling
•Normally, when the multilevel queue scheduling algorithm is used, processes are
permanently assigned to a queue when they enter the system. If there are separate
queues for foreground and background processes, processes do not move from
one queue to the other, since processes do not change their foreground or
background nature.
•This setup has the advantage of low scheduling overhead, but it is inflexible.
The multilevel feedback-queue scheduling algorithm, in contrast, allows a
process to move between queues.
• The idea is to separate processes according to the characteristics of their CPU
bursts.
• If a process uses too much CPU time, it will be moved to a lower-priority queue.
• This scheme leaves I/O-bound and interactive processes in the higher-priority queues.
• In addition, a process that waits too long in a lower-priority queue may be moved to a
higher-priority queue.
• This form of aging prevents starvation (see Fig. 6.7).
• A process entering the ready queue is put in queue 0. A process in queue 0 is given a time quantum of 8 milliseconds.
• If it does not finish within this time, it is moved to the tail of queue 1.
• If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16
milliseconds.
• If it does not complete, it is preempted and is put into queue 2.
• Processes in queue 2 are run on an FCFS basis but are run only when queues 0 and 1 are
empty.
• In general, a multilevel feedback-queue scheduler is defined by the following
parameters:
• The number of queues.
• The scheduling algorithm for each queue.
• The method used to determine when to upgrade a process to a higher-priority queue.
• The method used to determine when to demote a process to a lower-priority queue.
• The method used to determine which queue a process will enter when that process needs
service.
• The definition of a multilevel feedback-queue scheduler makes it the most
general CPU-scheduling algorithm. Unfortunately, it is also the most complex
algorithm.
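As a rough sketch of the three-queue example described above (quantum 8, then quantum 16, then FCFS), the code below demotes any process that exhausts its quantum. It deliberately simplifies the real scheme: all processes are assumed present at time 0, and it omits both preemption of a lower queue when a higher queue becomes non-empty and the aging/promotion path. Process names and burst times are invented for illustration.

    from collections import deque

    # Simplified three-queue feedback scheduler: queue 0 has quantum 8,
    # queue 1 has quantum 16, queue 2 is FCFS. A process that uses up
    # its quantum is demoted to the next queue.
    def mlfq(processes):
        """processes: list of (name, burst_time); all assumed to arrive at 0."""
        quanta = [8, 16, None]                  # None = run to completion (FCFS)
        queues = [deque(processes), deque(), deque()]
        time, finished = 0, []
        while any(queues):
            level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
            name, left = queues[level].popleft()
            run = left if quanta[level] is None else min(quanta[level], left)
            time += run
            left -= run
            if left == 0:
                finished.append((name, time))                   # completion time
            else:
                queues[level + 1].append((name, left))          # demote one level
        return finished

    # Invented workload: a long CPU-bound job, a short job, and a medium job.
    print(mlfq([("P1", 30), ("P2", 6), ("P3", 20)]))

Note how the short job (P2) finishes within its first 8-millisecond quantum, while the long jobs drift down toward the FCFS queue, which is exactly the behavior the feedback scheme is designed to produce.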
Readers-Writers Problem
Monitors
• A monitor is a module or package that encapsulates a shared data structure, the procedures that operate on it, and the synchronization between concurrent invocations of those procedures.
• Characteristics of Monitors:
• Only one process can be active (executing) inside the monitor at a time.
• Monitors are a group of procedures and condition variables that are merged together in a special type of module.
• If a process is running outside the monitor, it cannot access the monitor's internal variables, but it can call the monitor's procedures.
• Monitors offer a high level of synchronization.
• Monitors were devised to simplify the complexity of synchronization problems.
Monitor: Account example
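The slide's account example is not reproduced here, so the following is our own minimal sketch of a monitor-style account in Python: a single lock serializes entry into the "monitor", and a condition variable lets withdraw() block until a deposit makes the balance sufficient. Class and method names are illustrative.

    import threading

    # A monitor-style bank account: one lock serializes entry into the
    # monitor, and a condition variable lets withdraw() wait for funds.
    class Account:
        def __init__(self, balance=0):
            self._balance = balance
            self._lock = threading.Lock()                  # one thread inside at a time
            self._funds = threading.Condition(self._lock)  # the monitor's condition variable

        def deposit(self, amount):
            with self._lock:                               # enter the monitor
                self._balance += amount
                self._funds.notify_all()                   # wake any waiting withdrawers

        def withdraw(self, amount):
            with self._lock:                               # enter the monitor
                while self._balance < amount:              # re-check after each wake-up
                    self._funds.wait()                     # releases the lock while waiting
                self._balance -= amount
                return self._balance

The while loop around wait() re-checks the condition after every wake-up, which matches the monitor discipline described above: only one process is active inside at a time, and waiting processes are parked on a condition variable.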