Process Management: BY N.Madhuri
UNIT-II:
• Process Management – Process concept, The process, Process State Diagram, Process control block, Process Scheduling – Scheduling Queues, Schedulers, Operations on Processes, Interprocess Communication, Threading Issues, Scheduling – Basic Concepts, Scheduling Criteria, Scheduling Algorithms.
Process Management
Process Concept:
• A process is a program in execution. A process is the unit of work in a modern time-sharing system.
• A batch system executes jobs, whereas a time-shared system has user programs, or
tasks.
• Even on a single-user system, a user may be able to run several programs at one time:
a word processor, a Web browser, and an e-mail package. And even if a user can
execute only one program at a time, such as on an embedded device that does not
support multitasking, the operating system may need to support its own internal
programmed activities, such as memory management. In many respects, all these
activities are similar, so we call all of them processes.
• The process concept can be discussed under four headings:
• i) The process ii) Process states
• iii) Process control block (PCB) iv) Threads
i) The process:
• A process is a program in execution. A process is more than the program code, which is sometimes known as the text section.
• It also includes the current activity, as represented by the value of the program counter and
the contents of the processor’s registers.
• A process generally also includes the process stack, which contains temporary data (such as
function parameters, return addresses, and local variables), and a data section, which
contains global variables.
• A process may also include a heap, which is memory that is dynamically allocated during
process run time. The structure of a process in memory is shown
A program by itself is not a process. A program is a
passive entity, such as a file containing a list of
instructions stored on disk (often called an executable
file).
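The sections listed above (text, data, stack, heap) can be illustrated with a minimal C sketch; the names and values here are purely illustrative, and the exact layout in memory is OS- and compiler-specific.

```c
/* Sketch: where a process's pieces live (illustrative; layout is OS-specific). */
#include <stdlib.h>

int global_counter = 42;              /* data section: global variables       */

int square(int x) { return x * x; }   /* text section: executable code        */

int demo(void) {
    int local = square(3);            /* stack: locals, parameters, returns   */
    int *dyn = malloc(sizeof(int));   /* heap: memory allocated at run time   */
    *dyn = local + global_counter;    /* 9 + 42 = 51                          */
    int result = *dyn;
    free(dyn);                        /* heap memory is explicitly released   */
    return result;
}
```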
ii) Process states:
As a process executes, it changes state. Each process may be in one of the following states:
• New. The process is being created.
• Running. Instructions are being executed.
• Waiting. The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
• Ready. The process is waiting to be assigned to a processor.
• Terminated. The process has finished execution.
iii) Process control block (PCB):
Each process is represented in the operating system by a process control block (PCB), which contains the following information:
Process state : The state may be new, ready, running, waiting, or terminated.
Program counter : The counter indicates the address of the next instruction to be executed for this process.
CPU registers : The registers vary in number and type, depending on the computer architecture. They include accumulators (used by the ALU), index registers (used to modify operand addresses at run time), stack pointers, and general-purpose registers (which hold temporary data during operations), plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
CPU-scheduling information. This information includes a process priority,pointers to
scheduling queues, and any other scheduling parameters.
Memory-management information. This information may include such items as the value
of the base and limit registers and the page tables, or the segment tables, depending on the
memory system used by the operating system.
Accounting information. This information includes the amount of CPU and real time used,
time limits, account numbers, job or process numbers,and so on.
I/O status information. This information includes the list of I/O devices
allocated to the process, a list of open files, and so on.
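The PCB fields above can be collected into a C struct as a sketch; the field names and sizes are assumptions for illustration (a real kernel structure, such as Linux's task_struct, holds far more).

```c
/* Sketch of a process control block; field names are illustrative. */
#include <stddef.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;              /* process identifier               */
    enum proc_state state;            /* current process state            */
    unsigned long   program_counter;  /* address of next instruction      */
    unsigned long   registers[16];    /* saved CPU registers              */
    int             priority;         /* CPU-scheduling information       */
    void           *page_table;       /* memory-management information    */
    unsigned long   cpu_time_used;    /* accounting information           */
    int             open_files[16];   /* I/O status information           */
    struct pcb     *next;             /* link to the next PCB in a queue  */
};

int pcb_is_ready(const struct pcb *p) { return p->state == READY; }
```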
iv) Threads:
• A thread is a basic unit of CPU utilization; it comprises a thread ID, a program
counter, a register set, and a stack.
• It shares with other threads belonging to the same process its code section,
data section, and other operating-system resources, such as open files and
signals.
• A traditional (or heavyweight) process has a single thread of control. If a
process has multiple threads of control, it can perform more than one task at
a time.
Process Scheduling
• i) Scheduling queues
• ii) Schedulers
• iii) Context switching
i) scheduling queues
• As processes enter the system, they are put into a job queue, which consists of
all processes in the system.
• The processes that are residing in main memory and are ready and waiting to
execute are kept on a list called the ready queue. This queue is generally stored
as a linked list.
• A ready-queue header contains pointers to the first and final PCBs in the list.
Each PCB includes a pointer field that points to the next PCB in the ready queue.
• The system also includes other queues. When a process is allocated the CPU, it
executes for a while and eventually quits, is interrupted, or waits for the
occurrence of a particular event, such as the completion of an I/O request.
• Suppose the process makes an I/O request to a shared device, such as a disk. Since there are many processes in the system, the disk may be busy with the I/O request of some other process. The process therefore may have to wait for the disk. The list of processes waiting for a particular I/O device is called a device queue.
• The OS maintains 3 types of process scheduling queues
• i)job queue ii) ready queue iii)device queue
• job queue − This queue keeps all the processes in the system.
• Ready queue − This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
• Device queues − The processes which are blocked due to unavailability of an I/O
device constitute this queue.
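The ready queue described above — a linked list of PCBs with head and tail pointers — can be sketched as follows (the two-field pcb struct is a stand-in for the full PCB):

```c
/* Sketch: the ready queue as a linked list with head/tail pointers. */
#include <stddef.h>

struct pcb { int pid; struct pcb *next; };

struct ready_queue { struct pcb *head, *tail; };

void rq_enqueue(struct ready_queue *q, struct pcb *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p;   /* append after the final PCB  */
    else         q->head = p;         /* queue was empty             */
    q->tail = p;
}

struct pcb *rq_dequeue(struct ready_queue *q) {
    struct pcb *p = q->head;          /* remove the first PCB (FIFO) */
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}
```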
A common representation of process scheduling is a queueing diagram,
such as that in Figure 3.6. Each rectangular box represents a queue. Two
types of queues are present: the ready queue and a set of device queues.
The circles represent the resources that serve the queues, and the arrows
indicate the flow of processes in the system.
• A new process is initially put in the ready queue. It waits there until it is
selected for execution. Once the process is allocated the CPU and is executing,
one of several events could occur:
• The process could issue an I/O request and then be placed in an I/O
queue.
• The process could create a new child process and wait for the child’s
termination.
• The process could be removed forcibly from the CPU, as a result of an
interrupt, and be put back in the ready queue.
• In the first two cases, the process eventually switches from the waiting state to the ready state and is then put back in the ready queue. A process continues this cycle until it terminates, at which time it is removed from all queues and has its PCB and resources deallocated.
ii) Schedulers
• A process migrates among the various scheduling queues throughout its
lifetime. The operating system must select, for scheduling purposes, processes
from these queues in some fashion. The selection process is carried out by the
appropriate scheduler.
Long Term Scheduler
• Also known as the job scheduler, it selects processes from the pool and loads them into memory for execution, thereby controlling the degree of multiprogramming.
Short Term Scheduler
• This is also known as the CPU scheduler and runs very frequently. Its primary aim is to enhance CPU performance and increase the process execution rate.
• The CPU scheduler selects one process from among the processes that are ready to execute and allocates the CPU to it.
• Short-term schedulers make the decision of which process to execute next.
• Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler
• This scheduler removes the processes from memory (and from active
contention for the CPU), and thus reduces the degree of multiprogramming.
• At some later time, the process can be reintroduced into memory and its
execution can be continued where it left off. This scheme is
called swapping.
• The process is swapped out, and is later swapped in, by the medium term
scheduler.
Comparison of schedulers
1. The long-term scheduler is a job scheduler; the short-term scheduler is a CPU scheduler; the medium-term scheduler is a process-swapping scheduler.
2. The long-term scheduler is slower than the short-term scheduler; the short-term scheduler is the fastest of the three; the medium-term scheduler's speed lies between the other two.
3. The long-term scheduler controls the degree of multiprogramming; the short-term scheduler provides less control over the degree of multiprogramming; the medium-term scheduler reduces the degree of multiprogramming.
4. The long-term scheduler is almost absent or minimal in time-sharing systems; the short-term scheduler is also minimal in time-sharing systems; the medium-term scheduler is a part of time-sharing systems.
5. The long-term scheduler selects processes from the pool and loads them into memory for execution; the short-term scheduler selects those processes that are ready to execute; the medium-term scheduler can re-introduce a process into memory so that its execution can be continued.
iii) Context Switch
• Switching the CPU to another process requires saving the state of the old process
and loading the saved state for the new process. This task is known as a Context Switch.
• The context of a process is represented in the Process Control Block(PCB) of a process; it
includes the value of the CPU registers, the process state and memory-management
information.
• When a context switch occurs, the Kernel saves the context of the old process in its PCB and
loads the saved context of the new process scheduled to run.
• Context switch time is pure overhead, because the system does no useful work while
switching.
• Its speed varies from machine to machine, depending on the memory speed, the number of
registers that must be copied, and the existence of special instructions(such as a single
instruction to load or store all registers). Typical speeds range from 1 to 1000 microseconds.
• Context Switching has become such a performance bottleneck that
programmers are using new structures(threads) to avoid it whenever and
wherever possible
Operations on Processes
• The processes in most systems can execute concurrently, and they may be created and deleted
dynamically. Thus, these systems must provide a mechanism for process creation and
termination.
• The main operations on processes are
• i) process creation
• ii) process termination
• Process Creation
• A process is created through appropriate system calls, such as fork(), and processes may create other processes. The process which creates another process is termed the parent of the other process, while the created sub-process is termed its child.
• Each process is given an integer identifier, termed as process identifier, or PID. The parent PID (PPID) is also
stored for each process.
• A child process may receive some amount of shared resources with its parent depending on
system implementation.
• There are two options for the parent process after creating the child :
• Wait for the child process to terminate before proceeding. The parent process makes a wait() system call, for either a specific child process or for any child process, which causes the parent process to block until the wait() returns. UNIX shells normally wait for their children to complete before issuing a new prompt.
• Run concurrently with the child, continuing to process without waiting. It is also
possible for the parent to run for a while, and then wait for the child later, which might
occur in a sort of a parallel processing operation.
• There are also two possibilities in terms of the address space of the new
process:
• The child process is a duplicate of the parent process.
• The child process has a program loaded into it.
Process Termination
• By making an exit() system call, typically returning an int, processes may request their own termination. This int is passed along to the parent if it is doing a wait(), and is typically zero on successful completion and some non-zero code in the event of any problem.
• A parent may terminate the execution of one of its children for a variety of reasons, such as
these:
The inability of the system to deliver the necessary system resources.
In response to a KILL command or other unhandled process interrupt.
A parent may kill its children if the task assigned to them is no longer needed.
If the parent exits, the system may or may not allow the child to continue without a parent. (In UNIX systems, orphaned processes are generally inherited by init, which then proceeds to kill them.)
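Creation and termination can be seen together in a small POSIX sketch: the parent forks a child, the child requests its own termination with an exit status, and the parent wait()s and reads that status. The status value 7 is an arbitrary example.

```c
/* Sketch: fork a child, let it exit with a status, collect it with waitpid(). */
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int run_child_and_collect(void) {
    pid_t pid = fork();
    if (pid == 0) {
        _exit(7);                     /* child: terminate with status 7     */
    }
    int status = 0;
    waitpid(pid, &status, 0);         /* parent blocks until the child exits */
    if (WIFEXITED(status))
        return WEXITSTATUS(status);   /* the int the child passed to _exit() */
    return -1;
}
```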
Interprocess Communication
• Processes executing concurrently in the operating system may be either independent
processes or cooperating processes.
• A process is cooperating if it can affect or be affected by the other processes executing in the system. Clearly, any process that shares data with other processes is a cooperating process.
There are several reasons for providing an environment that allows process
cooperation:
• Information sharing: Since several users may be interested in the same piece of information
(for instance, a shared file), we must provide an environment to allow concurrent access to
such information.
• Computation speedup: If we want a particular task to run faster, we must break it into
subtasks, each of which will be executing in parallel with the others. Notice that such a
speedup can be achieved only if the computer has multiple processing cores.
• Modularity: We may want to construct the system in a modular fashion, dividing the
system functions into separate processes or threads.
• Convenience: Even an individual user may work on many tasks at the same time. For
instance, a user may be editing, listening to music, and compiling in parallel.
• Cooperating processes require an inter process communication(IPC)
mechanism that will allow them to exchange data and information.
i. Shared Memory Model
• In the shared-memory model, cooperating processes communicate through a region of memory that they share. A common paradigm is the producer–consumer problem, which comes in two versions.
• In version one, the producer can produce an unlimited number of items. This is called the Unbounded Buffer Producer-Consumer Problem.
• In the other version, there is a predetermined limit on the buffer size. When the buffer is full, the producer must wait until there is some space in the buffer before it can make a new item. This is called the Bounded Buffer Problem.
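The bounded-buffer scheme can be sketched as a circular array with in/out indices, as in the classic shared-memory formulation. This single-threaded demo only shows the full/empty logic; real cooperating processes would also need synchronization.

```c
/* Sketch of the bounded buffer: a circular array shared by producer and consumer. */
#define BUFFER_SIZE 5   /* holds at most BUFFER_SIZE - 1 items */

int buffer[BUFFER_SIZE];
int in = 0, out = 0;    /* in: next free slot; out: next full slot */

int produce(int item) {
    if ((in + 1) % BUFFER_SIZE == out)
        return 0;                        /* buffer full: producer must wait */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
    return 1;
}

int consume(int *item) {
    if (in == out)
        return 0;                        /* buffer empty: consumer must wait */
    *item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return 1;
}
```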
ii. Message Passing Model
•In the message-passing model, the cooperating processes communicate by
exchanging messages with one another. This model is useful for exchanging a
smaller amount of data. It can also be implemented more easily than a shared
memory model.
• It is typically implemented using system calls and is therefore more time-consuming than shared memory. The message-passing model is particularly useful in a distributed environment, where the processes may reside on different computers connected by a network.
• The chat program used on the internet is an example of the message passing
system. Users can communicate by exchanging messages with one another. A
message-passing model provides at least two operations:
Send (message)
Receive (message)
• Two processes can communicate with each other by sending and receiving
messages. This communication can take place by establishing a communication
link between these processes.
• This link can be implemented in a variety of ways. We are concerned here not
with the link’s physical implementation (such as shared memory, hardware bus,
or network) but rather with its logical implementation. Here are several methods
for logically implementing a link and the send()/receive() operations:
• a) Direct or indirect communication
• b) Synchronous or asynchronous communication
• c) Automatic or explicit buffering
• Processes that want to communicate must have a way to refer to each other.
They can use either direct or indirect communication.
a. Direct Communication
•In direct communication, each process that wants to communicate must explicitly name the recipient or sender of the communication. There are two schemes of direct communication, known as symmetric communication and asymmetric communication.
•The symmetric communication requires that both the sender and receiver
process must name each other to communicate
•The Send() and Receive() functions in symmetric communication are defined as follows:
• Send(P, msg) – sends the message msg to process P.
• Receive(Q, msg) – receives the message msg from process Q.
• In asymmetric communication, only the sender process must name the
recipient. The recipient process does not need to name the sender process.
• The Send() and Receive() functions in asymmetric communication are defined as follows:
• Send(P, msg) – sends the message msg to process P.
• Receive(id, msg) – receives the message msg from any process; the variable id is set to the name of the process with which communication has taken place.
• The properties of the communication link in direct communication are as
follows:
• A link is automatically established between the pair of processes that want to communicate.
The processes only need to know the identity of each other.
• A link is established between exactly two processes.
• Only one link is established between one pair of processes.
Indirect Communication
•In indirect communication, messages are sent to and received from mailboxes or ports. A mailbox is an object used by processes to store and remove messages.
Each mailbox has a unique identification. Two processes can communicate using
this method only if they have a shared mailbox. The send() and receive()
primitives are defined as follows:
• send(A, message)—Send a message to mailbox A.
• receive(A, message)—Receive a message from mailbox A.
•In this scheme, a communication link has the following properties:
• A link is established between a pair of processes only if both members of the pair have a
shared mailbox.
• A link may be associated with more than two processes.
• Between each pair of communicating processes, a number of different links may exist,
with each link corresponding to one mailbox.
• A mailbox can be owned by a process or by the operating system. If the mailbox is owned by a process, then it disappears when that process is terminated.
• The Operating System must provide a mechanism for the following:
• Create a new mailbox
• Send and receive messages via mailbox
• Delete a mailbox
• There are some problems with the mailbox system that occur when more than
two processes share a mailbox. Suppose that process 1 sends a message to a
mailbox that is shared with process 2 and process 3. Process 2 and process 3
both try to receive the message. Who will get it?
• Some solutions to the above problem are :
• To allow a link to be established with at most two processes
• Allow only one process at a time to execute a receive operation
• To allow the system to arbitrarily select the receiver. The sender must be notified who
received the message.
b. Synchronization
•The communication between different processes takes place through calls to
send() and receive () functions. Different design options to implement each
function include message passing by blocking or nonblocking. These are also
known as synchronous or asynchronous.
•Blocking send. The sending process is blocked until the message is received by the receiving process.
• Nonblocking send. The sending process sends the message and resumes
operation.
• Blocking receive. The receiver blocks until a message is available.
• Nonblocking receive. The receiver retrieves either a valid message or a null.
c. Buffering
•Whether communication is direct or indirect, messages exchanged by
communicating processes reside in a temporary queue. Basically, such queues
can be implemented in three ways:
• Zero capacity. The queue has a maximum length of zero; thus, the link cannot
have any messages waiting in it. In this case, the sender must block until the
recipient receives the message.
• Bounded capacity. The queue has finite length n; thus, at most n messages can reside in it. If the queue is not full when a new message is sent, the message is placed in the queue (either the message itself is copied or a pointer to the message is kept), and the sender can continue execution without waiting. The link's capacity is finite, however; if the link is full, the sender must block until space is available in the queue.
• Unbounded capacity. The queue's length is potentially infinite; thus, any number of messages can wait in it, and the sender never blocks.
Examples of communication mechanisms in client–server systems include:
• Sockets
• Remote Procedure Calls
• Pipes
• Remote Method Invocation (Java)
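Of these, a pipe is the simplest to sketch in C: on POSIX systems, pipe() creates a one-way channel whose write end acts as send() and whose read end acts as receive(). For brevity this demo writes and reads within one process; normally the two ends are used by a parent and child.

```c
/* Sketch: a POSIX pipe as a simple message-passing channel. */
#include <unistd.h>
#include <string.h>

int pipe_roundtrip(const char *msg, char *buf, int buflen) {
    int fd[2];
    if (pipe(fd) == -1) return -1;       /* fd[0]: read end, fd[1]: write end */
    write(fd[1], msg, strlen(msg));      /* send(message)                     */
    int n = (int)read(fd[0], buf, buflen - 1);  /* receive(message)           */
    if (n < 0) n = 0;
    buf[n] = '\0';
    close(fd[0]);
    close(fd[1]);
    return n;
}
```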
Sockets
• A socket is defined as an endpoint for communication.
• All ports below 1024 are well known and are used for standard services.
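A minimal way to see sockets as communication endpoints, without setting up network addresses and ports, is a connected pair of UNIX-domain sockets via POSIX socketpair(); this is a local stand-in, since real network sockets would add IP addresses and port numbers.

```c
/* Sketch: two connected socket endpoints exchanging a message (POSIX). */
#include <sys/socket.h>
#include <string.h>
#include <unistd.h>

int socket_roundtrip(const char *msg, char *buf, int buflen) {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) return -1;
    send(sv[0], msg, strlen(msg), 0);            /* endpoint 0 sends    */
    int n = (int)recv(sv[1], buf, buflen - 1, 0); /* endpoint 1 receives */
    if (n < 0) n = 0;
    buf[n] = '\0';
    close(sv[0]);
    close(sv[1]);
    return n;
}
```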
2. One to One Multithreading Model
•The one to one multithreading model maps each user thread to one kernel thread. So, for each user thread, there is a corresponding kernel thread present in the kernel.
• In this model, thread management is done at the kernel level and the user threads are fully supported by the kernel.
•Here, if one thread makes a blocking system call, the other threads still run. So, compared with the many to one model, the one to one model provides more concurrency.
•The drawback of this model is that creating a user thread requires creating a corresponding kernel thread.
•Implementing a kernel thread for each user thread increases the overhead on the kernel, resulting in slower thread management.
•This model restricts the number of threads that can be supported by the system.
• The Linux and Windows operating-system families implement the one to one multithreading model.
• 3. Many to Many Multithreading Model
• The many to many multithreading model maps many user threads to a smaller or equal number of kernel threads.
• This model does not suffer from the drawbacks of either the many to one model or the one to one model.
• The many to many model does not restrict the number of user threads created; the developer can create as many threads as required. Nor does the process block if a thread makes a blocking system call, because the kernel can schedule another thread for execution.
Two-level Model
• Similar to M:M, except that it allows a user thread to be bound to kernel
thread
• Examples
• IRIX
• HP-UX
• Tru64 UNIX
• Solaris 8 and earlier
Thread Libraries
• Thread library provides programmer with API for creating and managing
threads
• Two primary ways of implementing
• Library entirely in user space
• Kernel-level library supported by the OS
There are 3 main thread libraries:
1. Pthreads
2. Windows Threads
3. Java Threads
PThreads
• May be provided either as user-level or kernel-level
• A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization
• Specification, not implementation
• API specifies behavior of the thread library, implementation is up to
development of the library
• Common in UNIX operating systems (Solaris, Linux, Mac OS X)
Pthreads Example: Code for Joining 10 Threads
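The code images from the original slides did not survive conversion; the following is a sketch in the same spirit, creating 10 Pthreads and joining them all (function names such as run_ten_threads are illustrative; compile with -lpthread).

```c
/* Sketch: create NUM_THREADS Pthreads, then pthread_join() each one. */
#include <pthread.h>

#define NUM_THREADS 10

static void *worker(void *param) {
    long id = (long)param;
    return (void *)(id * 2);           /* each thread returns twice its id */
}

long run_ten_threads(void) {
    pthread_t tids[NUM_THREADS];
    long sum = 0;
    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&tids[i], NULL, worker, (void *)i);
    for (int i = 0; i < NUM_THREADS; i++) {
        void *ret;
        pthread_join(tids[i], &ret);   /* wait for each thread to finish */
        sum += (long)ret;
    }
    return sum;                        /* 2 * (0 + 1 + ... + 9) = 90 */
}
```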
Windows Multithreaded C Program
Java Threads
• Java threads are managed by the JVM
• Typically implemented using the threads model provided by underlying OS
• Java threads may be created by extending the Thread class or by implementing the Runnable interface.
•(Q) If one thread in a program calls fork(), does the new process duplicate all threads, or is the new process single-threaded?
•(A) Some UNIX systems have chosen to have two versions of fork(): one that duplicates all threads and another that duplicates only the thread that invoked the fork() system call.
• The exec() system call is used to execute a new program within an existing process. When exec() is called, the previous executable file is replaced and the new file is executed. More precisely, using the exec system call replaces the old file or program of the process with a new file or program.
• The exec() system call typically works the same way in a multithreaded program: if a thread invokes the exec() system call, the program specified in the parameter to exec() will replace the entire process—including all threads.
• Signal Handling( interrupts might be signals sent explicitly by another process, or might
represent external actions such as a user typing a Control-C.)
• A signal is used in UNIX systems to notify a process that a particular event has
occurred. A signal may be received either synchronously or asynchronously,
depending on the source of and the reason for the event being signaled. All
signals, whether synchronous or asynchronous, follow the same pattern:
• 1. A signal is generated by the occurrence of a particular event.
• 2. The signal is delivered to a process.
• 3. Once delivered, the signal must be handled.
• Examples of synchronous signals include illegal memory access and division by 0. If a running program performs either of these actions, a signal is generated. Synchronous signals are delivered to the same process that performed the operation that caused the signal (that is the reason they are considered synchronous).
• Examples of asynchronous signals include terminating a process with specific
keystrokes (such as <control><C>) and having a timer expire. Typically, an
asynchronous signal is sent to another process. When a signal is generated by
an event external to a running process, that process receives the signal
asynchronously.
• A signal may be handled by one of two possible handlers:
• 1. A default signal handler
• 2. A user-defined signal handler
• Every signal has a default signal handler that the kernel runs when handling
that signal. This default action can be overridden by a user-defined signal
handler that is called to handle the signal. Signals are handled in different
ways. Some signals (such as changing the size of a window) are simply
ignored; others (such as an illegal memory access) are handled by terminating
the program.
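The generate/deliver/handle pattern above can be sketched by installing a user-defined handler that overrides the default one and then raising the signal synchronously within the same process (SIGUSR1 is chosen arbitrarily here):

```c
/* Sketch: override a signal's default handler, then generate the signal. */
#include <signal.h>

static volatile sig_atomic_t got_signal = 0;

static void on_sigusr1(int signo) {
    (void)signo;
    got_signal = 1;                 /* step 3: the delivered signal is handled */
}

int trigger_and_handle(void) {
    signal(SIGUSR1, on_sigusr1);    /* install the user-defined handler   */
    raise(SIGUSR1);                 /* steps 1-2: generate and deliver it */
    return got_signal;
}
```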
Thread Cancellation
•Threads that are no longer required can be cancelled by another thread using one of two techniques:
•Asynchronous cancellation
•Deferred cancellation
Asynchronous Cancellation
•One thread immediately terminates the target thread (a thread that is to be canceled is often referred to as the target thread). Reclaiming allocated resources and handling data shared with other threads can be problematic with asynchronous cancellation.
Deferred Cancellation
•In this method, a flag is set indicating that the thread should cancel itself when it is feasible. It is up to the target thread to check this flag periodically and exit cleanly when it sees the flag set.
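Deferred cancellation can be sketched with a shared flag, as described above: one thread sets the flag, and the target thread checks it at safe points and returns cleanly (names are illustrative; compile with -lpthread).

```c
/* Sketch: flag-based deferred cancellation of a target thread. */
#include <pthread.h>

static volatile int cancel_requested = 0;

static void *target(void *arg) {
    (void)arg;
    while (!cancel_requested) {
        /* do a unit of work, then re-check the flag at this safe point */
    }
    return NULL;                    /* exit cleanly once the flag is seen */
}

int cancel_deferred(void) {
    pthread_t t;
    pthread_create(&t, NULL, target, NULL);
    cancel_requested = 1;           /* another thread requests cancellation */
    pthread_join(t, NULL);          /* target has exited on its own terms   */
    return 1;
}
```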
Thread Local Storage:
•Threads belonging to a process share the data of the process. Indeed, this data sharing provides one of the benefits of multithreaded programming. However, in some circumstances, each thread might need its own copy of certain data. We will call such data thread-local storage (or TLS).
•It is easy to confuse TLS with local variables. However, local variables are
visible only during a single function invocation, whereas TLS data are visible
across function invocations.
• In some ways, TLS is similar to static data. The difference is that TLS data are
unique to each thread. Most thread libraries—including Windows and Pthreads—
provide some form of support for thread-local storage; Java provides support as
well.
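TLS can be sketched with the widely supported __thread storage specifier (a GCC/Clang extension; C11 spells it _Thread_local): the worker thread increments its own copy, so the main thread's copy stays at zero.

```c
/* Sketch: thread-local vs. shared data (compile with -lpthread). */
#include <pthread.h>

static __thread int tls_counter = 0;   /* one copy per thread  */
static int shared_counter = 0;         /* one copy per process */

static void *bump(void *arg) {
    (void)arg;
    tls_counter++;      /* touches only this thread's copy            */
    shared_counter++;   /* visible to every thread in the process     */
    return NULL;
}

int main_thread_tls_after_worker(void) {
    pthread_t t;
    pthread_create(&t, NULL, bump, NULL);
    pthread_join(t, NULL);
    return tls_counter; /* still 0: the worker changed its own copy   */
}
```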
Scheduler Activation
•A final issue to be considered with multithreaded programs concerns
communication between the kernel and the thread library, which may be
required by the many-to-many and two-level models.
•Many systems implementing either the many-to-many or the two-level model
place an intermediate data structure between the user and kernel threads. This
data structure—typically known as a lightweight process, or LWP—is shown in
Figure 4.13.
• To the user-thread library, the LWP appears to be a virtual processor on which
the application can schedule a user thread to run. Each LWP is attached to a
kernel thread, and it is kernel threads that the operating system schedules to run
on physical processors. If a kernel thread blocks (such as while waiting for an
I/O operation to complete), the LWP blocks as well. Up the chain, the user-level
thread attached to the LWP also blocks.
• An application may require any number of LWPs to run efficiently.
• One scheme for communication between the user-thread library and the kernel is
known as scheduler activation. It works as follows: The kernel provides an
application with a set of virtual processors (LWPs), and the application can
schedule user threads onto an available virtual processor.
• Furthermore, the kernel must inform an application about certain events. This
procedure is known as an upcall. Upcalls are handled by the thread library with
an upcall handler, and upcall handlers must run on a virtual processor.
CPU Scheduling
Basic Concepts
•In a single-processor system, only one process can run at a time. Others must wait until the
CPU is free and can be rescheduled.
• The objective of multiprogramming is to have some process running at all times, to
maximize CPU utilization. The idea is relatively simple.
•A process is executed until it must wait, typically for the completion of some I/O request.
•In a simple computer system, the CPU then just sits idle. All this waiting time is wasted; no
useful work is accomplished. With multiprogramming, we try to use this time productively.
Several processes are kept in memory at one time.
•Whenever one process has to wait, the operating system takes the CPU away from that process and gives the CPU to another process. This pattern continues: every time one process has to wait, another process can take over use of the CPU.
• Scheduling of this kind is a fundamental operating-system function. Almost all
computer resources are scheduled before use.
• The CPU is, of course, one of the primary computer resources. Thus, its
scheduling is central to operating-system design.
• The basic concept of scheduling includes the followings:
• i) CPU–I/O Burst Cycle
• ii) CPU Scheduler
• iii) Preemptive Scheduling
• iv)Dispatcher
i) CPU–I/O Burst Cycle:
• Process execution consists of a cycle of CPU execution and I/O wait.
• CPU Burst: the time a process spends executing on the CPU. The CPU burst corresponds to the RUNNING state of the process.
• I/O Burst: the time a process spends waiting for the completion of an I/O request. The I/O burst corresponds to the WAITING state of the process.
ii) CPU Scheduler
•Whenever the CPU becomes idle, the operating system must select one of the
processes in the ready queue to be executed. The selection process is carried out
by the short-term scheduler, or CPU scheduler.
•The scheduler selects a process from the processes in memory that are ready to
execute and allocates the CPU to that process.
•Note that the ready queue is not necessarily a first-in, first-out (FIFO) queue.
•When we consider the various scheduling algorithms, a ready queue can be
implemented as a FIFO queue, a priority queue, a tree, or simply an unordered
linked list.
iii) Preemptive and non-preemptive scheduling:
•CPU-scheduling decisions may take place under the following four
circumstances:
1. When a process switches from the running state to the waiting state (for example, as the
result of an I/O request or an invocation of wait() for the termination of a child process)
2. When a process switches from the running state to the ready state (for example, when an
interrupt occurs)
3. When a process switches from the waiting state to the ready state (for example, at
completion of I/O)
4. When a process terminates
• Scheduling under circumstances 1 and 4 is nonpreemptive: once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.
• All other scheduling is preemptive. Preemptive scheduling is a scheduling method where tasks are usually assigned priorities. Sometimes it is important to run a higher-priority task before a lower-priority task, even if the lower-priority task is still running; the lower-priority task is then held for some time and resumes when the higher-priority task finishes its execution.
• Examples of preemptive algorithms are RR, SRTF (the preemptive mode of SJF), and preemptive Priority.
• Examples of nonpreemptive algorithms are FCFS, SJF, and nonpreemptive Priority.
Differences between preemptive and non-preemptive scheduling
1. In preemptive scheduling, a process is allocated the CPU for a short period.
In non-preemptive scheduling, the process is allocated the CPU and holds it until
it completes its execution or changes its state from running to waiting.
2. In preemptive scheduling, a process can be interrupted in the middle of its
execution. In non-preemptive scheduling, a process cannot be interrupted until it
completes its execution or switches its state.
3. Preemptive scheduling is flexible, since each process in the ready queue gets
some CPU time. Non-preemptive scheduling is rigid.
4. Preemptive scheduling has the overhead of scheduling (switching between)
processes. Non-preemptive scheduling has no such overhead.
5. There is a cost associated with preemptive scheduling. There is no such cost
with non-preemptive scheduling.
6. In preemptive scheduling, a low-priority process may starve if high-priority
processes keep arriving in the ready queue. In non-preemptive scheduling, a
process with a short burst time may starve while a process with a long burst
time is executing.
7. Examples: preemptive — RR, SRTF (SJF), Priority; non-preemptive — FCFS, SJF.
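The contrast above can be made concrete with a small simulation (a sketch, not from the notes: the workload of three processes with the given arrival and burst times is hypothetical) comparing non-preemptive FCFS against preemptive SRTF:

```python
# Hypothetical workload: name -> (arrival time, burst time)
procs = {"P1": (0, 7), "P2": (2, 4), "P3": (4, 1)}

def fcfs(procs):
    """Non-preemptive: each process keeps the CPU until it finishes."""
    t, done = 0, {}
    for name, (at, bt) in sorted(procs.items(), key=lambda p: p[1][0]):
        t = max(t, at) + bt          # run to completion, no interruption
        done[name] = t               # completion time
    return done

def srtf(procs):
    """Preemptive: at every time unit, pick the shortest remaining burst."""
    rem = {n: bt for n, (at, bt) in procs.items()}
    t, done = 0, {}
    while rem:
        ready = [n for n in rem if procs[n][0] <= t]
        if not ready:
            t += 1                   # CPU idle until next arrival
            continue
        cur = min(ready, key=lambda n: rem[n])
        rem[cur] -= 1                # run for one time unit (may be preempted)
        t += 1
        if rem[cur] == 0:
            done[cur] = t
            del rem[cur]
    return done
```

Under FCFS the long P1 makes P2 and P3 wait (completion times 7, 11, 12), whereas SRTF preempts P1 and finishes the short jobs first (5, 7, 12) — the trade-off row 6 of the table describes.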
iv) Dispatcher
•Another component involved in the CPU-scheduling function is the dispatcher.
The dispatcher is the module that gives control of the CPU to the process selected
by the short-term scheduler. This function involves the following:
• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to restart that program
Process  Priority  Burst time
P1       3         10
P2       1         1
P3       4         2
P4       5         1
P5       2         5

Process  Priority  Arrival time  Burst time
P1       3         0             8
P2       4         1             2
P3       4         3             4
P4       5         4             1
P5       2         5             6
P6       6         6             5
P7       1         10            1
• Example preemptive priority

Process  Arrival time  Burst time  Priority
P1       0             4           3
P2       1             2           2
P3       2             3           4
P4       4             2           1

Process  Priority  Arrival time  Burst time
P1       3         0             8
P2       4         1             2
P3       4         3             4
P4       5         4             1
P5       2         5             6
P6       6         6             5
P7       1         10            1
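The second table above can be worked through with a short preemptive-priority simulator (a sketch: it assumes that a lower priority number means higher priority, and that ties are broken by arrival time):

```python
# Data from the notes: (name, priority, arrival time, burst time)
procs = [("P1", 3, 0, 8), ("P2", 4, 1, 2), ("P3", 4, 3, 4),
         ("P4", 5, 4, 1), ("P5", 2, 5, 6), ("P6", 6, 6, 5),
         ("P7", 1, 10, 1)]

def preemptive_priority(procs):
    """Return completion times under preemptive priority scheduling."""
    rem = {name: bt for name, _, _, bt in procs}
    info = {name: (pr, at) for name, pr, at, _ in procs}
    t, done = 0, {}
    while rem:
        ready = [n for n in rem if info[n][1] <= t]
        if not ready:
            t += 1
            continue
        # Highest priority = smallest number; FCFS among equal priorities.
        cur = min(ready, key=lambda n: info[n])
        rem[cur] -= 1                # run one unit; re-decide every tick
        t += 1
        if rem[cur] == 0:
            done[cur] = t
            del rem[cur]
    return done
```

With these assumptions P5 (priority 2) preempts P1 at t = 5 and P7 (priority 1) preempts P5 at t = 10; the completion times come out as P7 = 11, P5 = 12, P1 = 15, P2 = 17, P3 = 21, P4 = 22, P6 = 27, giving ATAT = 96/7 ≈ 13.71 and AWT = 69/7 ≈ 9.86.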
Round Robin Scheduling Algorithm
•Round Robin is one of the most popular scheduling algorithms and is actually
implemented in most operating systems.
• It is the preemptive version of first-come, first-served scheduling.
•The algorithm focuses on time sharing.
•In this algorithm, every process gets executed in a cyclic way. A certain time
slice, called the time quantum, is defined in the system.
•Each process in the ready queue is assigned the CPU for that time quantum. If
the process completes its execution within that time, it terminates; otherwise it
goes back to the ready queue and waits for its next turn to complete its
execution.
Advantages
•It is practical to implement because it does not depend on the burst time.
•It doesn't suffer from the problem of starvation or the convoy effect.
•All the jobs get a fair allocation of CPU time.
Disadvantages
•The higher the time quantum, the higher the response time in the system.
•The lower the time quantum, the higher the context-switching overhead in the
system.
•Deciding a good time quantum is a genuinely difficult task.
Example preemptive RR: calculate ATAT and AWT for the following using RR
•Normally, when the multilevel queue scheduling algorithm is used, processes are
permanently assigned to a queue when they enter the system. If there are separate
queues for foreground and background processes, processes do not move from
one queue to the other, since processes do not change their foreground or
background nature.
•This setup has the advantage of low scheduling overhead, but it is inflexible.
The multilevel feedback-queue scheduling algorithm, in contrast, allows a
process to move between queues.
• The idea is to separate processes according to the characteristics of their CPU
bursts.
• If a process uses too much CPU time, it will be moved to a lower-priority queue.
• This scheme leaves I/O-bound and interactive processes in the higher-priority queues.
• In addition, a process that waits too long in a lower-priority queue may be moved to a
higher-priority queue.
• This form of aging prevents starvation (see Fig. 6.7).
• A process entering the ready queue is put in queue 0. A process in queue 0 is given a
time quantum of 8 milliseconds.
• If it does not finish within this time, it is moved to the tail of queue 1.
• If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16
milliseconds.
• If it does not complete, it is preempted and is put into queue 2.
• Processes in queue 2 are run on an FCFS basis but are run only when queues 0 and 1 are
empty.
• In general, a multilevel feedback-queue scheduler is defined by the following
parameters:
• The number of queues.
• The scheduling algorithm for each queue.
• The method used to determine when to upgrade a process to a higher-priority queue.
• The method used to determine when to demote a process to a lower-priority queue.
• The method used to determine which queue a process will enter when that process needs
service.
• The definition of a multilevel feedback-queue scheduler makes it the most
general CPU-scheduling algorithm. Unfortunately, it is also the most complex
algorithm.
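The three-queue example above (8 ms quantum in queue 0, 16 ms in queue 1, FCFS in queue 2) can be sketched as follows. This is a simplified sketch: the processes are hypothetical and arrive together at t = 0, and it models only demotion, omitting the aging/promotion the notes also describe:

```python
from collections import deque

def mlfq(bursts, quanta=(8, 16)):
    """Return completion times under the 3-queue MLFQ described above."""
    queues = [deque(bursts.items()), deque(), deque()]
    t, done = 0, {}
    while any(queues):
        # Serve the highest-priority non-empty queue first.
        level = next(i for i, q in enumerate(queues) if q)
        name, rem = queues[level].popleft()
        run = rem if level == 2 else min(quanta[level], rem)  # queue 2 is FCFS
        t += run
        if rem > run:
            queues[level + 1].append((name, rem - run))  # demote to next queue
        else:
            done[name] = t
    return done
```

With bursts of 5, 20 and 30 ms, the 5 ms process finishes entirely in queue 0, the 20 ms process finishes in queue 1, and the 30 ms process is demoted all the way to queue 2 — exactly the behaviour that keeps short, interactive jobs in the high-priority queues.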
Readers-Writers Problem
Monitors
• The Monitor is a module or package which encapsulates a shared data structure,
procedures, and the synchronization between concurrent procedure invocations.
• Characteristics of Monitors:
• Only one process can be active inside a monitor at a time.
• Monitors are a group of procedures and condition variables that are
merged together in a special type of module.
• A process running outside the monitor cannot access the monitor's
internal variables, but it can call the procedures of the monitor.
• Monitors offer a high level of synchronization.
• Monitors were devised to simplify the complexity of synchronization
problems.
Monitor- Account example
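A monitor-style account can be sketched in Python with a lock and a condition variable (an illustrative sketch: the `Account` class and its method names are not from the notes; the lock plays the role of the monitor's mutual exclusion, and the condition variable blocks withdrawals until the balance is sufficient):

```python
import threading

class Account:
    """Monitor-style account: one lock means only one thread is active
    inside the 'monitor' at a time."""

    def __init__(self, balance=0):
        self._balance = balance
        self._lock = threading.Lock()
        self._funds_available = threading.Condition(self._lock)

    def deposit(self, amount):
        with self._lock:                      # enter the monitor
            self._balance += amount
            self._funds_available.notify_all()  # wake blocked withdrawers

    def withdraw(self, amount):
        with self._lock:                      # enter the monitor
            while self._balance < amount:     # wait on the condition variable
                self._funds_available.wait()
            self._balance -= amount

    def balance(self):
        with self._lock:
            return self._balance
```

A withdrawing thread calling `withdraw(50)` on an empty account simply sleeps inside `wait()` until deposits raise the balance to 50, at which point a `notify_all()` from `deposit` lets it proceed — no busy waiting, and the shared balance is only ever touched while the monitor's lock is held.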