
2 PROCESS MANAGEMENT

2.1 Concept of Process Management

Process management involves the execution of various tasks such as the creation of processes,
scheduling of processes, management of deadlock, and termination of processes. It is the
responsibility of the operating system to manage all the running processes of the system. The
operating system manages processes by performing tasks such as resource allocation and process
scheduling. When a process runs on a computer, the memory and CPU of the computer are utilized.
The operating system also has to synchronize the different processes of the computer system.

A process is basically a program in execution. The execution of a process must progress in a
sequential fashion.

A process can be defined as an entity which represents the basic unit of work to be implemented in
the system.

To put it in simple terms, we write our computer programs in a text file and when we execute this
program, it becomes a process which performs all the tasks mentioned in the program.

When a program is loaded into memory and becomes a process, it can be divided into four
sections, i.e. stack, heap, text and data. The following table summarizes the layout of a process
inside main memory.

S/No COMPONENT AND DESCRIPTION

1 Stack
The process stack contains temporary data such as method/function parameters,
return addresses and local variables.

2 Heap
This is memory dynamically allocated to the process during its run time.

3 Text
This section contains the compiled program code. The current activity is represented by
the value of the program counter and the contents of the processor's registers.

4 Data
This section contains the global and static variables.

Program

A program is a piece of code which may be a single line or millions of lines. A computer program
is usually written by a computer programmer in a programming language.

A computer program is a collection of instructions that performs a specific task when executed by a
computer. When we compare a program with a process, we can conclude that a process is a
dynamic instance of a computer program.

A computer program that performs a well-defined task is also known as an algorithm. A collection
of computer programs, libraries and related data is referred to as software.

A process consists of a set of instructions to be executed, called the process code. A process is also
associated with some data that is to be processed. The resources that a process requires for its
execution are called process components. There is also a state associated with a process at a
particular instant of time, called the process state. Alongside these, there are a number of
concepts associated with the process management function of an operating system. Some of those
concepts are given below.

1. Process State

2. Process Control Block (PCB)

3. Process Operations

4. Process Scheduling

5. Process Synchronization

6. Interprocess Communication

7. Deadlock
Process States

A process state can be defined as the condition of a process at a particular instant of time. The
process state defines the current position of the process, which helps us get details of the process
at that instant. There are basically seven states of a process in an operating system. The following
are the seven states of a process:

1. New
When a program is called from secondary memory (hard disk) into primary memory
(RAM), a new process is created. Basically, this state marks the time when a process is
created.

2. Ready
In this state the process has been loaded into primary memory and is ready for
execution.

3. Waiting
In this state the process is kept on hold and other processes are allowed to start their
execution. In other words, it specifies the time interval during which a process waits for
the allocation of CPU time and the other resources it needs for its execution.

4. Executing
This is the main state of any process: the process is running. In other words, it is the
time interval during which the process is being executed by the CPU.

5. Blocked
It specifies the time interval during which a process is waiting for an event, such as an
input/output operation, to complete.

6. Suspended
It specifies the time when a process is ready for execution but has not yet been placed in
the ready queue by the operating system.

7. Terminated
It specifies the time when a process has ended and all the resources and memory
utilized by the process are freed.
A process is initially in the new state when it is created. After the process is created, its state
changes to ready, and the process is loaded into primary memory (RAM). The state then changes
to waiting while the process waits for the allocation of CPU time and other resources. Once the
CPU time and other resources have been allocated, the process enters the executing state and
starts running. After the process has run successfully to completion, it ends and its state changes
to terminated.
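These seven states and the typical lifecycle can be modeled in a few lines of C. This is purely an
illustrative sketch; the enumerator names are made up and do not correspond to any real kernel's
state names.

    #include <stdio.h>

    /* Illustrative process states mirroring the seven states above. */
    typedef enum {
        STATE_NEW, STATE_READY, STATE_WAITING, STATE_EXECUTING,
        STATE_BLOCKED, STATE_SUSPENDED, STATE_TERMINATED
    } process_state;

    static const char *state_name(process_state s) {
        static const char *names[] = {
            "new", "ready", "waiting", "executing",
            "blocked", "suspended", "terminated"
        };
        return names[s];
    }

    int main(void) {
        /* Walk the typical lifecycle described in the paragraph above. */
        process_state lifecycle[] = {
            STATE_NEW, STATE_READY, STATE_WAITING,
            STATE_EXECUTING, STATE_TERMINATED
        };
        for (int i = 0; i < 5; i++)
            printf("-> %s\n", state_name(lifecycle[i]));
        return 0;
    }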

Process Control Block (PCB)

A Process Control Block is a data structure maintained by the operating system for every process.
The PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to
keep track of a process as listed below in the table.
S/N Information and Description

1 Process State

The current state of the process, i.e., whether it is new, ready, running, waiting or terminated.

2 Process Privileges

This is required to allow/disallow access to system resources.

3 Process ID

Unique identification for each process in the operating system.

4 Pointer

A pointer to the parent process.

5 Program Counter

The program counter is a pointer to the address of the next instruction to be executed for this
process.

6 CPU Registers

The various CPU registers whose contents must be saved when the process leaves the running
state, so that execution can resume later.

7 CPU Scheduling Information

The process priority and other scheduling information required to schedule the process.

8 Memory Management Information

This includes information such as the page table, memory limits and segment table, depending
on the memory system used by the operating system.

9 Accounting Information

This includes the amount of CPU time used for process execution, time limits, execution ID, etc.

10 I/O Status Information

This includes the list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on the operating system and may contain
different information in different operating systems.

The PCB is maintained for a process throughout its lifetime, and is deleted once the
process terminates.
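As a rough illustration, the PCB fields above can be pictured as a C structure. The layout below is
a hypothetical sketch only; real kernels (for example, Linux's struct task_struct) store far more
information and organize it differently.

    #include <stdint.h>

    /* Hypothetical PCB sketch; every field name here is illustrative. */
    struct pcb {
        int           pid;              /* 3: unique process ID          */
        int           state;            /* 1: new/ready/running/waiting  */
        int           privileges;      /* 2: process privileges         */
        struct pcb   *parent;           /* 4: pointer to parent process  */
        uint64_t      program_counter;  /* 5: next instruction address   */
        uint64_t      registers[16];    /* 6: saved CPU registers        */
        int           priority;         /* 7: CPU-scheduling information */
        void         *page_table;       /* 8: memory-management info     */
        unsigned long cpu_time_used;    /* 9: accounting information     */
        int           open_devices[8];  /* 10: I/O status information    */
    };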
2.2 Process and CPU Scheduling

Process Scheduling

Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such operating
systems allow more than one process to be loaded into executable memory at a time, and the
loaded processes share the CPU using time multiplexing.

Process Scheduling Queues

The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a separate queue for
each of the process states and PCBs of all processes in the same execution state are placed in the
same queue. When the state of a process is changed, its PCB is unlinked from its current queue and
moved to its new state queue.

The operating system maintains the following important process scheduling queues.

• Job queue – This queue keeps all the processes in the system.

• Ready queue – This queue keeps a set of all processes residing in main memory, ready and
waiting to execute. A new process is always put in this queue.

• Device queues – The processes which are blocked due to unavailability of an I/O device
constitute this queue.
The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The
OS scheduler determines how to move processes between the ready queue and the run queue,
which can have only one entry per processor core on the system.

Two-State Process Model

The two-state process model refers to the running and not-running states, which are described below:

1. Running – When a new process is created, it enters the system in the running state.

2. Not Running – Processes that are not running are kept in a queue, waiting for their turn to
execute. Each entry in the queue is a pointer to a particular process, and the queue is
implemented using a linked list. The dispatcher works as follows: when a process is
interrupted, it is transferred to the waiting queue; if the process has completed or aborted,
it is discarded. In either case, the dispatcher then selects a process from the queue to
execute.

Schedulers

Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to run.
Schedulers are of three types-

• Long-term scheduler

• Short-term scheduler

• Medium-term scheduler

Long term scheduler

Long term scheduler is also known as job scheduler. It chooses the processes from the pool
(secondary memory) and keeps them in the ready queue maintained in the primary memory.

The long-term scheduler mainly controls the degree of multiprogramming. Its purpose is to choose
a perfect mix of I/O-bound and CPU-bound processes from among the jobs present in the pool.

If the job scheduler chooses mostly I/O-bound processes, then all of the jobs may reside in the
blocked state most of the time and the CPU will remain idle. This reduces the degree of
multiprogramming. The job of the long-term scheduler is therefore very critical and may affect the
system for a very long time.

Short term scheduler

The short-term scheduler is also known as the CPU scheduler. It selects one of the jobs from the
ready queue and dispatches it to the CPU for execution.

A scheduling algorithm is used to select which job is going to be dispatched for execution. The
job of the short-term scheduler can be very critical in the sense that if it selects a job whose CPU
burst time is very high, then all the jobs after that will have to wait in the ready queue for a very
long time.

This problem is called starvation, and it may arise if the short-term scheduler makes mistakes
while selecting the job.

Medium term scheduler

The medium-term scheduler takes care of swapped-out processes. If a running process needs some
I/O time to complete, its state must be changed from running to waiting.

The medium-term scheduler is used for this purpose. It removes the process from the running
state to make room for other processes. Such processes are the swapped-out processes, and this
procedure is called swapping. The medium-term scheduler is responsible for suspending and
resuming processes.

It reduces the degree of multiprogramming. Swapping is necessary to maintain a perfect mix of
processes in the ready queue.
Comparison among Schedulers

In brief: the long-term (job) scheduler selects processes from the job pool in secondary memory
and controls the degree of multiprogramming; the short-term (CPU) scheduler selects one of the
ready processes and dispatches it to the CPU, running most frequently of the three; the
medium-term scheduler swaps processes out of and back into memory, temporarily reducing the
degree of multiprogramming.

Context Switching

A context switch is the mechanism by which the state or context of a CPU is stored in the Process
Control Block and later restored, so that process execution can be resumed from the same point at
a later time. Using this technique, a context switcher enables multiple processes to share a single
CPU. Context switching is an essential feature of a multitasking operating system.

When the scheduler switches the CPU from executing one process to executing another, the state
of the currently running process is stored in its process control block. After this, the state of the
process to run next is loaded from its own PCB and used to set the program counter, registers, etc.
At that point the second process can start executing.

Context switches are computationally intensive, since register and memory state must be saved
and restored. To reduce the time spent context switching, some hardware systems employ two or
more sets of processor registers. When a process is switched out, the following information is
stored for later use:

• Program counter

• Scheduling information

• Base and limit register values

• Currently used registers

• Changed state

• I/O state information

• Accounting information
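A real context switch happens in privileged kernel code, but the save/restore idea can be
demonstrated at user level with the historical <ucontext.h> interface. A minimal sketch, assuming
a glibc-style system (these functions were removed from recent POSIX revisions but remain widely
available):

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;
    static char task_stack[16384];

    /* Runs on its own stack; swapping back restores main's context. */
    static void task(void) {
        puts("task: running with its own register and stack context");
        swapcontext(&task_ctx, &main_ctx);  /* save task, restore main */
    }

    int main(void) {
        getcontext(&task_ctx);               /* initialize the context */
        task_ctx.uc_stack.ss_sp = task_stack;
        task_ctx.uc_stack.ss_size = sizeof task_stack;
        task_ctx.uc_link = &main_ctx;        /* resume here if task returns */
        makecontext(&task_ctx, task, 0);

        puts("main: saving context, switching to task");
        swapcontext(&main_ctx, &task_ctx);   /* save main, restore task */
        puts("main: context restored, resuming where it left off");
        return 0;
    }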
2.3 Operations on Processes

The processes in the system can execute concurrently, and they must be created and deleted
dynamically. Thus, the operating system must provide a mechanism (or facility) for process
creation and termination.

Process Creation

A process may create several new processes, via a create-process system call, during the course of
execution. The creating process is called a parent process, whereas the new processes are called
the children of that process. Each of these new processes may in turn create other processes.

In general, a process will need certain resources (such as CPU time, memory, files, I/O devices) to
accomplish its task. When a process creates a sub process, that sub process may be able to obtain
its resources directly from the operating system, or it may be constrained to a subset of the
resources of the parent process. The parent may have to partition its resources among its children,
or it may be able to share some resources (such as memory or files) among several of its children.
Restricting a child process to a subset of the parent's resources prevents any process from
overloading the system by creating too many sub processes.
When a process is created it obtains, in addition to the various physical and logical resources,
initialization data (or input) that may be passed along from the parent process to the child process.
For example, consider a process whose function is to display the status of a file, say F1, on the
screen of a terminal.

When it is created, it will get, as an input from its parent process, the name of the file F1, and it
will execute using that datum to obtain the desired information.

It may also get the name of the output device. Some operating systems pass resources to child
processes. On such a system, the new process may get two open files, F1 and the terminal device,
and may just need to transfer the datum between the two.

When a process creates a new process, two possibilities exist in terms of execution (a sketch after
the list shows both):

1. The parent continues to execute concurrently with its children.

2. The parent waits until some or all of its children have terminated.
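Both possibilities can be seen with the classic UNIX fork()/wait() pair. A minimal sketch, with
error handling pared down for brevity:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();            /* the create-process system call */
        if (pid < 0) {
            perror("fork");
            exit(1);
        }
        if (pid == 0) {
            /* Child: receives a copy of the parent's address space. */
            printf("child: pid=%d\n", getpid());
            exit(0);                   /* terminate via the exit call */
        }
        /* Parent: possibility 2 - wait until the child terminates.
           Omitting waitpid() gives possibility 1: concurrent execution. */
        int status;
        waitpid(pid, &status, 0);
        printf("parent: child %d exited with status %d\n",
               pid, WEXITSTATUS(status));
        return 0;
    }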

Process Termination

A process terminates when it finishes executing its final statement and asks the operating system to
delete it by using the exit system call. At that point, the process may return data (output) to its
parent process (via the wait system call).

All the resources of the process (including physical and virtual memory, open files, and I/O
buffers) are deallocated by the operating system.

Termination occurs under additional circumstances. A process can cause the termination of another
process via an appropriate system call (for example, abort). Usually, only the parent of the process
that is to be terminated can invoke such a system call. Otherwise, users could arbitrarily kill each
other's jobs. A parent therefore needs to know the identities of its children. Thus, when one process
creates a new process, the identity of the newly created process is passed to the parent. A parent
may terminate the execution of one of its children for a variety of reasons, such as these:

• The child has exceeded its usage of some of the resources that it has been allocated. This
requires the parent to have a mechanism to inspect the state of its children.

• The task assigned to the child is no longer required.

• The parent is exiting, and the operating system does not allow a child to continue if its parent
terminates. On such systems, if a process terminates (either normally or abnormally), then all
its children must also be terminated.
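On UNIX-like systems, the "appropriate system call" by which a parent terminates one of its
children is typically kill(), addressed by the child's pid (which, as noted above, was returned to
the parent at creation time). A brief sketch:

    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0)
            for (;;) pause();          /* child: idle until signaled   */
        sleep(1);                      /* give the child time to start */
        kill(pid, SIGTERM);            /* parent terminates its child  */
        waitpid(pid, NULL, 0);         /* reap the terminated child    */
        printf("parent: terminated child %d\n", pid);
        return 0;
    }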

Cooperating Process

The concurrent processes executing in the operating system may be either independent processes or
cooperating processes. A process is independent if it cannot affect or be affected by the other
processes executing in the system. Clearly, any process that does not share any data (temporary or
persistent) with any other process is independent. On the other hand, a process is cooperating if it
can affect or be affected by the other processes executing in the system. Clearly, any process that
shares data with other processes is a cooperating process.

We may want to provide an environment that allows process cooperation for several reasons:

• Information sharing: Since several users may be interested in the same piece of
information (for instance, a shared file), we must provide an environment that allows
concurrent access to these types of resources.

• Computation speedup: If we want a particular task to run faster, we must break it into
subtasks, each of which will be executing in parallel with the others. Such a speedup can be
achieved only if the computer has multiple processing elements (such as CPUs or I/O
channels).

• Modularity: We may want to construct the system in a modular fashion, dividing the
system functions into separate processes or threads.

• Convenience: Even an individual user may have many tasks on which to work at one time.
For instance, a user may be editing, printing, and compiling in parallel.

Concurrent execution of cooperating processes requires mechanisms that allow processes to
communicate with one another and to synchronize their actions. To illustrate the concept of
cooperating processes, let us consider the producer-consumer problem, which is a common
paradigm for cooperating processes.

A producer process produces information that is consumed by a consumer process. For example, a
print program produces characters that are consumed by the printer driver. A compiler may
produce assembly code, which is consumed by an assembler. The assembler, in turn, may produce
object modules, which are consumed by the loader.
To allow producer and consumer processes to run concurrently, we must have available a buffer of
items that can be filled by the producer and emptied by the consumer. A producer can produce one
item while the consumer is consuming another item. The producer and consumer must be
synchronized, so that the consumer does not try to consume an item that has not yet been produced.
In this situation, the consumer must wait until an item is produced.

The unbounded-buffer producer-consumer problem places no practical limit on the size of the
buffer. The consumer may have to wait for new items, but the producer can always produce new
items. The bounded-buffer producer consumer problem assumes a fixed buffer size. In this case,
the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.

The buffer may either be provided by the operating system through the use of an interprocess-
communication (IPC) facility, or it may be explicitly coded by the application programmer with
the use of shared memory.
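The bounded-buffer logic can be sketched with a circular buffer guarded by counting semaphores.
For brevity this sketch uses two POSIX threads to stand in for the producer and consumer
processes (a true multi-process version would place the buffer in shared memory, e.g. via
shm_open); the buffer size and item count are arbitrary. Compile with -pthread on Linux.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define BUFFER_SIZE 8                /* bounded buffer: fixed size */

    static int buffer[BUFFER_SIZE];
    static int in = 0, out = 0;          /* next slot to fill / drain  */
    static sem_t empty, full;            /* count free / used slots    */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *producer(void *arg) {
        for (int item = 0; item < 20; item++) {
            sem_wait(&empty);            /* block while buffer is full */
            pthread_mutex_lock(&lock);
            buffer[in] = item;
            in = (in + 1) % BUFFER_SIZE;
            pthread_mutex_unlock(&lock);
            sem_post(&full);
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        for (int i = 0; i < 20; i++) {
            sem_wait(&full);             /* block while buffer is empty */
            pthread_mutex_lock(&lock);
            int item = buffer[out];
            out = (out + 1) % BUFFER_SIZE;
            pthread_mutex_unlock(&lock);
            sem_post(&empty);
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        sem_init(&empty, 0, BUFFER_SIZE);
        sem_init(&full, 0, 0);
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }

Here sem_wait(&empty) blocks the producer exactly when the buffer is full, and sem_wait(&full)
blocks the consumer when it is empty, which is precisely the synchronization requirement
described above.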
2.4 Interprocess Communication

We showed earlier how cooperating processes can communicate in a shared-memory environment.
That scheme requires the processes to share a common buffer pool, and the code for implementing
the buffer must be written explicitly by the application programmer. Another way to achieve the
same effect is for the operating system to provide the means for cooperating processes to
communicate with each other via an interprocess communication (IPC) facility.

IPC provides a mechanism to allow processes to communicate and to synchronize their actions
without sharing the same address space. IPC is particularly useful in a distributed environment
where the communicating processes may reside on different computers connected with a network.
An example is a chat program used on the World Wide Web.

IPC is best provided by a message-passing system, and message systems can be defined in many
ways.

Message-Passing System

The function of a message system is to allow processes to communicate with one another without
the need to resort to shared data. In this scheme, services are provided as ordinary user processes;
that is, the services operate outside of the kernel. Communication among the user processes is
accomplished through the passing of messages. An IPC facility provides at least two operations:
send(message) and receive(message).

Messages sent by a process can be of either fixed or variable size. If only fixed-sized messages can
be sent, the system-level implementation is straightforward. This restriction, however, makes the
task of programming more difficult. On the other hand, variable-sized messages require a more
complex system-level implementation, but the programming task becomes simpler.
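As a simple concrete example, a UNIX pipe gives two related processes a communication link in
which write() plays the role of send and read() plays the role of receive. A minimal sketch:

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];
        pipe(fd);                   /* fd[0]: read end, fd[1]: write end */
        if (fork() == 0) {
            /* Child acts as the receiver: receive(message). */
            char msg[64];
            close(fd[1]);
            ssize_t n = read(fd[0], msg, sizeof msg - 1);
            msg[n > 0 ? n : 0] = '\0';
            printf("received: %s\n", msg);
            return 0;
        }
        /* Parent acts as the sender: send(message). */
        close(fd[0]);
        const char *msg = "hello from parent";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);
        return 0;
    }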

If processes P and Q want to communicate, they must send messages to and receive messages from
each other; a communication link must exist between them. This link can be implemented in a
variety of ways. We are concerned here not with the link's physical implementation (such as shared
memory, hardware bus, or network), but rather with its logical implementation. Here are several
methods for logically implementing a link and the send/receive operations:

• Direct or indirect communication

• Symmetric or asymmetric communication

• Automatic or explicit buffering

• Send by copy or send by reference

• Fixed-sized or variable-sized messages

We look at each of these types of message systems next.

Naming

Processes that want to communicate must have a way to refer to each other. They can use either
direct or indirect communication.

Direct Communication

With direct communication, each process that wants to communicate must explicitly name the
recipient or sender of the communication. In this scheme, the send and receive primitives are
defined as:

• send(P, message) – Send a message to process P.

• receive(Q, message) – Receive a message from process Q.

A communication link in this scheme has the following properties:

• A link is established automatically between every pair of processes that want to
communicate. The processes need to know only each other's identity to communicate.

• A link is associated with exactly two processes.

• Exactly one link exists between each pair of processes.

This scheme exhibits symmetry in addressing; that is, both the sender and the receiver processes
must name the other to communicate. A variant of this scheme employs asymmetry in addressing.
Only the sender names the recipient; the recipient is not required to name the sender. In this
scheme, the send and receive primitives are defined as follows:

• send(P, message) – Send a message to process P.

• receive(id, message) – Receive a message from any process; the variable id is set to the
name of the process with which communication has taken place.

The disadvantage in both symmetric and asymmetric schemes is the limited modularity of the
resulting process definitions. Changing the name of a process may necessitate examining all other
process definitions. All references to the old name must be found, so that they can be modified to
the new name. This situation is not desirable from the viewpoint of separate compilation.

Indirect Communication

With indirect communication, the messages are sent to and received from mailboxes, or ports. A
mailbox can be viewed abstractly as an object into which messages can be placed by processes and
from which messages can be removed. Each mailbox has a unique identification. In this scheme, a
process can communicate with some other process via a number of different mailboxes.

Two processes can communicate only if they share a mailbox. The send and receive primitives are
defined as follows:

• send(A, message) – Send a message to mailbox A.

• receive(A, message) – Receive a message from mailbox A.

In this scheme, a communication link has the following properties:

• A link is established between a pair of processes only if both members of the pair have a
shared mailbox.

• A link may be associated with more than two processes.

• A number of different links may exist between each pair of communicating processes,
with each link corresponding to one mailbox.

Now suppose that processes P1, P2, and P3 all share mailbox A. Process P1 sends a message to A,
while P2 and P3 each execute a receive from A. Which process will receive the message sent by
P1? The answer depends on the scheme that we choose:

• Allow a link to be associated with at most two processes.

• Allow at most one process at a time to execute a receive operation.

• Allow the system to select arbitrarily which process will receive the message (that is,
either P2 or P3, but not both, will receive the message). The system may identify the
receiver to the sender.

A mailbox may be owned either by a process or by the operating system. If the mailbox is owned
by a process (that is, the mailbox is part of the address space of the process), then we distinguish
between the owner (who can only receive messages through this mailbox) and the user (who can
only send messages to the mailbox). Since each mailbox has a unique owner, there can be no
confusion about who should receive a message sent to this mailbox. When a process that owns a
mailbox terminates, the mailbox disappears. Any process that subsequently sends a message to this
mailbox must be notified that the mailbox no longer exists.

On the other hand, a mailbox owned by the operating system is independent and is not attached to
any particular process. The operating system then must provide a mechanism that allows a process
to do the following:

• Create a new mailbox.

• Send and receive messages through the mailbox.

• Delete a mailbox.

The process that creates a new mailbox is that mailbox's owner by default. Initially, the owner is
the only process that can receive messages through this mailbox. However, the ownership and
receive privilege may be passed to other processes through appropriate system calls. Of course, this
provision could result in multiple receivers for each mailbox.
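POSIX message queues behave much like mailboxes owned by the operating system: a queue has a
name, persists independently of its creator until explicitly deleted, and any process that opens it
may send or receive through it. A hedged sketch; the name /mailbox_A and the attribute values are
purely illustrative, and on Linux the program is linked with -lrt:

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };

        /* Create the mailbox; any process that opens /mailbox_A
           afterwards may send to it or receive from it. */
        mqd_t mq = mq_open("/mailbox_A", O_CREAT | O_RDWR, 0600, &attr);
        if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

        /* send(A, message) */
        const char *msg = "hello mailbox";
        mq_send(mq, msg, strlen(msg) + 1, 0);

        /* receive(A, message) */
        char buf[64];
        mq_receive(mq, buf, sizeof buf, NULL);
        printf("received: %s\n", buf);

        mq_close(mq);
        mq_unlink("/mailbox_A");    /* delete the mailbox */
        return 0;
    }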

Synchronization

Communication between processes takes place through calls to send and receive primitives. There
are different design options for implementing each primitive. Message passing may be either
blocking or nonblocking, also known as synchronous and asynchronous:

• Blocking send: The sending process is blocked until the message is received by the
receiving process or by the mailbox.

• Nonblocking send: The sending process sends the message and resumes operation.

• Blocking receive: The receiver blocks until a message is available.

• Nonblocking receive: The receiver retrieves either a valid message or a null.

Different combinations of send and receive are possible.

Buffering

Whether the communication is direct or indirect, messages exchanged by communicating processes
reside in a temporary queue. Basically, such a queue can be implemented in three ways:

• Zero capacity: The queue has maximum length 0; thus, the link cannot have any
messages waiting in it. In this case, the sender must block until the recipient
receives the message.

• Bounded capacity: The queue has finite length n; thus, at most n messages can
reside in it. If the queue is not full when a new message is sent, the latter is placed
in the queue (either the message is copied or a pointer to the message is kept), and
the sender can continue execution without waiting. The link has a finite capacity,
however: if the link is full, the sender must block until space is available in the
queue.

• Unbounded capacity: The queue has potentially infinite length; thus, any number of
messages can wait in it. The sender never blocks.

The zero-capacity case is sometimes referred to as a message system with no buffering; the other
cases are referred to as automatic buffering.
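With POSIX message queues, two of these design choices are directly visible in the API: the
mq_maxmsg attribute fixes a bounded capacity, and the O_NONBLOCK flag turns a blocking send into
a nonblocking one. A small sketch (all values illustrative; link with -lrt on Linux):

    #include <errno.h>
    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>

    int main(void) {
        /* Bounded capacity: at most 4 messages may wait in the queue. */
        struct mq_attr attr = { .mq_maxmsg = 4, .mq_msgsize = 32 };
        mqd_t mq = mq_open("/bounded_q", O_CREAT | O_RDWR | O_NONBLOCK,
                           0600, &attr);
        if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

        /* Nonblocking send: once the queue is full, mq_send fails
           with EAGAIN instead of blocking the sender. */
        int sent = 0;
        while (mq_send(mq, "x", 2, 0) == 0)
            sent++;
        if (errno == EAGAIN)
            printf("queue full after %d messages; sender was not blocked\n",
                   sent);

        mq_close(mq);
        mq_unlink("/bounded_q");
        return 0;
    }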
2.5 Multiple-Processor Scheduling

In multiple-processor scheduling, multiple CPUs are available and hence load sharing becomes
possible. However, multiple-processor scheduling is more complex than single-processor
scheduling. In multiple-processor scheduling there are cases when the processors are identical,
i.e. HOMOGENEOUS, in terms of their functionality; we can then use any available processor to
run any process in the queue.

Approaches to Multiple-Processor Scheduling –

One approach is for all the scheduling decisions and I/O processing to be handled by a single
processor, called the Master Server, while the other processors execute only user code. This is
simple and reduces the need for data sharing. This entire scenario is called Asymmetric
Multiprocessing.

A second approach uses Symmetric Multiprocessing, where each processor is self-scheduling. All
processes may be in a common ready queue, or each processor may have its own private queue of
ready processes. Scheduling proceeds by having the scheduler for each processor examine the
ready queue and select a process to execute.

Processor Affinity –

Processor affinity means a process has an affinity for the processor on which it is currently
running.

When a process runs on a specific processor, there are certain effects on the cache memory. The
data most recently accessed by the process populates the cache of that processor, and as a result
successive memory accesses by the process are often satisfied from cache memory. If the process
migrates to another processor, the contents of the cache must be invalidated on the first processor
and the cache of the second processor must be repopulated. Because of the high cost of
invalidating and repopulating caches, most SMP (symmetric multiprocessing) systems try to avoid
migrating processes from one processor to another and instead try to keep a process running on
the same processor. This is known as PROCESSOR AFFINITY.
There are two types of processor affinity:

1. Soft Affinity – When an operating system has a policy of attempting to keep a process
running on the same processor but does not guarantee that it will do so, this situation is
called soft affinity.

2. Hard Affinity – Hard affinity allows a process to specify a subset of processors on
which it may run. Some systems, such as Linux, implement soft affinity but also provide
system calls like sched_setaffinity() that support hard affinity, as the sketch below
illustrates.
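A minimal Linux sketch of hard affinity, pinning the calling process to CPU 0 with
sched_setaffinity():

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);               /* allow CPU 0 only */

        /* Pin the calling process (pid 0 means self) to the mask. */
        if (sched_setaffinity(0, sizeof set, &set) == -1) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pid %d now runs only on CPU 0\n", getpid());
        return 0;
    }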

Load Balancing

Load Balancing is the phenomenon of keeping the workload evenly distributed across all
processors in an SMP system. Load balancing is necessary only on systems where each processor
has its own private queue of processes eligible to execute; on systems with a common run queue,
load balancing is unnecessary, because once a processor becomes idle it immediately extracts a
runnable process from the common run queue. On SMP (symmetric multiprocessing) systems it is
important to keep the workload balanced among all processors to fully utilize the benefits of
having more than one processor; otherwise one or more processors will sit idle while other
processors have high workloads and lists of processes awaiting the CPU.

There are two general approaches to load balancing:

1. Push Migration – In push migration, a task routinely checks the load on each processor
and, if it finds an imbalance, evenly distributes the load by moving processes from
overloaded processors to idle or less busy ones.

2. Pull Migration – Pull migration occurs when an idle processor pulls a waiting task from
a busy processor for its execution.

Multicore Processors –

In multicore processors, multiple processor cores are placed on the same physical chip. Each
core has a register set to maintain its architectural state, and thus appears to the operating system
as a separate physical processor. SMP systems that use multicore processors are faster and
consume less power than systems in which each processor has its own physical chip.
However, multicore processors may complicate scheduling. When a processor accesses memory,
it spends a significant amount of time waiting for the data to become available. This situation is
called a MEMORY STALL. It occurs for various reasons, such as a cache miss (accessing data
that is not in the cache memory). In such cases the processor can spend up to fifty percent of its
time waiting for data to become available from memory. To solve this problem, recent hardware
designs implement multithreaded processor cores, in which two or more hardware threads are
assigned to each core. Then, if one thread stalls while waiting for memory, the core can switch to
another thread.

There are two ways to multithread a processor:

1. Coarse-Grained Multithreading – In coarse-grained multithreading, a thread executes
on a processor until a long-latency event such as a memory stall occurs. Because of the
delay caused by the long-latency event, the processor must switch to another thread to
begin execution. The cost of switching between threads is high, as the instruction pipeline
must be flushed before the other thread can begin execution on the processor core.
Once this new thread begins execution, it starts filling the pipeline with its instructions.

2. Fine-Grained Multithreading – This form of multithreading switches between threads at
a much finer granularity, typically at the boundary of an instruction cycle. The
architectural design of fine-grained systems includes logic for thread switching, and as a
result the cost of switching between threads is small.

Virtualization and Threading –

In this type of multiple-processor scheduling, even a single-CPU system acts like a multiple-
processor system. In a system with virtualization, the virtualization layer presents one or more
virtual CPUs to each of the virtual machines running on the system and then schedules the use of
the physical CPUs among the virtual machines. Most virtualized environments have one host
operating system and many guest operating systems. The host operating system creates and
manages the virtual machines. Each virtual machine has a guest operating system installed, and
applications run within that guest. Each guest operating system may be assigned to specific use
cases, applications or users, including time sharing or even real-time operation. Any guest
operating-system scheduling algorithm that assumes a certain amount of progress in a given
amount of time will be negatively impacted by virtualization. A time-sharing operating system
tries to allot 100 milliseconds to each time slice to give users a reasonable response time, but a
given 100-millisecond time slice may take much more than 100 milliseconds of virtual CPU time.
Depending on how busy the system is, the time slice may take a second or more, which results in
very poor response times for users logged into that virtual machine. The net effect of such
scheduling layering is that individual virtualized operating systems receive only a portion of the
available CPU cycles, even though they believe they are receiving all of the cycles and that they
are scheduling all of those cycles. Commonly, the time-of-day clocks in virtual machines are
incorrect, because timers take longer to trigger than they would on dedicated CPUs.

Virtualization can thus undo the good scheduling-algorithm efforts of the operating systems
within virtual machines.
2.6 Disk Scheduling

As we know, a process needs two types of time: CPU time and I/O time. For I/O, it requests the
operating system to access the disk.

However, the operating system must be fair enough to satisfy each request and, at the same time,
must maintain the efficiency and speed of process execution.

The technique that the operating system uses to determine which request is to be satisfied next is
called disk scheduling.

Let's discuss some important terms related to disk scheduling.

Seek Time

Seek time is the time taken to position the disk arm over the specified track where the read/write
request will be satisfied.

Rotational Latency

It is the time taken by the desired sector to rotate into position under the read/write head.

Transfer Time

It is the time taken to transfer the data.

Disk Access Time

Disk access time is given as:

Disk Access Time = Seek Time + Rotational Latency + Transfer Time
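A quick worked example; the millisecond figures are made up purely for illustration:

    #include <stdio.h>

    int main(void) {
        /* Illustrative timings for one request, in milliseconds. */
        double seek     = 5.0;   /* seek time          */
        double rotation = 4.2;   /* rotational latency */
        double transfer = 0.8;   /* transfer time      */

        printf("disk access time = %.1f ms\n",
               seek + rotation + transfer);   /* prints 10.0 ms */
        return 0;
    }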

Disk Response Time

It is the average of the time spent by each request waiting for its I/O operation.
Purpose of Disk Scheduling

The main purpose of a disk scheduling algorithm is to select a disk request from the queue of I/O
requests and decide when this request will be processed.

Goal of Disk Scheduling Algorithm

• Fairness

• High throughput

• Minimal travelling head time

Disk Scheduling Algorithms

The list of various disk scheduling algorithms is given below. Each algorithm carries its own
advantages and disadvantages, and the limitations of each algorithm led to the evolution of the
next.

• FCFS scheduling

• SSTF (shortest seek time first) scheduling

• SCAN scheduling

• C-SCAN scheduling

• LOOK scheduling

• C-LOOK scheduling
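To make the bookkeeping concrete, here is a sketch of the simplest of these, FCFS: requests are
served strictly in arrival order and the total head movement is summed. The request queue and
initial head position are illustrative values only.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Illustrative request queue (track numbers) and head position. */
        int requests[] = {98, 183, 37, 122, 14, 124, 65, 67};
        int n = sizeof requests / sizeof requests[0];
        int head = 53;
        int total_movement = 0;

        /* FCFS: serve each request in the order it arrived. */
        for (int i = 0; i < n; i++) {
            total_movement += abs(requests[i] - head);
            head = requests[i];
        }
        printf("total head movement: %d tracks\n", total_movement); /* 640 */
        return 0;
    }

Under FCFS the head swings back and forth across the disk (53 -> 98 -> 183 -> 37 -> ...), which is
fair but travels far; algorithms such as SSTF and SCAN reduce this total movement.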
