Master of Computer Applications

ADVANCED OPERATING SYSTEM
MODULE-1
CHAPTER-1
WHAT IS AN OPERATING SYSTEM?

• An operating system is a program that manages computer hardware.


• It provides a basis for application programs and acts as an intermediary
between a user of a computer and the computer hardware.
• The purpose of an operating system is to provide an environment in which
a user can execute programs in a convenient and efficient manner.
• A more common definition is that the operating system is the one program
running at all times on the computer (usually called the kernel), with all else
being application programs.
• An operating system is concerned with the allocation of resources and
services, such as memory, processors, devices, and information.
Examples of Operating systems are –
• Windows (GUI based, PC)
• Linux (Personal, Workstations)
• macOS (Macintosh), used for Apple’s personal computers
• Android (Google’s Operating System for
smartphones/tablets/smartwatches)
• iOS (Apple’s OS for iPhone, iPad and iPod Touch)
OBJECTIVES OF OPERATING SYSTEM

• Operating system performs three functions:


• Convenience: An OS makes a computer more convenient to use.
• Efficiency: An OS allows the computer system resources to be
used in an efficient manner.
• Ability to Evolve: An OS should be constructed in such a way as
to permit the effective development, testing and introduction of
new system functions without interfering with service.
COMPUTER SYSTEM STRUCTURE

• A computer system can be divided into four components:


• Hardware – provides basic computing resources
• CPU, memory, I/O devices
• Operating system
• Controls and coordinates use of hardware among various applications
and users
• Application programs or System programs
• Application programs – These are programs used to perform specific
tasks and are used directly by computer users
• System programs – These are programs that directly modify or give
commands to the computer hardware.
• Users
• People
COMPUTER SYSTEM ORGANIZATION

• Computer-system operation
• Storage structure
• I/O structure
COMPUTER-SYSTEM ORGANIZATION CONT..

Computer system operation


• A modern general-purpose computer system consists of one or more CPUs
and a number of device controllers connected through a common bus that
provides access to shared memory
• Each device controller is in charge of a particular device type
• I/O devices and the CPU can execute concurrently, competing for access to memory
COMPUTER-SYSTEM ORGANIZATION CONT..

Computer system operation cont..


• To ensure orderly access of shared memory, a memory controller is provided
whose function is to synchronize access to the memory

• Each device controller has a local buffer. The input and output data can be
stored in these buffers.
• The data is moved from memory to the respective device buffers by the CPU
for I/O operations, and then this data is moved back from the buffers to
memory.
• The device controllers use an interrupt to inform the CPU that the I/O
operation is completed.
• The device driver understands the device controller and provides the rest of
the operating system with a uniform interface to the device
COMPUTER-SYSTEM ORGANIZATION CONT..

Computer system operation cont..


Computer system start-up :
• Bootstrap program is loaded at power-up or reboot
• Typically stored in ROM, generally known as firmware
• Functions of bootstrap
▪ Initializes all aspects of system
▪ Loads operating system kernel and starts execution
• The OS then executes its first process, called init, and waits for some event to occur
• The occurrence of an event is signaled by an interrupt from either hardware
or software.
COMPUTER-SYSTEM ORGANIZATION CONT..

Computer system operation cont..


Interrupts and its functions
• An interrupt is a necessary part of Computer System Organization as it is
triggered by hardware and software parts when they need immediate
attention.
• An interrupt can be generated by a device or a program to inform the
operating system to halt its current activities and focus on something else.
• Incoming interrupts are disabled while another interrupt is being processed, to
prevent interrupts from being lost
• A trap is a software-generated interrupt caused either by an error or a user
request
COMPUTER-SYSTEM ORGANIZATION CONT..

Computer system operation cont..

Interrupt timeline for a single process doing output


COMPUTER-SYSTEM ORGANIZATION CONT..

Storage structure
• Main memory – the only large storage medium that the CPU can access directly
• Random access
• Typically volatile
• Secondary storage – extension of main memory that provides large
nonvolatile storage capacity
• Magnetic disks – rigid metal or glass platters covered with magnetic recording
material
• Disk surface is logically divided into tracks, which are subdivided into
sectors
• The disk controller determines the logical interaction between the device
and the computer
COMPUTER-SYSTEM ORGANIZATION CONT..

Storage structure

Storage device hierarchy


COMPUTER-SYSTEM ORGANIZATION CONT..

Storage structure
• Volatile – loses its contents when power is removed
• Non-volatile – retains its contents when power is removed

Storage device hierarchy


COMPUTER-SYSTEM ORGANIZATION CONT..

I/O Structure
• Storage is only one of many types of I/O devices within a computer.
• A large portion of operating system code is dedicated to I/O
management, because of its importance to the reliability and
performance of the system and because of the varying nature of devices.
• A general purpose computer system consists of CPUs and multiple
device controllers that are connected through a common bus.
• Each device controller is in charge of a specific device.
• A device controller maintains both local buffer storage and special
purpose registers.
• The OS has a device driver for each device controller.
• This device driver understands the controller and presents a uniform
interface to the device to the rest of the OS.
COMPUTER-SYSTEM ORGANIZATION CONT..

• I/O structure continued


• To start an I/O operation,
1. The DEVICE DRIVER loads the appropriate registers within the DEVICE
CONTROLLER.
2. The DEVICE CONTROLLER, in turn, examines the contents of these
registers to determine what action to take (e.g., read a character from the
keyboard).
3. The CONTROLLER starts the transfer of data from the device (keyboard) to
its local buffer storage area (inside the controller).
4. Once the transfer of data is complete, the DEVICE CONTROLLER informs
the DEVICE DRIVER via an interrupt that it has finished its operation.
5. The DEVICE DRIVER then returns control to the operating system, possibly
returning the data or a pointer to the data, to tell the OS that the task has been
completed.
COMPUTER-SYSTEM ORGANIZATION CONT..

I/O structure continued

How a Modern Computer Works


COMPUTER-SYSTEM ORGANIZATION CONT..

I/O structure continued


Direct Memory Access
• Used for high-speed I/O devices able to transmit information at close to
memory speeds
• Device controller transfers blocks of data from buffer storage directly to
main memory without CPU intervention
• Only one interrupt is generated per block, rather than the one interrupt
per byte
OPERATING SYSTEM STRUCTURE

• Multiprogramming needed for efficiency


• A single user cannot keep the CPU and I/O devices busy at all times
• Multiprogramming organizes jobs (code and data) so the CPU always has one to
execute
• A subset of the total jobs in the system is kept in memory
• One job is selected and run via job scheduling
• When it has to wait (for I/O, for example), the OS switches to another job
• It provides an environment where all the resources are shared effectively, but it
does not provide user interaction with the computer system
OPERATING SYSTEM STRUCTURE CONT..

MEMORY LAYOUT FOR MULTIPROGRAMMING SYSTEM
OPERATING SYSTEM STRUCTURE CONT..

• Timesharing (multitasking) is a logical extension in which the CPU
switches jobs so frequently that users can interact with each job
while it is running, creating interactive computing
• It requires an interactive computer system which provides direct
communication between the user and the system
• The user gives instructions to the operating system or to a program
directly using an input device such as a keyboard or mouse and waits
for an immediate result on an output device such as a monitor
• Response time should be short, less than 1 second
• Each user has at least one program executing in memory → a process
• If several jobs are ready to run at the same time → CPU scheduling
• If processes don't fit in memory, swapping moves them in and out to run
OPERATING-SYSTEM OPERATIONS

• Interrupt driven by hardware


• Software error or request creates exception or trap
• Division by zero, invalid memory access, request for operating system service
• Other process problems include infinite loop, processes modifying each other
or the operating system
• Dual-mode operation allows OS to protect itself and other system
components
• To ensure proper execution of the operating system, it is necessary to distinguish
between execution of operating-system code and user-defined code
• In this we have two modes of operation:
• User mode and kernel mode
• Kernel mode is also called privileged mode / supervisor mode / system mode
• A mode bit indicates the current mode: 0 for kernel and 1 for user
• Some instructions designated as privileged, only executable in kernel mode
• System call changes mode to kernel, return from call resets it to user
OPERATING-SYSTEM OPERATIONS CONT..

• A timer is used to prevent a user program from entering an infinite loop or never calling
system services and never returning control to the operating system
• Timer can be set to interrupt the computer after a specified period
• Operating system sets the counter.
• Every time the clock ticks, the counter is decremented.
• When the counter reaches zero, an interrupt occurs.
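
As a rough user-level analogy (not the kernel's actual timer mechanism), the sketch below uses POSIX signal() and alarm() to arm a 2-second "timer interrupt" that regains control from code stuck in an infinite loop; the interval and message are arbitrary choices for illustration.

```c
#include <signal.h>
#include <unistd.h>

/* Invoked when the "timer interrupt" (SIGALRM) fires. */
static void on_timer(int sig)
{
    (void)sig;
    const char msg[] = "timer expired: regaining control from the loop\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    _exit(1);                   /* a kernel would reschedule or kill the task instead */
}

int main(void)
{
    signal(SIGALRM, on_timer);  /* register the handler, like setting an interrupt vector */
    alarm(2);                   /* request a timer interrupt after 2 seconds */

    for (;;)                    /* user program stuck in an infinite loop */
        ;
}
```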

Transition from User to Kernel Mode


SPECIAL-PURPOSE SYSTEMS

• Real-time embedded systems


• These are the most prevalent form of computers
• A real-time system requires not only that the computing results be correct
but also that the results be produced within a specific deadline period
• They vary considerably: special purpose, limited purpose OS, real-time OS
• Multimedia systems
• Most operating systems are designed to handle conventional data
• Multimedia data consists of audio and video as well as conventional files
• Frames of video must be delivered (streamed) according to certain
time restrictions, e.g. 30 frames per second
• Streams of data must be delivered according to time restrictions
• Handheld systems
• PDAs, smart phones, pocket-PC’s
• Handheld devices have limited memory, slow processors, small display
screens
• The amount of physical memory in a handheld device depends on the device; as a
result the operating system and applications must manage the memory efficiently
PROCESS CONCEPT

• An operating system executes a variety of programs:


• Time-shared systems – user programs or tasks
• Process is a program in execution; process execution must progress in
sequential fashion
• Batch systems work in terms of "jobs".
• Many modern process concepts are still expressed in terms of jobs, ( e.g. job
scheduling ), and the two terms are often used interchangeably
• Attributes held by a process include its hardware state, memory, CPU, etc.
• A process is defined as an entity which represents the basic unit of work to be
implemented in the system.
PROCESS CONCEPT CONT..

• A process includes:
• Stack - The process stack contains temporary data such as method/function parameters,
return addresses and local variables
• Heap - This is memory dynamically allocated to a process during its run time.
• Text - This includes the current activity, represented by the value of the Program Counter
• Data - This section contains the global and static variables.
• Program is passive entity, process is active entity
• Program becomes process when executable file loaded into memory
• Execution of program started via GUI mouse clicks, command line entry of its name,
etc.
• One program can be several processes
• Consider multiple users executing the same program
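
A small, hypothetical C program can make the four sections concrete: the instructions of main() live in the text section, the global counter in the data section, the malloc() allocation on the heap, and the local variable on the stack.

```c
#include <stdio.h>
#include <stdlib.h>

int counter = 0;                          /* data section: global/static variable      */

int main(void)                            /* text section: the program's instructions  */
{
    int local = 42;                       /* stack: local variable of this function    */
    int *dynamic = malloc(sizeof *dynamic);  /* heap: allocated at run time            */

    *dynamic = local + counter;
    printf("data %p  stack %p  heap %p\n",
           (void *)&counter, (void *)&local, (void *)dynamic);

    free(dynamic);
    return 0;
}
```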
PROCESS CONCEPT CONT..

• Note that the stack and the heap start at


opposite ends of the process's free space
and grow towards each other.
• If they should ever meet, then either a
stack overflow error will occur, or else a
call to new or malloc will fail due to
insufficient memory available

The process in memory


PROCESS STATE

• As a process executes, it changes state


• new: The process is being created
• running: Instructions are being executed
• waiting: The process cannot run at the moment, because it is waiting for some
resource to become available or for some event to occur. For example the process
may be waiting for keyboard input, disk access request, inter-process messages, a
timer to go off, or a child process to finish.
• ready: The process is waiting to be assigned to a processor
• terminated: The process has finished execution
DIAGRAM OF PROCESS STATE
PROCESS CONTROL BLOCK (PCB)

Each process is represented in the operating system by a process control block (PCB), also
known as a task control block.
Information associated with each process
• Process state. The state may be new, ready, running, waiting, halted, and so on.
• Program counter. The counter indicates the address of the next instruction to be
executed for this process.
• CPU registers. The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-
purpose registers, plus any condition-code information. Along with the program counter,
this state information must be saved when an interrupt occurs, to allow the process to be
continued correctly afterward
• CPU-scheduling information. This information includes a process priority, pointers to
scheduling queues, and any other scheduling parameters.
PROCESS CONTROL BLOCK (PCB)

• Memory-management information. This information may include such items as the value
of the base and limit registers, the page tables, or the segment tables, depending on the
memory system used by the operating system.
• Accounting information. This information includes the amount of CPU and real time used,
time limits, account numbers, job or process numbers, and so on.
• I/O status information. This information includes the list of I/O devices allocated to the
process, a list of open files, and so on.
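
As a sketch only, the C structure below shows how this per-process information might be grouped; the field names and sizes are illustrative assumptions, and real kernels (for example, Linux's task_struct) hold far more.

```c
#include <stdint.h>

#define MAX_OPEN_FILES 16

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* Hypothetical process control block: one instance per process. */
struct pcb {
    int             pid;                        /* process identifier            */
    enum proc_state state;                      /* new, ready, running, ...      */
    uint64_t        program_counter;            /* next instruction to execute   */
    uint64_t        registers[16];              /* saved CPU registers           */
    int             priority;                   /* CPU-scheduling information    */
    struct pcb     *next_in_queue;              /* link for ready/device queues  */
    uint64_t        base, limit;                /* memory-management information */
    uint64_t        cpu_time_used;              /* accounting information        */
    int             open_files[MAX_OPEN_FILES]; /* I/O status information        */
};
```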
CPU SWITCH FROM PROCESS TO PROCESS
PROCESS SCHEDULING

• The objective of multiprogramming is to have some process running at all


times, to maximize CPU utilization.
• The objective of time sharing is to switch the CPU among processes so
frequently that users can interact with each program while it is running.
• In order to meet these objectives, the process scheduler selects an available
process (possibly from a set of several available processes) for program
execution on the CPU.
• Multiprogramming operating systems allow more than one process to
be loaded into executable memory at a time, and the loaded processes
share the CPU using time multiplexing.
SCHEDULING QUEUES

• Job queue - As processes enter the system, they are put into a job queue,
which consists of all processes in the system.

• Ready queue - The processes that are residing in main memory and are
ready and waiting to execute are kept on a list called the ready queue.

• This queue is generally stored as a linked list. A ready-queue header


contains pointers to the first and final PCBs in the list. Each PCB includes a
pointer field that points to the next PCB in the ready queue.
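
A minimal sketch of that linked-list layout follows; the trimmed-down pcb struct and the enqueue/dequeue helper names are hypothetical, not taken from any particular kernel.

```c
#include <stddef.h>

struct pcb {                 /* trimmed-down PCB: only what the queue needs */
    int pid;
    struct pcb *next;        /* pointer to the next PCB in the ready queue  */
};

struct ready_queue {         /* header holds pointers to the first and last PCB */
    struct pcb *head;
    struct pcb *tail;
};

/* Append a PCB at the tail of the ready queue. */
static void enqueue(struct ready_queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail)
        q->tail->next = p;
    else
        q->head = p;
    q->tail = p;
}

/* Remove and return the PCB at the head (NULL if the queue is empty). */
static struct pcb *dequeue(struct ready_queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head)
            q->tail = NULL;
    }
    return p;
}
```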
Scheduling Queues (cont.)

Ready Queue and Various I/O Device Queues
SCHEDULING QUEUES (CONT.)

• The system also includes other queues. When a process is allocated the CPU, it executes for a while
and eventually quits, is interrupted, or waits for the occurrence of a particular event, such as the
completion of an I/O request.
• Suppose the process makes an I/O request to a shared device, such as a disk. Since there are many
processes in the system, the disk may be busy with the I/O request of some other process. The
process therefore may have to wait for the disk.
• Device queue - The list of processes waiting for a particular I/O device is called a device queue.
Each device has its own device queue
• The process control block contains all the necessary information for representing a process,
including the state of the process, memory-management information, and pointers to the process's
parent and any of its children.
• Each rectangular box represents a queue.
• The circles represent the resources that serve the queues, and the arrows indicate the flow of
processes in the system.
SCHEDULING QUEUES (CONT.)
• Process scheduling can be represented using queuing diagram
SCHEDULING QUEUES (CONT.)
• There are two types of queues : the ready queue and a set of device queues.
• A new process is initially put in the ready queue. It waits there until it is selected for
execution, or is dispatched. Once the process is allocated the CPU and is executing, one of
several events could occur:
• The process could issue an I/O request and then be placed in an I/O queue.
• The process could create a new sub-process and wait for the sub-process's termination.
• The process could be removed forcibly from the CPU, as a result of an interrupt, and be put
back in the ready queue.

• In the first two cases, the process eventually switches from the waiting state to the ready state
and is then put back in the ready queue.

• A process continues this cycle until it terminates, at which time it is removed from all queues
and has its PCB and resources deallocated.
SCHEDULERS
• A process migrates among the various scheduling queues throughout its lifetime.
• The operating system must select processes from these queues in some fashion. The selection of
the process is carried out by the appropriate scheduler.
• There are two types of schedulers:
• Long-term scheduler - The long-term scheduler is also known as the job scheduler. Its job is to
select processes from the job pool and load them into memory for execution.
• Short-term scheduler - The short-term scheduler is also known as the CPU scheduler. It selects from
the processes that are ready to execute in the ready queue and allocates the CPU to one of them.
• The main difference between these two schedulers lies in frequency of execution.
• The short-term scheduler must select a new process for the CPU frequently. The short-term
scheduler must be fast.
• The long-term scheduler executes much less frequently. The long-term scheduler controls the
degree of multiprogramming (the number of processes in memory). It must maintain the balance
between process creation and processes leaving the system. Thus, the long-term scheduler may need
to be invoked only when a process leaves the system.
SCHEDULERS (CONT.)
Processes can be described as either:
• I/O-bound process – spends more time doing I/O than computations, many short CPU bursts
• CPU-bound process – spends more time doing computations; few very long CPU bursts
It is important that the long-term scheduler select a good process mix of I/O-bound and CPU-bound
processes

Some operating systems, such as time-sharing systems, may use an intermediate level of scheduling, known as
the medium-term scheduler

Medium-term scheduler – it is used to remove processes from memory to reduce the degree of
multiprogramming.
Swapping - The removed process can be reintroduced into memory, and its execution can be continued
where it left off. This scheme is called swapping.
The process is swapped out, and is later swapped in, by the medium-term scheduler. Swapping may be
necessary to improve the process mix.
Schedulers (Cont.)
• For instance, a running process may become suspended if it makes an I/O request.
• A suspended process cannot make any progress towards completion.

• Addition of Medium Term Scheduling to the queuing diagram


CONTEXT SWITCH

• Switching the CPU to another process requires saving the state of the old process and loading the
saved state for the new process. This task is known as a Context Switch.
• The context of a process is represented in the Process Control Block (PCB) of the process. It
includes the value of the CPU registers, the process state and memory-management information.
• When a context switch occurs, the kernel saves the context of the old process in its PCB and loads
the saved context of the new process scheduled to run.
• Context switch time is pure overhead, because the system does no useful work while switching.
• Its speed varies from machine to machine, depending on the memory speed and the number of registers
that must be copied.
• Context switching has become such a performance bottleneck that programmers are using new
structures (threads) to avoid it whenever and wherever possible
OPERATION ON PROCESS
• Processes can be created and deleted dynamically, and they can execute concurrently
• Two types of operations can be performed on a process
• Process creation
• Process termination

Process creation
Processes need to be created in the system for different operations.
Process creation can be done by the following events −
• User request for process creation
• System initialization
• Execution of a process creation system call by a running process
OPERATION ON PROCESS CONT..
• Process creation cont..
• A process may be created by another process using fork().
• The creating process is called the parent process and the created process is the child process.
• A child process can have only one parent but a parent process may have many children.
• Each process is given an integer identifier, termed as process identifier, or PID. The parent PID
(PPID) is also stored for each process.
• Resource Sharing - In general, a process will need certain resources (CPU time, memory, files,
I/O devices) to accomplish its task. When a process creates a sub-process, that sub-process may be
able to obtain its resources directly from the OS, or it may be constrained to a subset of the
resources of the parent process.
• The parent may have to partition its resources among its children,
• or it may be able to share some resources (such as memory or files) among several of its children.
OPERATION ON PROCESS CONT..

• Process creation-
• Execution - When a process creates a new process, two possibilities exist in
terms of execution:
• The parent continues to execute concurrently with its children, competing equally
for the CPU.
• The parent waits until some or all of its children have terminated.
• There are also two possibilities in terms of the address space of the new
process:
• The child process is a duplicate of the parent process (it has the same program and
data as the parent, an exact clone).
• The child process has a new program loaded into it.
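
The standard UNIX calls illustrate both cases. In the sketch below the child replaces its image with a new program (ls, an arbitrary choice) via exec, while the parent waits for the child to terminate; error handling is kept minimal.

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                    /* create a child process (a duplicate of the parent) */

    if (pid < 0) {                         /* fork failed */
        perror("fork");
        return 1;
    } else if (pid == 0) {                 /* child: load a new program into its address space */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                  /* reached only if exec fails */
        _exit(1);
    } else {                               /* parent: wait for the child to terminate */
        int status;
        waitpid(pid, &status, 0);
        printf("child %d finished with exit status %d\n",
               (int)pid, WEXITSTATUS(status));
    }
    return 0;
}
```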
Operation on Process cont..
PROCESS CREATION
Operation on Process cont..
• Process Termination
• A process terminates when it finishes executing its final statement and asks the OS to delete it by
using the exit() system call
• All the resources of the process -including physical and virtual memory, open files, and I/O buffers-
are deallocated by the OS.
• Termination can occur in other circumstances as well. A process can cause the termination of
another process via an appropriate system call.
• Usually, such a system call can be invoked only by the parent of the process that is to be terminated.
• Note that a parent needs to know the identities of its children. Thus, when one process creates a
new process, the identity of the newly created process is passed to the parent.
Operation on Process cont..
• Process Termination
• A parent may terminate the execution of one of its children for a variety of reasons,
such as these:
• The child has exceeded its usage of some of the resources that it has been allocated. (To
determine whether this has occurred, the parent must have a mechanism to inspect the state of
its children.)
• The task assigned to the child is no longer required.
• The parent is exiting, and the operating system does not allow a child to continue if its parent
terminates.
• Some systems do not allow a child to exist if its parent has terminated. In such systems,
if a process terminates (either normally or abnormally), then all its children must also
be terminated. This phenomenon, referred to as cascading termination, is normally
initiated by the operating system.
INTERPROCESS
COMMUNICATION

• Processes executing concurrently in the operating system may be either
independent processes or cooperating processes.
• Independent Process
• It cannot affect or be affected by the other processes executing in the
system.
• A process that does not share data with any other process is independent.
• Cooperating Process
• It can affect or be affected by the other processes executing in the
system.
• A process that shares data with other processes is a cooperating process.
INTERPROCESS COMMUNICATION CONT..

• There are several reasons for providing an


environment that allows process cooperation:
1) Information sharing - Since several users may be interested in the same piece
of information (for instance, a shared file), we must provide an environment to
allow concurrent access to such information.
2) Computation speedup - If we want a particular task to run faster, we must
break it into subtasks, each of which will be executing in parallel with the others.
3) Modularity - We may want to construct the system in a modular fashion,
dividing the system functions into separate processes.
4) Convenience - Even an individual user may work on many tasks at the same
time. For instance, a user may be editing, printing, and compiling in parallel.
INTERPROCESS COMMUNICATION CONT..

• Cooperating processes require an interprocess communication (IPC)


mechanism
• that will allow them to exchange data and information.
• There are two fundamental models of interprocess communication:
• 1) shared memory
• 2) message passing.
INTERPROCESS COMMUNICATION CONT..
Shared-memory model:
• A region of memory is shared by the cooperating processes. These processes can then
exchange information by reading and writing data to the shared region.
• It is the faster strategy of communication.
• It provides maximum speed of computation, as communication is done through shared
memory, so system calls are made only to establish the shared memory.
• It is used for communication between processes on a single system, where the
communicating processes reside on the same machine and share a common address space.

Message-passing model:
• The communication takes place by means of messages exchanged between the
cooperating processes.
• It is a bit slower compared to the shared-memory model.
• It is time consuming, as message passing is implemented through kernel intervention
(system calls).
• It is typically used in a distributed environment, where communicating processes reside
on remote machines connected through a network.
Interprocess Communication cont..
COMMUNICATIONS MODELS

Message Passing Model and Shared Memory Model
Interprocess Communication cont..
Shared-Memory Systems
• Interprocess communication using shared memory requires
communicating processes to establish a region of shared memory.
• A shared-memory region resides in the address space of the
process creating the shared memory segment. Other processes
that wish to communicate using this shared memory segment must
attach it to their address space.
• Normally, the operating system tries to prevent one process from
accessing another process's memory.
• Shared memory requires that two or more processes agree to
remove this restriction. They can then exchange information by
reading and writing data in the shared areas.
Interprocess Communication cont..
• Shared-Memory Systems
• The form of the data and the location are determined by these processes and
are not under the operating system's control.
• The processes are also responsible for ensuring that they are not writing to
the same location simultaneously.
• To illustrate the concept of cooperating processes, let's consider the producer-
consumer problem, which is a common paradigm for cooperating processes.
• A producer process produces information that is consumed by a consumer
process.
• An example of the producer-consumer problem is the client-server paradigm. We
generally think of a server as a producer and a client as a consumer.
• For example, a Web server produces (that is, provides) HTML files and images,
which are consumed (that is, read) by the client Web browser requesting the
resource.
Interprocess Communication cont..
• Shared-Memory Systems
• One solution to the producer-consumer problem uses shared memory.
• To allow producer and consumer processes to run concurrently, we must have available a buffer of
items that can be filled by the producer and emptied by the consumer.
• The buffer will reside in a region of memory that is shared by the producer and consumer processes.
• A producer can produce one item while the consumer is consuming another item. The producer and
consumer must be synchronized, so that the consumer does not try to consume an item that has not
yet been produced.
• Two types of buffers can be used
• Unbounded Buffer
• Bounded Buffer
• Unbounded Buffer- no limit on the size of the buffer. The consumer may have to wait for new items,
but the producer can always produce new items.
• Bounded Buffer- The size of the buffer is fixed. In this case, the consumer must wait if the buffer is
empty, and the producer must wait if the buffer is full.
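
A sketch of the bounded-buffer logic follows, using the usual circular array with in and out indices. For brevity the buffer is an ordinary global and both routines are called from one program; in a real shared-memory solution the array and indices would live in a segment (for example, created with shmget or mmap) attached by both the producer and the consumer processes.

```c
#include <stdio.h>

#define BUFFER_SIZE 8

/* In a real solution the buffer and both indices would live in a
 * shared-memory region attached by the producer and the consumer. */
static int buffer[BUFFER_SIZE];
static int in  = 0;   /* next free slot, advanced by the producer */
static int out = 0;   /* next full slot, advanced by the consumer */

/* Producer: wait while the buffer is full, then insert one item. */
static void produce(int item)
{
    while ((in + 1) % BUFFER_SIZE == out)
        ;                                   /* buffer full: busy-wait */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
}

/* Consumer: wait while the buffer is empty, then remove one item. */
static int consume(void)
{
    while (in == out)
        ;                                   /* buffer empty: busy-wait */
    int item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return item;
}

int main(void)
{
    for (int i = 1; i <= 5; i++)
        produce(i * 10);                    /* producer fills five slots */
    for (int i = 1; i <= 5; i++)
        printf("consumed %d\n", consume()); /* consumer drains them */
    return 0;
}
```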
Interprocess Communication cont..
• Message passing Systems
• It allow processes to communicate and to synchronize their actions
without sharing the same address space.
• It is particularly useful in a distributed environment, where the
communicating processes may reside on different computers
connected by a network.
• For example, a chat program used on the World Wide Web could be
designed so that chat participants communicate with one another by
exchanging messages.
• A message-passing facility provides at least two operations:
send(message) and receive(message).
• Messages sent by a process can be of either fixed or variable size.
Interprocess Communication cont..
• Message passing Systems
• If processes P and Q want to communicate, they must send messages to and
receive messages from each other.
• This can happen only if a communication link exists between them. This
link can be implemented in a variety of ways, both physically and logically.
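
One way to sketch send() and receive() on a single machine is with a kernel-managed pipe, as below: the parent "sends" by writing to the pipe and the child "receives" by reading it, with no shared address space involved. The message text is arbitrary.

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char msg[] = "hello from the parent";
    char buf[64];

    if (pipe(fd) == -1) {               /* the kernel-managed communication link */
        perror("pipe");
        return 1;
    }

    if (fork() == 0) {                  /* child: the receiving process */
        close(fd[1]);                   /* not sending, so close the write end */
        ssize_t n = read(fd[0], buf, sizeof buf - 1);   /* receive(message) */
        buf[n > 0 ? n : 0] = '\0';
        printf("child received: %s\n", buf);
        return 0;
    }

    close(fd[0]);                       /* parent: not receiving, so close the read end */
    write(fd[1], msg, strlen(msg));     /* send(message) */
    close(fd[1]);
    wait(NULL);                         /* wait for the child to finish */
    return 0;
}
```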
CPU SCHEDULER

• Selects from among the processes in ready queue, and allocates the CPU to one
of them:
• Queue may be ordered in various ways
• CPU scheduling decisions may take place when a process:

1. Switches from running to waiting state


2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
SCHEDULING ALGORITHMS
A Process Scheduler schedules different processes to be assigned to the CPU based
on particular scheduling algorithms.
There are five popular process scheduling algorithms
• First-Come, First-Served (FCFS) Scheduling
• Shortest-Job-Next (SJN) Scheduling
• Shortest Remaining Time
• Priority Scheduling
• Round Robin(RR) Scheduling
ALTERNATING SEQUENCE OF CPU AND I/O BURSTS

• Process execution consists of a cycle of
CPU execution and I/O wait.
• Processes alternate between these two
states. Process execution begins with a
CPU burst.
• That is followed by an I/O burst, which
is followed by another CPU burst, then
another I/O burst, and so on.
• Eventually, the final CPU burst ends
with a system request to terminate
execution
NON-PREEMPTIVE OR PREEMPTIVE

• These algorithms are either non-preemptive or preemptive.


• Non-preemptive algorithms are designed so that once a process enters the
running state, it cannot be preempted until it completes its CPU burst or otherwise releases the CPU
• The preemptive scheduling is based on priority where a scheduler may preempt
a low priority running process anytime when a high priority process enters into a
ready state.
SCHEDULING CRITERIA
• CPU utilization – keep the CPU as busy as possible
• Throughput – Number of processes that complete their execution
per time unit
• Turnaround time – amount of time to execute a particular process
• Waiting time – amount of time a process has been waiting in the
ready queue
• Response time – amount of time it takes from when a request was
submitted until the first response is produced, not output (for time-
sharing environment)
FIRST COME FIRST SERVE (FCFS)

• Jobs are executed on first come, first serve basis.


• It is a non-preemptive scheduling algorithm.
• Easy to understand and implement.
• Its implementation is based on FIFO queue.
• Poor in performance as average wait time is high.
FIRST COME FIRST SERVE (FCFS)
Example-1 Process Burst Time
P1 24
P2 3
P3 3
• Suppose that the processes arrive in the order P1, P2, P3. Calculate the average waiting time.

• The Gantt Chart for the schedule is:

P1 P2 P3

0 24 27 30

• Waiting time for P1 = 0; P2 = 24; P3 = 27


• Average waiting time: (0 + 24 + 27)/3 = 17
FIRST COME FIRST SERVE (FCFS)
Example-2 Suppose that the processes arrive in the order P2, P3, P1.
Calculate the average waiting time.
FIRST COME FIRST SERVE (FCFS)

The Gantt chart for the schedule is:

P2 P3 P1

0 3 6 30

• Waiting time for P1 = 6; P2 = 0; P3 = 3


• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than previous case
• Convoy effect - short process behind long process
• Consider one CPU-bound and many I/O-bound processes
FIRST COME FIRST SERVE (FCFS)

• Example-3 Consider the set of processes whose arrival and burst time are given
below,
• Calculate the average waiting time and turnaround time using FCFS algorithm.
Process ID Arrival Time Burst Time
P1 4 5
P2 6 4
P3 0 3
P4 6 2
P5 5 4
FIRST COME FIRST SERVE (FCFS)

Turnaround Time = Completion Time - Arrival Time
Waiting Time = Turnaround Time - Burst Time

Process ID   Arrival Time   Burst Time
P1           4              5
P2           6              4
P3           0              3
P4           6              2
P5           5              4

Gantt chart:
P3   P1   P5   P2   P4
0  3  4  9  13  17  19
FIRST COME FIRST SERVE (FCFS)

• Average Turnaround Time = 8 units
• Average Waiting Time = 4.4 units
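
The small C sketch below reproduces the Example-3 numbers by simulating FCFS: it walks the processes in arrival order, lets the CPU idle when no process has arrived yet, and applies the two formulas above. The arrays simply hard-code the table values.

```c
#include <stdio.h>

#define N 5

int main(void)
{
    /* Example-3 data for P1..P5 */
    int arrival[N] = {4, 6, 0, 6, 5};
    int burst[N]   = {5, 4, 3, 2, 4};
    int order[N]   = {2, 0, 4, 1, 3};     /* indices sorted by arrival time (FCFS order) */

    int time = 0;
    double total_tat = 0, total_wt = 0;

    for (int k = 0; k < N; k++) {
        int i = order[k];
        if (time < arrival[i])            /* CPU idles until the next process arrives */
            time = arrival[i];
        time += burst[i];                 /* completion time of process i */
        int tat = time - arrival[i];      /* turnaround = completion - arrival */
        int wt  = tat - burst[i];         /* waiting = turnaround - burst */
        printf("P%d: turnaround=%d waiting=%d\n", i + 1, tat, wt);
        total_tat += tat;
        total_wt  += wt;
    }
    printf("Average turnaround = %.1f   Average waiting = %.1f\n",
           total_tat / N, total_wt / N);
    return 0;
}
```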
FIRST COME FIRST SERVE (FCFS)

• The FCFS scheduling algorithm is non-preemptive.


• Once the CPU has been allocated to a process, that process keeps the
CPU until it releases the CPU, either by terminating or by requesting
I/O.
• The FCFS algorithm is thus particularly troublesome for time-sharing
systems, where it is important that each user get a share of the CPU at
regular intervals.
• It would be disastrous to allow one process to keep the CPU for an
extended period.
SHORTEST JOB FIRST (SJF)

• This algorithm associates with each process the length of the process's next CPU burst
• When the CPU is available, it is assigned to the process that has the smallest next CPU burst.
• This is also known as the shortest next CPU burst algorithm, because scheduling depends
on the length of the next CPU burst of the process rather than its total length
• SJF may be either non-preemptive or preemptive.
• The preemptive version is sometimes called shortest-remaining-time-first scheduling.
• Best approach to minimize waiting time.
• Impossible to implement in interactive systems where the required CPU time is not known.
• The processor should know in advance how much time the process will take.
SHORTEST JOB FIRST (NON-PREEMPTIVE)

Example-1
Calculate the average waiting time using the SJF algorithm for the processes whose
arrival time and burst time are given below:

Process   Arrival Time   Burst Time
P1        0.0            6
P2        2.0            8
P3        4.0            7
P4        5.0            3
SHORTEST JOB FIRST (NON-PREEMPTIVE)

• SJF scheduling Gantt chart

P4 P1 P3 P2

0 3 9 16 24

• Average waiting time = (3 + 16 + 9 + 0) / 4 = 7


SJF EXAMPLE FOR PREEMPTIVE OR
SHORTEST REMAINING TIME

• Example-2 Now we add the concepts of varying arrival times and


preemption to the analysis

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5
SJF

• Preemptive SJF Gantt Chart

P1 P2 P4 P1 P3

0 1 5 10 17 26

Waiting Time = (Time at which the process last starts executing)
- (Number of milliseconds the process has already executed) - (Arrival Time)
SJF

Waiting Time of Processes

P1=(10-1-0)=9ms
P2=(1-0-1)=0ms
P3=(17-0-2)=15ms
P4=(5-0-3)=2ms

• Average waiting time = [(10-1-0) + (1-0-1) + (17-0-2) + (5-0-3)]/4 = 26/4 = 6.5 ms


PRIORITY BASED SCHEDULING

• Priority scheduling can be either preemptive or non-preemptive.


• When a process arrives at the ready queue, its priority is compared with the priority of
the currently running process.
• A preemptive priority scheduling algorithm will preempt the CPU if the priority of the
newly arrived process is higher than the priority of the currently running process.
• Non-preemptive priority scheduling is one of the most common scheduling
algorithms in batch systems. Processes are considered in order of arrival time (earlier
arrival first); if two processes have the same arrival time, the higher-priority process is
scheduled first; and if two processes also have the same priority, the one with the lower
process number goes first. This is repeated until all processes are executed.
• The difference between preemptive priority scheduling and non-preemptive priority
scheduling is that, in preemptive priority scheduling, the job which is being executed
can be stopped at the arrival of a higher priority job.
• Once all the jobs are available in the ready queue, the algorithm behaves as non-
preemptive priority scheduling, which means the scheduled job will run to
completion and no preemption will be done.
• The SJF algorithm is a special case of the general priority scheduling algorithm.
• A priority is associated with each process, and the CPU is allocated to the process with
the highest priority.
PRIORITY BASED SCHEDULING CONT..

• Priority is expressed in terms of low priority or high priority.
• Priorities are generally indicated by some fixed range of numbers; low numbers
represent high priority.
• The problem with priority scheduling algorithms is indefinite blocking, or starvation.
• A process that is ready to run but waiting for the CPU can be blocked. This means a
steady stream of higher-priority processes can prevent a low-priority process from ever
getting the CPU. This can leave some low-priority processes waiting indefinitely.
• A solution to this problem is aging. Aging is a technique of gradually increasing the
priority of processes that wait in the system for a long time.
PRIORITY BASED SCHEDULING CONT..
• Example-1 consider the following set of processes, assumed to have arrived at
time 0 in the order P1, P2, · · ·, P5, with the length of the CPU burst given in
milliseconds, calculate the average waiting time.
Process Burst time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
PRIORITY BASED SCHEDULING CONT..
• Using priority scheduling, we would schedule these processes according to the
following Gantt chart:

P2 P5 P1 P3 P4

0 1 6 16 18 19

The average waiting time is 8.2 milliseconds.


PRIORITY BASED SCHEDULING CONT..

In the Example, there are 7 processes P1, P2, P3, P4, P5, P6 and P7. Their
priorities, Arrival Time and burst time are given in the table. Calculate turn
around time and Average waiting time.
Process ID Priority Arrival Time Burst Time

1 2 0 3
2 6 2 5
3 3 1 4
4 5 4 2
5 7 6 9
6 4 5 4
7 10 7 10
PRIORITY BASED SCHEDULING CONT..

1.Turn Around Time = Completion Time - Arrival Time


2.Waiting Time = Turn Around Time - Burst Time
PRIORITY BASED SCHEDULING CONT..

Process Id   Priority   Arrival Time   Burst Time   Completion Time   Turnaround Time   Waiting Time
1            2          0              3            3                 3                 0
2            6          2              5            18                16                11
3            3          1              4            7                 6                 2
4            5          4              2            13                9                 7
5            7          6              9            27                21                12
6            4          5              4            11                6                 2
7            10         7              10           37                30                20

Avg Waiting Time = (0+11+2+7+12+2+20)/7 = 54/7 units


ROUND-ROBIN SCHEDULING

• The round-robin (RR) scheduling algorithm is designed especially for


timesharing systems. It is similar to FCFS scheduling, but preemption is added to
enable the system to switch between processes.
• Each process is provided a fixed time to execute, called a time quantum or
time slice.
• Once a process has executed for the given time period, it is preempted and another
process executes for its time period.
• Context switching is used to save states of preempted processes.
• In Round Robin scheduling implementation, the ready queue works in FIFO
manner.
• New processes are added to the tail of the ready queue. The CPU scheduler picks
the first process from the ready queue, sets a timer to interrupt after 1 time
quantum, and dispatches the process.
ROUND-ROBIN SCHEDULING CONT..

• There are two possibilities for process execution.


1. The process may have a CPU burst of less than 1 time
quantum-
In this case, the process itself will release the CPU voluntarily. The
scheduler will then proceed to the next process in the ready queue.

2. The CPU burst of the currently running process is longer than


1 time quantum-
Here the timer will go off and will cause an interrupt. A context
switch will be executed, and the process will be put at the tail of the ready
queue. The CPU scheduler will then select the next process in the ready
queue.
ROUND-ROBIN SCHEDULING CONT..

• Example-1 Consider the following set of processes that arrive at time 0, with the
length of the CPU burst given in milliseconds and time quantum is of 4 milliseconds,
calculate the average waiting time.

Process Burst Time


P1 24
P2 3
P3 3

• Gantt Chart

P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30
ROUND-ROBIN SCHEDULING CONT..

• Method 1

Turnaround Time = Completion Time – Arrival Time
Waiting Time = Turnaround Time – Burst Time

Process   Burst Time   Turnaround Time   Waiting Time
P1        24           30-0=30           30-24=6
P2        3            7-0=7             7-3=4
P3        3            10-0=10           10-3=7

• Average Turnaround Time = (30+7+10)/3 = 15.66 ms
• Average Waiting Time = (6+4+7)/3 = 5.66 ms
ROUND-ROBIN SCHEDULING CONT..

0    4    7    10   14   18   22   26   30
P1   P2   P3   P1   P1   P1   P1   P1

• Method 2
• Waiting Time = Last Start Time - Arrival Time - (Number of quanta previously executed X Time Quantum)

Process   Burst Time   Waiting Time
P1        24           26-0-(5X4)=6
P2        3            4-0-(0X4)=4
P3        3            7-0-(0X4)=7

• Average Waiting Time = (6+4+7)/3 = 5.66 ms
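
The sketch below replays Example-1 with a 4 ms quantum. Because all three processes arrive at time 0, a simple circular scan over the remaining burst times stands in for the FIFO ready queue; with staggered arrivals a real queue would be needed.

```c
#include <stdio.h>

#define N       3
#define QUANTUM 4

int main(void)
{
    int burst[N]      = {24, 3, 3};      /* P1, P2, P3, all arriving at time 0 */
    int remaining[N]  = {24, 3, 3};
    int completion[N] = {0};
    int time = 0, done = 0;

    while (done < N) {                   /* keep cycling until every process finishes */
        for (int i = 0; i < N; i++) {
            if (remaining[i] == 0)
                continue;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            time += slice;               /* run the process for one quantum (or less) */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                completion[i] = time;
                done++;
            }
        }
    }

    double total_wait = 0;
    for (int i = 0; i < N; i++) {
        int waiting = completion[i] - burst[i];   /* arrival time is 0 for all */
        printf("P%d: waiting=%d\n", i + 1, waiting);
        total_wait += waiting;
    }
    printf("Average waiting time = %.2f ms\n", total_wait / N);
    return 0;
}
```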
DEADLOCKS SYSTEM MODEL

• A system model or structure consists of a fixed number of
resources to be circulated among the competing processes.
• The resources are partitioned into several types, each
consisting of some specific quantity of identical instances.
• Memory space, CPU cycles, directories and files, and I/O devices
like keyboards, printers and CD-DVD drives are prime
examples of resource types.
• When a system has 2 CPUs, the resource type CPU has
two instances.

DEADLOCKS SYSTEM MODEL

A process in an operating system uses resources in the following way:
1) Requests the resource
2) Uses the resource
3) Releases the resource
• Request: If the request cannot be granted immediately
(for example, when another process is using the resource),
then the requesting process must wait until it
can obtain the resource.
• Use: The process can operate on the resource (for example, when the
resource is a printer, the process can print on the printer).
• Release: The process releases the resource (for example, by terminating or
exiting).
DEADLOCK

• Deadlock is a situation where a set of processes are blocked
because each process is holding a resource and waiting for another
resource acquired by some other process.
• Deadlock can arise if the following four conditions hold
simultaneously (Necessary Conditions)
Mutual Exclusion: One or more resources are non-sharable
(only one process can use a resource at a time)
Hold and Wait: A process is holding at least one resource and
waiting for additional resources.
No Preemption: A resource cannot be taken from a process unless
the process releases the resource.
Circular Wait: A set of processes are waiting for each other in
circular form.

DEADLOCK EXAMPLE

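A deadlock can be reproduced in a few lines with two POSIX threads standing in for processes and two mutexes standing in for resources; this is a sketch only, and the sleep() calls merely make the fatal interleaving very likely. Each thread holds one lock and requests the other's, so all four conditions hold and the program normally hangs forever.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t resource1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t resource2 = PTHREAD_MUTEX_INITIALIZER;

static void *process_a(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&resource1);   /* hold resource 1 ...               */
    sleep(1);                         /* give the other thread time to run */
    pthread_mutex_lock(&resource2);   /* ... and wait for resource 2       */
    puts("process A got both resources");
    pthread_mutex_unlock(&resource2);
    pthread_mutex_unlock(&resource1);
    return NULL;
}

static void *process_b(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&resource2);   /* hold resource 2 ...               */
    sleep(1);
    pthread_mutex_lock(&resource1);   /* ... and wait for resource 1       */
    puts("process B got both resources");
    pthread_mutex_unlock(&resource1);
    pthread_mutex_unlock(&resource2);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, process_a, NULL);
    pthread_create(&b, NULL, process_b, NULL);
    pthread_join(a, NULL);            /* never returns once both threads block */
    pthread_join(b, NULL);
    return 0;
}
```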
Deadlock characterization

• Mutual Exclusion
• There should be a resource that can only be held by one process at a time. In the
diagram below, there is a single instance of Resource 1 and it is held by Process 1
only.

DEADLOCK CHARACTERIZATION

Hold and Wait

• A process can hold multiple resources and still request more


resources from other processes which are holding them. In the
diagram given below, Process 2 holds Resource 2 and Resource
3 and is requesting the Resource 1 which is held by Process 1.

DEADLOCK CHARACTERIZATION

• No Preemption

A resource cannot be preempted from a process by force. A


process can only release a resource voluntarily. In the diagram
below, Process 2 cannot preempt Resource 1 from Process 1. It
will only be released when Process 1 relinquishes it voluntarily
after its execution is complete.

DEADLOCK CHARACTERIZATION

Circular Wait
• A process is waiting for the resource held by the second
process, which is waiting for the resource held by the third
process and so on, till the last process is waiting for a resource
held by the first process.
• This forms a circular chain.
• For example: Process 1 is allocated Resource2 and it is
requesting Resource 1.
• Similarly, Process 2 is allocated Resource 1 and it is requesting
Resource 2.
• This forms a circular wait loop

DEADLOCK CHARACTERIZATION

End of module 1
