
OPERATING SYSTEM

UNIT-1
An operating system (OS) is a program that acts as an interface
between the system hardware and the user. It handles all the
interactions between the software and the hardware, and everything a
computer system does ultimately depends on the OS at the base level.
It performs functions such as managing memory, managing processes,
and mediating the interaction between hardware and software.

Objectives of OS:
The primary goals of an operating system are as follows:
Convenience – An operating system makes a machine easier to use.
It lets users get started quickly on the things they wish to
complete, without the stress of first configuring the system.
Efficiency – An operating system enables efficient use of system
resources, in part because less time is spent configuring the system.
Ability to evolve – An operating system should be designed in such a
way that it allows for the effective development, testing, and
introduction of new features without interfering with service.
Management of system resources – It guarantees that resources are
shared fairly among various processes and users.
*Features of Operating System (OS)
Here is a list of important features of an OS:
1. Protected and supervisor mode
2. Disk access and file systems
3. Device drivers
4. Networking
5. Security
6. Program execution
7. Memory management and virtual memory
8. Multitasking
9. Handling I/O operations
10. Manipulation of the file system
11. Error detection and handling
12. Resource allocation
13. Information and resource protection
*Advantage of Operating System
1. Hides the details of the hardware by creating an abstraction
2. Easy to use with a GUI
3. Offers an environment in which a user may execute
programs/applications
4. Makes sure that the computer system is convenient to use
5. Acts as an intermediary between applications and the
hardware components
6. Provides the computer system's resources in an easy-to-use format
*Disadvantages of Operating System
1. If any issue occurs in the OS, you may lose all the contents
stored in your system.
2. Operating system software can be quite expensive for small
organizations, which adds a burden on them. Example: Windows.
3. An OS is never entirely secure, as a threat can occur at any time.
*Operating System Generations:
Operating systems have evolved over the years, and their evolution
can be mapped using generations. There are four generations of
operating systems, described as follows:

1.The First Generation ( 1945 - 1955 ): Vacuum Tubes and Plugboards:


The first generation of operating systems spans roughly 1945 to 1955,
the period of the Second World War, before true digital computers had
been built. For calculation purposes, people used machines called
calculating engines, constructed from mechanical relays. These relays
worked very slowly, and for that reason they were replaced with vacuum
tubes over time, though the resulting machines were still slow. All
tasks related to these machines (designing, building, and maintaining)
were managed by a single group of people.
2.The Second Generation ( 1955 - 1965 ): Transistors and Batch
Systems:
The second generation of operating systems saw the development of
batch systems and the beginnings of multiprogramming and
multiprocessing. Multiprogramming is a scheme in which many user
programs are kept in main storage at once and the processor switches
between the jobs. In multiprocessing, by contrast, the power of a
machine is increased by using several processors in a single system.
In this generation, real-time systems also emerged to provide quick,
real-time responses where computers controlled the working of
industries such as oil refineries and coal plants.
3.The Third Generation ( 1965 - 1980 ): Integrated Circuits and
Multiprogramming:
Two lines of computer systems existed during this period
(1965-1980): scientific computers and commercial computers. IBM
combined both lines in its System/360 family. Integrated circuits
came into use in computer systems at this time, increasing
performance many times over compared with second-generation systems.
Prices also fell: fabricating integrated circuits requires an
expensive setup, but once that setup exists, circuits can be produced
in very large numbers at low unit cost.

4.The Fourth Generation ( 1980 - Present ): Personal Computers:


The time from 1980 to the present is termed the fourth generation
of operating systems. Personal computers came into use as
integrated circuits became readily available. Large-scale
integrated circuits, containing many thousands of transistors on a
silicon chip a few centimeters in size, made personal computers
possible.
Types of Operating System
An operating system is a well-organized collection of programs that
manages the computer hardware. It is a type of system software that
is responsible for the smooth functioning of the computer system.

1.Batch Operating Systems:


A batch operating system collects programs and data into batches
and then processes them together. The main aim of a batch
processing system is to decrease setup time when submitting
similar jobs to the CPU. Batch processing was implemented with
card readers and hard disks: jobs are saved on the hard disk to
form a pool of jobs, which are then executed as a batch.
Advantages of Batch Operating System:

• CPU utilization is better with modern batch operating systems.
• Due to serial job scheduling, a large number of jobs can be
scheduled repeatedly.
• The batch process can be divided into several components or
stages to increase the processing speed.
• After one job in the group completes, the next job from
the job spool runs without any user interaction.

Disadvantages:

• Sometimes manual interventions are required between two batches.


• The CPU utilization is low because the time taken in loading and
unloading batches is very high compared to execution time.
• Sometimes jobs enter into an infinite loop due to some mistake.
• Meanwhile, if one job is taking too much time, other jobs have to
wait.

2. Multi-programming OS:

The operating system which can run multiple processes on a single
processor is called a multiprogramming operating system. Several
programs want to be executed, so they are kept in the ready queue
and assigned to the CPU one by one. If one process becomes blocked,
another process from the ready queue is assigned to the CPU. The
aim is optimal resource utilization and higher CPU utilization.

Advantages of Multiprogramming OS
o Throughput of the system increases, as the CPU always has a
program to execute.
o Response time can also be reduced.

Disadvantages of Multiprogramming OS
o Multiprogramming systems provide an environment in which
various system resources are used efficiently, but they do not
provide any user interaction with the computer system.

Multiprocessing Operating System:

Multiprocessing Operating System is the type of Operating System that


uses multiple processors to operate within a single system. Multiple
CPUs are connected to divide and execute a job more quickly. After the
task is finished, the output from all Processors is compiled to provide a
final result. Jobs are required to share main memory and they may
often share other system resources.

The organization of a typical Multiprocessing Operating System is


shown in the image given below.

Advantages of Multiprocessing Operating System:

Multiprocessing Operating Systems have the following advantages over


the other types of Operating Systems.

• Multiprocessing Operating System uses multiple processors to execute


the tasks which results in faster execution and better performance of
the system.
• In case a processor fails to work, the other processors can continue to
execute the tasks. Thus, Multiprocessing Operating Systems ensure the
high availability of the system.
• Multiprocessing Operating Systems are scalable, which means they
can handle an increased workload without affecting the performance
of the system.
• Multiprocessing Operating Systems efficiently utilize the resources.
Disadvantages of Multiprocessing Operating System:

The Multiprocessing Operating System has the following disadvantages.

• Multiprocessing Operating Systems are complex and require specialized


knowledge.
• The cost of a Multiprocessing Operating system can be high because of
the need for specialized hardware resources.
• They may face compatibility issues with software that is not designed to
work with multiprocessing operating systems.
• Achieving synchronization between multiple processors in a
multiprocessing operating system is a challenging task.
Types of Multiprocessing Operating Systems

Multiprocessing Operating Systems are of the following types.

1. Symmetrical Multiprocessing Operating System


2. Asymmetrical Multiprocessing Operating System

Multitasking Operating System:

The multitasking OS is also known as a time-sharing operating system,
as each task is given some CPU time so that all tasks work
efficiently. The system provides access to a large number of users,
and each user gets a share of CPU time as if working on a single
system. The tasks may come from a single user or from different
users. The time allotted to execute one task is called a quantum;
as soon as one task's quantum expires, the system switches over to
another task.
Advantages of Multitasking OS:

• Each task gets equal time for execution.


• The idle time for the CPU will be the lowest.
• There are very few chances for the duplication of the software.

Disadvantages of Multitasking OS:

• Processes with higher priority cannot be executed first, as equal
priority is given to each process or task.
• Each user's data must be protected from unauthorized access.
• Sometimes there are data communication problems.

Examples of Multitasking OS: UNIX, etc.


Network OS

Network operating systems are the systems that run on a server and
manage all the networking functions. They allow sharing of various
files, applications, printers, security, and other networking functions
over a small network of computers like LAN or any other private
network. In the network OS, all the users are aware of the
configurations of every other user within the network, which is why
network operating systems are also known as tightly coupled systems.

Advantages of Network OS

• New technologies and hardware can easily upgrade the systems.


• Security of the system is managed over servers.
• Servers can be accessed remotely from different locations and systems.
• The centralized servers are stable.

Disadvantages of Network OS:

• Server costs are high.


• Regular updates and maintenance are required.
• Users are dependent on the central location for the maximum number of
operations.

Examples of Network OS: Microsoft Windows Server 2008, Linux, etc.

Real-Time OS

Real-Time operating systems serve real-time systems. These operating systems


are useful when many events occur in a short time or within certain deadlines,
such as real-time simulations.

Types of the real-time OS are:


• Hard real-time OS:

The hard real-time OS is the operating system for mainly the applications in which
the slightest delay is also unacceptable. The time constraints of such applications
are very strict. Such systems are built for life-saving equipment like parachutes
and airbags, which immediately need to be in action if an accident happens.

• Soft real-time OS:

The soft real-time OS is the operating system for applications where time
constraint is not very strict.

In a soft real-time system, an important task is prioritized over
less important tasks, and this priority remains active until the
task completes. Furthermore, a time limit is set for each job, and
short delays for later tasks are acceptable. Examples: virtual
reality, reservation systems, etc.

Advantages of Real-Time OS

• It provides more output from all the resources as there is


maximum utilization of systems.
• It provides the best management of memory allocation.
• These systems are designed to be highly reliable and nearly error-free.
• These operating systems focus more on running applications than
those in the queue.
• Shifting from one task to another takes very little time.

Disadvantages of Real-Time OS

• The required system resources are extremely expensive.


• The algorithms used are very complex.
• Only limited tasks can run at a single time.
• In such systems, we cannot set thread priority as these systems
cannot switch tasks easily.

Examples of Real-Time OS: Medical imaging systems, robots, etc.

Distributed Operating System:

A distributed operating system is one in which several computer
systems are connected through a single communication channel. These
systems have their own individual processors and memory, and the
processors communicate through high-speed buses or telephone lines.
The individual systems that connect through a single channel are
treated as a single unit; we can also call them loosely coupled
systems. The individual components or systems of the network are
called nodes.

Advantages of Distributed OS

• The load on the system decreases.


• If one system stops it will not affect the other.
• The system shares a workload that makes calculations easy.
• The size of the system can be set according to requirements.
Disadvantages of Distributed OS

• The cost for set up is more.


• Failure of the main system will affect the whole system.

• Programming is complex.

Time Sharing Operating System:

Time sharing operating system is a type of operating system. An


operating system is basically, a program that acts as an interface
between the system hardware and the user. Moreover, it handles all the
interactions between the software and the hardware.

It allows the user to perform more than one task at a time, each task
getting the same amount of time to execute. Hence, the name time
sharing OS. Moreover, it is an extension of multiprogramming systems. In
multiprogramming systems, the aim is to make the maximum use of the
CPU. On the other hand, here the aim is to achieve the minimum
response time of CPU.
Advantages of Time Sharing Operating System

• Response time of CPU reduces.


• Idle time of CPU reduces.
• Each task/process gets an equal time slot to execute.
Disadvantages of Time Sharing Operating System

• The data of each program should be secure so that they don’t mix.
• Communication is very important to maintain. Lack of
communication can affect the whole working.

*System Call in OS:

A system call is a mechanism that provides the interface between a
process and the operating system. It is a programmatic method by
which a computer program requests a service from the kernel of the
OS.
System calls offer the services of the operating system to user
programs via an API (Application Programming Interface). System
calls are the only entry points into the kernel.

The system call interface is the entry point for a program to interact with the
operating system. When a program makes a system call, it is transferring
control from the user mode to the kernel mode. The kernel mode is the
privileged mode of operation in which the operating system executes, and it has
access to all hardware resources. By making a system call, a program is asking
the operating system to perform a task on its behalf, such as reading or writing
data to a file, allocating memory, or creating a new process.

System calls are defined in the API of the operating system and are usually
written in a low-level language such as C or Assembly. These system calls are
designed to be as simple and efficient as possible so that they can be executed
quickly. They are usually implemented as a software interrupt or a trap, which
transfers control from the user-level program to the operating system.

Working of a System Call


A system call is a way for a user-level process to request services from the
kernel (operating system). The process makes a request to the operating
system through a system call by executing a special instruction that triggers the
operating system to perform the requested service.
Here’s a high-level diagram explaining the working of a system call:

• User-level process: The process executing in user mode needs to perform a


specific operation that requires the services of the operating system.
• System call request: The process makes a request to the operating system by
executing a special instruction, typically through a library function (e.g. read(),
write(), open(), etc.).
• Interrupt: The system call instruction triggers a software interrupt, causing the
processor to switch from user mode to kernel mode.
• System Call Handler: The operating system has a system call handler routine
that is executed in response to the interrupt. The system call handler performs
the requested operation.
• System Services: The operating system provides the requested services to
the process, such as accessing hardware resources or allocating memory.
• Return to User Mode: The processor switches back to user mode, and the
results of the system calls are returned to the user-level process.

Types of System Calls in OS:


System calls fall into five main types:
1. Process Control
Process control system calls: These system calls are used to create, manage,
and control processes. Examples include fork(), exec(), wait(), kill(), and
getpid().

2. File Management
File management system calls: These system calls are used to create,
delete, open, close, read, and write files and directories. Examples
include open(), close(), read(), write(), and lseek().

3. Device Management
Device management system calls: These system calls are used to manage and
manipulate I/O devices such as printers, keyboards, and disk drives. Examples
include ioctl() and select().

4. Information Management
Information maintenance system calls: These system calls are used to retrieve
information about the system, the processes running on the system, and the
status of various resources. Examples include getuid(), getgid(), and getpid().

5. Communication
These system calls are used for inter-process communication (IPC) and
resource sharing. Examples include pipe(), socket(), bind(), listen(),
and accept().

Examples of Windows and Unix system calls:

  Category             Windows                   Unix
  Process control      CreateProcess()           fork()
                       ExitProcess()             exit()
                       WaitForSingleObject()     wait()
  File management      CreateFile()              open()
                       ReadFile()                read()
                       WriteFile()               write()
                       CloseHandle()             close()
  Device management    SetConsoleMode()          ioctl()
  Information          GetCurrentProcessID()     getpid()
  Communication        CreatePipe()              pipe()

System Boot:
The process of starting a computer is known as booting. It can be triggered
by hardware, such as a button press, or by software. A CPU has no software in
its main memory when it is turned on, so some program must load software into
memory before it can run. This can be accomplished by the CPU's hardware or
firmware, or by a separate processor in the computer system.

Restarting a computer is known as rebooting, and it can be "hard" or
"soft," depending on whether the power to the CPU is switched from off to on.
On some computers, a soft boot may optionally clear RAM to zero. Both hard and
soft booting can be started by hardware, such as a button press, or by a
software command. Booting is complete when the operative runtime environment,
typically the operating system and some application programs, is reached.

OR
1. Boot Loader: Computers powered by a central processing unit can only
execute code stored in the system's memory. Modern operating systems and
application programs store their code and data in nonvolatile memory. When a
computer is first turned on, it must initially rely only on the code and data
stored in nonvolatile parts of the system's memory. At boot time the
operating system is not yet loaded, so the hardware cannot carry out many
complex system tasks on its own.

The program that initiates the series of events resulting in the loading of
the full operating system is the boot loader, sometimes known as a bootstrap
loader.

2. Boot Devices: The device from which the operating system is loaded is
known as the boot device. The Basic Input/Output System (BIOS) of a
contemporary PC supports booting from a variety of sources, including a
USB device, a network interface card, a local hard drive, an optical drive,
and a floppy drive. The user can set up a boot order in the BIOS, for
example:

• CD Drive
• Hard Disk Drive
• Network

3. Boot Sequence: Every personal computer uses the same basic boot
process. The CPU first executes an instruction at a fixed memory
address mapped to the BIOS. That instruction jumps to the BIOS
start-up program, which performs a power-on self-test (POST) to
ensure that the hardware the computer will use is in good working order.
The BIOS then works through the configured boot sequence. When it
discovers a bootable device, it loads that device's boot sector and
transfers control to it. If the boot device is a hard drive, the boot
sector will be a master boot record (MBR).

The MBR code looks for an active partition in the partition table. If
one is identified, the MBR code loads and executes the boot
sector of that partition. The boot sector is frequently operating
system specific. However, in most operating systems, its primary
job is to load and execute the operating system kernel, which
allows the operating system to continue booting. Assume there is
no active partition or the boot sector of the active partition is
faulty. The MBR may then load a secondary boot loader, which
will select a partition and load its boot sector, which normally
loads the operating system kernel.

Types of Booting:
There are two types of booting in an operating system.

1. Cold Booting: When a computer begins for the first time or when it is in a shutdown
state, and the power button is pressed to restart the system, this is known as cold
Booting. During cold Booting, the system reads all of the instructions from the ROM
(BIOS), and the Operating System is loaded into the system automatically. This type of
Booting takes longer than Hot or Warm Booting.

2. Warm Booting: The warm or hot booting process occurs when


computer systems reach a state of no response or hang, and the system
is then permitted to restart while on. Rebooting is another term for it.
There are several causes for this condition, and the only way to fix it is
to restart the computer. When we install new software or hardware,
we may need to reboot. The system requires a reboot to set software
or hardware configuration changes, or sometimes systems may
perform improperly or may not respond appropriately. The system
must be forced to restart in this instance. To reboot the computer,
press the Ctrl+Alt+Del key combination.

Booting Process in Operating System:

• Power-on and System Initialization:

• When the power button is pressed or the system is reset, the computer’s
firmware (BIOS or UEFI) is invoked. It performs a Power-On Self-Test (POST)
to check hardware components, including memory, storage devices, and
peripherals. It then initializes the system and identifies bootable devices.

• Bootloader Execution:

• After the firmware completes the initial system checks, it searches for a
bootable device that contains the bootloader. The bootloader is a small
program responsible for loading the operating system into memory. The
firmware hands over control to the bootloader, which is typically stored in the
Master Boot Record (MBR) or EFI System Partition (ESP).

• Loading the Operating System Kernel:

• The bootloader locates and loads the operating system kernel into memory.
The kernel is the core of the operating system that manages hardware,
memory, and other essential functions. The bootloader passes control to the
kernel, and the operating system begins its initialization.

• System Initialization:

• The kernel initializes the necessary system components, such as drivers,


memory management, file systems, and network interfaces. It sets up the
environment required for the operating system to function correctly.
Configuration files and system services are loaded, and the system transitions
from a basic state to a fully functional state.

• User Mode Initialization:

• Once the kernel completes its initialization, it starts the user mode initialization.
User-specific settings, login prompts, and user applications/services are
loaded. The graphical user interface (GUI) or command-line interface (CLI) is
presented to the user, enabling interaction with the operating system.

• System Operation:

• After the booting process is complete, the operating system is ready for use.
The user can now run applications, access files, browse the internet, and
perform various tasks. The operating system manages system resources,
facilitates multitasking, and provides an interface for user interaction.

System Program:
System programming can be defined as the act of building systems software
using system programming languages. In the computer hierarchy, hardware
comes at the bottom, then the operating system, then system programs,
and finally application programs. System programs make program development
and execution convenient. Some system programs are simply user interfaces;
others are complex. They traditionally sit between the user interface and
the system calls.

OR

System Programs are the collection of software required for program execution.
They are also known as system utilities. System programs can be simple,
involving only system calls as well as complex ones.

Types of system programs in os


There are six major categories into which system programs are divided:
file management, status information, file modification, programming
language support, program loading and execution, and communications.

File Management

These are the programs to perform various operations in files. These


operations include create, delete, copy, manipulate or rename files and
directories.
Status Information

These programs ask for information such as the date and time, the
number of users, or other status details, and either copy this
information to a file or, in a graphical user interface, display it
on the screen. The registry, which stores and provides
configuration-related information, is also in this category.

File Modification

These are the programs that are used to work with the files. These
include programs such as text editors that are used to create or modify
the contents of the file or perform operations such as searching and
replacing the contents of the file.

Programming Language supporters

Assemblers, loaders, linkers, and compilers are also provided by the
operating system. These are generally used by programmers or people
with a good knowledge of programming languages.

Program Loading and Execution

These include a set of programs that are required for the execution of
programs. Programs are loaded into the main memory and then
loaders, linkage editors, and other programs are required to perform
the execution. In some cases, debugging systems are also provided.

Communications

These are the programs that are used to maintain a connection


between two or more users or two or different systems. They allow
users to share information or even big files. This is done widely through
the use of the internet and electronic sharing systems.
Some examples of system programs are:
• Anti-virus software
• Disk formatting tools
• Computer language translators (compilers, assemblers, interpreters)
Note that Windows 10, Mac OS X, Ubuntu, Linux, Unix, and Android are
operating systems themselves, not system programs, though each ships
with many system programs.

Protection and Security in OS:

Definition:
  Security – a technique used in operating systems to address threats
  from outside the system and maintain its proper functioning.
  Protection – a technique used in operating systems to control hazards
  and maintain the system's proper functioning.

Focus:
  Security – mainly external threats to the system.
  Protection – mainly internal threats within the system.

Policy:
  Security – specifies whether or not a specific user is allowed to
  access the system.
  Protection – outlines which users are permitted to access a certain
  resource.

Functionality:
  Security – offers a technique for protecting system and user
  resources from unauthorized access.
  Protection – offers a technique for controlling access to processes,
  programs, and user resources.

Mechanism:
  Security – includes techniques such as adding and deleting users,
  determining whether a certain user is authorized, and employing
  anti-malware software.
  Protection – includes techniques like modifying a resource's
  protection information and determining whether a user may access it.

Queries:
  Security – a wide term that handles more complicated queries.
  Protection – comes under security and covers less complex queries.

Process Management In Operating System:

A process is the execution of a program that performs the actions
specified in that program. It can be defined as an execution unit
where a program runs. The OS helps you create, schedule, and
terminate the processes used by the CPU. A process created by
another (main) process is called a child process.

Process operations can be easily controlled with the help of


PCB(Process Control Block). Consider it as the brain of the process,
which contains all the crucial information related to processing like
process id, priority, state, CPU registers, etc.

Process Management:
Process management involves various tasks like the creation,
scheduling, and termination of processes, and deadlock handling.
A process is a program under execution, an important part of
modern operating systems. The OS must allocate resources that
enable processes to share and exchange information. It also
protects each process's resources from other processes and allows
synchronization among processes.

It is the job of the OS to manage all the running processes of the
system. It handles operations by performing tasks such as process
scheduling and resource allocation.

Process Architecture

[Figure: process architecture]

• Stack: Stores temporary data such as function parameters,
return addresses, and local variables.
• Heap: Memory that is dynamically allocated to the process
during its run time.
• Data: Contains the global and static variables.
• Text: The text section includes the program code and the current
activity, represented by the value of the Program Counter.

Process Control Blocks


PCB stands for Process Control Block. It is a data structure that is
maintained by the Operating System for every process. The PCB
should be identified by an integer Process ID (PID). It helps you to store
all the information required to keep track of all the running processes.

It is also accountable for storing the contents of processor registers.


These are saved when the process moves from the running state and
then returns back to it. The information is quickly updated in the PCB by
the OS as soon as the process makes the state transition.

Process States

[Figure: process states diagram]


A process state is a condition of the process at a specific instant of time.
It also defines the current position of the process.

There are mainly seven stages of a process which are:

• New: A new process is created when a specific program is called
from secondary memory/hard disk into primary memory/RAM.

• Ready: In the ready state, the process is loaded into primary
memory and is ready for execution.

• Waiting: The process is waiting for the allocation of CPU time and
other resources for execution.

• Executing: The process is in the execution state.

• Blocked: This is the interval during which a process waits for an event
like an I/O operation to complete.

• Suspended: The suspended state is when a process is ready for
execution but has not been placed in the ready queue by the OS.

• Terminated: The terminated state is when a process is terminated.
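The moves between these states can be sketched as a small transition table. The table below is an illustrative reading of the seven states above, not an exact model of any particular OS:

```python
# Hypothetical transition table for the seven process states described above.
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"executing"},
    "executing": {"waiting", "blocked", "ready", "terminated"},
    "waiting": {"executing"},
    "blocked": {"ready"},
    "suspended": {"ready"},
    "terminated": set(),
}

def move(state, target):
    """Return the new state if the transition is legal, otherwise raise."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

s = move("new", "ready")      # the process is admitted
s = move(s, "executing")      # the scheduler dispatches it
s = move(s, "terminated")     # it finishes
```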

Process Control Block (PCB)

Every process is represented in the operating system by a process
control block, which is also called a task control block.

(Figure: fields of a Process Control Block)

• Process state: A process can be new, ready, running, waiting,
etc.
• Program counter: The program counter holds the address of the
next instruction to be executed for that process.
• CPU registers: This component includes accumulators, index
and general-purpose registers, and condition-code information.
• CPU scheduling information: This component includes the
process priority, pointers to scheduling queues, and various other
scheduling parameters.
• Accounting information: It includes the amount of CPU and
real time used, time limits, job or process numbers, etc.
• Memory-management information: This information includes
the values of the base and limit registers and the page or segment
tables, depending on the memory system used by the operating
system.
• I/O status information: This block includes the list of open files,
the list of I/O devices allocated to the process, etc.
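As a rough illustration, the PCB fields listed above can be modelled as a record. The field names below are illustrative only; a real kernel keeps this information in C structures with a very different layout:

```python
from dataclasses import dataclass, field

# A simplified, illustrative sketch of a Process Control Block (PCB).
@dataclass
class PCB:
    pid: int                      # integer Process ID
    state: str = "new"            # new / ready / running / waiting / terminated
    program_counter: int = 0      # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0             # CPU-scheduling information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42)
pcb.state = "ready"               # updated by the OS on a state transition
print(pcb.pid, pcb.state)         # 42 ready
```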

Scheduling Criteria in OS:


A CPU scheduling algorithm tries to maximize or minimize the
following:
Maximize:

• CPU utilization - It makes sure that the CPU is kept as busy as
possible.
• Throughput - It is the number of processes that complete their
execution per unit of time.

Minimize:

• Waiting time- It is the amount of time a process spends waiting in
the ready queue.
• Response time- The time taken to produce the first response after
a request is submitted.
• Turnaround time- It is the total amount of time required to execute a
specific process.

Types of Scheduling Criteria in an Operating System

There are different CPU scheduling algorithms with different
properties. The choice of algorithm depends on various factors.
Many criteria have been suggested for comparing CPU scheduling
algorithms, some of which are:

• CPU utilization
• Throughput
• Turnaround time
• Waiting time
• Response time

CPU utilization- The objective of any CPU scheduling algorithm is to keep
the CPU as busy as possible and to maximize its usage. In theory, CPU
utilization ranges from 0 to 100 percent, but in real systems it is
typically 50 to 90 percent, depending on the system's load.

Throughput- It is a measure of the work done by the CPU, which is
directly proportional to the number of processes executed and
completed per unit of time. It varies depending on the duration or
length of the processes.

Turnaround time- An important scheduling criterion in OS for any
process is how long it takes to execute. Turnaround time is the
interval elapsed from the time of submission to the time of completion.
It is the sum of the time spent waiting to get into memory, waiting in
the ready queue, performing I/O, and executing on the CPU. The formula
for calculating it is: Turnaround Time = Completion Time − Arrival Time.

Waiting time- Once execution starts, the scheduling process does
not affect the time required for the completion of the process.
The only thing that is affected is the waiting time of the process, i.e. the
time spent by a process waiting in the ready queue. The formula for
calculating it is: Waiting Time = Turnaround Time − Burst Time.

Response time- Turnaround time is not considered the best criterion
for comparing scheduling algorithms in an interactive system. A process
may produce some output early while continuing to compute other
results. A better criterion is the time taken from process submission
until the first response is produced. This is called response time, and
the formula for calculating it is:
Response Time = Time at which the process first gets the CPU − Arrival Time.
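The three formulas above can be checked with a small helper. The argument names are illustrative:

```python
def metrics(arrival, burst, completion, first_run):
    """Compute the three scheduling criteria defined above for one process."""
    turnaround = completion - arrival    # Turnaround = Completion - Arrival
    waiting = turnaround - burst         # Waiting = Turnaround - Burst
    response = first_run - arrival       # Response = first CPU allocation - Arrival
    return turnaround, waiting, response

# A process arriving at t=0 with a 5-unit burst, first scheduled at t=2,
# finishing at t=9:
print(metrics(arrival=0, burst=5, completion=9, first_run=2))  # (9, 4, 2)
```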

CPU Scheduling:
CPU Scheduling is the process of determining which process will own the
CPU for execution while another process is on hold. The main task of
CPU scheduling is to make sure that whenever the CPU becomes idle,
the OS selects one of the processes available in the ready queue for
execution. The selection is carried out by the CPU scheduler, which
picks one of the processes in memory that are ready for execution.
Types of CPU Scheduling
Here are the two kinds of scheduling methods:

Preemptive Scheduling
In Preemptive Scheduling, the tasks are mostly assigned priorities.
Sometimes it is important to run a task with a higher priority
before a lower priority task, even if the lower priority task is still
running. The lower priority task is held for some time and resumes when
the higher priority task finishes its execution.

Non-Preemptive Scheduling
In this type of scheduling method, the CPU has been allocated to a
specific process. The process that keeps the CPU busy will release the
CPU either by switching context or terminating. It is the only method
that can be used for various hardware platforms. That’s because it
doesn’t need special hardware (for example, a timer) like preemptive
scheduling.

Types of CPU scheduling Algorithm:


There are mainly six types of process scheduling algorithms
1. First Come First Serve (FCFS)
2. Shortest-Job-First (SJF) Scheduling
3. Shortest Remaining Time
4. Priority Scheduling
5. Round Robin Scheduling
6. Multilevel Queue Scheduling

First Come First Serve


FCFS stands for First Come First Serve. It is the simplest CPU
scheduling algorithm. In this type of algorithm, the process which
requests the CPU first gets the CPU allocation first. This scheduling
method can be managed with a FIFO queue.
As a process enters the ready queue, its PCB (Process Control
Block) is linked to the tail of the queue. When the CPU becomes free,
it is assigned to the process at the head of the queue.

Characteristics of FCFS method

• It is a non-preemptive scheduling algorithm.
• Jobs are always executed on a first-come, first-served basis.
• It is easy to implement and use.
• However, this method is poor in performance, and the general
wait time is quite high.
Advantages of FCFS:
• Easy to implement
• First come, first serve method
Disadvantages of FCFS:
• FCFS suffers from Convoy effect.
• The average waiting time is much higher than the other algorithms.
• Although FCFS is very simple and easy to implement, it is not very efficient.
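A minimal FCFS sketch, assuming each process is given as a (pid, arrival, burst) tuple: processes run to completion in arrival order, and each one's waiting time is the gap between its arrival and its dispatch.

```python
def fcfs(processes):
    """processes: list of (pid, arrival, burst); returns waiting time per pid."""
    processes = sorted(processes, key=lambda p: p[1])  # the FIFO ready queue
    clock, waits = 0, {}
    for pid, arrival, burst in processes:
        clock = max(clock, arrival)       # CPU may idle until the process arrives
        waits[pid] = clock - arrival      # time spent waiting in the queue
        clock += burst                    # run to completion (non-preemptive)
    return waits

# Three processes arriving at t=0, 1, 2 with bursts 5, 3, 1. The long first
# job makes the others wait, which is exactly the convoy effect:
print(fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)]))
# {'P1': 0, 'P2': 4, 'P3': 6}
```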
Shortest Remaining Time
The full form of SRT is Shortest Remaining Time. It is also known as
preemptive SJF scheduling. In this method, the CPU is allocated to
the process that is closest to completion. This method prevents a
newer ready-state process from delaying the completion of an older
process.

Characteristics of SRT scheduling method


• This method is mostly applied in batch environments where short
jobs are required to be given preference.
• This is not an ideal method to implement in a shared system
where the required CPU time is unknown.

Advantages of SJF
1. Maximum throughput
2. Minimum average waiting and turnaround time

Disadvantages of SJF
1. May suffer with the problem of starvation
2. It is not practically implementable, because the exact burst time of a process
cannot be known in advance.
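A unit-time simulation can make the preemptive idea concrete: at every tick the process closest to completion runs, so a newly arrived short job can preempt a longer one. The input format below is an assumption for illustration:

```python
def srtf(processes):
    """Shortest Remaining Time First, simulated one time unit at a time.
    processes: list of (pid, arrival, burst); returns completion times."""
    remaining = {pid: burst for pid, _, burst in processes}
    arrival = {pid: a for pid, a, _ in processes}
    done, clock = {}, 0
    while remaining:
        ready = [p for p in remaining if arrival[p] <= clock]
        if not ready:                    # CPU idles until the next arrival
            clock += 1
            continue
        p = min(ready, key=lambda q: remaining[q])  # closest to completion
        remaining[p] -= 1
        clock += 1
        if remaining[p] == 0:
            done[p] = clock
            del remaining[p]
    return done

# P2 arrives at t=1 with a shorter burst and preempts P1:
print(srtf([("P1", 0, 4), ("P2", 1, 2)]))  # {'P2': 3, 'P1': 6}
```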
Priority Based Scheduling
Priority scheduling is a method of scheduling processes based on
priority. In this method, the scheduler selects tasks to work on
according to their priority.

Priority scheduling also helps the OS to handle priority assignments. The
processes with higher priority are carried out first, whereas jobs
with equal priorities are carried out on a round-robin or FCFS basis.
Priority can be decided based on memory requirements, time
requirements, etc.
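A non-preemptive priority scheduler can be sketched with a heap-based ready queue. The conventions below are assumptions for illustration: a lower number means a higher priority, and all jobs are ready at t = 0.

```python
import heapq

def priority_schedule(processes):
    """Non-preemptive priority scheduling sketch.
    processes: list of (priority, pid, burst); returns the execution order."""
    heap = list(processes)
    heapq.heapify(heap)               # ready queue ordered by priority
    order = []
    while heap:
        prio, pid, burst = heapq.heappop(heap)
        order.append(pid)             # run the highest-priority job next
    return order

print(priority_schedule([(3, "P1", 5), (1, "P2", 2), (2, "P3", 4)]))
# ['P2', 'P3', 'P1']
```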

Round-Robin Scheduling
Round robin is one of the oldest and simplest scheduling algorithms.
The name of this algorithm comes from the round-robin principle, where
each person gets an equal share of something in turn. It is widely used
for scheduling in multitasking systems. This method ensures
starvation-free execution of processes.

Characteristics of Round-Robin Scheduling


• Round robin is a hybrid, clock-driven model.
• A fixed time slice (quantum) is assigned to each task to be
processed; however, it may vary for different processes.
• It responds to events within a specific time limit, which makes it
suitable for time-sharing systems.
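The time-slice idea can be sketched with a FIFO queue: a process that does not finish within its quantum goes to the back of the queue. The quantum and input format are assumptions for illustration.

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (pid, burst), all assumed ready at t=0.
    Returns the order in which processes receive CPU time slices."""
    queue = deque(processes)
    slices = []
    while queue:
        pid, remaining = queue.popleft()
        slices.append(pid)            # this process gets the CPU for one quantum
        if remaining > quantum:
            queue.append((pid, remaining - quantum))  # back of the queue
    return slices

print(round_robin([("P1", 5), ("P2", 3)], quantum=2))
# ['P1', 'P2', 'P1', 'P2', 'P1']
```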

Shortest Job First


SJF (Shortest Job First) is a scheduling algorithm in which the
process with the shortest execution time is selected for execution
next. This scheduling method can be preemptive or non-preemptive. It
significantly reduces the average waiting time for other processes
awaiting execution.
Characteristics of SJF Scheduling
• Each job has an associated unit of time to complete.
• In this method, when the CPU is available, the next process or job
with the shortest completion time is executed first.
• It is usually implemented with a non-preemptive policy.
• This algorithm is useful for batch-type processing, where
waiting for jobs to complete is not critical.
• It improves throughput by executing shorter jobs first, which
mostly have a shorter turnaround time.
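A non-preemptive SJF sketch for jobs that are all ready at t = 0, using the classic textbook bursts 6, 8, 7, 3; running the 3-unit job first drives the average waiting time down:

```python
def sjf(bursts):
    """Non-preemptive SJF for jobs all ready at t=0.
    bursts: dict pid -> burst time; returns the average waiting time."""
    order = sorted(bursts, key=bursts.get)  # shortest job first
    clock, total_wait = 0, 0
    for pid in order:
        total_wait += clock                 # this job waited `clock` units
        clock += bursts[pid]
    return total_wait / len(bursts)

print(sjf({"P1": 6, "P2": 8, "P3": 7, "P4": 3}))  # 7.0
```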

Multiple-Level Queues Scheduling


This algorithm separates the ready queue into several separate
queues. In this method, processes are assigned to a queue based on a
specific property of the process, like the process priority, memory
size, etc.

However, this is not an independent scheduling OS algorithm as it


needs to use other types of algorithms in order to schedule the jobs.

Characteristic of Multiple-Level Queues Scheduling


• Multiple queues should be maintained for processes with some
characteristics.
• Every queue may have its separate scheduling algorithms.
• Priorities are given for each queue.

Difference Between Shortest Job First and Shortest Remaining
Job First:

Shortest Job First (SJF) | Shortest Remaining Job First (SRJF)

It is a non-preemptive algorithm. | It is a preemptive algorithm.

It involves less overhead than SRJF. | It involves more overhead than SJF.

It is slower in execution than SRJF. | It is faster in execution than SJF.

It leads to comparatively lower throughput. | It leads to increased throughput, as execution time is less.

It minimizes the average waiting time for each process. | It may or may not minimize the average waiting time for each process.

Short processes are executed first and are then followed by longer processes. | Shorter processes run fast, while longer processes show poor response time.

Threads in Operating System:

A thread is a sequential flow of tasks within a process. Threads in an OS
can be of the same or different types and are used to increase the
performance of applications. There can be multiple threads in a single
process, having the same or different functionality.

Each thread has its own program counter, stack, and set of registers,
but the threads of a single process may share the same code and
data/files. Because they share common resources, threads are also
termed lightweight processes.
Eg: While playing a movie on a device, the audio and video are
controlled by different threads in the background.

(Figure: a single-threaded process vs a multithreaded process)

The diagram above shows the difference between a single-threaded
process and a multithreaded process, and the resources that are shared
among threads in a multithreaded process.

Components of Thread

A thread has the following three components:

1. Program Counter
2. Register Set
3. Stack space

Multithreading:
In multithreading, the idea is to divide a single process into multiple threads
instead of creating a whole new process. Multithreading is done to achieve
parallelism and to improve the performance of applications. The other
advantages of multithreading are mentioned below.

• Resource Sharing: Threads of a single process share the same resources


such as code, data/file.
• Responsiveness: Program responsiveness enables a program to run even if
part of the program is blocked or executing a lengthy operation. Thus,
increasing the responsiveness to the user.
• Economy: It is more economical to use threads as they share the resources
of a single process. On the other hand, creating processes is expensive.
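A small Python sketch of the resource-sharing point above: two or more threads of one process updating the same data (a counter), with a lock to keep the shared update safe. The worker function and counts are illustrative.

```python
import threading

counter = 0                   # data shared by all threads of this process
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:            # serialize updates to the shared data section
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                  # wait for all peer threads to finish
print(counter)                # 4000
```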

Types of Threads
Threads are of two types. These are described below.
• User Level Thread
• Kernel Level Thread

User Level Threads

A User Level Thread is a type of thread that is not created using system calls. The
kernel plays no part in the management of user-level threads; they can be
implemented entirely by the user. Since the kernel sees the process as
single-threaded, one kernel-level thread effectively carries all of its
user-level threads. Let's look at the advantages and disadvantages of
User-Level Threads.
Advantages of User-Level Threads
• Implementation of the User-Level Thread is easier than Kernel Level Thread.
• Context Switch Time is less in User Level Thread.
• User-Level Thread is more efficient than Kernel-Level Thread.
• Because of the presence of only Program Counter, Register Set, and Stack
Space, it has a simple representation.

Disadvantages of User-Level Threads


• There is a lack of coordination between the threads and the kernel.
• In case of a page fault, the whole process can get blocked.

Kernel Level Threads

A Kernel Level Thread is a type of thread that the operating system
recognizes directly. The kernel maintains a thread table to keep track of
all threads in the system and helps in managing them. Kernel-level
threads have somewhat longer context switching times.
Advantages of Kernel-Level Threads
• The kernel has up-to-date information on all threads.
• Applications that block frequently are handled well by Kernel-Level
Threads.
• Whenever any process requires more time to process, the Kernel-Level
Thread provides more time to it.
Disadvantages of Kernel-Level threads
• Kernel-Level Thread is slower than User-Level Thread.
• Implementation of this type of thread is a little more complex than a user-level
Thread.

Advantages of Threading:

• Threads improve the overall performance of a program.


• Threads increase the responsiveness of the program
• Context Switching time in threads is faster.
• Threads share the same memory and resources within a process.
• Communication is faster in threads.
• Threads provide concurrency within a process.
• Enhanced throughput of the system.
• Since different threads can run in parallel, threading enables the
utilization of multiprocessor architectures to a greater extent and
increases efficiency.

Issues with Threading:


There are a number of issues that arise with threading. Some of them are mentioned
below:

• The semantics of fork() and exec() system calls: The fork() call is used to
create a duplicate child process. During a fork() call the issue that arises is
whether the whole process should be duplicated or just the thread which made
the fork() call should be duplicated. The exec() call replaces the whole
process that called it including all the threads in the process with a new
program.

• Thread cancellation: The termination of a thread before its completion is


called thread cancellation and the terminated thread is termed as target thread.
Thread cancellation is of two types:
1. Asynchronous Cancellation: In asynchronous cancellation, one
thread immediately terminates the target thread.
2. Deferred Cancellation: In deferred cancellation, the target thread
periodically checks if it should be terminated.

• Signal handling: In UNIX systems, a signal is used to notify a process that a


particular event has happened. Based on the source of the signal, signal
handling can be categorized as:
1. Asynchronous Signal: The signal which is generated outside the
process which receives it.
2. Synchronous Signal: The signal which is generated and delivered in
the same process.
Thread pools:

In a multithreaded web server, whenever the server receives a request,
it creates a separate thread to service the request.
Some of the problems that arise in creating a thread per request are as follows:
• The time required to create the thread prior to serving the
request, together with the fact that the thread will be
discarded once it has completed its work.
• If all concurrent requests are allowed to be serviced in a new
thread, there is no bound on the number of threads concurrently
active in the system.
• Unlimited threads could exhaust system resources like CPU time or
memory. A thread pool addresses this by creating a fixed number of
threads in advance and reusing them for incoming requests.
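Python's standard library offers a ready-made thread pool. In the sketch below, the request handler is a stand-in for real work and the pool size of 4 is an arbitrary choice; requests beyond the pool size wait for a free worker instead of spawning a new thread each time.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(req_id):
    # Stand-in for real request handling (parsing, I/O, etc.).
    return f"served {req_id}"

# At most 4 worker threads are ever alive, no matter how many requests arrive.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(8)))

print(results[0], results[-1])  # served 0 served 7
```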

Process vs Thread:

A process simply means any program in execution, while a thread is a segment
of a process. The main differences between a process and a thread are
mentioned below:

Process | Thread

Processes use more resources and hence are termed heavyweight processes. | Threads share resources and hence are termed lightweight processes.

Processes are totally independent and don't share memory. | A thread may share some memory with its peer threads.

Each process is treated separately by the operating system. | The operating system treats all the user-level threads of a process as a single process.

If one process gets blocked, the remaining processes can continue execution. | If any user-level thread gets blocked, all of its peer threads also get blocked, because the OS treats them as a single process.

Processes require more time for creation. | Threads require less time for creation.
