
What is an operating system?

An operating system (OS) is a program, or a set of programs, that acts as an interface between the user of a computer and the computer hardware, and also acts as a resource manager.
The main purpose of an OS is to provide an environment in which we can execute programs effectively.

The main goals of the OS

(i) To make the computer system convenient to use.
(ii) To make use of the computer hardware in an efficient way.

An Operating System is system software which may be viewed as a collection of software consisting of procedures for operating the computer and providing an environment for the execution of programs. It is an interface between the user and the computer. The OS makes everything in the computer work together smoothly and efficiently.
OS ROLES
1. Interface
2. Resource Manager
• Interface – The OS is the interface between the user and the computer. The languages of the user and the computer are different; the OS plays the role of mediator/translator and makes communication between user and computer possible.
• Resource Manager – Coordination among the different parts of a computer is needed to accomplish a particular task. The OS makes this possible: it monitors and controls every part of the computer.
Operating System as Extended Machine
Let us understand how the operating system works as an Extended Machine.
 At the machine level, the structure of a computer system is complicated to program, especially for input and output. Programmers do not want to deal with the hardware directly; they mainly focus on implementing software. Therefore, a level of abstraction has to be maintained.
 Operating systems provide a layer of abstraction for using the disk, in the form of files.
 This level of abstraction allows a program to create, write, and read files without dealing with the details of how the hardware actually works.
 This level of abstraction is the key to managing complexity.
 Good abstractions turn an impossible task into two manageable tasks.
 The first is to define and implement the abstractions.
 The second is to use these abstractions to solve the problem at hand.
 The operating system provides abstractions to application programs in a top-down view.
For example − It is easier to deal with photos, emails, songs, and Web pages than
with the details of these files on disks.

[Diagram: the functioning of the OS as an extended machine]


Operating System as Resource Manager
 Let us understand how the operating system works as a Resource Manager.
 Nowadays, all modern computers consist of processors, memories, timers, network interfaces, printers, and many other devices.
 The operating system provides an orderly and controlled allocation of the processors, memories, and I/O devices among the various programs; this is the bottom-up view.
 The operating system allows multiple programs to be in memory and run at the same time.
 Resource management includes multiplexing (sharing) resources in two different ways: in time and in space.
 In time multiplexing, different programs take turns using the CPU: first one uses the resource, then the next one that is ready in the queue, and so on. For example: sharing the printer, one job after another.
 In space multiplexing, instead of taking turns, each customer gets a part of the resource. For example: main memory is divided among several running programs, so each one can be resident at the same time.
[Diagram: the functioning of the OS as a resource manager]
A computer system consists of various resources: processor, memory, channels, input/output devices, programs, and data files. The operating system manages these resources effectively. As a resource manager, the operating system performs the following functions:

 Keeps track of the status of each resource.


 Decides which job should get the resource and for how much time.
 Allocates the resources to the job decided.
 Reclaims the resource after the job uses it for the allocated time.
These functions for each of the four resources, namely, processor, memory,
devices, and files are given below.

Processor Management Functions:


 Keeps track of the processor by recording whether it is busy and, if so, which job is using it.
 Decides which job should use the processor and for how much time.
 Allocates processor to the job decided.
 Reclaims processor after use for the allotted time.
Memory Management Functions:
 Keeps track of memory by recording which memory locations are in
use by which program and which memory locations are free.
 Decides which job should get memory and for how much time in case
of multi-programming.
 Allocates the memory space to the job.

 Reclaims memory after use to make it available to other jobs.


Device Management Functions:
 Keeps track of input/output devices and channels. That is, which
device is in use and by which job.
 Decides which job should use the device, and for how much time.
 Allocates the device to the job.
 Reclaims device after use.
File Management:
 Keeps track of files, that is, which files are in use and by which jobs.
 Decides which job should use the files and for what purpose
(read/write/append/execute).
 Allocates file for use.
 Reclaims file, which means, closes the file.

OS Architecture
 Kernel: The kernel in the operating system is responsible for managing the
resources of the system such as memory, CPU, and input-output devices.
The kernel is responsible for the implementation of the essential functions.
 Shell: The shell in an Operating System acts as an interface for the user to
interact with the computer system. The shell can be a command line
interface or a graphical interface.

These two are major components of an Operating System.

Different Types of OS Architecture


The Operating System Architecture is of four types. These types are mentioned
below.

 Monolithic Architecture
 Layered Architecture
 Microkernel Architecture
 Hybrid Architecture

Monolithic Architecture
Monolithic Architecture is the oldest and the simplest type of Operating System
Architecture. In this architecture, each and every component is contained in a
single kernel only. The various components in this OS Architecture communicate
with each other via function calls.
In a monolithic architecture, the operating system kernel is designed to provide all
operating system services, including memory management, process scheduling,
device drivers, and file systems, in a single, large binary. This means that all code
runs in kernel space, with no separation between kernel and user-level processes.

Overall, a monolithic architecture can provide high performance and simplicity but may come with some trade-offs in terms of security, stability, and flexibility. The choice between a monolithic and a microkernel architecture depends on the specific needs and requirements of the operating system being developed.
Characteristics of a monolithic architecture:
 Single Executable: The entire application is packaged and deployed as a
single executable file. All components and modules are bundled together.
 Tight Coupling: The components and modules within the application are
highly interconnected and dependent on each other. Changes made to one
component may require modifications in other parts of the application.
 Shared Memory: All components within the application share the same
memory space. They can directly access and modify shared data
structures.
 Monolithic Deployment: The entire application is deployed as a single
unit. Updates or changes to the application require redeploying the entire
monolith.
 Centralized Control Flow: The control flow within the application is typically managed by a central module or a main function. The flow of execution moves sequentially from one component to another.

Advantages of Monolithic Architecture

The advantages of the Monolithic Architecture of the Operating System are given
below.
1. High performance: Monolithic kernels can provide high performance
since system calls can be made directly to the kernel without the overhead
of message passing between user-level processes.
2. Simplicity: The design of a monolithic kernel is simpler since all
operating system services are provided by a single binary. This makes it
easier to develop, test and maintain.
3. Broad hardware support: Monolithic kernels have broad hardware
support, which means that they can run on a wide range of hardware
platforms.
4. Low overhead: The monolithic kernel has low overhead, which means
that it does not require a lot of system resources, making it ideal for
resource-constrained devices.
5. Easy access to hardware resources: Since all code runs in kernel space,
it is easy to access hardware resources such as network interfaces,
graphics cards, and sound cards.
6. Fast system calls: Monolithic kernels provide fast system calls since there
is no overhead of message passing between user-level processes.
7. Good for general-purpose operating systems: Monolithic kernels are
good for general-purpose operating systems that require a high degree of
performance and low overhead.
8. Easy to develop drivers: Developing device drivers for monolithic
kernels is easier since they are integrated into the kernel.
Disadvantages of Monolithic Architecture

1. Large and Complex Applications: Large and complex monolithic applications are difficult to maintain, because their components depend on each other.
2. Slow Development: To modify the application, the whole application has to be redeployed instead of updating only the changed part, which takes more time and slows development.
3. Unscalable: Each copy of the application accesses the whole data set, which increases memory consumption, and components cannot be scaled independently.
4. Unreliable: If one service goes down, it affects all the services provided by the application, because all the services of the application are connected to each other.
5. Inflexible: It is really difficult to adopt new technology, because the technology of the whole application has to be changed.

Example: an e-commerce site built as a single monolithic application.

Layered Architecture
In a layered architecture, the operating system is divided into layers, with each
layer performing a specific set of functions. The layers are organized in a
hierarchical order, with each layer depending on the layer below it. The layering
approach makes the system easier to maintain and modify, as each layer can be
modified independently without affecting the other layers.

Each of the layers must have its own specific function to perform. There are some
rules in the implementation of the layers as follows.
1. The outermost layer must be the User Interface layer.
2. The innermost layer must be the Hardware layer.
3. A particular layer can access all the layers present below it but it cannot
access the layers present above it. That is layer n-1 can access all the
layers from n-2 to 0 but it cannot access the nth layer.

Thus, if the user layer wants to interact with the hardware layer, the request must travel through all the layers in between, from layer n-1 down to layer 1. Each layer must be designed and implemented such that it needs only the services provided by the layers below it.

Advantages of Layered Architecture

There are several advantages to this design:
1. Modularity:
This design promotes modularity, as each layer performs only the tasks it is assigned.
2. Easy debugging:
As the layers are discrete, debugging is easy. Suppose an error occurs in the CPU scheduling layer; the developer only has to search that particular layer, unlike in a monolithic system where all the services are present together.
3. Easy update:
A modification made in a particular layer will not affect the other layers.
4. No direct access to hardware:
The hardware layer is the innermost layer in the design. A user can use the services of the hardware but cannot directly modify or access it, unlike a simple system in which the user has direct access to the hardware.
5. Abstraction:
Every layer is concerned only with its own functions, so the functions and implementations of the other layers are abstracted away from it.
Disadvantages:
Though this design has several advantages over the monolithic and simple designs, there are also some disadvantages:
1. Complex and careful implementation:
As a layer can access the services of the layers below it, the arrangement of the layers must be done carefully. For example, the backing-storage layer uses the services of the memory-management layer, so it must be kept below the memory-management layer. Thus, with great modularity comes complex implementation.
2. Slower execution:
If a layer wants to interact with another layer, it sends a request that has to travel through all the layers present between the two interacting layers. This increases response time, unlike in a monolithic system, which is faster. Thus, an increase in the number of layers may lead to a very inefficient design.

Example: the Windows NT operating system uses this layered approach in part of its design.

Microkernel Architecture
Process management, networking, file system interaction, and device management
are executed outside the kernel in this architecture, while memory management and
synchronization are executed inside the kernel. The processes inside the kernel
have a relatively high priority, and the components are highly modular, so even if
one or more components fail, the operating system continues to function.
Advantages of Microkernel –
 Modularity: Because the kernel and servers can be developed and
maintained independently, the microkernel design allows for greater
modularity. This can make adding and removing features and services
from the system easier.
 Fault isolation: The microkernel design aids in the isolation of faults and
their prevention from affecting the entire system. If a server or other
component fails, it can be restarted or replaced without causing any
disruptions to the rest of the system.
 Performance: Because the kernel only contains the essential functions
required to manage the system, the microkernel design can improve
performance. This can make the system faster and more efficient.
 Security: The microkernel design can improve security by reducing the
system’s attack surface by limiting the functions provided by the kernel.
Malicious software may find it more difficult to compromise the system as
a result of this.
 Reliability: Microkernels are less complex than monolithic kernels, which
can make them more reliable and less prone to crashes or other issues.
 Scalability: Microkernels can be easily scaled to support different
hardware architectures, making them more versatile.
 Portability: Microkernels can be ported to different platforms with
minimal effort, which makes them useful for embedded systems and other
specialized applications.
Eclipse IDE is a good example of Microkernel Architecture.
Advantages of a microkernel architecture:

1. More secure operating system due to reduced attack surface


2. Better system stability, as crashes in user-level processes do not affect the
entire system
3. More modular and flexible, making it easier to customize the operating
system
4. Simplified development process, as services are developed and tested as
independent user-level processes

Disadvantages of a microkernel architecture:

1. Slower message passing between user-level processes can affect


performance, especially in high-performance applications
2. Increased complexity due to the modular design can make it more difficult
to develop and maintain the operating system
3. Limited performance optimization due to separation of kernel and user-
level processes
4. Higher memory usage compared to a monolithic kernel
Overall, a microkernel architecture provides advantages in terms of security, flexibility, and modularity, but may come with some trade-offs in terms of performance and complexity. The choice between a microkernel and a monolithic kernel architecture depends on the specific needs and requirements of the operating system being developed.

Hybrid Architecture
As the name implies, hybrid architecture is a hybrid of all the architectures
discussed thus far, and therefore it contains characteristics from all of those
architectures, which makes it highly valuable in modern operating systems.

The hybrid architecture consists of three levels.


 Hardware abstraction layer: This is the lowest level interface between the
kernel and hardware.
 Microkernel Layer: This is the conventional microkernel, which includes
CPU scheduling, memory management, and inter-process communication.
 Application Layer: This layer acts as an interface between the user and the
microkernel. It includes features such as a file server, error detection, I/O
device management, and so on.
Advantages of Hybrid Architecture

Here are the advantages that Hybrid OS Architecture provides us.

 Combines the benefits of multiple architectures, such as microkernel and


monolithic kernel, allowing for better performance, scalability, and
flexibility.
 Offers a higher level of security by isolating critical components in separate
modules and reducing the attack surface.
 Allows for easier integration of different software components, as it
supports multiple programming paradigms and facilitates communication
between them.
Disadvantages of Hybrid Architecture

The disadvantages of the Hybrid architecture are listed below.

 Can be complex to design and maintain, as it requires managing multiple


subsystems with different architectures and interfaces.
 May result in slower system performance due to the increased overhead associated with managing multiple subsystems.
 Can be more prone to compatibility issues, as different subsystems may
require different versions of libraries and other dependencies.

Types of Operating Systems


1. Batch Operating System
Batch processing was very popular in the 1970s. This type of operating system does not interact with the computer directly. There is an operator who takes similar jobs having the same requirements and groups them into batches. It is the responsibility of the operator to sort the jobs with similar needs.
In a batch operating system:
 First, the user prepares his job using punch cards.
 Then, he submits the job to the computer operator.
 The operator collects the jobs from different users and sorts them into batches with similar needs.
 Then, the operator submits the batches to the processor one by one.
 All the jobs of one batch are executed together.

Types of Batch Operating System:

There are different types of Batch Operating systems:

Simple Batched System: This type of batch operating system is the most basic and
has no direct communication between users.

Multiplexed Batch System: This type of batch operating system allows multiple
users to use it at the same time.

Time-Shared Batch System: This type of batch operating system shares the
resources among users, meaning that each user gets a specific amount of time to use
the resources.

[Diagram: Batch Operating System]

Advantages of Batch Operating System
 Although it is generally very difficult to guess the time required for a job to complete, the processors of batch systems know how long a job will be while it is in the queue.
 Multiple users can share the batch systems.
 The idle time for the batch system is very less.
 It is easy to manage large work repeatedly in batch systems.
Disadvantages of Batch Operating System
 The computer operators must be familiar with batch systems.
 Batch systems are hard to debug.
 The other jobs will have to wait for an unknown time if any job fails.
 There is a lack of interaction between the user and the job.
 The CPU is often idle, because the speed of the mechanical I/O devices is slower than the CPU.
 It is difficult to provide the desired priority.

Examples of Batch Operating Systems: payroll systems, bank statements, etc. A batch processing system is well suited to payroll, because it is very useful for calculating the salaries of all employees at the end of the month.

2. Multi-Programming Operating System

A multiprogramming OS is one that can execute more than one program using a single-processor machine. More than one task, program, or job is present inside the main memory at any one point in time.
Multiprogramming means having multiple active processes in main memory. A multiprogramming operating system runs multiple programs at the same time on a single processor. Tasks generally require CPU time and I/O time, so if the running process performs I/O or some other event that does not require the CPU, then instead of sitting idle, the CPU makes a context switch and picks another task, and this continues.
Let P1 and P2 be two programs present in main memory. The OS picks one program and starts executing it. During execution, if program P1 requires an I/O operation, the OS simply switches over to program P2. If P2 also requires I/O, then it switches to P3, and so on. If there is no other program remaining after P3, the CPU passes control back to the previous program.
[Diagram: Multiprogramming]

Features of Multiprogramming
1. Needs only a single CPU.
2. Context switching between processes.
3. Switching happens when the current process goes into a waiting state.
4. CPU idle time is reduced.
5. High resource utilization.
6. High performance.
Advantages of Multi-Programming Operating System
 Multiprogramming increases the throughput of the system.
 It helps in reducing the response time.
 CPU utilization is high, because the CPU never goes into the idle state.
 Memory utilization is efficient.
Disadvantages of Multi-Programming Operating System
 There is no facility for user interaction with the system while jobs execute.
 CPU scheduling is compulsory, because lots of jobs are ready to run on the CPU simultaneously.
Examples are Windows, UNIX, and microcomputer operating systems such as XENIX, MP/M, and ESQview.

3. Multi-Processing Operating System

A Multi-Processing Operating System is a type of operating system in which more than one CPU is used for the execution of jobs. It improves the throughput of the system.
Multiple CPUs are interconnected so that a job can be divided among them for faster execution. When a job finishes, the results from all CPUs are collected and compiled to give the final output. Jobs may need to share main memory, and they may also share other system resources among themselves. Multiple CPUs can also be used to run multiple jobs simultaneously.

[Diagram: Multiprocessing]

Advantages of Multi-Processing Operating System


 It increases the throughput of the system.
 As it has several processors, if one processor fails, the system can continue with the other processors.
Disadvantages of Multi-Processing Operating System
 Due to the multiple CPUs, it can be more complex and somewhat difficult to understand.
4. Multi-Tasking Operating System

A Multitasking Operating System is simply a multiprogramming operating system with the added facility of a round-robin scheduling algorithm. It can run multiple programs simultaneously.

Features of Multi-Tasking Operating System

 Time Sharing – Many processes are allocated computer resources in respective time slots; the processor's time is shared among multiple processes.
 Context Switching – Context switching is the process of saving the context of one process and loading the context of another. In simpler terms, it means suspending the running process and loading another process in its place.
 Multi-Threading – Multithreading is the ability of a program or an operating system to serve more than one user at a time without requiring multiple copies of the program running on the computer.
 Hardware Interrupt – When a process or an event requires urgent attention, hardware or software will signal with an interrupt. It informs the processor that a high-priority task has arisen that necessitates interrupting the running process.
Types of Multi-Tasking Operating System
Two types of Multi-Tasking Operating System are available, as shown below:
 Pre-emptive Multi-Tasking Operating System: In pre-emptive multitasking, the operating system can initiate a context switch from the running process to another process. In other words, the operating system allows stopping the execution of the currently running process and allocating the CPU to some other process. The OS uses some criteria to decide how long a process should execute before allowing another process to use the CPU. The mechanism of taking control of the CPU from one process and giving it to another is called pre-emption. Examples: UNIX, Windows 95, and the Windows NT operating system.
 Non-pre-emptive Multi-Tasking Operating System: Also known as cooperative multitasking. This operating system never initiates a context switch from the running process to another process; a context switch occurs only when processes voluntarily yield control periodically, or when idle or logically blocked, to allow multiple applications to execute simultaneously. The system allocates the entire CPU to a single process until that process completes, so CPU control remains largely with one program for a long duration. This type of multitasking operating system works best with applications that require intensive CPU resources for a continuous period of time.
 Also, in this kind of multitasking, all the processes must cooperate for the scheduling scheme to work; a minimal sketch of such voluntary yielding is given after this list. Examples: Macintosh OS versions 8.0-9.2.2 and the Windows 3.x operating system.
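Below is a minimal, purely illustrative Python sketch of cooperative multitasking as described above: each task is a generator that voluntarily yields control, and a simple loop plays the role of the scheduler. Nothing here corresponds to a real OS API; it only shows why one uncooperative task could stall everything.

    from collections import deque

    def task(name, steps):
        """A cooperative task: does one unit of work, then voluntarily yields."""
        for i in range(steps):
            print(f"{name}: step {i + 1}/{steps}")
            yield  # voluntarily give control back to the scheduler

    # A toy cooperative scheduler: keeps resuming tasks in turn.
    # A task that never yielded would monopolize the CPU forever.
    ready = deque([task("A", 3), task("B", 2)])
    while ready:
        t = ready.popleft()
        try:
            next(t)          # run the task until its next voluntary yield
            ready.append(t)  # still unfinished: back to the tail of the queue
        except StopIteration:
            pass             # task completed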

Advantages of Multi-Tasking Operating System

 Multiple programs can be executed simultaneously in a Multi-Tasking Operating System.
 It comes with proper memory management.
Disadvantages of Multi-Tasking Operating System
 The system can heat up when several heavy programs run at the same time.
 Main memory (RAM) has to store multiple processes during multitasking, so memory can become a bottleneck if the main memory is overloaded.
5. Time-Sharing Operating Systems
Each task is given some time to execute so that all the tasks work smoothly. Each user gets a share of CPU time, as many users share a single system. These systems are also known as multitasking systems. The tasks can be from a single user or from different users. The time that each task gets to execute is called a quantum. After this time interval is over, the OS switches to the next task.
[Diagram: Time-Sharing OS]

Advantages of Time-Sharing OS
 Each task gets an equal opportunity.
 Fewer chances of duplication of software.
 CPU idle time can be reduced.
 Resource Sharing: Time-sharing systems allow multiple users to share
hardware resources such as the CPU, memory, and peripherals, reducing
the cost of hardware and increasing efficiency.
 Improved Productivity: Time-sharing allows users to work concurrently,
thereby reducing the waiting time for their turn to use the computer. This
increased productivity translates to more work getting done in less time.
 Improved User Experience: Time-sharing provides an interactive
environment that allows users to communicate with the computer in real
time, providing a better user experience than batch processing.
Disadvantages of Time-Sharing OS
 Reliability problem.
 One must have to take care of the security and integrity of user programs
and data.
 Data communication problem.
 High Overhead: Time-sharing systems have a higher overhead than other
operating systems due to the need for scheduling, context switching, and
other overheads that come with supporting multiple users.
 Complexity: Time-sharing systems are complex and require advanced
software to manage multiple users simultaneously. This complexity
increases the chance of bugs and errors.
 Security Risks: With multiple users sharing resources, the risk of security
breaches increases. Time-sharing systems require careful management of
user access, authentication, and authorization to ensure the security of data
and software.
Examples of Time-Sharing OS
 IBM VM/CMS
6. Multi-User Operating System
In a multi-user operating system, multiple users can access different resources of a computer at the same time. The access is provided using a network that consists of various personal computers attached to a mainframe computer system. A multi-user operating system permits multiple users to access a single machine at a time. The various personal computers can send information to, and receive it from, the mainframe computer system. Thus, the mainframe computer acts as the server, and the other personal computers act as clients for that server.
Types of Multi-user Operating System

A multi-user operating system is of 3 types which are as follows:


1. Distributed Systems: In these, different computers are managed in such a way that they appear as a single computer, so a network is formed through which they can communicate with each other.
2. Time-Sliced Systems: In these, a short period is assigned to each task; that is, each user is given a time slice of CPU time. These time slices are tiny, so it appears to the users that they are all using the mainframe computer at the same time.
3. Multiprocessor Systems: In these, the operating system utilises more than one processor.
Examples: Linux, Unix, Windows XP.

7. Distributed Operating System


This type of operating system is a recent advancement in the world of computer technology and is being widely accepted all over the world, and at a great pace. Various autonomous, interconnected computers communicate with each other over a shared communication network. Independent systems possess their own memory unit and CPU; these are referred to as loosely coupled systems, or distributed systems. The processors of these systems differ in size and function. The major benefit of working with this type of operating system is that a user can access files or software that are not actually present on his own system but on some other system connected to the network; i.e., remote access is enabled among the devices connected to that network.

[Diagram: Distributed OS]
Advantages of Distributed Operating System
 Failure of one system will not affect communication among the rest of the network, as all systems are independent of each other.
 Electronic mail increases the speed of data exchange.
 Since resources are shared, computation is highly fast and durable.
 The load on the host computer is reduced.
 These systems are easily scalable, as many systems can easily be added to the network.
 Delay in data processing is reduced.
Disadvantages of Distributed Operating System
 Failure of the main network will stop the entire communication.
 The languages used to establish distributed systems are not well defined yet.
 These types of systems are not readily available, as they are very expensive. Not only that, the underlying software is highly complex and not well understood yet.
Examples of Distributed Operating Systems: LOCUS, etc.

8. Network Operating System


These systems run on a server and provide the capability to manage data, users, groups, security, applications, and other networking functions. These types of operating systems allow shared access to files, printers, security, applications, and other networking functions over a small private network. One more important aspect of Network Operating Systems is that all the users are well aware of the underlying configuration and of all other users within the network, their individual connections, etc., which is why these computers are popularly known as tightly coupled systems.

[Diagram: Network Operating System]

Advantages of Network Operating System


 Highly stable centralized servers.

 Security concerns are handled through servers.

 New technologies and hardware up-gradation are easily integrated into the system.

 Server access is possible remotely from different locations and types of systems.
Disadvantages of Network Operating System

 Servers are costly.

 User has to depend on a central location for most operations.

 Maintenance and updates are required regularly.

Examples of Network Operating Systems are Microsoft Windows Server 2003, Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, BSD, etc.

9. Real-Time Operating System
These types of OSs serve real-time systems. The time interval required to process
and respond to inputs is very small. This time interval is called response time.
Real-time systems are used when there are time requirements that are very strict
like missile systems, air traffic control systems, robots, etc.
Types of Real-Time Operating Systems
 Hard Real-Time Systems:
Hard Real-Time OSs are meant for applications where time constraints are
very strict and even the shortest possible delay is not acceptable. These
systems are built for saving life like automatic parachutes or airbags which
are required to be readily available in case of an accident. Virtual memory
is rarely found in these systems.
 Soft Real-Time Systems:
These OSs are for applications where time-constraint is less strict.

Advantages of RTOS
 Maximum Consumption: Maximum utilization of devices and systems,
thus more output from all the resources.
 Task Shifting: The time assigned for shifting tasks in these systems is very small. For example, in older systems it takes about 10 microseconds to shift from one task to another, whereas in the latest systems it takes 3 microseconds.
 Focus on Application: Focus on running applications and less importance
on applications that are in the queue.
 Real-time operating system in the embedded system: Since the size of
programs is small, RTOS can also be used in embedded systems like in
transport and others.
 Error Free: These types of systems are error-free.
 Memory Allocation: Memory allocation is best managed in these types of
systems.
Disadvantages of RTOS
 Limited Tasks: Very few tasks run at the same time, and concentration is kept on a few applications in order to avoid errors.
 Heavy use of system resources: Sometimes the system resources are not so good, and they are expensive as well.
 Complex Algorithms: The algorithms are very complex and difficult for the designer to write.
 Device drivers and interrupt signals: An RTOS needs specific device drivers and interrupt signals so that it can respond to interrupts as quickly as possible.
 Thread Priority: Setting thread priorities is difficult, as these systems rarely switch tasks.
Examples of real-time operating systems: scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.

System Calls
System calls are usually made when a process in user mode requires access to a
resource. Then it requests the kernel to provide the resource via a system call.
When a system call is executed, it is typically treated by the hardware as a software
interrupt. Control passes through the interrupt vector to a service routine in the
operating system, and the mode bit is set to kernel mode. The system-call service
routine is a part of the operating system. The kernel examines the interrupting
instruction to determine what system call has occurred; a parameter indicates what
type of service the user program is requesting. Additional information needed for the
request may be passed in registers, on the stack, or in memory (with pointers to the
memory locations passed in registers). The kernel verifies that the parameters are
correct and legal, executes the request, and returns control to the instruction following
the system call.

Types of System Calls


There are mainly five types of system calls. These are explained in detail as follows −

Process Control
These system calls deal with processes such as process creation, process termination etc.
File Management
These system calls are responsible for file manipulation such as creating a file, reading a file,
writing into a file etc.

Device Management
These system calls are responsible for device manipulation such as reading from device
buffers, writing into device buffers etc.

Information Maintenance
These system calls handle information and its transfer between the operating system and the
user program.

Communication
These system calls are useful for interprocess communication. They also deal with creating
and deleting a communication connection.
Some of the examples of all the above types of system calls in Windows and Unix are given
as follows −

Process Control
Windows: CreateProcess(), ExitProcess(), WaitForSingleObject()
Linux: fork(), exit(), wait()

File Management
Windows: CreateFile(), ReadFile(), WriteFile(), CloseHandle()
Linux: open(), read(), write(), close()

Device Management
Windows: SetConsoleMode(), ReadConsole(), WriteConsole()
Linux: ioctl(), read(), write()

Information Maintenance
Windows: GetCurrentProcessID(), SetTimer(), Sleep()
Linux: getpid(), alarm(), sleep()

Communication
Windows: CreatePipe(), CreateFileMapping(), MapViewOfFile()
Linux: pipe(), shmget(), mmap()
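As a small demonstration of a few of these calls, the following Python sketch (Unix-only, since it uses fork()) drives the Linux column of the table through Python's os module, which is a thin wrapper over the underlying system calls. The file name demo.txt is only a placeholder.

    import os

    # Process control: fork() creates a child; wait() reaps it.
    pid = os.fork()
    if pid == 0:
        # Information maintenance: getpid() returns the caller's process ID.
        print(f"child: pid={os.getpid()}")
        os._exit(0)              # child terminates via the exit system call
    else:
        os.wait()                # parent blocks until the child exits

        # File management: open(), write(), read(), close().
        fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
        os.write(fd, b"written through system calls\n")
        os.close(fd)

        fd = os.open("demo.txt", os.O_RDONLY)
        print(os.read(fd, 100).decode(), end="")
        os.close(fd)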

PROCESS
A process is an active program, i.e., a program that is under execution. It is more than the program code, as it includes the program counter, process stack, registers, program code, etc. Compared to this, the program code is only the text section.
A program is not a process by itself, as a program is a passive entity (such as the contents of a file), while a process is an active entity containing the program counter, resources, etc.
CPU and I/O Bound Processes: If the process is intensive in terms of CPU
operations, then it is called CPU bound process. Similarly, If the process is intensive
in terms of I/O operations then it is called I/O bound process.
Process State
As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. A process may be in one of the following states:
• New. The process is being created.
• Running. Instructions are being executed.
• Waiting. The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
• Ready. The process is waiting to be assigned to a processor.
• Terminated. The process has finished execution.
These names are arbitrary, and they vary across operating systems. The states they represent are found on all systems, however. Certain operating systems also more finely delineate process states. It is important to realize that only one process can be running on any processor at any instant. Many processes may be ready and waiting, however.

Process Control Block

Each process is represented in the operating system by a process control block (PCB), also called a task control block. It contains the following:
• Process state. The state may be new, ready, running, waiting, halted, and so on.
• Program counter. The counter indicates the address of the next instruction to be executed for this process.
• CPU registers. The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
• CPU-scheduling information. This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.
• Memory-management information. This information may include such items as the value of the base and limit registers and the page tables, or the segment tables, depending on the memory system used by the operating system.
• Accounting information. This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
• I/O status information. This information includes the list of I/O devices allocated to the process, a list of open files, and so on.
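The fields listed above map naturally onto a record type. Here is a minimal, hypothetical Python sketch of a PCB; a real kernel keeps this as a C structure (Linux's task_struct, for example), and the exact fields below are only illustrative.

    from dataclasses import dataclass, field

    @dataclass
    class PCB:
        pid: int                     # job or process number (accounting info)
        state: str = "new"           # new, ready, running, waiting, terminated
        program_counter: int = 0     # address of the next instruction
        registers: dict = field(default_factory=dict)   # saved CPU registers
        priority: int = 0            # CPU-scheduling information
        base: int = 0                # memory management: base register value
        limit: int = 0               # memory management: limit register value
        open_files: list = field(default_factory=list)  # I/O status information
        cpu_time_used: int = 0       # accounting information

    pcb = PCB(pid=42, priority=3)
    pcb.state = "ready"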
Process State Transitions

Various events can cause a state transition for a process. The possible state transitions are given below:
1. Null -> New: A new process is created for execution.
2. New -> Ready: The system moves the process from the new state to the ready state, where it waits for execution. A system may set a limit here so that too many processes cannot be present at once; otherwise there may be a performance issue.
3. Ready -> Running: The OS selects one process in the ready state and dispatches it to run.
4. Running -> Exit: The system terminates a process if the process indicates that it has completed or if it has been aborted.
5. Running -> Ready: This transition usually occurs when the running process has reached the maximum time allowed for uninterrupted execution. An example is a process running in the background that performs some maintenance or other function periodically.
6. Running -> Blocked: A process is put in the blocked state if it requests something for which it must wait. For example, a process may request resources that are not available at the time, or it may be waiting for an I/O operation, or waiting for some other process to finish before it can continue.
7. Blocked -> Ready: A process moves from the blocked state to the ready state when the event for which it has been waiting occurs.
8. Ready -> Exit: This transition exists only in some systems, because a parent may terminate a child process at any time.
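These rules can be encoded as a small table. The Python fragment below is a toy sketch that accepts exactly the transitions listed above and rejects anything else; the state names are taken from the list, and the code is illustrative rather than drawn from any real kernel.

    # Allowed state transitions, keyed by current state.
    TRANSITIONS = {
        "null":    {"new"},
        "new":     {"ready"},
        "ready":   {"running", "exit"},
        "running": {"ready", "blocked", "exit"},
        "blocked": {"ready"},
    }

    def move(state: str, target: str) -> str:
        """Return the new state, or raise if the transition is illegal."""
        if target not in TRANSITIONS.get(state, set()):
            raise ValueError(f"illegal transition: {state} -> {target}")
        return target

    s = "null"
    for nxt in ["new", "ready", "running", "blocked", "ready", "running", "exit"]:
        s = move(s, nxt)
        print(s)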
Context Switching
In order for a process's execution to be continued from the same point at a later time, context switching is the mechanism by which the state, or context, of a CPU is stored in and restored from the Process Control Block. A context switch makes it possible for multiple processes to share a single CPU. A multitasking operating system must include context switching among its features.
When the scheduler switches the CPU from executing one process to another, the state of the currently running process is saved into its process control block, and the state used to set the PC, registers, etc. for the process that will run next is loaded from that process's own PCB. After that, the second process can start executing.
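As a toy illustration of that save/restore step, assume a "CPU" is nothing more than a program counter plus a register dictionary (a deliberate simplification; real context switches also deal with memory state, kernel stacks, and more):

    # Toy CPU state and two PCB-like records (hypothetical field names).
    cpu = {"pc": 120, "regs": {"r0": 7, "r1": 3}}
    pcb_a = {"pc": 120, "regs": {"r0": 7, "r1": 3}}   # currently running
    pcb_b = {"pc": 500, "regs": {"r0": 0, "r1": 9}}   # selected to run next

    def context_switch(cpu, old_pcb, new_pcb):
        # Save the running process's context into its PCB...
        old_pcb["pc"] = cpu["pc"]
        old_pcb["regs"] = dict(cpu["regs"])
        # ...then load the next process's saved context into the CPU.
        cpu["pc"] = new_pcb["pc"]
        cpu["regs"] = dict(new_pcb["regs"])

    context_switch(cpu, pcb_a, pcb_b)
    print(cpu)   # now holds process B's program counter and registers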
Dispatcher
Another component involved in the CPU-scheduling function is the dispatcher. The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:
• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to restart that program

Process Schedulers in Operating System

Process scheduling is the activity of the process manager that handles the removal of
the running process from the CPU and the selection of another process on the basis
of a particular strategy.
Process scheduling is an essential part of a Multiprogramming operating system.
Such operating systems allow more than one process to be loaded into the executable
memory at a time and the loaded process shares the CPU using time multiplexing.
Categories in Scheduling
Scheduling falls into one of two categories:
 Non-preemptive: In this case, a process's resources cannot be taken away before the process has finished running. Resources are switched only when a running process finishes and transitions to a waiting state.
 Preemptive: In this case, the OS assigns resources to a process for a predetermined period of time. A process may switch from the running state to the ready state, or from the waiting state to the ready state, during resource allocation. This switching happens because the CPU may give other processes priority and substitute the currently active process with a higher-priority process.
There are three types of process schedulers.
Long Term or Job Scheduler
It brings the new process to the 'Ready State'. It controls the degree of multiprogramming, i.e., the number of processes present in the ready state at any point in time. It is important that the long-term scheduler make a careful selection of both I/O-bound and CPU-bound processes: I/O-bound tasks are those that use much of their time for input and output operations, while CPU-bound processes are those that spend their time on the CPU. The job scheduler increases efficiency by maintaining a balance between the two. Long-term schedulers operate at a high level and are typically used in batch-processing systems.
Short-Term or CPU Scheduler
It is responsible for selecting one process from the ready state and scheduling it into the running state. Note: the short-term scheduler only selects the process to schedule; it does not itself load the process for running. This is where all the scheduling algorithms are used. The CPU scheduler is responsible for ensuring that there is no starvation due to processes with high burst times. The dispatcher is responsible for loading the process selected by the short-term scheduler onto the CPU (ready to running state); context switching is done by the dispatcher only. A dispatcher does the following:
1. Switching context.
2. Switching to user mode.
3. Jumping to the proper location in the newly loaded program.
Medium-Term Scheduler
It is responsible for suspending and resuming the process. It mainly does swapping
(moving processes from main memory to disk and vice versa). Swapping may be
necessary to improve the process mix or because a change in memory requirements
has overcommitted available memory, requiring memory to be freed up. It is helpful
in maintaining a perfect balance between the I/O bound and the CPU bound. It
reduces the degree of multiprogramming.

Comparison of the three schedulers:

Long-Term Scheduler
 It is a job scheduler.
 Its speed is generally lower than that of the short-term scheduler.
 It controls the degree of multiprogramming.
 It is barely present or nonexistent in time-sharing systems.
 It selects processes from the job pool and loads them into memory for execution.

Short-Term Scheduler
 It is a CPU scheduler.
 Its speed is the fastest among all of them.
 It gives less control over how much multiprogramming is done.
 It is minimal in time-sharing systems.
 It selects those processes which are ready to execute.

Medium-Term Scheduler
 It is a process-swapping scheduler.
 Its speed lies in between the short-term and long-term schedulers.
 It reduces the degree of multiprogramming.
 It is a component of time-sharing systems.
 It can re-introduce a process into memory, and its execution can be continued.

Scheduling Criteria

Different CPU-scheduling algorithms have different properties, and the choice of a


particular algorithm may favor one class of processes over another. In choosing which
algorithm to use in a particular situation, we must consider the properties of the various
algorithms. Many criteria have been suggested for comparing CPU-scheduling
algorithms. Which characteristics are used for comparison can make a substantial
difference in which algorithm is judged to be best. The criteria include the following:
• CPU utilization. We want to keep the CPU as busy as possible. Conceptually, CPU
utilization can range from 0 to 100 percent. In a real system, it should range from 40
percent (for a lightly loaded system) to 90 percent (for a heavily loaded system).
• Throughput. If the CPU is busy executing processes, then work is being done. One
measure of work is the number of processes that are completed per time unit, called
throughput. For long processes, this rate may be one process per hour; for short
transactions, it may be ten processes per second.
• Turnaround time. From the point of view of a particular process, the important criterion
is how long it takes to execute that process. The interval from the time of submission of a
process to the time of completion is the turnaround time. Turnaround time is the sum of
the periods spent waiting to get into memory, waiting in the ready queue, executing on the
CPU, and doing I/O.

• Waiting time. The CPU-scheduling algorithm does not affect the amount of time during
which a process executes or does I/O. It affects only the amount of time that a process
spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in
the ready queue.
• Response time. In an interactive system, turnaround time may not be the best criterion.
Thus, another measure is the time from the submission of a request until the first response
is produced. This measure, called response time, is the time it takes to start responding,
not the time it takes to output the response. The turnaround time is generally limited by
the speed of the output device. It is desirable to maximize CPU utilization and throughput
and to minimize turnaround time, waiting time, and response time.
 Arrival Time: Time at which the process arrives in the ready queue.
 Completion Time: Time at which process completes its execution.
 Burst Time: Time required by a process for CPU execution.
 Turn Around Time: Time Difference between completion time and
arrival time.
Turn Around Time = Completion Time – Arrival Time
 Waiting Time(W.T): Time Difference between turn around time and
burst time.
Waiting Time = Turn Around Time – Burst Time

1. First Come First Serve (FCFS):

FCFS is considered the simplest of all operating system scheduling algorithms. The first come, first served scheduling algorithm states that the process that requests the CPU first is allocated the CPU first; it is implemented using a FIFO queue. When the CPU is free, it is allocated to the process at the head of the queue, and the running process is then removed from the queue.
Characteristics of FCFS:
 FCFS is non-preemptive.
 Tasks are always executed on a first-come, first-served basis.
 FCFS is easy to implement and use.
 This algorithm is not very efficient in performance, and the wait time is quite high.
Advantages of FCFS:
 Easy to implement.
 First-come, first-served method.
Disadvantages of FCFS:
 FCFS suffers from the Convoy Effect.
 The average waiting time is much higher than in the other algorithms.
 FCFS is very simple and easy to implement, and hence not very efficient.
 The Convoy Effect is a phenomenon associated with the First Come First Serve (FCFS) algorithm, in which the whole operating system slows down due to a few slow processes.

FCFS algorithm is non-preemptive in nature, that is, once CPU time has been
allocated to a process, other processes can get CPU time only after the current
process has finished. This property of FCFS scheduling leads to the situation called
Convoy Effect.
Example-1: Consider the following table of arrival time and burst time for five
processes P1, P2, P3, P4 and P5.
Processes Arrival Time Burst Time

P1 0 4

P2 1 3

P3 2 1

P4 3 2

P5 4 5

Gantt chart for the above execution:

| P1 | P2 | P3 | P4 | P5 |
0    4    7    8    10   15


Waiting Time = Start time – Arrival time
P1 = 0 – 0 = 0
P2 = 4 – 1 = 3
P3 = 7 – 2 = 5
P4 = 8 – 3 = 5
P5 = 10 – 4 = 6
Average waiting time = (0 + 3 + 5 + 5 + 6) / 5 = 19 / 5 = 3.8
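This calculation is mechanical, so it is easy to check with a short script. The following minimal Python sketch simulates FCFS for the five processes and reproduces the 3.8 average; it assumes, as the example does, that processes are served strictly in arrival order.

    # Each tuple: (name, arrival_time, burst_time)
    procs = [("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1), ("P4", 3, 2), ("P5", 4, 5)]

    procs.sort(key=lambda p: p[1])   # FCFS: serve in order of arrival
    time, total_wait = 0, 0
    for name, arrival, burst in procs:
        time = max(time, arrival)    # CPU may idle until the process arrives
        wait = time - arrival        # waiting time = start time - arrival time
        total_wait += wait
        print(f"{name}: start={time} wait={wait}")
        time += burst                # run to completion (non-preemptive)

    print("average waiting time =", total_wait / len(procs))  # 3.8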
2. Shortest Job First (or SJF)

The shortest job first (SJF) algorithm, also known as shortest job next (SJN), is a scheduling policy that selects the waiting process with the smallest execution time to execute next. SJF can be preemptive or non-preemptive.
Characteristics of SJF Scheduling:
 Shortest Job first has the advantage of having a minimum average waiting
time among all scheduling algorithms.
 It is a Greedy Algorithm.
 It may cause starvation if shorter processes keep coming. This problem
can be solved using the concept of ageing.
 It is practically infeasible as Operating System may not know burst times
and therefore may not sort them. While it is not possible to predict
execution time, several methods can be used to estimate the execution
time for a job, such as a weighted average of previous execution times.
 SJF can be used in specialized environments where accurate estimates of
running time are available.
Algorithm:
 Sort all the processes according to arrival time.
 Then select the process that has the minimum arrival time and the minimum burst time.
 After completion of a process, build a pool of the processes that arrived before the previous process completed, and from this pool select the process with the minimum burst time.
Advantages of SJF:
 SJF is better than the First come first serve (FCFS) algorithm as it reduces
the average waiting time.
 SJF is generally used for long term scheduling
 It is suitable for the jobs running in batches, where run times are already
known.
 SJF is probably optimal in terms of average turnaround time.
Disadvantages of SJF:
 SJF may cause very long turnaround times or starvation.
 In SJF, job completion time must be known in advance, but it is sometimes hard to predict.
 It can be complicated to predict the length of the upcoming CPU request.
 It leads to starvation of long processes, which works against reducing the average turnaround time.
Consider the following table of arrival time and burst time for five processes P1, P2, P3, P4, and P5.

Process   Burst Time   Arrival Time

P1        6 ms         2 ms
P2        2 ms         5 ms
P3        8 ms         1 ms
P4        3 ms         0 ms
P5        4 ms         4 ms

The Shortest Job First CPU scheduling algorithm schedules these processes as shown in the Gantt chart below.
Gantt chart for the above execution:

| P4 | P1 | P2 | P5 | P3 |
0    3    9    11   15   23

Now, let's calculate the average waiting time for the above example:
P4 = 0 - 0 = 0
P1 = 3 - 2 = 1
P2 = 9 - 5 = 4
P5 = 11 - 4 = 7
P3 = 15 - 1 = 14
Average Waiting Time = (0 + 1 + 4 + 7 + 14) / 5 = 26/5 = 5.2
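As with FCFS, this result can be checked with a small simulation. The sketch below implements non-preemptive SJF under the stated assumption that exact burst times are known in advance (which, as noted above, real systems can only estimate):

    # Each tuple: (name, arrival_time, burst_time)
    procs = [("P1", 2, 6), ("P2", 5, 2), ("P3", 1, 8), ("P4", 0, 3), ("P5", 4, 4)]

    time, total_wait = 0, 0
    pending = sorted(procs, key=lambda p: p[1])
    while pending:
        # Pool of processes that have already arrived...
        ready = [p for p in pending if p[1] <= time]
        if not ready:                      # CPU idles until the next arrival
            time = pending[0][1]
            continue
        # ...from which SJF picks the one with the smallest burst time.
        name, arrival, burst = min(ready, key=lambda p: p[2])
        pending.remove((name, arrival, burst))
        total_wait += time - arrival
        print(f"{name}: start={time} wait={time - arrival}")
        time += burst

    print("average waiting time =", total_wait / len(procs))  # 5.2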

3. Priority scheduling
Priority scheduling is one of the most common scheduling algorithms in batch systems; in its basic form it is non-preemptive (a preemptive variant is described below). Each process is assigned a priority, and ties are broken as follows: the process with the earlier arrival time goes first; if two processes have the same arrival time, their priorities are compared (highest priority first); if two processes also have the same priority, the one with the lower process number goes first. This is repeated until all processes have been executed.
Implementation:
1. First, input the processes with their arrival time, burst time, and priority.
2. The process with the lowest arrival time is scheduled first; if two or more processes share the lowest arrival time, the one with the higher priority is scheduled first.
3. Further processes are then scheduled according to their arrival time and priority. (Here we assume that a lower priority number means higher priority.) If two processes have the same priority, sort by process number.
Note: a question will clearly mention which number denotes higher priority and which denotes lower priority.
4. Once all the processes have arrived, they can be scheduled based purely on their priority.
Priorities can be defined either internally or externally. Internally defined priorities use
some measurable quantity or quantities to compute the priority of a process. For example,
time limits, memory requirements, the number of open files, and the ratio of average I/O
burst to average CPU burst have been used in computing priorities. External priorities are
set by criteria outside the operating system, such as the importance of the process, the
type and amount of funds being paid for computer use, the department sponsoring the
work, and other, often political, factors.
Priority scheduling can be either preemptive or nonpreemptive. When a process arrives at
the ready queue, its priority is compared with the priority of the currently running
process. A preemptive priority scheduling algorithm will preempt the CPU if the
priority of the newly arrived process is higher than the priority of the currently running
process.
A nonpreemptive priority scheduling algorithm will simply put the new process at the
head of the ready queue.
A major problem with priority scheduling algorithms is indefinite blocking, or starvation. A process that is ready to run but waiting for the CPU can be considered blocked. A priority scheduling algorithm can leave some low-priority processes waiting indefinitely.
A solution to the problem of indefinite blockage of low-priority processes is aging. Aging
involves gradually increasing the priority of processes that wait in the system for a long
time. For example, if priorities range from 127 (low) to 0 (high), we could increase the
priority of a waiting process by 1 every 15 minutes. Eventually, even a process with an
initial priority of 127 would have the highest priority in the system and would be
executed.
Gantt Chart:
[Worked example omitted; its Gantt chart gave an average waiting time of 5.6 and an average turnaround time of 8.8.]
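Since the process table of the original worked example is not reproduced here, the sketch below uses made-up data purely for illustration. It implements non-preemptive priority scheduling with the convention stated above (a lower number means higher priority), breaking ties by process number:

    # Each tuple: (process_number, arrival_time, burst_time, priority)
    # Hypothetical data; lower priority number = higher priority.
    procs = [(1, 0, 5, 2), (2, 1, 3, 1), (3, 2, 8, 3), (4, 3, 2, 1)]

    time, pending = 0, sorted(procs, key=lambda p: p[1])
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                              # idle until next arrival
            time = pending[0][1]
            continue
        # Highest priority first; ties broken by lower process number.
        num, arrival, burst, prio = min(ready, key=lambda p: (p[3], p[0]))
        pending.remove((num, arrival, burst, prio))
        print(f"P{num}: start={time} wait={time - arrival}")
        time += burst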
4. Round-Robin Scheduling
The round-robin (RR) scheduling algorithm is designed especially for timesharing
systems. It is similar to FCFS scheduling, but preemption is added to enable the
system to switch between processes. A small unit of time, called a time quantum or
time slice, is defined. A time quantum is generally from 10 to 100 milliseconds in
length. The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum. To implement RR scheduling, we again treat
the ready queue as a FIFO queue of processes. New processes are added to the tail
of the ready queue. The CPU scheduler picks the first process from the ready
queue, sets a timer to interrupt after 1 time quantum, and dispatches the process.
One of two things will then happen. The process may have a CPU burst of less than
1 time quantum. In this case, the process itself will release the CPU voluntarily. The
scheduler will then proceed to the next process in the ready queue. If the CPU burst
of the currently running process is longer than 1 time quantum, the timer will go off
and will cause an interrupt to the operating system. A context switch will be
executed, and the process will be put at the tail of the ready queue. The CPU
scheduler will then select the next process in the ready queue.
Round Robin is a CPU scheduling algorithm where each process is assigned a fixed time slot in a cyclic way. It is basically the preemptive version of the First Come First Serve CPU scheduling algorithm.

 The Round Robin CPU algorithm generally focuses on the time-sharing technique.
 The period of time for which a process or job is allowed to run in a preemptive method is called the time quantum.
 Each process or job present in the ready queue is assigned the CPU for that time quantum; if the execution of the process completes during that time, the process terminates, otherwise the process goes back to the ready queue and waits for its next turn to complete its execution.

Characteristics of Round Robin CPU Scheduling Algorithm:
 It is simple, easy to implement, and starvation-free, as all processes get a fair share of the CPU.
 It is one of the most commonly used techniques in CPU scheduling, often as the core of more elaborate schedulers.
 It is preemptive, as processes are assigned the CPU only for a fixed slice of time at most.
 Its disadvantage is the higher overhead of context switching.
Advantages of Round Robin CPU Scheduling Algorithm:
 There is fairness, since every process gets an equal share of the CPU.
 A newly created process is simply added to the end of the ready queue.
 A round-robin scheduler generally employs time-sharing, giving each job a time slot or quantum.
 While performing round-robin scheduling, a particular time quantum is allotted to the different jobs.
 Each process gets a chance to be rescheduled after a particular quantum time in this scheduling.
Disadvantages of Round Robin CPU Scheduling Algorithm:
 Waiting time and response time are larger.
 Throughput is lower.
 There are frequent context switches.
 The Gantt chart becomes very large if the time quantum is small (for example, 1 ms for a long workload).
 Scheduling is time-consuming for a small quantum.

Examples to show the working of the Round Robin Scheduling Algorithm:
Example-1: Consider the following table of arrival time and burst time for four processes P1, P2, P3, and P4, with a given Time Quantum = 2.

Process Burst Time Arrival Time
P1 5 ms 0 ms
P2 4 ms 1 ms
P3 2 ms 2 ms
P4 1 ms 4 ms

The Round Robin CPU Scheduling Algorithm works on the basis of the steps mentioned below.
(Gantt chart for the Round Robin Scheduling Algorithm.)

Process AT BT CT TAT WT
P1 0 5 12 12-0 = 12 12-5 = 7
P2 1 4 11 11-1 = 10 10-4 = 6
P3 2 2 6 6-2 = 4 4-2 = 2
P4 4 1 9 9-4 = 5 5-1 = 4
(AT = arrival time, BT = burst time, CT = completion time, TAT = turnaround time, WT = waiting time.)

Now,
 Average Turnaround Time = (12 + 10 + 4 + 5)/4 = 31/4 = 7.75 ms
 Average Waiting Time = (7 + 6 + 2 + 4)/4 = 19/4 = 4.75 ms
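The completion times in the table can be reproduced with a short simulation. The sketch below is illustrative Python; it assumes the common convention that a newly arrived process is enqueued before a preempted process re-enters the queue.

    from collections import deque

    def round_robin(procs, quantum):
        # procs: list of (name, arrival_time, burst_time)
        procs = sorted(procs, key=lambda p: p[1])
        remaining = {name: burst for name, _, burst in procs}
        arrivals, queue = deque(procs), deque()
        time, completion = 0, {}
        while arrivals or queue:
            if not queue:                             # CPU idle: jump ahead
                time = max(time, arrivals[0][1])
            while arrivals and arrivals[0][1] <= time:
                queue.append(arrivals.popleft()[0])
            name = queue.popleft()
            run = min(quantum, remaining[name])       # up to one time quantum
            time += run
            remaining[name] -= run
            while arrivals and arrivals[0][1] <= time:
                queue.append(arrivals.popleft()[0])   # arrivals enter first
            if remaining[name]:
                queue.append(name)                    # preempted: back to tail
            else:
                completion[name] = time
        return completion

    print(round_robin([("P1", 0, 5), ("P2", 1, 4), ("P3", 2, 2), ("P4", 4, 1)], 2))
    # {'P3': 6, 'P4': 9, 'P2': 11, 'P1': 12} - matching the CT column above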

5. Multilevel Queue Scheduling

A multilevel queue scheduling algorithm partitions the ready queue into several
separate queues. The processes are permanently assigned to one queue,
generally based on some property of the process, such as memory size, process
priority, or process type. Each queue has its own scheduling algorithm. For
example, separate queues might be used for foreground and background
processes. The foreground queue might be scheduled by an RR algorithm,
while the background queue is scheduled by an FCFS algorithm. In addition,
there must be scheduling among the queues, which is commonly implemented
as fixed-priority preemptive scheduling.
For example, the foreground queue may have absolute priority over the
background queue. Let’s look at an example of a multilevel queue scheduling
algorithm with five queues, listed below in order of priority:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes

Each queue has absolute priority over lower-priority queues. No process in the
batch queue, for example, could run unless the queues for system processes,
interactive processes, and interactive editing processes were all empty. If an
interactive editing process entered the ready queue while a batch process was
running, the batch process would be preempted. Another possibility is to time-
slice among the queues. Here, each queue gets a certain portion of the CPU
time, which it can then schedule among its various processes. For instance, in
the foreground–background queue example, the foreground queue can be given
80 percent of the CPU time for RR scheduling among its processes, while the
background queue receives 20 percent of the CPU to give to its processes on an
FCFS basis.
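The fixed-priority rule ("serve the highest-priority non-empty queue") is a one-liner in code. A minimal sketch follows, with made-up queue contents; queue 0 is the highest priority, as in the five-queue example above.

    # Lower index = higher priority (queue 0 might hold system processes, etc.)
    queues = {0: [], 1: [], 2: []}

    def pick_next(queues):
        # Fixed-priority preemptive selection among the queues: the first
        # non-empty queue, scanning from highest priority, supplies the job.
        for level in sorted(queues):
            if queues[level]:
                return queues[level].pop(0)
        return None                        # every queue is empty

    queues[2].append("batch_job")
    queues[0].append("system_daemon")
    print(pick_next(queues))               # 'system_daemon' - always served first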

6. Multilevel Feedback Queue Scheduling
Normally, when the multilevel queue scheduling algorithm is used, processes
are permanently assigned to a queue when they enter the system. If there are
separate queues for foreground and background processes, for example,
processes do not move from one queue to the other, since processes do not
change their foreground or background nature. This setup has the advantage of
low scheduling overhead, but it is inflexible. The multilevel feedback queue
scheduling algorithm, in contrast, allows a process to move between queues.
The idea is to separate processes according to the characteristics of their CPU
bursts. If a process uses too much CPU time, it will be moved to a lower-
priority queue. This scheme leaves I/O-bound and interactive processes in the
higher-priority queues. In addition, a process that waits too long in a lower-
priority queue may be moved to a higher-priority queue. This form of aging
prevents starvation. For example, consider a multilevel feedback queue
scheduler with three queues, numbered from 0 to 2. The scheduler first executes all processes in queue 0. Only when queue 0 is empty will it execute processes in queue 1. Similarly, processes in queue 2 will be executed only if queues 0 and
1 are empty. A process that arrives for queue 1 will preempt a process in queue
2. A process in queue 1 will in turn be preempted by a process arriving for
queue 0. A process entering the ready queue is put in queue 0. A process in
queue 0 is given a time quantum of 8 milliseconds. If it does not finish within
this time, it is moved to the tail of queue 1. If queue 0 is empty, the process at
the head of queue 1 is given a quantum of 16 milliseconds. If it does not
complete, it is preempted and is put into queue 2. Processes in queue 2 are run
on an FCFS basis but are run only when queues 0 and 1 are empty. This
scheduling algorithm gives highest priority to any process with a CPU burst of
8 milliseconds or less. Such a process will quickly get the CPU, finish its CPU
burst, and go off to its next I/O burst. Processes that need more than 8 but less
than 24 milliseconds are also served quickly, although with lower priority than
shorter processes. Long processes automatically sink to queue 2 and are served
in FCFS order with any CPU cycles left over from queues 0 and 1. In general, a
multilevel feedback queue scheduler is defined by the following parameters:
• The number of queues
• The scheduling algorithm for each queue
• The method used to determine when to upgrade a process to a higher priority
queue
• The method used to determine when to demote a process to a lower priority
queue
• The method used to determine which queue a process will enter when that process needs service
The definition of a multilevel feedback queue scheduler makes it the most general CPU-scheduling algorithm. It can be configured to match a specific system under design.
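The three-queue example above (an 8 ms quantum, then a 16 ms quantum, then FCFS) can be sketched directly. The code below is illustrative and simplified: all jobs are assumed present at time 0, and each job receives one quantum at a level before being demoted, as the text describes.

    def mlfq(jobs, quanta=(8, 16)):
        # jobs: {name: total CPU burst in ms}; queue 0 and queue 1 use the
        # given quanta, queue 2 runs FCFS. Every job enters queue 0.
        queues = [list(jobs), [], []]
        remaining = dict(jobs)
        order = []
        for level, quantum in enumerate(quanta):
            while queues[level]:
                name = queues[level].pop(0)
                order.append((name, level))
                remaining[name] -= quantum
                if remaining[name] > 0:
                    queues[level + 1].append(name)   # demote to the next queue
        for name in queues[2]:                       # long jobs: FCFS
            order.append((name, 2))
        return order

    print(mlfq({"A": 5, "B": 20, "C": 40}))
    # [('A', 0), ('B', 0), ('C', 0), ('B', 1), ('C', 1), ('C', 2)]
    # A (<= 8 ms) finishes in queue 0; B (<= 24 ms) in queue 1; C sinks to queue 2.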

THREAD
A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set,
and a stack. It shares with other threads belonging to the same process its code section, data section,
and other operating-system resources, such as open files and signals. A traditional (or heavyweight)
process has a single thread of control. If a process has multiple threads of control, it can perform more
than one task at a time.

Most software applications that run on modern computers are multithreaded. An application typically is implemented as a separate process with several threads of control.
A web browser might have one thread display images or text while another thread
retrieves data from the network, for example. A word processor may have a thread for
displaying graphics, another thread for responding to keystrokes from the user, and a third
thread for performing spelling and grammar checking in the background. Applications
can also be designed to leverage processing capabilities on multicore systems. Such
applications can perform several CPU-intensive tasks in parallel across the multiple
computing cores. In certain situations, a single application may be required to perform
several similar tasks. For example, a web server accepts client requests for web pages,
images, sound, and so forth. A busy web server may have several (perhaps thousands of)
clients concurrently accessing it. If the web server ran as a traditional single-threaded
process, it would be able to service only one client at a time, and a client might have to
wait a very long time for its request to be serviced.
The benefits of multithreaded programming can be broken down into four major
categories:
1. Responsiveness. Multithreading an interactive application may allow a program to
continue running even if part of it is blocked or is performing a lengthy operation, thereby
increasing responsiveness to the user. This quality is especially useful in designing user
interfaces. For instance, consider what happens when a user clicks a button that results in
the performance of a time-consuming operation. A single-threaded application would be
unresponsive to the user until the operation had completed. In contrast, if the time-
consuming operation is performed in a separate thread, the application remains
responsive to the user.
2. Resource sharing. Processes can only share resources through techniques such as
shared memory and message passing. Such techniques must be explicitly arranged by the
programmer. However, threads share the memory and the resources of the process to
which they belong by default. The benefit of sharing code and data is that it allows an
application to have several different threads of activity within the same address space.
3. Economy. Allocating memory and resources for process creation is costly. Because
threads share the resources of the process to which they belong, it is more economical to
create and context-switch threads. Empirically gauging the difference in overhead can be
difficult, but in general it is significantly more time consuming to create and manage
processes than threads. In Solaris, for example, creating a process is about thirty times
slower than is creating a thread, and context switching is about five times slower.
4.Scalability. The benefits of multithreading can be even greater in a multiprocessor
architecture, where threads may be running in parallel on different processing cores. A
single-threaded process can run on only one processor, regardless of how many are available. We explore this issue further in the following section.
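The web-server scenario above is easy to mirror with the threading facilities of a typical language. Below is a minimal Python sketch; the handler and the simulated work are placeholders, not a real server.

    import threading
    import time

    def handle_request(client_id):
        # Stand-in for servicing one client (reading a file, network I/O, ...).
        time.sleep(0.1)
        print(f"served client {client_id}")

    # One thread per request: five clients are serviced concurrently instead
    # of queueing behind a single-threaded loop.
    threads = [threading.Thread(target=handle_request, args=(i,)) for i in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()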

Types of Threads
Threads are of two types. These are described below.
 User Level Thread
 Kernel Level Thread

A User-Level Thread is a type of thread that is not created using system calls; the kernel has no role in the management of user-level threads, which can be implemented entirely by the user. Since the kernel is unaware of them, it manages the process that contains them as a single-threaded process. Let's look at the advantages and disadvantages of user-level threads.
Advantages of User-Level Threads
 Implementation of the User-Level Thread is easier than Kernel Level
Thread.
 Context Switch Time is less in User Level Thread.
 User-Level Thread is more efficient than Kernel-Level Thread.
 Because of the presence of only Program Counter, Register Set, and Stack
Space, it has a simple representation.
Disadvantages of User-Level Threads
 There is a lack of coordination between the threads and the kernel.
 In case of a page fault, the whole process can be blocked.

Kernel Level Threads

A Kernel-Level Thread is a type of thread that the operating system recognizes and manages directly. The kernel maintains its own thread table to keep track of all threads in the system, and the operating system kernel performs thread management. Kernel-level threads have a somewhat longer context-switching time.
Advantages of Kernel-Level Threads
 The kernel has up-to-date information on all threads.
 Applications that block frequently are better handled by kernel-level threads.
 Whenever a process requires more processing time, the kernel can allocate more time to its threads.
Disadvantages of Kernel-Level threads
 Kernel-Level Thread is slower than User-Level Thread.
 Implementation of this type of thread is a little more complex than a user-
level thread.
Components of Threads
These are the basic components of a thread:
 Stack Space
 Register Set
 Program Counter
Difference between User-Level Thread and Kernel-Level Thread

1. Implemented by: User-level threads are implemented by users. Kernel-level threads are implemented by the Operating System (OS).
2. Recognition: The operating system does not recognize user-level threads. Kernel-level threads are recognized by the operating system.
3. Implementation: Implementation of user-level threads is easy. Implementation of kernel-level threads is complicated.
4. Context switch time: Context switch time is less for user-level threads and more for kernel-level threads.
5. Hardware support: Context switching of user-level threads requires no hardware support; kernel-level threads need hardware support.
6. Blocking operation: If one user-level thread performs a blocking operation, the entire process is blocked. If one kernel-level thread performs a blocking operation, another thread can continue execution.
7. Multithreading: Multithreaded applications built on user-level threads cannot take advantage of multiprocessing. Kernels themselves can be multithreaded.
8. Creation and management: User-level threads can be created and managed more quickly; kernel-level threads take more time to create and manage.
9. Operating system: Any operating system can support user-level threads; kernel-level threads are operating-system-specific.
10. Thread management: For user-level threads, the thread library contains the code for thread creation, message passing, thread scheduling, data transfer, and thread destruction. For kernel-level threads, the application code contains no thread-management code; there is merely an API to kernel mode (the Windows operating system makes use of this feature).
11. Example: User-level threads: Java threads, POSIX threads. Kernel-level threads: Windows, Solaris.
12. Advantages: User-level threads are simple and quick to create, can run on any operating system, perform better than kernel threads since they do not need system calls to create threads, and switching between them does not need kernel-mode privileges. Kernel-level threads allow scheduling multiple threads of the same process on different processors, kernel routines themselves can be multithreaded, and when a kernel-level thread is halted, the kernel can schedule another thread of the same process.
13. Disadvantages: With user-level threads, multithreaded applications cannot benefit from multiprocessing, and if a single user-level thread performs a blocking operation, the entire process is halted. With kernel-level threads, transferring control within a process from one thread to another necessitates a mode switch to kernel mode, and they take more time to create and manage than user-level threads.
14. Memory management: In user-level threads, each thread has its own stack, but they share the same address space. Kernel-level threads have their own stacks and their own separate address spaces, so they are better isolated from each other.
15. Fault tolerance: User-level threads are less fault-tolerant than kernel-level threads; if a user-level thread crashes, it can bring down the entire process. Kernel-level threads can be managed independently, so if one thread crashes, it does not necessarily affect the others.
16. Resource utilization: User-level threads do not take full advantage of system resources, as they have no direct access to system-level features like I/O operations. Kernel-level threads can access system-level features like I/O operations, so they can take full advantage of system resources.
17. Portability: User-level threads are more portable than kernel-level threads; kernel-level threads are less portable than user-level threads.

Difference between Process and Thread:

1. A process means any program in execution. A thread means a segment of a process.
2. A process takes more time to terminate; a thread takes less time to terminate.
3. A process takes more time for creation; a thread takes less time for creation.
4. A process also takes more time for context switching; a thread takes less time for context switching.
5. A process is less efficient in terms of communication; a thread is more efficient in terms of communication.
6. Multiprogramming holds the concept of multiple processes. We do not need multiple programs in action for multiple threads, because a single process consists of multiple threads.
7. Processes are isolated; threads share memory.
8. A process is called a heavyweight process. A thread is lightweight, as each thread in a process shares code, data, and resources.
9. Process switching uses an interface of the operating system. Thread switching does not require a call to the operating system or an interrupt to the kernel.
10. If one process is blocked, it does not affect the execution of other processes. If a user-level thread is blocked, all other user-level threads of that process are blocked.
11. A process has its own Process Control Block, stack, and address space. A thread has the parent's PCB, its own Thread Control Block, its own stack, and a common address space.
12. Changes to the parent process do not affect child processes. Since all threads of the same process share the address space and other resources, any changes to the main thread may affect the behavior of the other threads of the process.
13. A system call is involved in creating a process. No system call is involved in creating a thread; it is created using APIs.
14. Processes do not share data with each other; threads share data with each other.
Multithreading Model:
Multithreading allows an application to divide its task into individual threads. With multiple threads, the same process or task can be carried out by a number of threads; in other words, there is more than one thread to perform the task. With the use of multithreading, multitasking can be achieved.

The main drawback of single-threaded systems is that only one task can be performed at a time; multithreading overcomes this drawback by allowing multiple tasks to be performed concurrently.

There exist three established multithreading models classifying the relationship between user-level and kernel-level threads:

o Many-to-one multithreading model
o One-to-one multithreading model
o Many-to-many multithreading model

Many-to-one multithreading model:

The many-to-one model maps many user-level threads to one kernel thread. This type of relationship facilitates an effective context-switching environment and is easily implemented even on a simple kernel with no thread support.
The disadvantage of this model is that, since only one kernel-level thread is scheduled at any given time, it cannot take advantage of the hardware acceleration offered by multithreaded processors or multiprocessor systems. All thread management is done in user space, so if one thread makes a blocking call, the whole process blocks.

(Figure: the many-to-one model associates all user-level threads with a single kernel-level thread.)

One-to-one multithreading model:

The one-to-one model maps a single user-level thread to a single kernel-level thread. This type of relationship facilitates the running of multiple threads in parallel. However, this benefit comes with a drawback: every new user thread requires creating a corresponding kernel thread, and this overhead can hinder the performance of the parent process. The Windows and Linux operating systems try to tackle this problem by limiting the growth of the thread count.

(Figure: the one-to-one model associates each user-level thread with a single kernel-level thread.)

Many-to-many multithreading model:

In this model, there are several user-level threads and several kernel-level threads. The number of kernel threads created depends upon the particular application. The developer can create many threads at both levels, though the numbers need not be equal. The many-to-many model is a compromise between the other two models. In this model, if any thread makes a blocking system call, the kernel can schedule another thread for execution. Also, the complexity of the previous models is not present here. Though this model allows the creation of multiple kernel threads, true concurrency cannot be achieved on a single processor, because the kernel can schedule only one thread at a time per processor.

(Figure: the many-to-many model associates several user-level threads with an equal or smaller number of kernel-level threads.)

Benefits of Multithreading:
 Multithreading can improve the performance and efficiency of a program by utilizing the available CPU resources more effectively. By executing multiple threads concurrently, a program can take advantage of parallelism and reduce overall execution time.
 Multithreading can enhance responsiveness in applications that involve
user interaction. By separating time-consuming tasks from the main
thread, the user interface can remain responsive and not freeze or become
unresponsive.
 Multithreading can enable better resource utilization. For example, in a server application, multiple threads can handle incoming client requests simultaneously, allowing the server to serve more clients concurrently (see the sketch after this list).
 Multithreading can facilitate better code organization and modularity by
dividing complex tasks into smaller, manageable units of execution. Each
thread can handle a specific part of the task, making the code easier to
understand and maintain.
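To illustrate the resource-utilization point, a bounded thread pool lets a small, fixed number of threads serve many requests. Below is a sketch with made-up names; the handler is a placeholder for real per-request work.

    from concurrent.futures import ThreadPoolExecutor

    def handle(request_id):
        # Stand-in for per-request work (parsing, I/O, rendering, ...).
        return f"response to request {request_id}"

    # Four worker threads service twenty requests; threads are reused rather
    # than created and destroyed once per request.
    with ThreadPoolExecutor(max_workers=4) as pool:
        for result in pool.map(handle, range(20)):
            print(result)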
Drawbacks of Multithreading
Multithreading is complex and can be difficult to handle. It has a few drawbacks:
 If locking mechanisms are not used properly, data-access problems such as data inconsistency and deadlock can arise (see the sketch after this list).
 If many threads try to access the same data, thread starvation may arise. Resource contention is another problem that can trouble the user.
 Display issues may occur if threads lack coordination when displaying data.
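The data-inconsistency problem in the first point is conventionally avoided with a lock. A minimal sketch in Python; the counter is a stand-in for any shared data.

    import threading

    counter = 0
    lock = threading.Lock()

    def increment(n):
        global counter
        for _ in range(n):
            with lock:        # without the lock, concurrent += can lose updates
                counter += 1

    threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)            # always 400000 when the lock is held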
