Operating System Unit 1-2
OS Architecture
Kernel: The kernel in the operating system is responsible for managing the
resources of the system, such as memory, the CPU, and input-output devices,
and for implementing the operating system's essential functions.
Shell: The shell in an Operating System acts as an interface for the user to
interact with the computer system. The shell can be a command line
interface or a graphical interface.
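For example, the core read-parse-execute loop of a command-line shell can be sketched in Python. This is a toy illustration only; the minimal_shell function and its exit built-in are invented for this sketch, not how a real shell is implemented.

```python
import shlex
import subprocess

def minimal_shell(commands):
    """Behave like a very small shell: tokenize each command line,
    then ask the operating system to execute it."""
    outputs = []
    for line in commands:
        argv = shlex.split(line)          # tokenize the line like a shell would
        if not argv:
            continue
        if argv[0] == "exit":             # a shell built-in: stop the loop
            break
        result = subprocess.run(argv, capture_output=True, text=True)
        outputs.append(result.stdout.strip())
    return outputs

# "echo" is available on most Unix-like systems
print(minimal_shell(["echo hello", "exit", "echo skipped"]))
```

A real shell additionally handles pipes, redirection, environment variables, and job control; the loop above shows only the interface role the shell plays between the user and the kernel.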
Monolithic Architecture
Layered Architecture
Microkernel Architecture
Hybrid Architecture
Monolithic Architecture
Monolithic Architecture is the oldest and the simplest type of Operating System
Architecture. In this architecture, each and every component is contained in a
single kernel only. The various components in this OS Architecture communicate
with each other via function calls.
In a monolithic architecture, the operating system kernel is designed to provide all
operating system services, including memory management, process scheduling,
device drivers, and file systems, in a single, large binary. This means that all code
runs in kernel space, with no separation between kernel and user-level processes.
Overall, a monolithic architecture can provide high performance and simplicity but
may come with some trade-offs in terms of security, stability, and flexibility. The
choice between a monolithic and microkernel architecture depends on the specific
needs and requirements of the operating system being developed.
Characteristics of a monolithic architecture:
Single Executable: The entire application is packaged and deployed as a
single executable file. All components and modules are bundled together.
Tight Coupling: The components and modules within the application are
highly interconnected and dependent on each other. Changes made to one
component may require modifications in other parts of the application.
Shared Memory: All components within the application share the same
memory space. They can directly access and modify shared data
structures.
Monolithic Deployment: The entire application is deployed as a single
unit. Updates or changes to the application require redeploying the entire
monolith.
Centralized Control Flow: The control flow within the application is
typically managed by a central module or a main function. The flow of
execution moves sequentially from one component to another.
The advantages of the Monolithic Architecture of the Operating System are given
below.
1. High performance: Monolithic kernels can provide high performance
since system calls can be made directly to the kernel without the overhead
of message passing between user-level processes.
2. Simplicity: The design of a monolithic kernel is simpler since all
operating system services are provided by a single binary. This makes it
easier to develop, test and maintain.
3. Broad hardware support: Monolithic kernels have broad hardware
support, which means that they can run on a wide range of hardware
platforms.
4. Low overhead: The monolithic kernel has low overhead, which means
that it does not require a lot of system resources, making it ideal for
resource-constrained devices.
5. Easy access to hardware resources: Since all code runs in kernel space,
it is easy to access hardware resources such as network interfaces,
graphics cards, and sound cards.
6. Fast system calls: Monolithic kernels provide fast system calls since there
is no overhead of message passing between user-level processes.
7. Good for general-purpose operating systems: Monolithic kernels are
good for general-purpose operating systems that require a high degree of
performance and low overhead.
8. Easy to develop drivers: Developing device drivers for monolithic
kernels is easier since they are integrated into the kernel.
Disadvantages of Monolithic Architecture
1. Poor fault isolation: all code runs in kernel space, so a bug in any
single component (for example, a device driver) can crash the entire system.
2. Large and complex codebase: because every service lives in one binary,
the kernel is harder to understand, modify, and test.
3. Limited flexibility: adding or removing a service usually requires
rebuilding and rebooting the whole kernel.
4. Weaker security: every component runs with full kernel privileges,
which enlarges the attack surface.
Layered Architecture
In a layered architecture, the operating system is divided into layers, with each
layer performing a specific set of functions. The layers are organized in a
hierarchical order, with each layer depending on the layer below it. The layering
approach makes the system easier to maintain and modify, as each layer can be
modified independently without affecting the other layers.
Each of the layers must have its own specific function to perform. There are some
rules in the implementation of the layers as follows.
1. The outermost layer must be the User Interface layer.
2. The innermost layer must be the Hardware layer.
3. A particular layer can access all the layers below it but cannot
access the layers above it. That is, layer n-1 can access layers
n-2 down to 0, but it cannot access layer n.
Thus, if the user layer wants to interact with the hardware layer, the request must
travel through all the intermediate layers, from layer n-1 down to layer 1. Each layer
must be designed and implemented so that it needs only the services provided by the
layers below it.
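The rule that each layer may call only the layer below it can be sketched in code. The four layers and their method names below are illustrative, not a real OS design:

```python
# Hypothetical four-layer stack: each layer holds a reference only to
# the layer directly below it and never calls upward.
class Hardware:                       # layer 0 (innermost)
    def read_block(self, n):
        return f"raw-block-{n}"

class MemoryManagement:               # layer 1
    def __init__(self, hw):
        self._below = hw
    def fetch(self, n):
        return self._below.read_block(n)

class FileSystem:                     # layer 2
    def __init__(self, mm):
        self._below = mm
    def read_file(self, n):
        return self._below.fetch(n).upper()

class UserInterface:                  # layer 3 (outermost)
    def __init__(self, fs):
        self._below = fs
    def open(self, n):
        # the request travels down through every intermediate layer
        return self._below.read_file(n)

ui = UserInterface(FileSystem(MemoryManagement(Hardware())))
print(ui.open(7))   # the user-layer request reaches the hardware layer
```

Because each class knows only the one beneath it, any layer can be replaced without touching the layers above, which is exactly the modularity argument made above.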
Advantages:
There are several advantages to this design:
1. Modularity
This design promotes modularity, as each layer performs only the tasks it is
designed to perform.
2. Easy debugging
As the layers are discrete, it is very easy to debug. Suppose an error
occurs in the CPU scheduling layer; the developer can search only that
particular layer to debug, unlike the monolithic system, in which all the
services are present together.
3. Easy update:
A modification made in a particular layer will not affect the other layers.
4. No direct access to hardware:
The hardware layer is the innermost layer present in the design. So a user
can use the services of hardware but cannot directly modify or access it,
unlike the Simple system in which the user had direct access to the
hardware.
5. Abstraction:
Every layer is concerned with its own functions. So the functions and
implementations of the other layers are abstract to it.
Disadvantages:
Though this system has several advantages over the Monolithic and Simple design,
there are also some disadvantages as follows.
1. Complex and careful implementation:
As a layer can access the services of the layers below it, the
arrangement of the layers must be done carefully. For example, the
backing-storage layer uses the services of the memory-management layer,
so it must be kept below the memory-management layer. Thus, with great
modularity comes complex implementation.
2. Slower in execution:
If a layer wants to interact with another layer, it sends a request that has to
travel through all the layers present between the two interacting layers.
This increases response time, unlike the monolithic system, which is
faster. Thus, an increase in the number of layers may lead to a very
inefficient design.
Microkernel Architecture
In this architecture, process management, networking, file-system interaction, and
device management are executed outside the kernel as user-space servers, while
essential services such as memory management and synchronization are executed
inside the kernel. The processes inside the kernel have a relatively high priority,
and the components are highly modular, so even if one or more components fail,
the operating system continues to function.
Advantages of Microkernel –
Modularity: Because the kernel and servers can be developed and
maintained independently, the microkernel design allows for greater
modularity. This can make adding and removing features and services
from the system easier.
Fault isolation: The microkernel design aids in the isolation of faults and
their prevention from affecting the entire system. If a server or other
component fails, it can be restarted or replaced without causing any
disruptions to the rest of the system.
Performance: Because the kernel only contains the essential functions
required to manage the system, the microkernel design can improve
performance. This can make the system faster and more efficient.
Security: The microkernel design can improve security by reducing the
system’s attack surface by limiting the functions provided by the kernel.
Malicious software may find it more difficult to compromise the system as
a result of this.
Reliability: Microkernels are less complex than monolithic kernels, which
can make them more reliable and less prone to crashes or other issues.
Scalability: Microkernels can be easily scaled to support different
hardware architectures, making them more versatile.
Portability: Microkernels can be ported to different platforms with
minimal effort, which makes them useful for embedded systems and other
specialized applications.
The Eclipse IDE's plug-in design is a well-known example of the microkernel
pattern in application software; MINIX and QNX are examples of microkernel
operating systems.
Hybrid Architecture
As the name implies, hybrid architecture is a hybrid of all the architectures
discussed thus far, and therefore it contains characteristics from all of those
architectures, which makes it highly valuable in modern operating systems.
Simple Batched System: This type of batch operating system is the most basic and
has no direct communication between users.
Multiplexed Batch System: This type of batch operating system allows multiple
users to use it at the same time.
Time-Shared Batch System: This type of batch operating system shares the
resources among users, meaning that each user gets a specific amount of time to use
the resources.
Features of Multiprogramming
1. Needs only a single CPU for implementation.
2. Context switching takes place between processes.
3. Switching happens when the current process enters a waiting state.
4. CPU idle time is reduced.
5. High resource utilization.
6. High performance.
Advantages of Multi-Programming Operating System
Multiprogramming increases the throughput of the system.
It helps in reducing the response time.
CPU utilization is high because the CPU never sits idle as long as jobs are
available.
Memory utilization is efficient.
Disadvantages of Multi-Programming Operating System
There is no facility for user interaction with the system while jobs are
running.
CPU scheduling is compulsory because many jobs are ready to run at the same
time.
Examples are Windows, UNIX, and microcomputer systems such as XENIX, MP/M, and
ESQview.
Multiprocessing
A multiprocessing operating system runs on a machine with two or more CPUs, so
several processes can execute truly in parallel rather than merely being
interleaved on a single processor.
Advantages of Time-Sharing OS
Each task gets an equal opportunity.
Fewer chances of duplication of software.
CPU idle time can be reduced.
Resource Sharing: Time-sharing systems allow multiple users to share
hardware resources such as the CPU, memory, and peripherals, reducing
the cost of hardware and increasing efficiency.
Improved Productivity: Time-sharing allows users to work concurrently,
thereby reducing the waiting time for their turn to use the computer. This
increased productivity translates to more work getting done in less time.
Improved User Experience: Time-sharing provides an interactive
environment that allows users to communicate with the computer in real
time, providing a better user experience than batch processing.
Disadvantages of Time-Sharing OS
Reliability problems.
One must take care of the security and integrity of user programs and data.
Data communication problems.
High Overhead: Time-sharing systems have a higher overhead than other
operating systems due to the need for scheduling, context switching, and
other overheads that come with supporting multiple users.
Complexity: Time-sharing systems are complex and require advanced
software to manage multiple users simultaneously. This complexity
increases the chance of bugs and errors.
Security Risks: With multiple users sharing resources, the risk of security
breaches increases. Time-sharing systems require careful management of
user access, authentication, and authorization to ensure the security of data
and software.
Examples of Time-Sharing OS
IBM VM/CMS
6. Multi-User Operating System
In a multi-user operating system, multiple users can access different resources
of a computer at the same time. The access is provided using a network that consists
of various personal computers attached to a mainframe computer system. A multi-user
operating system allows multiple users to access a single machine at a time. The
various personal computers can send information to and receive information from the
mainframe computer system. Thus, the mainframe computer acts as the server and the
other personal computers act as clients of that server.
Types of Multi-user Operating System
Distributed OS
Advantages of Distributed Operating System
Failure of one will not affect the other network communication, as all
systems are independent of each other.
Electronic mail increases the data exchange speed.
Since resources are being shared, computation is highly fast and durable.
Load on host computer reduces.
These systems are easily scalable as many systems can be easily added to
the network.
Delay in data processing reduces.
Disadvantages of Distributed Operating System
Failure of the main network will stop the entire communication.
The languages used to build distributed systems are not yet well defined.
These types of systems are not readily available as they are very
expensive. Not only that the underlying software is highly complex and
not understood well yet.
Examples of Distributed Operating Systems are LOCUS, etc.
Network Operating System
Advantages of Network Operating System
New technologies and hardware upgrades are easily integrated into the system.
Server access is possible remotely from different locations and types of systems.
Disadvantages of Network Operating System
Servers are costly, and the whole system depends on a central location: if the
server fails, the network is affected.
Regular maintenance and updates are required.
Advantages of RTOS
Maximum Consumption: Maximum utilization of devices and systems,
thus more output from all the resources.
Task Shifting: The time taken to shift between tasks in these systems is
very small. For example, older systems take about 10 microseconds to
shift from one task to another, while the latest systems take about 3
microseconds.
Focus on Application: Focus on running applications and less importance
on applications that are in the queue.
Real-time operating system in the embedded system: Since the size of
programs is small, RTOS can also be used in embedded systems like in
transport and others.
Error-Free: These types of systems are designed to minimize errors.
Memory Allocation: Memory allocation is best managed in these types of
systems.
Disadvantages of RTOS
Limited Tasks: Very few tasks run at the same time, and the system
concentrates on only a few applications in order to avoid errors.
Heavy use of system resources: These systems can consume a lot of system
resources, which can be expensive.
Complex Algorithms: The algorithms are very complex and difficult for
the designer to write.
Device drivers and interrupt signals: An RTOS needs specific device drivers
and interrupt signals so that it can respond to interrupts as quickly as
possible.
Thread Priority: It is not easy to set thread priorities, as these systems
are very reluctant to switch tasks.
Real-time operating systems are used in scientific experiments, medical
imaging systems, industrial control systems, weapon systems, robots, air traffic
control systems, etc.
System Calls
System calls are usually made when a process in user mode requires access to a
resource. Then it requests the kernel to provide the resource via a system call.
When a system call is executed, it is typically treated by the hardware as a software
interrupt. Control passes through the interrupt vector to a service routine in the
operating system, and the mode bit is set to kernel mode. The system-call service
routine is a part of the operating system. The kernel examines the interrupting
instruction to determine what system call has occurred; a parameter indicates what
type of service the user program is requesting. Additional information needed for the
request may be passed in registers, on the stack, or in memory (with pointers to the
memory locations passed in registers). The kernel verifies that the parameters are
correct and legal, executes the request, and returns control to the instruction following
the system call.
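From user space, these traps are normally hidden behind library wrappers. As an illustration, Python's os module exposes thin wrappers over common Unix system calls; each call below switches the CPU into kernel mode and back (this sketch assumes a Unix-like system):

```python
import os

# Each call below is a thin wrapper that traps from user mode into the kernel:
pid = os.getpid()                  # information maintenance: getpid()
r, w = os.pipe()                   # communication: pipe() creates a kernel object
os.write(w, b"hello via kernel")   # write() copies the bytes into a kernel buffer
data = os.read(r, 64)              # read() copies them back out of the kernel
os.close(r)                        # close() releases the kernel resources
os.close(w)

print(pid > 0, data == b"hello via kernel")
```

The bytes written to the pipe never touch the user program directly in between: they pass through kernel memory, which is exactly the mode switch described above.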
Process Control
These system calls deal with processes such as process creation, process termination etc.
File Management
These system calls are responsible for file manipulation such as creating a file, reading a file,
writing into a file etc.
Device Management
These system calls are responsible for device manipulation such as reading from device
buffers, writing into device buffers etc.
Information Maintenance
These system calls handle information and its transfer between the operating system and the
user program.
Communication
These system calls are useful for interprocess communication. They also deal with creating
and deleting a communication connection.
Some examples of these types of system calls in Windows and Linux are given
as follows −

Types of System Calls      Windows                                            Linux
Device Management          SetConsoleMode(), ReadConsole(), WriteConsole()    ioctl(), read(), write()
Information Maintenance    GetCurrentProcessID(), SetTimer(), Sleep()         getpid(), alarm(), sleep()
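On a Unix-like system, the names in the Linux column are C library wrappers around kernel traps, and the same wrappers can be reached from other languages. A small sketch using Python's ctypes (this assumes a Unix-like system where the C library can be loaded with CDLL(None)):

```python
import ctypes
import os

# Load the C library already linked into this process.
libc = ctypes.CDLL(None)

pid_via_libc = libc.getpid()   # calling the libc wrapper directly
pid_via_os = os.getpid()       # Python's own wrapper over the same kernel call

# Two different wrappers, one kernel service, one answer:
print(pid_via_libc == pid_via_os)
```

Both paths end in the same getpid() trap, which is why they must agree.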
PROCESS
A process is an active program, i.e., a program that is under execution. It is more than the
program code as it includes the program counter, process stack, registers, program code etc.
Compared to this, the program code is only the text section.
A program is not a process by itself as the program is a passive entity, such as file contents,
while the process is an active entity containing program counter, resources etc.
CPU and I/O Bound Processes: If a process is intensive in terms of CPU
operations, it is called a CPU-bound process. Similarly, if a process is intensive
in terms of I/O operations, it is called an I/O-bound process.
Process State-
As a process executes, it changes state.
The state of a process is defined in part by the current activity of that process.
A process may be in one of the following states:
• New. The process is being created.
• Running. Instructions are being executed.
• Waiting. The process is waiting for some event to occur (such as an I/O
completion or reception of a signal).
• Ready. The process is waiting to be assigned to a processor.
• Terminated. The process has finished execution.
These names are arbitrary, and they vary across operating systems. The states that
they represent are found on all systems, however. Certain operating systems also
more finely delineate process states. It is important to realize that only one
process can be running on any processor at any instant. Many processes may be
ready and waiting, however.
There can be various events that lead to a state transition for a process. The possible
state transitions are given below:
1. Null -> New: A new process is created for the execution of a process.
2. New -> Ready: The system moves the process from the new state to the
ready state, where it is ready for execution. The system may set a limit
on the number of admitted processes, since admitting too many may cause
a performance issue.
3. Ready -> Running: The OS now selects a process for a run and the
system chooses only one process in a ready state for execution.
4. Running -> Exit: The system terminates a process if the process indicates
that is now completed or if it has been aborted.
5. Running -> Ready: This transition occurs when the running process has
reached the maximum time allowed for uninterrupted execution, i.e., its
time slice has expired. An example of this can be a process running in
the background that performs some maintenance or other functions
periodically.
6. Running -> Blocked: A process is put in the blocked state when it
requests something for which it must wait. For example, a process may
request some resources that are not available at the time, or it may be
waiting for an I/O operation, or waiting for some other process to finish
before the process can continue.
7. Blocked -> Ready: A process moves from the blocked state to the ready
state when the event for which it has been waiting occurs.
8. Ready -> Exit: This transition can exist only in some cases because, in
some systems, a parent may terminate a child’s process at any time.
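These transition rules can be captured as a small state machine. The sketch below rejects any transition not listed above; the class and state names are illustrative:

```python
# The legal transitions described above, as (from_state, to_state) pairs.
ALLOWED = {
    ("New", "Ready"), ("Ready", "Running"),
    ("Running", "Exit"), ("Running", "Ready"),
    ("Running", "Blocked"), ("Blocked", "Ready"),
    ("Ready", "Exit"),
}

class Process:
    def __init__(self):
        self.state = "New"          # the Null -> New transition happens at creation
    def move(self, new_state):
        if (self.state, new_state) not in ALLOWED:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

# A legal lifetime: admitted, dispatched, blocked on I/O, resumed, finished.
p = Process()
for s in ["Ready", "Running", "Blocked", "Ready", "Running", "Exit"]:
    p.move(s)
print(p.state)
```

Trying an illegal move, such as New -> Running, raises an error, mirroring the rule that a new process must pass through the ready state before being dispatched.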
Context Switching
Context switching is the mechanism of storing and restoring the state, or
context, of a CPU in the Process Control Block so that a process's execution can
be resumed from the same point at a later time. A context switch makes it
possible for multiple processes to share a single CPU. Context switching is an
essential feature of a multitasking operating system.
The state of the currently running process is saved into the process control block
when the scheduler switches the CPU from executing one process to another. The
state used to set the PC, registers, etc. for the process that will run next is then
loaded from its own PCB. After that, the second can start processing.
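This save/restore step can be sketched with a toy PCB that holds only a program counter and a register set; the field names are drastic simplifications of a real PCB:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:                      # a toy Process Control Block
    pid: int
    pc: int = 0                 # saved program counter
    registers: dict = field(default_factory=dict)

@dataclass
class CPU:                      # the live CPU state
    pc: int = 0
    registers: dict = field(default_factory=dict)

def context_switch(cpu, old, new):
    # 1. save the state of the outgoing process into its PCB
    old.pc, old.registers = cpu.pc, dict(cpu.registers)
    # 2. load the incoming process's saved state from its PCB
    cpu.pc, cpu.registers = new.pc, dict(new.registers)

cpu = CPU(pc=104, registers={"r0": 7})
a, b = PCB(pid=1), PCB(pid=2, pc=200, registers={"r0": 42})
context_switch(cpu, a, b)       # process 1 is suspended, process 2 resumes
print(cpu.pc, a.pc)
```

After the switch, the CPU carries process 2's saved context, while process 1's context sits in its PCB, ready to be restored by a later switch.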
Dispatcher: Another component involved in the CPU-scheduling function is the
dispatcher. The dispatcher is the module that gives control of the CPU to the
process selected by the short-term scheduler. This function involves the following:
• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to restart that program
Process scheduling is the activity of the process manager that handles the removal of
the running process from the CPU and the selection of another process on the basis
of a particular strategy.
Process scheduling is an essential part of a Multiprogramming operating system.
Such operating systems allow more than one process to be loaded into the executable
memory at a time and the loaded process shares the CPU using time multiplexing.
Categories in Scheduling
Scheduling falls into one of two categories:
Non-preemptive: In this case, a process’s resource cannot be taken before
the process has finished running. When a running process finishes and
transitions to a waiting state, resources are switched.
Preemptive: In this case, the OS assigns resources to a process for a
predetermined period of time. The process switches from the running state to
the ready state or from the waiting state to the ready state during resource
allocation. This switching happens because the CPU may give other processes
priority and substitute the currently active process with a higher-priority
process.
There are three types of process schedulers.
Long Term or Job Scheduler
It brings the new process to the ‘Ready State’. It controls the Degree of Multi-
programming, i.e., the number of processes present in a ready state at any point in
time. It is important that the long-term scheduler make a careful selection of both
I/O-bound and CPU-bound processes. I/O-bound tasks are those that spend most of
their time in input and output operations, while CPU-bound processes are those
that spend most of their time on the CPU.
between the two. They operate at a high level and are typically used in batch-
processing systems.
Short-Term or CPU Scheduler
It is responsible for selecting one process from the ready state and scheduling
it into the running state. Note: the short-term scheduler only selects the
process to schedule; it does not load the process onto the CPU. This is where
all the scheduling algorithms are used. The CPU scheduler must ensure that no
process starves because of processes with long burst times. The dispatcher is
responsible for loading the process selected by the short-term scheduler onto
the CPU (ready to running state); context switching is done by the dispatcher
only. A dispatcher does the following:
1. Switching context.
2. Switching to user mode.
3. Jumping to the proper location in the newly loaded program.
Medium-Term Scheduler
It is responsible for suspending and resuming the process. It mainly does swapping
(moving processes from main memory to disk and vice versa). Swapping may be
necessary to improve the process mix or because a change in memory requirements
has overcommitted available memory, requiring memory to be freed up. It is helpful
in maintaining a perfect balance between the I/O bound and the CPU bound. It
reduces the degree of multiprogramming.
Long-Term Scheduler: It is a job scheduler. It controls the degree of
multiprogramming. It is barely present or nonexistent in a time-sharing system.
Short-Term Scheduler: It is a CPU scheduler. It gives less control over how
much multiprogramming is done. It is minimal in a time-sharing system.
Medium-Term Scheduler: It is a process-swapping scheduler. It reduces the
degree of multiprogramming. It is a component of time-sharing systems.
Scheduling Criteria
• Waiting time. The CPU-scheduling algorithm does not affect the amount of time during
which a process executes or does I/O. It affects only the amount of time that a process
spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in
the ready queue.
• Response time. In an interactive system, turnaround time may not be the best criterion.
Thus, another measure is the time from the submission of a request until the first response
is produced. This measure, called response time, is the time it takes to start responding,
not the time it takes to output the response. The turnaround time is generally limited by
the speed of the output device. It is desirable to maximize CPU utilization and throughput
and to minimize turnaround time, waiting time, and response time.
Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which process completes its execution.
Burst Time: Time required by a process for CPU execution.
Turn Around Time: Time Difference between completion time and
arrival time.
Turn Around Time = Completion Time – Arrival Time
Waiting Time(W.T): Time Difference between turn around time and
burst time.
Waiting Time = Turn Around Time – Burst Time
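The two formulas can be expressed directly in code; a trivial helper, shown only to make the definitions concrete:

```python
def turnaround_and_waiting(arrival, burst, completion):
    tat = completion - arrival    # Turn Around Time = Completion Time - Arrival Time
    wt = tat - burst              # Waiting Time = Turn Around Time - Burst Time
    return tat, wt

# A process arriving at t=2 with a 6-unit burst that completes at t=9:
print(turnaround_and_waiting(arrival=2, burst=6, completion=9))
```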
FCFS algorithm is non-preemptive in nature, that is, once CPU time has been
allocated to a process, other processes can get CPU time only after the current
process has finished. This property of FCFS scheduling leads to the situation called
Convoy Effect.
Example-1: Consider the following table of arrival time and burst time for five
processes P1, P2, P3, P4 and P5.
Processes Arrival Time Burst Time
P1 0 4
P2 1 3
P3 2 1
P4 3 2
P5 4 5
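The FCFS policy can be simulated against the table above to obtain each process's completion, turnaround, and waiting time. This is a sketch; the tuple layout is chosen for this example:

```python
def fcfs(procs):
    """procs: list of (name, arrival, burst); runs them in arrival order."""
    time, rows = 0, []
    for name, at, bt in sorted(procs, key=lambda p: p[1]):
        start = max(time, at)     # the CPU may sit idle until the job arrives
        time = start + bt
        rows.append((name, time, time - at, time - at - bt))  # CT, TAT, WT
    return rows

table = [("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1), ("P4", 3, 2), ("P5", 4, 5)]
for row in fcfs(table):
    print(row)
```

Note how P3, needing only 1 unit, still waits behind the longer jobs ahead of it; this queuing behind long jobs is the convoy effect mentioned above.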
Shortest Job First (SJF), also known as Shortest Job Next (SJN), is a scheduling
policy that selects the waiting process with the smallest execution time to
execute next. SJF can be preemptive or non-preemptive.
Characteristics of SJF Scheduling:
Shortest Job first has the advantage of having a minimum average waiting
time among all scheduling algorithms.
It is a Greedy Algorithm.
It may cause starvation if shorter processes keep coming. This problem
can be solved using the concept of ageing.
It is often practically infeasible, as the operating system may not know
the burst times in advance and therefore cannot sort processes by them.
While it is not possible to know execution time exactly, several methods
can be used to estimate the execution time for a job, such as a weighted
average of previous execution times. SJF can be used in specialized
environments where accurate estimates of running time are available.
Algorithm:
Sort all the processes according to arrival time.
Then select the process that has the minimum arrival time and minimum
burst time.
After a process completes, form a pool of the processes that arrived
during its execution, and from that pool select the process with the
minimum burst time.
Advantages of SJF:
SJF is better than the First come first serve (FCFS) algorithm as it reduces
the average waiting time.
SJF is generally used for long-term scheduling.
It is suitable for the jobs running in batches, where run times are already
known.
SJF is probably optimal in terms of average turnaround time.
Disadvantages of SJF:
SJF may cause very long turn-around times or starvation.
In SJF job completion time must be known earlier, but sometimes it is
hard to predict.
Sometimes, it is complicated to predict the length of the upcoming CPU
request.
It can lead to starvation of processes with long burst times.
Consider the following table of arrival time and burst time for five
processes P1, P2, P3, P4 and P5.
Process    Burst Time    Arrival Time
P1         6 ms          2 ms
P2         2 ms          5 ms
P3         8 ms          1 ms
P4         3 ms          0 ms
P5         4 ms          4 ms
The Shortest Job First CPU Scheduling Algorithm will work on the basis of
steps as mentioned below:
Gantt chart for the above execution:
| P4 | P1 | P2 | P5 | P3 |
0    3    9    11   15   23
Now, let’s calculate the average waiting time for above example:
P4 = 0 – 0 = 0
P1 = 3 – 2 = 1
P2 = 9 – 5 = 4
P5 = 11 – 4 = 7
P3 = 15 – 1 = 14
Average Waiting Time = (0 + 1 + 4 + 7 + 14)/5 = 26/5 = 5.2
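The figures above can be checked with a short non-preemptive SJF simulation (a sketch; ties between ready processes are broken by burst time only):

```python
def sjf_nonpreemptive(procs):
    """procs: list of (name, arrival, burst). Returns {name: waiting_time}."""
    time, done, waiting = 0, set(), {}
    while len(done) < len(procs):
        ready = [p for p in procs if p[1] <= time and p[0] not in done]
        if not ready:
            # CPU idles until the next arrival
            time = min(p[1] for p in procs if p[0] not in done)
            continue
        name, at, bt = min(ready, key=lambda p: p[2])  # shortest burst first
        waiting[name] = time - at   # it waited from arrival until dispatch
        time += bt                  # runs to completion (non-preemptive)
        done.add(name)
    return waiting

procs = [("P1", 2, 6), ("P2", 5, 2), ("P3", 1, 8), ("P4", 0, 3), ("P5", 4, 4)]
w = sjf_nonpreemptive(procs)
print(w, sum(w.values()) / len(w))
```

The computed waiting times and the 5.2 average match the hand calculation above.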
3. Priority scheduling
Priority scheduling is a non-preemptive algorithm and one of the most common
scheduling algorithms in batch systems. Processes are selected first by arrival
time (the process with the earlier arrival time goes first); if two processes
have the same arrival time, their priorities are compared (the higher-priority
process goes first). If two processes also have the same priority, the one with
the lower process number goes first. This is repeated until all processes have
been executed.
Implementation –
1. First, input the processes with their arrival time, burst time, and
priority.
2. The process with the lowest arrival time is scheduled first; if two or
more processes share the lowest arrival time, the one with the higher
priority is scheduled first.
3. Further processes are then scheduled according to their arrival time and
priority. (Here we assume that a lower priority number means a higher
priority.) If two processes have the same priority, sort them by process
number.
Note: The question will clearly mention which number indicates higher
priority and which indicates lower priority.
4. Once all the processes have arrived, we can schedule them based on
their priority.
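The steps above can be sketched as a non-preemptive priority scheduler. This assumes, as noted, that a lower priority number means higher priority; the tuple format is invented for this example:

```python
def priority_schedule(procs):
    """procs: list of (pid, arrival, burst, priority); lower number =
    higher priority. Non-preemptive: among ready processes, pick by
    priority, then by process number. Returns the execution order."""
    remaining = list(procs)
    time, order = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:
            time = min(p[1] for p in remaining)   # idle until next arrival
            continue
        nxt = min(ready, key=lambda p: (p[3], p[0]))  # priority, then pid
        order.append(nxt[0])
        time += nxt[2]                                # run to completion
        remaining.remove(nxt)
    return order

# (pid, arrival, burst, priority): P1 and P2 arrive together, P2 wins on priority
print(priority_schedule([(1, 0, 4, 2), (2, 0, 3, 1), (3, 5, 1, 3)]))
```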
Priorities can be defined either internally or externally. Internally defined priorities use
some measurable quantity or quantities to compute the priority of a process. For example,
time limits, memory requirements, the number of open files, and the ratio of average I/O
burst to average CPU burst have been used in computing priorities. External priorities are
set by criteria outside the operating system, such as the importance of the process, the
type and amount of funds being paid for computer use, the department sponsoring the
work, and other, often political, factors.
Priority scheduling can be either preemptive or nonpreemptive. When a process arrives at
the ready queue, its priority is compared with the priority of the currently running
process. A preemptive priority scheduling algorithm will preempt the CPU if the
priority of the newly arrived process is higher than the priority of the currently running
process.
A nonpreemptive priority scheduling algorithm will simply put the new process at the
head of the ready queue.
A major problem with priority scheduling algorithms is indefinite blocking, or starvation.
A process that is ready to run but waiting for the CPU can be considered blocked. A
priority scheduling algorithm can leave some lowpriority processes waiting indefinitely.
A solution to the problem of indefinite blockage of low-priority processes is aging. Aging
involves gradually increasing the priority of processes that wait in the system for a long
time. For example, if priorities range from 127 (low) to 0 (high), we could increase the
priority of a waiting process by 1 every 15 minutes. Eventually, even a process with an
initial priority of 127 would have the highest priority in the system and would be
executed.
4. Round Robin Scheduling
Consider the following table of arrival time and burst time for four processes
P1, P2, P3, and P4 (a time quantum of 2 ms is assumed, consistent with the
results below):

Process    Burst Time    Arrival Time
P1         5 ms          0 ms
P2         4 ms          1 ms
P3         2 ms          2 ms
P4         1 ms          4 ms
The Round Robin CPU Scheduling Algorithm will work on the basis of steps as
mentioned below:
The Gantt chart will be as follows:
| P1 | P2 | P3 | P1 | P4 | P2 | P1 |
0    2    4    6    8    9    11   12

Processes    AT    BT    CT    TAT          WT
P1           0     5     12    12-0 = 12    12-5 = 7
P2           1     4     11    11-1 = 10    10-4 = 6
P3           2     2     6     6-2 = 4      4-2 = 2
P4           4     1     9     9-4 = 5      5-1 = 4

Now,
Average turnaround time = (12 + 10 + 4 + 5)/4 = 31/4 = 7.75
Average waiting time = (7 + 6 + 2 + 4)/4 = 19/4 = 4.75
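The completion times above can be reproduced with a small Round Robin simulation. A time quantum of 2 ms is assumed here, and arrivals during a slice are queued ahead of the preempted process (a common textbook convention, not the only possible one):

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, arrival, burst). Returns {name: completion_time}."""
    procs = sorted(procs, key=lambda p: p[1])
    remaining = {name: bt for name, _, bt in procs}
    pending = deque(procs)                 # processes that have not arrived yet
    queue, time, completion = deque(), 0, {}
    while pending or queue:
        while pending and pending[0][1] <= time:   # admit arrivals
            queue.append(pending.popleft()[0])
        if not queue:
            time = pending[0][1]           # CPU idles until the next arrival
            continue
        name = queue.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        # processes arriving during (or at the end of) this slice enter first
        while pending and pending[0][1] <= time:
            queue.append(pending.popleft()[0])
        if remaining[name]:
            queue.append(name)             # preempted: back of the queue
        else:
            completion[name] = time
    return completion

procs = [("P1", 0, 5), ("P2", 1, 4), ("P3", 2, 2), ("P4", 4, 1)]
print(round_robin(procs, quantum=2))
```

The returned completion times (P3 at 6, P4 at 9, P2 at 11, P1 at 12) agree with the table above.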
Each queue has absolute priority over lower-priority queues. No process in the
batch queue, for example, could run unless the queues for system processes,
interactive processes, and interactive editing processes were all empty. If an
interactive editing process entered the ready queue while a batch process was
running, the batch process would be preempted. Another possibility is to time-
slice among the queues. Here, each queue gets a certain portion of the CPU
time, which it can then schedule among its various processes. For instance, in
the foreground–background queue example, the foreground queue can be given
80 percent of the CPU time for RR scheduling among its processes, while the
background queue receives 20 percent of the CPU to give to its processes on an
FCFS basis.
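The 80/20 split described above can be sketched as follows. The two-queue model, the 10 ms cycle, the 2 ms foreground quantum, and the process bursts are all illustrative assumptions chosen to match the foreground–background example in the text.

```python
from collections import deque

# Time-slicing between two queues: out of every 10 ms cycle, the foreground
# queue gets 8 ms (scheduled RR with a 2 ms quantum) and the background
# queue gets the remaining 2 ms (scheduled FCFS).

def timeslice(foreground, background, cycle=10, fg_share=8, quantum=2):
    """foreground/background: lists of (name, burst). Returns [(name, run), ...]."""
    fg = deque(foreground)
    bg = deque(background)
    timeline = []
    while fg or bg:
        # Foreground portion of the cycle: round robin among its processes.
        budget = fg_share
        while fg and budget > 0:
            name, rem = fg.popleft()
            run = min(quantum, rem, budget)
            timeline.append((name, run))
            budget -= run
            if rem - run > 0:
                fg.append((name, rem - run))
        # Background portion: FCFS with whatever cycle time is left.
        budget = cycle - fg_share
        while bg and budget > 0:
            name, rem = bg[0]
            run = min(rem, budget)
            timeline.append((name, run))
            budget -= run
            if rem - run > 0:
                bg[0] = (name, rem - run)
            else:
                bg.popleft()

    return timeline

print(timeslice([("F1", 3), ("F2", 3)], [("B1", 4)]))
```

Note that background process B1 makes progress in every cycle even while foreground work remains, unlike the strict-priority scheme, under which it would starve.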
THREAD
A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set,
and a stack. It shares with other threads belonging to the same process its code section, data section,
and other operating-system resources, such as open files and signals. A traditional (or heavyweight)
process has a single thread of control. If a process has multiple threads of control, it can perform more
than one task at a time.
Types of Threads
Threads are of two types. These are described below.
User Level Thread
Kernel Level Thread
A User Level Thread is a type of thread that is not created using system calls; it is
created and managed entirely by a user-space thread library, so the kernel plays no
part in its management. The kernel is unaware of user-level threads and schedules the
containing process as a single unit. Let's look at the advantages and disadvantages of
user-level threads.
Advantages of User-Level Threads
Implementation of the User-Level Thread is easier than Kernel Level
Thread.
Context Switch Time is less in User Level Thread.
User-Level Thread is more efficient than Kernel-Level Thread.
Because of the presence of only Program Counter, Register Set, and Stack
Space, it has a simple representation.
Disadvantages of User-Level Threads
There is a lack of coordination between the threads and the kernel.
In case of a page fault, the whole process can get blocked.
A Kernel Level Thread is a type of thread that is recognized and managed directly by
the operating system. The kernel maintains a thread table to keep track of all threads
in the system. Kernel-level threads have somewhat longer context-switch times, because
the kernel must intervene on every switch.
Advantages of Kernel-Level Threads
The kernel has up-to-date information on all threads.
Applications that block frequently are better handled by kernel-level threads.
Whenever a thread requires more processing time, the kernel can allocate more
time to it.
Disadvantages of Kernel-Level threads
Kernel-Level Thread is slower than User-Level Thread.
Implementation of this type of thread is a little more complex than a user-
level thread.
Components of Threads
These are the basic components of a thread:
Stack Space
Register Set
Program Counter
Difference between User-Level Thread and Kernel-Level Thread

Implementation: Implementation of user threads is easy; implementation of
kernel-level threads is complicated.
Creation: User-level threads can be created faster than kernel-level threads,
since they do not need system calls to create them; kernel-level threads take
more time to create and manage.
Context switching: Switching between user-level threads does not require
kernel-mode privileges; transferring control from one kernel-level thread to
another within a process necessitates a mode switch to kernel mode.
Blocking: If a single user-level thread performs a blocking operation, the
entire process is halted; when a kernel-level thread is blocked, the kernel
can schedule another thread of the same process.
Multithreading: Multithreaded applications built on user-level threads cannot
take advantage of multiprocessing; kernels themselves can be multithreaded,
so multithreading can be applied to kernel routines.
Control blocks: A process has its own Process Control Block, stack, and
address space; a thread has its parent process's PCB, its own Thread Control
Block and stack, and shares the address space with the other threads of the
process.
The main drawback of single threading systems is that only one task can be performed at a
time, so to overcome the drawback of this single threading, there is multithreading that allows
multiple tasks to be performed.
For example, in the many-to-one model, all user-level threads of a process are
mapped to a single kernel-level thread.
Benefits of Multithreading:
Multithreading can improve the performance and efficiency of a program
by utilizing the available CPU resources more effectively. By executing
multiple threads concurrently, a program can take advantage of parallelism
and reduce overall execution time.
Multithreading can enhance responsiveness in applications that involve
user interaction. By separating time-consuming tasks from the main
thread, the user interface can remain responsive and not freeze or become
unresponsive.
Multithreading can enable better resource utilization. For example, in a
server application, multiple threads can handle incoming client requests
simultaneously, allowing the server to serve more clients concurrently.
Multithreading can facilitate better code organization and modularity by
dividing complex tasks into smaller, manageable units of execution. Each
thread can handle a specific part of the task, making the code easier to
understand and maintain.
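The benefits above can be illustrated with Python's standard `threading` module: slow tasks run in worker threads while the main thread stays free to do other work. The task, the sleep duration, and the shared list are illustrative stand-ins, not a specific application.

```python
import threading
import time

results = []
lock = threading.Lock()

def slow_task(n):
    time.sleep(0.1)              # stand-in for a time-consuming operation
    with lock:                   # protect the shared list from concurrent appends
        results.append(n * n)

# Start four workers; they all sleep concurrently instead of one after another.
threads = [threading.Thread(target=slow_task, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
# The main thread is not blocked here and could keep responding to the user.
for t in threads:
    t.join()                     # wait for all workers before reading results

print(sorted(results))           # [0, 1, 4, 9]
```

Because the four sleeps overlap, the whole run takes roughly 0.1 s instead of the 0.4 s a single-threaded version would need, which is the responsiveness benefit described above.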
Drawbacks of Multithreading
Multithreading is complex and often difficult to handle. It has a few
drawbacks. These are:
If locking mechanisms are not used properly, problems such as data
inconsistency and deadlock can arise when multiple threads access shared
data.
If many threads try to access the same data, thread starvation may arise.
Resource contention is another problem that can trouble the user.
Display issues may occur if threads lack coordination when displaying
data.
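The locking pitfalls mentioned above can be illustrated in Python. Two threads that acquire two locks in opposite orders can deadlock; acquiring locks in one fixed global order in every thread avoids it. The lock names and the shared counter here are illustrative; protecting one counter with two locks is deliberately contrived to show the ordering discipline.

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        with lock_a:         # always acquire lock_a first...
            with lock_b:     # ...then lock_b, in every thread alike;
                counter += 1 # opposite orders in two threads could deadlock

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000 — consistent, because every update held both locks
```

Without the locks, the four threads' read-modify-write updates could interleave and lose increments, which is exactly the data-inconsistency problem described above.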