
OPERATING SYSTEMS

SYLLABUS
Unit I: What is Operating System? History and Evolution of OS, Basic OS functions, Resource
Abstraction, Types of Operating Systems– Multiprogramming Systems, Batch Systems, Time
Sharing Systems; Operating Systems for Personal Computers, Workstations and Hand-held
Devices, Process Control & Real time Systems.
Unit II: Processor and User Modes, Kernels, System Calls and System Programs, System View
of the Process and Resources, Process Abstraction, Process Hierarchy, Threads, Threading
Issues, Thread Libraries; Process Scheduling, Non-Preemptive and Preemptive Scheduling
Algorithms.
Unit III: Process Management: Deadlock, Deadlock Characterization, Necessary and Sufficient
Conditions for Deadlock, Deadlock Handling Approaches: Deadlock Prevention, Deadlock
Avoidance and Deadlock Detection and Recovery. Concurrent and Dependent Processes,
Critical Section, Semaphores, Methods for Inter- process Communication; Process
Synchronization, Classical Process Synchronization Problems: Producer-Consumer, Reader-
Writer.
Unit IV: Memory Management: Physical and Virtual Address Space; Memory Allocation
Strategies– Fixed and -Variable Partitions, Paging, Segmentation, Virtual Memory.
Unit V: File and I/O Management, OS security : Directory Structure, File Operations, File
Allocation Methods, Device Management, Pipes, Buffer, Shared Memory, Security Policy
Mechanism, Protection, Authentication and Internal Access Authorization Introduction to
Android Operating System, Android Development Framework, Android Application
Architecture, Android Process Management and File System, Small Application Development
using Android Development Framework.
MODEL QUESTION PAPER
OPERATING SYSTEMS
Time: 3 Hrs                                  Max. Marks: 75
Section - A

Answer any 5 questions 5X5 = 25M


1. Write about Resource Abstraction.
2. Write about the process and the process state.
3. Explain threading issues.
4. Explain about process Synchronization.
5. Discuss the necessary and sufficient conditions for deadlock.
6. Explain about Virtual memory.
7. Explain about shared memory.
8. Write about file types.
Section - B
Answer the following questions 5X10 = 50M

9. a) Explain various types of Operating Systems.


(OR)
b) What is Operating System? Explain functions of Operating System.
10. a) Explain in detail about Process Scheduling.

(OR)
b) Explain system view of the process and resources.
11. a) Explain about deadlock Detection and recovery.

(OR)
b) Discuss classical process synchronization problems.
12. a) Explain the following i) Segmentation ii) Fixed and variable partitions.
(OR)
b) Explain in detail about Demand-paging.
13. a) Explain Authentication and Internal Access Authorization.
(OR)
b) Explain Android Development Framework.
Unit-I
Operating systems:
An operating system works as an interface between the user and the computer
hardware. The OS is the software which performs basic tasks like input, output,
disk management, and controlling peripherals.
(OR)
An Operating System (OS) is an interface between a computer user and computer
hardware. An operating system is a software which performs all the basic tasks like
file management, memory management, process management, handling input and
output, and controlling peripheral devices such as disk drives and printers
Ex: Windows, Linux, VMS, OS/400, AIX, z/OS, etc.

Operating System Evolution


The evolution of the operating system is divided into four generations, which are explained as follows −
First Generation (1945-1955)
This generation saw the beginning of electronic computing systems, which were
substitutes for mechanical computing systems, because mechanical computing
systems had drawbacks: the speed at which humans calculate is limited, and
humans easily make mistakes. In this generation there was no operating system,
so instructions had to be given to the computer system directly.
Example − Type of operating system and devices used is plug boards.
Second Generation (1955-1965)
The batch processing system was introduced in the second generation, in which
jobs or tasks that can be done in a series are collected and then executed
sequentially. In this generation, the computer system was not equipped with a
full operating system, but several operating system functions existed, like FMS
(the Fortran Monitor System) and IBSYS.
Example − Type of operating system and devices used is Batch systems.
Third Generation (1965-1980)
In the third generation, the operating system was developed to serve multiple
users at once. Interactive users could communicate with the computer through an
online terminal, so the operating system became multi-user and
multiprogrammed.
Example − Type of operating system and devices used is Multiprogramming.
Fourth Generation (1980-Now)
In this generation the operating system is used for computer networks, where
users are aware of the existence of computers that are connected to one another.
In this generation users are also comforted with a Graphical User Interface
(GUI), an extremely comfortable graphical computer interface. With the onset of
new wearable devices such as smart watches, smart glasses, and VR gear, the
demand for unconventional operating systems is also rising.
Example − Type of operating system and devices used is personal computers.

Functions Of Operating System:


Following are some of the important functions of an operating system.
 Memory Management
 Processor Management
 Device Management
 File Management
 Security
 Control over system performance
 Job accounting
 Error detecting aids
 Coordination between other software and users
Memory Management
Memory management refers to management of Primary Memory or Main Memory.
Main memory is a large array of words or bytes where each word or byte has its own
address.
Main memory provides fast storage that can be accessed directly by the CPU. For a
program to be executed, it must be in the main memory. An Operating System does the
following activities for memory management −
 Keeps track of primary memory, i.e., which parts of it are in use by whom and
which parts are not in use.
 In multiprogramming, the OS decides which process will get memory when and how
much.
 Allocates the memory when a process requests it to do so.
 De-allocates the memory when a process no longer needs it or has been terminated.
Processor Management
In multiprogramming environment, the OS decides which process gets the processor
when and for how much time. This function is called process scheduling. An Operating
System does the following activities for processor management −
 Keeps track of the processor and the status of processes. The program responsible
for this task is known as the traffic controller.
 Allocates the processor (CPU) to a process.
 De-allocates processor when a process is no longer required.

Device Management
An Operating System manages device communication via their respective drivers. It
does the following activities for device management −
 Keeps track of all devices. The program responsible for this task is known as the
I/O controller.
 Decides which process gets the device when and for how much time.
 Allocates devices in an efficient way.
 De-allocates devices.
File Management
A file system is normally organized into directories for easy navigation and usage.
These directories may contain files and other directories.
An Operating System does the following activities for file management −
 Keeps track of information, location, uses, status etc. The collective facilities are
often known as file system.
 Decides who gets the resources.
 Allocates the resources.
 De-allocates the resources.
Security
By means of passwords and other similar techniques, the OS prevents unauthorized
access to programs and data.
Control over system performance
Recording delays between request for a service and response from the system.
Job accounting
Keeping track of time and resources used by various jobs and users.
Error detecting aids
Production of dumps, traces, error messages, and other debugging and error
detecting aids.
Coordination between other software and users
Coordination and assignment of compilers, interpreters, assemblers and other
software to the various users of the computer systems.
Resource Abstraction
Modern computers consist of processors, memories, timers, disks, mice, network
interfaces, printers, and a wide variety of other devices.
 The Operating System as a Resource Manager: In the bottom-up view, the
operating system provides for an orderly and controlled allocation of the
processors, memories, and I/O devices among the various programs. The operating
system allows multiple programs to be in memory and run at the same time.
 Resource management includes multiplexing (sharing) resources in two different
ways: in time and in space.
 In time multiplexing, different programs take turns using the resource. E.g.,
sharing the CPU, or sharing the printer: when multiple print jobs are queued up
for printing on a single printer, a decision has to be made about which one gets
to use the resource, then another, and so on.
 In space multiplexing, instead of the customers taking turns, each one gets
part of the resource. E.g., main memory is divided up among several running
programs, so each one can be resident at the same time.
An operating system abstraction layer (OSAL)
 It provides an application programming interface (API) to an abstract operating
system making it easier and quicker to develop code for
multiple software or hardware platforms.
 OS abstraction layers present an abstraction of the common system functionality
offered by any operating system, by providing meaningful and easy-to-use wrapper
functions that in turn encapsulate the system functions offered by the OS to which
the code needs porting.
 A well designed OSAL provides implementations of an API for several real-time
operating systems (such as vxWorks, eCos, RTLinux, RTEMS). Implementations may
also be provided for non real-time operating systems, allowing the abstracted
software to be developed and tested in a developer friendly desktop environment.
 In addition to the OS APIs, the OS Abstraction Layer project may also provide
a hardware abstraction layer, designed to provide a portable interface to hardware
devices such as memory, I/O ports, and non-volatile memory.
 To facilitate the use of these APIs, OSALs generally include a directory structure
and build automation (e.g., set of make files) to facilitate building a project for a
particular OS and hardware platform.
 Implementing projects using OSALs allows for development of portable embedded
system software that is independent of a particular real-time operating system.
 It also allows for embedded system software to be developed and tested on desktop
workstations, providing a shorter development and debug time.
Types of Operating Systems
An operating system is a well-organized collection of programs that manages
the computer hardware. It is a type of system software that is responsible for the
smooth functioning of the computer system.
Batch Operating System
 In the 1970s, batch processing was very popular. In this technique, similar types of
jobs were batched together and executed in sequence.
 People were used to having a single computer which was called a mainframe.
 In Batch operating system, access is given to more than one person; they submit
their respective jobs to the system for the execution.
 The system puts all of the jobs in a queue on a first-come, first-served basis and
then executes the jobs one by one. Users collect their respective output when all
the jobs have been executed.

The purpose of this operating system was mainly to transfer control from one job to
another as soon as the job was completed. It contained a small set of programs
called the resident monitor that always resided in one part of the main memory. The
remaining part is used for servicing jobs.
Advantages of Batch OS
The use of a resident monitor improves computer efficiency as it eliminates CPU idle
time between two jobs.
Disadvantages of Batch OS
1. Starvation
Batch processing suffers from starvation.
For Example:

There are five jobs J1, J2, J3, J4, and J5, present in the batch. If the
execution time of J1 is very high, then the other four jobs will never be executed, or
they will have to wait for a very long time. Hence the other processes get starved.
2. Not Interactive
Batch Processing is not suitable for jobs that are dependent on the user's
input. If a job requires the input of two numbers from the console, then it will never
get it in the batch processing scenario since the user is not present at the time of
execution.
Multiprogramming Operating System
 Multiprogramming is an extension to batch processing where the CPU is always kept
busy.
 Each process needs two types of system time: CPU time and IO time.
 In a multiprogramming environment, when a process does its I/O, The CPU can start
the execution of other processes. Therefore, multiprogramming improves the
efficiency of the system.

Advantages of Multiprogramming OS
 Throughput is increased as the CPU always has some program to
execute.
 Response time can also be reduced.
Disadvantages of Multiprogramming OS
Multiprogramming systems provide an environment in which various systems
resources are used efficiently, but they do not provide any user interaction with the
computer system.
Multiprocessing Operating System
In Multiprocessing, Parallel computing is achieved. There are more than one
processors present in the system which can execute more than one process at the
same time. This will increase the throughput of the system.


Advantages of Multiprocessing operating system:


o Increased reliability: Due to the multiprocessing system, processing tasks can be
distributed among several processors. This increases reliability as if one processor
fails, the task can be given to another processor for completion.
o Increased throughput: With several processors, more work can be done in less
time.
Disadvantages of Multiprocessing operating System
o Multiprocessing operating system is more complex and sophisticated as it takes care
of multiple CPUs simultaneously.
Multitasking Operating System

The multitasking operating system is a logical extension of a


multiprogramming system that enables multiple programs to run simultaneously. It allows
a user to perform more than one computer task at the same time.

Advantages of Multitasking operating system


o This operating system is more suited to supporting multiple users simultaneously.
o The multitasking operating systems have well-defined memory management.
Disadvantages of Multitasking operating system
o Multiple tasks keep the processor busier at the same time in a multitasking
environment, so the CPU generates more heat.
Network Operating System
An Operating system, which includes software and associated protocols to
communicate with other computers via a network conveniently and cost-effectively,
is called Network Operating System.

Advantages of Network Operating System


o In this type of operating system, network traffic reduces due to the division between
clients and the server.
o This type of system is less expensive to set up and maintain.
Disadvantages of Network Operating System
o In this type of operating system, the failure of any node in a system affects the
whole system.
o Security and performance are important issues. So trained network administrators
are required for network administration.
Real Time Operating System
In real-time systems, each job carries a certain deadline within which the job is
supposed to be completed; otherwise a huge loss occurs, or even if the result is
produced, it may be completely useless.
Real-time systems are applied, for example, in military applications: if a
missile is to be launched, it must be launched with a certain precision.

Advantages of Real-time operating system:


o Easy to layout, develop and execute real-time applications under the real-time
operating system.
o A real-time operating system achieves maximum utilization of devices and systems.
Disadvantages of Real-time operating system:
o Real-time operating systems are very costly to develop.
o Real-time operating systems are very complex and can consume critical CPU cycles.
Time-Sharing Operating System
In the Time Sharing operating system, computer resources are allocated in a
time-dependent fashion to several programs simultaneously. Thus it helps to provide
a large number of users direct access to the main computer. It is a logical extension
of multiprogramming. In time-sharing, the CPU is switched among multiple programs
given by different users on a scheduled basis.

A time-sharing operating system allows many users to be served


simultaneously, so sophisticated CPU scheduling schemes and Input/output
management are required.
Time-sharing operating systems are very difficult and expensive to build.
Advantages of Time Sharing Operating System
o The time-sharing operating system provides effective utilization and sharing of
resources.
o This system reduces CPU idle and response time.
Disadvantages of Time Sharing Operating System
o It requires very high data transmission rates in comparison to other methods.
o Security and integrity of user programs loaded in memory and data need to be
maintained as many users access the system at the same time.
Distributed Operating System
The Distributed Operating system is not installed on a single machine, it is
divided into parts, and these parts are loaded on different machines. A part of the
distributed Operating system is installed on each machine to make their
communication possible. Distributed Operating systems are much more complex,
large, and sophisticated than Network operating systems because they also have to
take care of varying networking protocols.

Advantages of Distributed Operating System


o The distributed operating system provides sharing of resources.
o This type of system is fault-tolerant.
Disadvantages of Distributed Operating System
o Protocol overhead can dominate computation cost.

Operating Systems for Personal Computers, Workstations and Hand-held Devices:

Your computer's operating system (OS) manages all of the software and hardware on the
computer. Most of the time, there are several different computer programs running at the same
time, and they all need to access your computer's central processing unit (CPU), memory,
and storage. The operating system coordinates all of this to make sure each program gets what it
needs.
Types of operating systems
Operating systems usually come pre-loaded on any computer you buy. Most people use the
operating system that comes with their computer, but it's possible to upgrade or even change
operating systems. The three most common operating systems for personal computers
are Microsoft Windows, macOS, and Linux.
Modern operating systems use a graphical user interface, or GUI (pronounced gooey). A
GUI lets you use your mouse to click icons, buttons, and menus, and everything is clearly
displayed on the screen using a combination of graphics and text.
Each operating system's GUI has a different look and feel, so if you switch to a different
operating system it may seem unfamiliar at first. However, modern operating systems are
designed to be easy to use, and most of the basic principles are the same.
Microsoft Windows
Microsoft created the Windows operating system in the mid-1980s. There have been many
different versions of Windows, but the most recent ones are Windows 10 (released in
2015), Windows 8 (2012), Windows 7 (2009), and Windows Vista (2007). Windows
comes pre-loaded on most new PCs, which helps to make it the most popular operating
system in the world.
macOS
macOS (previously called OS X) is a line of operating systems created by Apple. It comes
preloaded on all Macintosh computers, or Macs. Some of the specific versions
include Mojave (released in 2018), High Sierra (2017), and Sierra (2016).
According to StatCounter Global Stats, macOS users account for less than 10% of global
operating systems—much lower than the percentage of Windows users (more than 80%). One
reason for this is that Apple computers tend to be more expensive. However, many people do
prefer the look and feel of macOS over Windows.
Linux
Linux (pronounced LINN-ux) is a family of open-source operating systems, which means
they can be modified and distributed by anyone around the world. This is different
from proprietary software like Windows, which can only be modified by the company that owns
it. The advantages of Linux are that it is free, and there are many different distributions—or
versions—you can choose from.
Operating systems for mobile devices or Handheld Devices
The operating systems we've been talking about so far were designed to run
on desktop and laptop computers. Mobile devices such as phones, tablet computers,
and MP3 players are different from desktop and laptop computers, so they run operating
systems that are designed specifically for mobile devices. Examples of mobile operating systems
include Apple iOS and Google Android
Workstation:
A Wintel-platform desktop or laptop general-purpose computer deployed to knowledge
(i.e., business or office) workers. Computers used by developers and other high-end or
specialized users may differ from this definition and therefore are not required to
adhere to this standard.
For practical purposes, PC and workstation are used more or less interchangeably: a
desktop or laptop general-purpose computer deployed to a single knowledge (i.e.,
business or office) worker, typically running a Windows operating system on an Intel
x86 or equivalent architecture. Traditional RISC/Unix-based workstations are
considered a very small subset of the total population of workstations and are
referred to as non-Wintel or specialized workstations.
Unit-2

There are two modes of operation in the operating system to make sure it
works correctly. These are user mode and kernel mode.
They are explained as follows −
User Mode
The system is in user mode when the operating system is running a user
application such as a text editor. The transition from user mode to kernel
mode occurs when the application requests the help of the operating system, or
when an interrupt or a system call occurs.
The mode bit is set to 1 in the user mode. It is changed from 1 to 0 when
switching from user mode to kernel mode.
Kernel Mode
The system starts in kernel mode when it boots and after the operating
system is loaded, it executes applications in user mode. There are some privileged
instructions that can only be executed in kernel mode.
Examples are interrupt instructions and input/output management. If a
privileged instruction is executed in user mode, it is illegal and a trap is
generated.
The mode bit is set to 0 in the kernel mode. It is changed from 0 to 1 when
switching from kernel mode to user mode.

The user process executes in user mode until it makes a system call. Then a
system trap is generated and the mode bit is set to 0. The system call is
executed in kernel mode. After the execution is completed, the mode bit is set
back to 1 and control returns to user mode, where process execution continues.
Necessity of Dual Mode (User Mode and Kernel Mode) in Operating System
The lack of a dual mode i.e. user mode and kernel mode in an operating
system can cause serious problems. Some of these are −
 A running user program can accidentally wipe out the operating system by
overwriting it with user data.
 Multiple processes can write in the same system at the same time, with
disastrous results.
These problems could have occurred in the MS-DOS operating system which had
no mode bit and so no dual mode.

System Calls in Operating System


A system call is a way for a user program to interface with the operating system. The
program requests several services, and the OS responds by invoking a series of system
calls to satisfy the request. A system call can be written in assembly language or a high-
level language like C or Pascal. System calls are predefined functions that the operating
system may directly invoke if a high-level language is used.

What is a System Call?


A system call is a method for a computer program to request a service from
the kernel of the operating system on which it is running. A system call is a method
of interacting with the operating system via programs. A system call is a request
from computer software to an operating system's kernel.
The Application Program Interface (API) connects the operating system's
functions to user programs. It acts as a link between the operating system and a
process, allowing user-level programs to request operating system services. The
kernel system can only be accessed using system calls. System calls are required for
any programs that use resources.

How are system calls made?


When a piece of software needs to access the operating system's kernel, it
makes a system call. The system call uses an API to expose the operating system's
services to user programs. It is the only method to access the kernel system. All
programs or processes that require resources for execution must use system calls, as
they serve as an interface between the operating system and user programs.

Below are some examples of how a system call varies from a user function.
1. A system call function may create and use kernel processes to execute the
asynchronous processing.
2. A system call has greater authority than a standard subroutine. A system call with
kernel-mode privilege executes in the kernel protection domain.
3. System calls are not permitted to use shared libraries or any symbols that are not
present in the kernel protection domain.
4. The code and data for system calls are stored in global kernel memory.

Why do you need system calls in Operating System?

There are various situations in which system calls are required in the operating
system. Some of these situations are as follows:

1. A system call is required when a file system wants to create or delete a file.

2. Network connections require system calls for sending and receiving data packets.
3. If you want to read or write a file, you need system calls.
4. If you want to access hardware devices, such as a printer or scanner, you need a
system call.
5. System calls are used to create and manage new processes.

How System Calls Work

Applications run in an area of memory known as user space. A system


call connects to the operating system's kernel, which executes in kernel space. When
an application makes a system call, it must first obtain permission from the kernel.
It achieves this using an interrupt request, which pauses the current process and
transfers control to the kernel.
If the request is permitted, the kernel performs the requested action, such as
creating or deleting a file. When the operation is finished, the kernel moves any
resulting data from kernel space to user space in memory and returns the results
to the application, which then resumes execution.
A simple system call may take a few nanoseconds to provide the result, such as
retrieving the system date and time. A more complicated system call, such as
connecting to a network device, may take a few seconds. Most operating systems
launch a distinct kernel thread for each system call to avoid bottlenecks. Modern
operating systems are multi-threaded, which means they can handle various system
calls at the same time.

Types of System Calls

There are commonly five types of system calls. These are as follows:

 Process Control
 File Management
 Device Management
 Information Maintenance
 Communication

Process Control
Process control is the class of system calls used to direct processes. Some
process control examples include end, abort, load, execute, create process,
terminate process, wait, etc.

File Management
File management is a class of system calls used to handle files. Some file
management examples include create file, delete file, open, close, read, write,
etc.
Device Management
Device management is a class of system calls used to deal with devices. Some
examples of device management include request device, release device, read,
write, get device attributes, etc.
Information Maintenance
Information maintenance is a system call that is used to maintain
information. There are some examples of information maintenance, including getting
system data, set time or date, get time or date, set system data, etc.
Communication
Communication is a system call that is used for communication. There are
some examples of communication, including create, delete communication
connections, send, receive messages, etc.
Examples of Windows and Unix system calls
There are various examples of Windows and Unix system calls. The standard
examples, by category, are listed below:

Process control:          Windows: CreateProcess(), ExitProcess(), WaitForSingleObject()
                          Unix:    fork(), exit(), wait()
File management:          Windows: CreateFile(), ReadFile(), WriteFile(), CloseHandle()
                          Unix:    open(), read(), write(), close()
Device management:        Windows: SetConsoleMode(), ReadConsole(), WriteConsole()
                          Unix:    ioctl(), read(), write()
Information maintenance:  Windows: GetCurrentProcessID(), SetTimer(), Sleep()
                          Unix:    getpid(), alarm(), sleep()
Communication:            Windows: CreatePipe(), CreateFileMapping(), MapViewOfFile()
                          Unix:    pipe(), shm_open(), mmap()

Here, you will learn about some methods briefly:


Open()
The open() system call allows you to access a file on a file system. It
allocates resources to the file and provides a handle that the process may refer to.
A file can be opened by many processes at once or by a single process only; it
all depends on the file system and its structure.
Read()
It is used to obtain data from a file on the file system. It accepts three arguments in
general:
o A file descriptor.
o A buffer to store read data.
o The number of bytes to read from the file.
The file must first be opened using open(), which returns the file descriptor
used to identify it when reading.
Wait()
In some systems, a process may have to wait for another process to complete
its execution before proceeding. When a parent process creates a child process, the
parent process execution is suspended until the child process is finished.
The wait() system call is used to suspend the parent process. Once the child
process has completed its execution, control is returned to the parent process.
Write()
It is used to write data from a user buffer to a device like a file. This system
call is one way for a program to generate data. It takes three arguments in general:
o A file descriptor.
o A pointer to the buffer in which data is saved.
o The number of bytes to be written from the buffer.
Fork()
Processes generate clones of themselves using the fork() system call. It is
one of the most common ways to create processes in operating systems. fork()
returns in both the parent and the newly created child process. The parent may
then use wait() to suspend itself until the child completes, at which point
control returns to the parent process.
Close()
It is used to end file system access. When this system call is invoked, it
signifies that the program no longer requires the file, and the buffers are flushed, the
file information is altered, and the file resources are de-allocated as a result.
Exec()
When an executable file replaces an earlier executable file in an already
executing process, this system call is invoked. As a new process is not built,
the old process identifier stays, but the new program replaces its code, data,
stack, heap, etc.
Exit()
The exit() is a system call that is used to end program execution. This call
indicates that the thread execution is complete, which is especially useful in multi-
threaded environments. The operating system reclaims resources spent by the
process following the use of the exit() system function.
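The fork()/exec()/wait()/exit() calls above cooperate in one common pattern, sketched here. The use of /bin/echo is an illustrative assumption; any executable path would serve.

```c
#include <sys/wait.h>
#include <unistd.h>

/* Forks a child that replaces itself with /bin/echo via execl(); the
 * parent suspends in waitpid() until the child exits. Returns the
 * child's exit status, or -1 on error. */
int spawn_and_wait(void) {
    pid_t pid = fork();          /* clone the calling process  */
    if (pid < 0)
        return -1;               /* fork failed                */
    if (pid == 0) {              /* child branch               */
        execl("/bin/echo", "echo", "hello from child", (char *)NULL);
        _exit(127);              /* reached only if exec fails */
    }
    int status = 0;
    if (waitpid(pid, &status, 0) < 0)   /* parent suspends here */
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The parent resumes only after the child's exit(), at which point the OS reclaims the child's resources.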
What is the purpose of System Programs?


System programs provide an environment where programs can be developed
and executed. In the simplest sense, system programs also provide a bridge
between the user interface and system calls. In reality, they are much more
complex. For example, a compiler is a complex system program.
System Programs Purpose
The system program serves as part of the operating system and traditionally
sits between the user interface and the system calls. The user's view of the
system is actually defined by system programs rather than system calls,
because system programs are what users interact with; they are closer to the
user interface.
In the operating system hierarchy, system programs as well as application
programs form a bridge between the user interface and the system calls. So,
from the user's view, the operating system observed is actually the system
programs and not the system calls.
Types of System Programs


System programs can be divided into seven parts. These are given as follows:
Status Information
The status information system programs provide required data on the current
or past status of the system. This may include the system date, system time,
available memory in system, disk space, logged in users etc.
Communications
These system programs are needed for system communications such as web
browsers. Web browsers allow systems to communicate and access information from
the network as required.
File Manipulation
These system programs are used to manipulate system files. This can be
done using various commands like create, delete, copy, rename, print etc. These
commands can create files, delete files, copy the contents of one file into another,
rename files, print them etc.
Program Loading and Execution
The system programs that deal with program loading and execution make
sure that programs can be loaded into memory and executed correctly. Loaders and
Linkers are a prime example of this type of system programs.
File Modification
System programs that are used for file modification basically change the data
in the file or modify it in some other way. Text editors are a big example of file
modification system programs.
Application Programs
Application programs can perform a wide range of services as per the needs
of the users. These include programs for database systems, word processors,
plotting tools, spreadsheets, games, scientific applications etc.
Programming Language Support
These system programs provide additional support features for different
programming languages. Examples include compilers, which translate source
programs into executable code, and debuggers, which help locate and remove
errors from programs.
Views of Operating System
An operating system is a framework that enables user application programs
to interact with system hardware. The operating system does not perform any
functions on its own, but it provides an environment in which various apps and
programs can do useful work. The operating system may be observed from the
viewpoint of the user or of the system, known respectively as the user view
and the system view. There are mainly two types of views of the operating
system. These are as follows:
1. User View
2. System View
User View
The user view depends on the system interface that is used by the users.
Some systems are designed for a single user, who monopolizes the resources to
maximize their work. In these cases, the OS is designed primarily for ease of
use, with some emphasis on performance and little or none on resource
utilization.
The user viewpoint focuses on how the user interacts with the operating
system through the usage of various application programs. In contrast, the system
viewpoint focuses on how the hardware interacts with the operating system to
complete various tasks.
1. Single User View Point
Most computer users operate their computer system with a monitor, keyboard,
mouse, printer, and other accessories. In some cases, the system is designed to
maximize the output of a single user. As a result, more attention is paid to
ease of use, and resource allocation is less important. Such systems are
designed for a single-user experience, and overall resource utilization is not
the focus it is in multi-user systems.
2. Multiple User View Point
Another example, in which both user experience and performance matter, is a
mainframe computer with many users interacting with it through terminals on
their own machines. In such circumstances, CPU time and memory must be
allocated effectively to give every user a good experience. The client-server
architecture is a similar case: many clients interact through a remote server,
and the same constraint of using server resources effectively arises.
3. Handheld User View Point
The touchscreen era has made handheld technology ubiquitous. Smartphones
interact via wireless interfaces to perform numerous operations, although their
interfaces are more constrained than a full computer's, which limits their
usefulness. Their operating systems are nevertheless a great example of
designing a device around the user's point of view.
4. Embedded System User View Point
Some systems, like embedded systems, have little or no user point of view. The
remote control used to turn a TV on or off is part of an embedded system in
which one electronic device communicates with another program; the user
viewpoint is limited to the few actions the device exposes.
System View
The OS may also be viewed as just a resource allocator. A computer system
comprises various resources, such as hardware and software, which must be managed
effectively. The operating system manages the resources, decides between competing
demands, controls the program execution, etc. According to this point of view, the
operating system's purpose is to maximize performance. The operating system is
responsible for managing hardware resources and allocating them to programs and
users to ensure maximum performance.
From the user point of view, we've discussed the numerous applications that
require varying degrees of user participation. However, we are more concerned with
how the hardware interacts with the operating system than with the user from a
system viewpoint. The hardware and the operating system interact for a variety of
reasons, including:
1. Resource Allocation
The hardware contains several resources like registers, caches, RAM, ROM,
CPUs, I/O interaction, etc. These are all resources that the operating system needs
when an application program demands them. Only the operating system can allocate
resources, and it uses a variety of strategies to get the most out of the
hardware, including paging, virtual memory, caching, and
so on. These are very important in the case of various user viewpoints because
inefficient resource allocation may affect the user viewpoint, causing the user system
to lag or hang, reducing the user experience.
2. Control Program
The control program controls how input and output devices (hardware)
interact with the operating system. The user may request an action that can only be
done with I/O devices; in this case, the operating system must be able to
communicate with, control, detect, and handle such devices.
Process
A process is basically a program in execution. The execution of a process must progress
in a sequential fashion.
A process is defined as an entity which represents the basic unit of work to be
implemented in the system.
To put it in simple terms, we write our computer programs in a text file and when we
execute this program, it becomes a process which performs all the tasks mentioned in the
program.
When a program is loaded into the memory and it becomes a process, it can be divided
into four sections ─ stack, heap, text and data. The following image shows a simplified
layout of a process inside main memory –
 The Text section is made up of the compiled program code, read in from non-
volatile storage when the program is launched.
 The Data section is made up of the global and static variables, allocated and
initialized prior to executing the main.
 The Heap is used for the dynamic memory allocation and is managed via calls to
new, delete, malloc, free, etc.
 The Stack is used for local variables. Space on the stack is reserved for local
variables when they are declared.
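The four sections above can be pointed at from a few lines of C; the variable names here are illustrative, and the code of the function itself lives in the text section.

```c
#include <stdlib.h>

int initialized_global = 42;          /* data section: global, initialized */

/* Touches each writable section of the process image and returns a
 * value computed from all of them. */
int segments_demo(void) {
    int local = 7;                    /* stack: local variable        */
    int *dyn = malloc(sizeof *dyn);   /* heap: dynamic allocation     */
    if (dyn == NULL)
        return -1;
    *dyn = initialized_global + local;
    int result = *dyn;                /* 42 + 7 = 49                  */
    free(dyn);                        /* heap is managed explicitly   */
    return result;
}
```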
Program
A program is a piece of code which may be a single line or millions of lines. A
computer program is usually written by a computer programmer in a programming
language. For example, here is a simple program written in C programming language −

#include <stdio.h>
int main() {
printf("Hello, World! \n");
return 0;
}
A computer program is a collection of instructions that performs a specific task when
executed by a computer. When we compare a program with a process, we can conclude
that a process is a dynamic instance of a computer program.
A part of a computer program that performs a well-defined task is known as
an algorithm. A collection of computer programs, libraries and related data are
referred to as a software.
Process Life Cycle

The process, from its creation to completion, passes through various states. The
minimum number of states is five.
The names of the states are not standardized although the process may be in
one of the following states during execution.
1. New
A program which is going to be picked up by the OS into the main memory is
called a new process.
2. Ready
Whenever a process is created, it directly enters the ready state, in which
it waits for the CPU to be assigned. The OS picks new processes from
secondary memory and puts them in main memory.
The processes which are ready for the execution and reside in the main
memory are called ready state processes. There can be many processes present in
the ready state.
3. Running
One of the processes from the ready state will be chosen by the OS
depending upon the scheduling algorithm. Hence, if we have only one CPU in our
system, the number of running processes for a particular time will always be one. If
we have n processors in the system then we can have n processes running
simultaneously.
4. Block or wait
From the Running state, a process can make the transition to the block or
wait state depending upon the scheduling algorithm or the intrinsic behavior of the
process.
When a process waits for a certain resource to be assigned or for the input
from the user then the OS move this process to the block or wait state and assigns
the CPU to the other processes.
5. Completion or termination
When a process finishes its execution, it enters the termination state. The
context of the process (its Process Control Block) is deleted, and the
process is terminated by the operating system.
6. Suspend ready
A process in the ready state, which is moved to secondary memory from the
main memory due to lack of the resources (mainly primary memory) is called in the
suspend ready state.
If the main memory is full and a higher priority process arrives for
execution, the OS has to make room for it in the main memory by moving a
lower priority process out to secondary memory. The suspend-ready processes
remain in secondary memory until main memory becomes available.
7. Suspend wait
Instead of removing a process from the ready queue, it is often better to move
out a blocked process that is waiting for some resource in main memory. Since
it is already waiting for a resource to become available, it can just as well
wait in secondary memory and make room for a higher priority process. These
processes resume execution once main memory becomes available and their wait
is finished.
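The transitions among the seven states above can be captured in a small validity check. This is a simplified sketch of the state machine as described here; real kernels track more states and transitions.

```c
typedef enum {
    NEW, READY, RUNNING, BLOCKED, TERMINATED,
    SUSPEND_READY, SUSPEND_WAIT
} proc_state;

/* Returns 1 if moving from 'from' to 'to' is one of the transitions
 * described in the text, 0 otherwise. */
int valid_transition(proc_state from, proc_state to) {
    switch (from) {
    case NEW:           return to == READY;
    case READY:         return to == RUNNING || to == SUSPEND_READY;
    case RUNNING:       return to == READY || to == BLOCKED ||
                               to == TERMINATED;
    case BLOCKED:       return to == READY || to == SUSPEND_WAIT;
    case SUSPEND_READY: return to == READY;
    case SUSPEND_WAIT:  return to == SUSPEND_READY || to == READY;
    default:            return 0;    /* TERMINATED is final */
    }
}
```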
Operations on the Process


1. Creation
Once the process is created, it will be ready and come into the ready queue
(main memory) and will be ready for the execution.
2. Scheduling
Out of the many processes present in the ready queue, the operating system
chooses one process and starts executing it. Selecting the process to be
executed next is known as scheduling.
3. Execution
Once the process is scheduled for execution, the processor starts executing
it. The process may move to the blocked or wait state during execution; in
that case the processor starts executing other processes.
4. Deletion/killing
Once the purpose of the process is served, the OS kills the process. The
context of the process (PCB) is deleted and the process is terminated by
the operating system.
Process Control Block (PCB)
A Process Control Block is a data structure maintained by the Operating System for
every process. The PCB is identified by an integer process ID (PID).
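A PCB can be sketched as a C structure. The fields and array sizes below are illustrative assumptions; real kernels (Linux's task_struct, for example) hold far more.

```c
#include <stdint.h>

typedef enum { P_NEW, P_READY, P_RUNNING, P_WAITING, P_TERMINATED } pstate;

/* A simplified Process Control Block. */
struct pcb {
    int32_t  pid;             /* integer process ID               */
    pstate   state;           /* current process state            */
    uint64_t program_counter; /* address of next instruction      */
    uint64_t registers[16];   /* saved CPU register contents      */
    int      priority;        /* scheduling information           */
    uint64_t base, limit;     /* memory-management registers      */
    int      open_files[16];  /* indices into the open-file table */
    uint64_t cpu_time_used;   /* accounting information           */
};
```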
Operating System Resource Management
An operating system (OS) is basically a collection of software that manages
computer hardware resources and provides common services for computer programs. The
operating system is a crucial component of the system software in a computer system.
These are some few common services provided by an operating system −
 Program execution
 I/O operations
 File System manipulation
 Communication
 Error Detection
 Resource Allocation
 Protection
In multi-user or multi-tasking environments, resources such as main memory,
CPU cycles and file storage must be allocated to each user or job. Some major
activities of an OS with respect to resource management are −
 The Operating System manages all kinds of resources using schedulers.
 CPU scheduling algorithms are employed for better utilization of CPU
What is a process hierarchy?
Nowadays, all general-purpose operating systems permit a user to create and
destroy processes. A process can create several new processes during its
execution. The creating process is called the Parent Process and the new
process is called the Child Process.
There are different ways for creating a new process. These are as follows −
 Execution − The parent process continues to execute concurrently with its
child, or it waits until all of its children have terminated.
 Sharing − The parent and child may share all resources, such as memory or
files; the child may share a subset of the parent's resources; or parent and
child may share no resources at all.
The reasons that a parent process may terminate the execution of one of its
children are as follows −
 The child process has exceeded its allocated resource usage. (To detect
this, there must be some mechanism that allows the parent process to inspect
the state of its children.)
 The task assigned to the child process is no longer required.
Example
Consider a Business process to know about process hierarchy.
Step 1 − Business processes can become very complicated, making it difficult to model
a large process with a single graphical model.
Step 2 − It makes no sense to condense an end-to-end mechanism like "order to cash"
into a single graphical model that includes "article collection to shopping cart,"
"purchase order request," "money transfer," "packaging," and "logistics," among
other things.
Step 3 − To break down large processes into smaller chunks, you'll need a process
hierarchy. The "from abstract to real" theory is followed by a process hierarchy.
Step 4 − This indicates that it includes data on operations at various levels of
granularity. As a result, knowledge about the abstract value chain or very basic
method steps and their logical order can be obtained.
Step 5 − The levels of a process hierarchy, as well as the details included in these
levels, determine the hierarchy.
Step 6 − It is critical to have a given knowledge base at each level; otherwise, process
models would not be comparable later.
A process hierarchy model of this kind includes examples for each level –
there are six levels in all.
Threads
What is Thread?
A thread is a flow of execution through the process code, with its own program
counter that keeps track of which instruction to execute next, system registers which hold
its current working variables, and a stack which contains the execution history.
A thread shares some information with its peer threads, such as the code
segment, data segment and open files. When one thread alters a shared memory
item, all other threads see the change.
A thread is also called a lightweight process. Threads provide a way to improve
application performance through parallelism, and they represent a software
approach to improving operating system performance by reducing the overhead of
creating and switching full processes.
Each thread belongs to exactly one process and no thread can exist outside a
process. Each thread represents a separate flow of control. Threads have been successfully
used in implementing network servers and web server. They also provide a suitable
foundation for parallel execution of applications on shared memory multiprocessors. The
following figure shows the working of a single-threaded and a multithreaded process.
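The shared data segment described above can be demonstrated with POSIX threads: two peer threads in one process increment the same global counter, serialized by a mutex. A sketch, assuming a POSIX system (compile with -pthread).

```c
#include <pthread.h>

static long counter = 0;        /* lives in the shared data segment */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;              /* change is visible to the peer    */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Runs two peer threads in one process and returns the final count. */
long run_two_threads(void) {
    counter = 0;
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);     /* wait for both to finish          */
    pthread_join(t2, NULL);
    return counter;             /* 2 * 100000                       */
}
```

Without the mutex, the two threads would race on the shared counter, which is why synchronization primitives exist.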
Difference between Process and Thread

1. Process: heavy weight or resource intensive.
   Thread: light weight, taking fewer resources than a process.
2. Process: switching needs interaction with the operating system.
   Thread: switching does not need to interact with the operating system.
3. Process: in multiple processing environments, each process executes the
   same code but has its own memory and file resources.
   Thread: all threads can share the same set of open files and child
   processes.
4. Process: if one process is blocked, no other process can execute until
   the first process is unblocked.
   Thread: while one thread is blocked and waiting, a second thread in the
   same task can run.
5. Process: multiple processes without using threads use more resources.
   Thread: multithreaded processes use fewer resources.
6. Process: each process operates independently of the others.
   Thread: one thread can read, write or change another thread's data.

Advantages of Thread
 Threads minimize the context switching time.
 Use of threads provides concurrency within a process.
 Efficient communication.
 It is more economical to create and context switch threads.
 Threads allow utilization of multiprocessor architectures to a greater scale and
efficiency.
Types of Thread
Threads are implemented in following two ways −
 User Level Threads − User managed threads.
 Kernel Level Threads − Operating System managed threads acting on kernel, an
operating system core.
User Level Threads
In this case, the thread management kernel is not aware of the existence of
threads. The thread library contains code for creating and destroying threads, for
passing message and data between threads, for scheduling thread execution and for
saving and restoring thread contexts. The application starts with a single thread.
Advantages
 Thread switching does not require Kernel mode privileges.
 User level thread can run on any operating system.
 Scheduling can be application specific in the user level thread.
 User level threads are fast to create and manage.
Disadvantages
 In a typical operating system, most system calls are blocking.
 Multithreaded application cannot take advantage of multiprocessing.
Kernel Level Threads
In this case, thread management is done by the Kernel. There is no thread
management code in the application area. Kernel threads are supported directly by the
operating system. Any application can be programmed to be multithreaded. All of the
threads within an application are supported within a single process.
The Kernel maintains context information for the process as a whole and for
individuals threads within the process. Scheduling by the Kernel is done on a thread basis.
The Kernel performs thread creation, scheduling and management in Kernel space. Kernel
threads are generally slower to create and manage than the user threads.
Advantages
 Kernel can simultaneously schedule multiple threads from the same process on
multiple processors.
 If one thread in a process is blocked, the Kernel can schedule another thread of the
same process.
 Kernel routines themselves can be multithreaded.
Disadvantages
 Kernel threads are generally slower to create and manage than the user threads.
 Transfer of control from one thread to another within the same process requires a
mode switch to the Kernel.
Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level
thread facility. Solaris is a good example of this combined approach. In a combined
system, multiple threads within the same application can run in parallel on multiple
processors, and a blocking system call need not block the entire process.
There are three types of multithreading models:
 Many to many relationship.
 Many to one relationship.
 One to one relationship.
Many to Many Model
The many-to-many model multiplexes any number of user threads onto an equal or
smaller number of kernel threads.
The following diagram shows the many-to-many threading model where 6 user level
threads are multiplexing with 6 kernel level threads. In this model, developers can create
as many user threads as necessary and the corresponding Kernel threads can run in
parallel on a multiprocessor machine. This model provides the best level of
concurrency, and when a thread performs a blocking system call, the kernel can
schedule another thread for execution.
Many to One Model
Many-to-one model maps many user level threads to one Kernel-level thread.
Thread management is done in user space by the thread library. When a thread
makes a blocking system call, the entire process is blocked. Only one thread
can access the kernel at a time, so multiple threads are unable to run in
parallel on multiprocessors.

If the kernel does not support multithreading, user-level thread libraries
implemented on such a system effectively follow the many-to-one model, since
all user threads are mapped onto a single kernel thread.

One to One Model


There is a one-to-one relationship between each user-level thread and a
kernel-level thread. This model provides more concurrency than the many-to-one
model. It also allows another thread to run when a thread makes a blocking
system call, and it supports multiple threads executing in parallel on
multiprocessors.
The disadvantage of this model is that creating a user thread requires
creating the corresponding kernel thread. OS/2, Windows NT and Windows 2000
use the one-to-one relationship model.
Difference between User-Level & Kernel-Level Thread

1. User-level: threads are faster to create and manage.
   Kernel-level: threads are slower to create and manage.
2. User-level: implemented by a thread library at the user level.
   Kernel-level: the operating system supports creation of kernel threads.
3. User-level: generic and can run on any operating system.
   Kernel-level: specific to the operating system.
4. User-level: multi-threaded applications cannot take advantage of
   multiprocessing.
   Kernel-level: kernel routines themselves can be multithreaded.

What is Process Scheduling?


Process Scheduling is an OS task that schedules processes of different
states like ready, waiting, and running.
Process scheduling allows the OS to allocate an interval of CPU time to each
process. Another important reason for using a process scheduling system is
that it keeps the CPU busy at all times, which helps achieve minimum response
time for programs.
Process Scheduling Queues
Process Scheduling Queues maintain a distinct queue for each process state;
the PCBs of all processes in the same execution state are placed in the same
queue. Whenever the state of a process changes, its PCB is unlinked from its
current queue and moved to the queue for its new state.
Three types of operating system queues are:
1. Job queue – It helps you to store all the processes in the system.
2. Ready queue – This queue holds every process residing in main memory that
is ready and waiting to execute.
3. Device queues – These queues hold the processes that are blocked waiting
for an I/O device.

In the above-given Diagram,


 Rectangle represents a queue.
 Circle denotes the resource
 Arrow indicates the flow of the process.
1. Every new process is first put in the ready queue, where it waits until it
is selected for execution (dispatched).
2. One of the processes is allocated the CPU and executes.
3. The executing process may issue an I/O request,
4. in which case it is placed in the I/O queue.
5. The process may create a new subprocess
6. and wait for that subprocess's termination.
7. The process may be removed forcefully from the CPU as a result of an
interrupt; once the interrupt is handled, it is sent back to the ready queue.
Two State Process Model
Two-state process models are:
 Running State
 Not Running State
Running
A process that is currently being executed by the CPU is in the running state.
Not Running
Processes that are not running are kept in a queue, waiting for their turn to
execute. Each entry in the queue is a pointer to a specific process.
Scheduling Objectives
Here, are important objectives of Process scheduling
 Maximize the number of interactive users within acceptable response times.
 Achieve a balance between response and utilization.
 Avoid indefinite postponement and enforce priorities.
 It also should give preference to the processes holding the key resources.
Type of Process Schedulers
A scheduler is a type of system software that allows you to handle process scheduling.
There are mainly three types of Process Schedulers:
1. Long Term Scheduler
2. Short Term Scheduler
3. Medium Term Scheduler
Long Term Scheduler
The long-term scheduler is also known as a job scheduler. It selects
processes from the job queue and loads them into memory for execution, and it
regulates the degree of multi-programming.
The main goal of this type of scheduler is to offer a balanced mix of jobs,
such as processor-bound and I/O-bound jobs, to keep multiprogramming
manageable.
Medium Term Scheduler
Medium-term scheduling is an important part of swapping; it handles the
swapped-out processes. A running process can become suspended if it makes an
I/O request, and a suspended process cannot make any progress towards
completion. To remove such a process from memory and make space for other
processes, the suspended process is moved to secondary storage.
Short Term Scheduler
Short-term scheduling is also known as CPU scheduling. The main goal of this
scheduler is to boost the system performance according to set criteria. This helps
you to select from a group of processes that are ready to execute and allocates CPU
to one of them. The dispatcher gives control of the CPU to the process selected by
the short term scheduler.
Difference between Schedulers

Long-Term Vs. Short-Term Vs. Medium-Term

Long-Term Scheduler
 Also known as a job scheduler.
 It is either absent or minimal in a time-sharing system.
 Its speed is less than that of the short-term scheduler.
 It selects processes from the pool and loads them into memory.
 It offers full control over the degree of multiprogramming.

Short-Term Scheduler
 Also known as the CPU scheduler.
 It plays an insignificant role in a time-sharing system.
 It is the fastest of the three schedulers.
 It selects only processes that are in the ready state.
 It offers less control.

Medium-Term Scheduler
 Also called the swapping scheduler.
 It is an element of time-sharing systems.
 It offers medium speed.
 It sends swapped-out processes back to memory.
 It reduces the degree of multiprogramming.

Context Switch
A context switch is the mechanism for storing and restoring the state or
context of a CPU in the Process Control Block so that process execution can be
resumed from the same point at a later time. Using this technique, a context
switcher enables multiple processes to share a single CPU. Context switching
is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to execute another,
the state from the current running process is stored into the process control block. After
this, the state for the process to run next is loaded from its own PCB and used to set the
PC, registers, etc. At that point, the second process can start executing.
Context switches are computationally intensive since register and memory state
must be saved and restored. To reduce context-switching time, some hardware
systems employ two or more sets of processor registers. When the process is
switched, the following information is stored for later use.
 Program Counter
 Scheduling information
 Base and limit register value
 Currently used register
 Changed State
 I/O State information
 Accounting information
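The save/restore step above can be sketched as a plain structure copy into and out of the PCB. This is illustrative only: real context switches run in assembly, and the field set below is an assumption.

```c
#include <string.h>

/* The per-process CPU state saved on a context switch. */
struct cpu_context {
    unsigned long pc;          /* program counter           */
    unsigned long regs[16];    /* general-purpose registers */
    unsigned long base, limit; /* base and limit registers  */
};

/* Store the running CPU state into the outgoing process's PCB slot. */
void save_context(struct cpu_context *pcb_slot,
                  const struct cpu_context *cpu) {
    memcpy(pcb_slot, cpu, sizeof *cpu);
}

/* Load the incoming process's saved state back onto the CPU. */
void restore_context(struct cpu_context *cpu,
                     const struct cpu_context *pcb_slot) {
    memcpy(cpu, pcb_slot, sizeof *cpu);
}
```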
What is Preemptive Scheduling?


Preemptive Scheduling is a scheduling method in which tasks are mostly
assigned priorities. Sometimes it is important to run a higher-priority task
before a lower-priority task, even if the lower-priority task is still
running.
At that point, the lower-priority task is put on hold for some time and
resumes when the higher-priority task finishes its execution.
Advantages of Preemptive Scheduling


Here are the pros/benefits of the preemptive scheduling method:
 Preemptive scheduling is a more robust approach: one process cannot
monopolize the CPU.
 The choice of running task is reconsidered after each interruption.
 Each event causes an interruption of the running task.
 The OS ensures that CPU time is shared fairly among all running processes.
 This scheduling method also improves the average response time.
 Preemptive scheduling is beneficial in a multiprogramming environment.
Disadvantages of Preemptive Scheduling
Here are the cons/drawbacks of the preemptive scheduling method:
 It needs additional computational resources for scheduling.
 The scheduler takes extra time to suspend the running task, switch the
context, and dispatch the new incoming task.
 A low-priority process may need to wait a long time if high-priority
processes arrive continuously.
Example of Pre-emptive Scheduling
Consider the following three processes scheduled with Round Robin
(time quantum = 2):

Process Queue    Burst time
P1               4
P2               3
P3               5

Step 1) Execution begins with process P1, which has burst time 4. Here, every
process executes for a time quantum of 2 seconds. P2 and P3 are still in the
waiting queue.

Step 2) At time = 2, P1 is added to the end of the queue and P2 starts
executing.

Step 3) At time = 4, P2 is preempted and added to the end of the queue. P3
starts executing.

Step 4) At time = 6, P3 is preempted and added to the end of the queue. P1
starts executing.
Step 5) At time = 8, P1 (burst time 4) has completed its execution. P2 starts
executing.

Step 6) P2 has a burst time of 3 and has already executed for 2 units. At
time = 9, P2 completes execution. Then P3 executes until it completes.

Step 7) Let's calculate the average waiting time for the above example.
Wait time
P1 = 0 + 4 = 4
P2 = 2 + 4 = 6
P3 = 4 + 3 = 7
Average waiting time = (4 + 6 + 7) / 3 = 5.67
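The waiting times above can be checked by simulating the schedule. A sketch for this example, assuming all processes arrive at time 0 and a quantum of 2; waiting time is computed as completion time minus burst time.

```c
#define RR_N 3   /* number of processes in this example */

/* Simulates round-robin over processes that all arrive at time 0 and
 * fills wait[i] with each process's waiting time. Because everything
 * is ready from the start, cycling i = 0..RR_N-1 repeatedly matches
 * the FIFO ready-queue order. */
void rr_wait_times(const int burst[RR_N], int quantum, int wait[RR_N]) {
    int remaining[RR_N], finish[RR_N];
    for (int i = 0; i < RR_N; i++) remaining[i] = burst[i];
    int t = 0, done = 0;
    while (done < RR_N) {
        for (int i = 0; i < RR_N; i++) {
            if (remaining[i] <= 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            t += slice;                 /* run for one time slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) { finish[i] = t; done++; }
        }
    }
    for (int i = 0; i < RR_N; i++)
        wait[i] = finish[i] - burst[i]; /* turnaround minus burst */
}
```

For bursts {4, 3, 5} and quantum 2, this reproduces the wait times 4, 6 and 7 computed above.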
What is Non- Preemptive Scheduling?
In this type of scheduling method, the CPU is allocated to a specific process
until that process releases it, either by switching context or by terminating.
It is the only method that can be used for various hardware platforms. That’s
because it doesn’t need specialized hardware (for example, a timer) like preemptive
Scheduling.
Non-Preemptive Scheduling occurs when a process voluntarily enters the wait
state or terminates.
Advantages of Non-preemptive Scheduling
Here are the pros/benefits of the non-preemptive scheduling method:
 Offers low scheduling overhead
 Tends to offer high throughput
 It is a conceptually very simple method
 Fewer computational resources are needed for scheduling
Disadvantages of Non-Preemptive Scheduling
Here are the cons/drawbacks of the non-preemptive scheduling method:
 It can lead to starvation, especially of real-time tasks
 Bugs can cause a machine to freeze up
 It can make real-time and priority scheduling difficult
 Poor response time for processes

Example of Non-Preemptive Scheduling


In non-preemptive SJF scheduling, once the CPU is allocated to a process,
the process holds it until it reaches a waiting state or terminates.
Consider the following five processes, each with its own burst time and
arrival time.

Process Queue   Burst time   Arrival time
P1              6            2
P2              2            5
P3              8            1
P4              3            0
P5              4            4
Step 0) At time=0, P4 arrives and starts execution.

Step 1) At time = 1, process P3 arrives. But P4 still needs 2 execution units
to complete, so it continues execution.

Step 2) At time =2, process P1 arrives and is added to the waiting queue. P4 will
continue execution.
Step 3) At time = 3, process P4 will finish its execution. The burst time of P3 and P1 is
compared. Process P1 is executed because its burst time is less compared to P3.

Step 4) At time = 4, process P5 arrives and is added to the waiting queue. P1 will
continue execution.

Step 5) At time = 5, process P2 arrives and is added to the waiting queue. P1 will
continue execution.

Step 6) At time = 9, process P1 will finish its execution. The burst time of P3, P5, and
P2 is compared. Process P2 is executed because its burst time is the lowest.
Step 7) At time=10, P2 is executing, and P3 and P5 are in the waiting queue.

Step 8) At time = 11, process P2 will finish its execution. The burst time of P3 and P5 is
compared. Process P5 is executed because its burst time is lower.

Step 9) At time = 15, process P5 will finish its execution.

Step 10) At time = 23, process P3 will finish its execution.


Step 11) Let's calculate the average waiting time for the above example.
Wait time (start time - arrival time):
P4 = 0 - 0 = 0
P1 = 3 - 2 = 1
P2 = 9 - 5 = 4
P5 = 11 - 4 = 7
P3 = 15 - 1 = 14
Average waiting time = (0 + 1 + 4 + 7 + 14)/5 = 26/5 = 5.2
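The non-preemptive SJF schedule above can be checked with a small simulation.
This is a sketch under the stated assumptions (arrival and burst times as in
the table; the `sjf_nonpreemptive` helper is illustrative):

```python
def sjf_nonpreemptive(procs):
    """Non-preemptive SJF. procs: dict name -> (arrival, burst).
    Returns dict name -> waiting time (start time - arrival time)."""
    time, waits = 0, {}
    pending = dict(procs)
    while pending:
        # processes that have already arrived
        ready = [p for p, (a, b) in pending.items() if a <= time]
        if not ready:                          # CPU idle until next arrival
            time = min(a for a, b in pending.values())
            continue
        # pick the shortest burst among the ready processes
        p = min(ready, key=lambda q: pending[q][1])
        arrival, burst = pending.pop(p)
        waits[p] = time - arrival
        time += burst                          # run to completion
    return waits

procs = {"P1": (2, 6), "P2": (5, 2), "P3": (1, 8), "P4": (0, 3), "P5": (4, 4)}
waits = sjf_nonpreemptive(procs)
print(waits)                               # {'P4': 0, 'P1': 1, 'P2': 4, 'P5': 7, 'P3': 14}
print(sum(waits.values()) / len(waits))    # 5.2
```

The simulation reproduces the waiting times computed in Step 11.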

CPU Scheduling Algorithms (process)

CPU Scheduling:
A process scheduler assigns processes to the CPU based on a particular
scheduling algorithm. There are different types of scheduling algorithms:
1. First Come, First Served (FCFS) scheduling
2. Shortest Job First (SJF) scheduling
3. Priority scheduling
4. Round Robin (RR) scheduling
5. Multilevel queue scheduling
These algorithms are either preemptive or non-preemptive.
*In non-preemptive algorithms, a running process cannot be preempted until it
completes its allotted time.
*In preemptive algorithms, the running process can be preempted, based on
priority, when a new process enters the ready state.
FCFS Scheduling:
*The implementation of FCFS is easily managed with a FIFO queue.
*The FCFS scheduling algorithm is non-preemptive.
*The process that requests the CPU first is allocated the CPU first.
*When a process enters the ready queue, its PCB is linked onto the tail of the
queue.
*When the CPU is free, it is allocated to the process at the head of the
queue; the running process is then removed from the queue.
*The code for FCFS scheduling is simple to write and understand.
*Once the CPU has been allocated to a process, that process keeps the CPU
until it releases the CPU, either by terminating or by requesting I/O.
Eg: consider the following set of processes that arrive at time 0, with the
length of the CPU burst time given in milliseconds.

process   Burst time
P1        24
P2        3
P3        3

If the processes arrive in the order P1, P2, P3 and are served in FCFS order,
we get the result shown in the following Gantt chart:

P1                      P2    P3
0                   24     27    30

From the above chart, we have:

Average waiting time = (0 + 24 + 27)/3 = 51/3 = 17 milliseconds
Average turn-around time = (24 + 27 + 30)/3 = 81/3 = 27 milliseconds
*If the processes arrive in the order P2, P3, P1, however, the results will be
as shown in the following Gantt chart:

P2    P3    P1
0   3    6                      30

From the above chart, we have:

Average waiting time = (0 + 3 + 6)/3 = 9/3 = 3 milliseconds
Average turn-around time = (3 + 6 + 30)/3 = 39/3 = 13 milliseconds
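Both FCFS orderings above can be verified with a few lines of code. The
sketch assumes all processes arrive at time 0 (the `fcfs` helper is
illustrative):

```python
def fcfs(order, bursts):
    """FCFS for processes all arriving at time 0, served in `order`.
    Returns (average waiting time, average turn-around time)."""
    time, waits, turnarounds = 0, [], []
    for p in order:
        waits.append(time)            # waiting time = start time
        time += bursts[p]
        turnarounds.append(time)      # turn-around = completion time
    return sum(waits) / len(waits), sum(turnarounds) / len(turnarounds)

bursts = {"P1": 24, "P2": 3, "P3": 3}
print(fcfs(["P1", "P2", "P3"], bursts))   # (17.0, 27.0)
print(fcfs(["P2", "P3", "P1"], bursts))   # (3.0, 13.0)
```

This illustrates the "convoy effect": serving the long burst first triples the
average waiting time.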

SJF Scheduling:
*The SJF algorithm may be either preemptive or non-preemptive.
*A non-preemptive SJF algorithm will allow the currently running process to
finish its CPU burst.
*A preemptive SJF algorithm will preempt the currently executing process if a
newly arrived process has a shorter remaining burst time.
*Preemptive SJF scheduling is sometimes called shortest-remaining-time-first.
*In this algorithm, the process with the shortest CPU burst time is executed
first.

Non-preemptive SJF:

Example: consider the following processes, with the length of the CPU burst
time given in milliseconds.

process   Burst time
P1        6
P2        8
P3        7
P4        3

P4    P1    P3    P2
0   3     9      18      24

Using a Gantt chart, the SJF schedule is shown above.

Average waiting time = (3 + 16 + 9 + 0)/4 = 28/4 = 7 milliseconds
Average turn-around time = (3 + 9 + 16 + 24)/4 = 52/4 = 13 milliseconds
SJF is optimal because it gives the minimum average waiting time for a given
set of processes.
Preemptive SJF:
Example: consider the following set of processes, with the arrival time and
length of CPU burst time given in milliseconds.

process   Arrival time   Burst time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

P1    P2    P4    P1    P3
0   1     5     10     17     26

Average waiting time = sum over all processes of (completion time - arrival
time - burst time) / total number of processes

= [(10 - 1) + (1 - 1) + (17 - 2) + (5 - 3)]/4
= (9 + 0 + 15 + 2)/4 = 26/4 = 6.5 milliseconds

Average turn-around time = (17 + 4 + 24 + 7)/4 = 52/4 = 13 milliseconds
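The shortest-remaining-time-first schedule can be verified by simulating one
time unit at a time; waiting time here is computed as completion time minus
arrival time minus burst time. The `srtf` helper is an illustrative sketch,
not a standard API:

```python
def srtf(procs):
    """Preemptive SJF (shortest remaining time first).
    procs: dict name -> (arrival, burst). Simulated one time unit at a
    time. Returns dict name -> waiting time."""
    remaining = {p: b for p, (a, b) in procs.items()}
    completion, time = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= time]
        if not ready:                 # nothing has arrived yet: idle
            time += 1
            continue
        p = min(ready, key=lambda q: remaining[q])  # shortest remaining
        remaining[p] -= 1             # run the chosen process one unit
        time += 1
        if remaining[p] == 0:
            del remaining[p]
            completion[p] = time
    # waiting time = completion - arrival - burst
    return {p: completion[p] - a - b for p, (a, b) in procs.items()}

waits = srtf({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)})
print(waits)                               # {'P1': 9, 'P2': 0, 'P3': 15, 'P4': 2}
print(sum(waits.values()) / len(waits))    # 6.5
```

The simulation confirms the schedule P1, P2, P4, P1, P3 and an average
waiting time of 6.5 ms.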

Priority Scheduling:

*Priority scheduling can be either preemptive or non-preemptive.


*When a process arrives at the ready queue , its priority is compared with the priority of
the currently running process.
*A preemptive priority scheduling algorithm will preempt the CPU if the priority of the
newly arrived process is higher than the priority of currently running process.
*A non-preemptive priority scheduling algorithm will simply put the new process at the
head of the ready queue.
*A major problem with priority scheduling algorithms is indefinite blocking,
or starvation.
*A solution to the problem of indefinite blocking of low-priority processes
is aging.
*Aging is a technique of gradually increasing the priority of processes that
wait in the system for a long time.
As an example, consider the following set of processes, assumed to have
arrived at time 0 in the order P1, P2, ..., P5, with the length of the CPU
burst time given in milliseconds (a smaller priority number means higher
priority):

process   Burst time   priority
P1        10           3
P2        1            1
P3        2            3
P4        1            4
P5        5            2

Using a Gantt chart:

P2    P5    P1    P3    P4
0   1     6      16     18    19

Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 41/5
                     = 8.2 milliseconds
Average turnaround time = (16 + 1 + 18 + 19 + 6)/5 = 60/5
                        = 12 milliseconds
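Because all processes arrive at time 0, non-preemptive priority scheduling
reduces to sorting by priority (with FCFS order on ties). A minimal sketch
(`priority_schedule` is an illustrative name):

```python
def priority_schedule(procs):
    """Non-preemptive priority scheduling, all processes arriving at
    time 0. procs: list of (name, burst, priority); a smaller priority
    number means higher priority. Ties are broken FCFS (list order,
    since Python's sort is stable). Returns (waits, turnarounds)."""
    time, waits, turnarounds = 0, {}, {}
    for name, burst, _ in sorted(procs, key=lambda p: p[2]):
        waits[name] = time            # waiting time = start time
        time += burst
        turnarounds[name] = time      # turnaround = completion time
    return waits, turnarounds

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 3), ("P4", 1, 4), ("P5", 5, 2)]
waits, tats = priority_schedule(procs)
print(sum(waits.values()) / 5)   # 8.2
print(sum(tats.values()) / 5)    # 12.0
```

The computed order P2, P5, P1, P3, P4 matches the Gantt chart above.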

Round-Robin Scheduling:

*The RR scheduling algorithm is preemptive.
*The Round-Robin scheduling algorithm is designed especially for time-sharing
systems.
*A small unit of time, called a time quantum or time slice, is defined.
*To implement RR scheduling, we keep the ready queue as a FIFO queue of
processes.
*New processes are added to the tail of the ready queue.
*The average waiting time under the RR policy, however, is often quite long.
Consider the following set of processes that arrive at time 0, with the
length of the CPU burst time given in milliseconds. Here, time slice = 4.

process   Burst time
P1        24
P2        3
P3        3
Using a Gantt chart:
P1    P2    P3    P1    P1    P1    P1    P1
0    4    7    10    14    18    22    26    30
Wait time = (time of last start) - (arrival time) - (time already executed):
P1 = 10 - 0 - 4 = 6
P2 = 4 - 0 - 0 = 4
P3 = 7 - 0 - 0 = 7
Average waiting time = (6 + 4 + 7)/3 = 17/3 ≈ 5.66 milliseconds.
Multilevel Queue Scheduling:
*Another class of scheduling algorithms has been created for situations in
which processes are easily classified into different groups.
*For example, a common division is made between foreground processes and
background processes.
*These two types of processes have different response-time requirements and so
might have different scheduling needs.
A multilevel queue scheduling algorithm partitions the ready queue into
several separate queues, for example:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes

*The foreground queue might be scheduled by an RR algorithm, while the
background queue is scheduled by an FCFS algorithm.

Multilevel Feedback Queue Scheduling:

*Normally, in a multilevel queue scheduling algorithm, processes are
permanently assigned to a queue on entry to the system.
*Processes do not move between queues.
*If there are separate queues for foreground and background processes, for
example, processes do not move from one queue to the other, since processes do
not change their foreground or background nature.
*Multilevel feedback queue scheduling, however, allows a process to move
between queues.
*The idea is to separate processes with different CPU burst characteristics.
*If a process uses too much CPU time, it is moved to a lower-priority queue.
*This scheme leaves I/O-bound and interactive processes in the
higher-priority queues.
*Similarly, a process that waits too long in a lower-priority queue may be
moved to a higher-priority queue.
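The demotion rule can be sketched with a simple two-level feedback queue,
assuming Q0 is Round Robin and Q1 is FCFS that runs only when Q0 is empty;
real configurations vary in the number of queues and quanta, and this sketch
does not model new arrivals preempting Q1:

```python
from collections import deque

def mlfq(bursts, quantum=2):
    """Two-level multilevel feedback queue sketch. All processes arrive
    at time 0. A process that uses its full quantum in Q0 without
    finishing is demoted to Q1. Returns dict name -> completion time."""
    remaining = dict(bursts)
    q0, q1 = deque(bursts), deque()
    time, completion = 0, {}
    while q0 or q1:
        if q0:                               # high-priority queue first
            p = q0.popleft()
            run = min(quantum, remaining[p])
            time += run
            remaining[p] -= run
            if remaining[p] == 0:
                completion[p] = time
            else:
                q1.append(p)                 # used full quantum: demote
        else:                                # Q1 runs to completion (FCFS)
            p = q1.popleft()
            time += remaining[p]
            completion[p] = time
    return completion

print(mlfq({"P1": 5, "P2": 2}))  # {'P2': 4, 'P1': 7}
```

Here the CPU-heavy P1 is demoted after its first quantum, so the short,
interactive-style P2 finishes first.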
Multiple-processor scheduling:
*If multiple CPUs are available, the scheduling problem is correspondingly
more complex.
*If several identical processors are available, then load sharing can occur.
*It would be possible to provide a separate queue for each processor.
*In this case, however, one processor could be idle, with an empty queue,
while another processor was very busy.
*To prevent this situation, we use a common ready queue.
*All processes go into one queue and are scheduled onto any available
processor.
*In such a scheme, one of two scheduling approaches may be used.
*In one approach, each processor is self-scheduling: each processor examines
the common ready queue and selects a process to execute.
Unit-3
Process Management

Deadlock
System model:
A system consists of a finite number of resources to be distributed among a
number of competing processes. Resources are partitioned into several types,
each of which consists of some number of identical instances. Memory space,
CPU cycles, files, and I/O devices are examples of resource types.

A process must request a resource before using it and must release the
resource after using it. A process may request as many resources as it
requires to carry out its designated task. Obviously, the number of resources
requested may not exceed the total number of resources available in the
system.
Under the normal mode of operation, a process may utilize a resource only in
the following sequence.
Request: if the request cannot be granted immediately (for example, the
resource is being used by another process), then the requesting process must
wait until it can acquire the resource.
Use: the process can operate on the resource (for example, if the resource is
a printer, the process can print on the printer).
Release: the process releases the resource.

Deadlock characterization or conditions:


A deadlock situation can arise if the following four conditions hold
simultaneously in a system:
Mutual exclusion: at least one resource must be held in a non-sharable mode,
i.e., only one process at a time can use the resource. If another process
requests that resource, the requesting process must be delayed until the
resource has been released.
Hold and wait: there must exist a process that is holding at least one
resource and is waiting to acquire additional resources that are currently
being held by other processes.
No preemption: resources cannot be preempted, i.e., a resource can be
released only voluntarily by the process holding it, after that process has
completed its task.
Circular wait: there must exist a set {P0, P1, ..., Pn} of waiting processes
such that P0 is waiting for a resource held by P1, P1 is waiting for a
resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn
is waiting for a resource held by P0.
Resource allocation graph:
Deadlocks can be described in terms of a directed graph called a system
resource-allocation graph, consisting of a set of vertices V and a set of
edges E. V is partitioned into two types:
P = {P1, P2, ..., Pn}, the set consisting of all the processes in the system,
and R = {R1, R2, ..., Rm}, the set consisting of all resource types in the
system. A directed edge from process Pi to resource type Rj is denoted
Pi -> Rj; it represents that process Pi has requested an instance of resource
type Rj (a request edge). A directed edge from resource type Rj to process Pi
is denoted Rj -> Pi; it represents that an instance of resource type Rj has
been allocated to process Pi (an assignment edge).
Here,
Process P1 is holding an instance of resource type R2 and is waiting for
resource type R1.
P2 is holding an instance of R1 and R2 and is waiting for an instance of R3.
Process P3 is holding an instance of R3.

Here, P1 -> R1, R1 -> P2, P2 -> R3, R3 -> P3, P3 -> R2, R2 -> P1.
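A cycle in this graph indicates a (possible) deadlock. A depth-first search
can detect the cycle formed by the edges listed above (the `has_cycle` helper
is an illustrative sketch):

```python
def has_cycle(edges):
    """Detect a cycle in a directed graph given as a list of
    (source, target) edges, using depth-first search."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
    WHITE, GRAY, BLACK = 0, 1, 2            # unvisited / in progress / done
    color = {}
    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, []):
            if color.get(v, WHITE) == GRAY:  # back edge: cycle found
                return True
            if color.get(v, WHITE) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False
    return any(color.get(u, WHITE) == WHITE and dfs(u) for u in graph)

# Edges of the resource-allocation graph above
edges = [("P1", "R1"), ("R1", "P2"), ("P2", "R3"),
         ("R3", "P3"), ("P3", "R2"), ("R2", "P1")]
print(has_cycle(edges))  # True -- the graph contains a cycle
```

Since every resource here has a single instance, the cycle implies that P1,
P2, and P3 are deadlocked.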
Methods for handling deadlocks:
Deadlock:
In a multiprogramming environment, several processes may compete for a finite
number of resources. A process requests resources; if the resources are not
available at that time, then the process enters a waiting state.
The waiting processes may never again change state if the requested resources
are held by other waiting processes. This situation is called a deadlock.
There are three different methods for dealing with deadlocks:
1. Deadlock Prevention
2. Deadlock Avoidance
3. Deadlock Detection
1.Deadlock Prevention:

A Deadlock situation can arise if the following 4 conditions hold simultaneously in the system
1. Mutual exclusion
2. Hold and wait
3. No pre-emption
4. Circular wait
Mutual exclusion: the mutual exclusion condition holds for non-sharable
resources. For example, a printer cannot be simultaneously shared by several
processes.
Hold and wait: to ensure that the hold-and-wait condition never occurs in the
system, we must guarantee that whenever a process requests a resource, it
does not hold any other resources.
For this purpose, one of two protocols can be used:
1. One protocol requires each process to request and be allocated all its
resources before it begins execution.
2. Another protocol allows a process to request resources only when the
process holds none.
No preemption: to ensure that this condition does not hold, we can use the
following protocol:
1. If a process that is holding some resources requests another resource that
cannot be immediately allocated to it (i.e., the process must wait), then all
resources it currently holds are preempted, i.e., implicitly released.
Circular wait: one way to ensure that the circular wait condition never holds
is to impose a total ordering of all resource types and to require that each
process requests resources in an increasing order of enumeration.
Let R = {R1, R2, ..., Rm} be the set of resource types. We assign to each
resource type a unique integer number, which allows us to compare two
resources and to determine whether one precedes another in our ordering.
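The resource-ordering idea can be illustrated with two locks: if every thread
acquires them in the same global order, a circular wait can never form. A
sketch using Python threads (the lock names and counts are illustrative):

```python
import threading

# Assign each lock a global rank and always acquire in increasing rank;
# this makes the circular-wait condition impossible.
lock_a = threading.Lock()   # rank 1
lock_b = threading.Lock()   # rank 2
shared = {"count": 0}

def worker(n):
    for _ in range(n):
        with lock_a:            # rank 1 first ...
            with lock_b:        # ... then rank 2, so no cycle can form
                shared["count"] += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared["count"])  # 20 -- wait, see note: 4 threads x 1000 = 4000
```

If one thread instead took lock_b before lock_a, two threads could each hold
one lock and wait forever for the other, which is exactly the circular wait
this ordering rule prevents.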

2. Deadlock Avoidance
The main drawback of deadlock prevention is low device utilization and
reduced system throughput. An alternative technique is to require additional
information about how resources are to be requested. With this information it
is possible to construct an algorithm that ensures the system will never
enter a deadlock state. The algorithms for the deadlock avoidance approach
are:
1. Resource-allocation graph algorithm
2. Banker's algorithm
3. Safety algorithm
4. Resource-request algorithm

1. Resource-allocation graph algorithm: a claim edge Pi -> Rj indicates that
process Pi may request resource Rj; it is represented by a dashed line.
A claim edge converts to a request edge when the process actually requests
the resource.
When a resource is released by a process, the assignment edge reconverts to a
claim edge.
Resources must be claimed a priori in the system.

R1 -> P1 = assignment edge; P2 -> R1 = request edge; P2 -> R2 = claim edge

2. Banker's algorithm
The resource-allocation graph algorithm is not applicable to a
resource-allocation system with multiple instances of each resource type.
The banker's algorithm is so named because it could be used in a banking
system to ensure that the bank never allocates its available cash in such a
way that it can no longer satisfy the needs of all its customers.
Data structures:
Available: a vector of length m indicating the number of available resources
of each type. If Available[j] = k, there are k instances of resource type Rj
available.
Max: an n*m matrix defining the maximum demand of each process. If
Max[i,j] = k, then Pi may request at most k instances of resource type Rj.
Allocation: an n*m matrix defining the number of resources of each type
currently allocated to each process. If Allocation[i,j] = k, then process Pi
is currently allocated k instances of resource type Rj.
Need: an n*m matrix indicating the remaining resource need of each process.
If Need[i,j] = k, then Pi may need k more instances of resource type Rj to
complete its task.
Need[i,j] = Max[i,j] - Allocation[i,j]

3. Safety algorithm
The algorithm for finding out whether or not a system is in a safe state can
be described as follows:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize
Work := Available and Finish[i] := false for i = 1, 2, ..., n.
2. Find an i such that both
   a. Finish[i] = false
   b. Needi <= Work
If no such i exists, go to step 4.
3. Work := Work + Allocationi; Finish[i] := true; go to step 2.
4. If Finish[i] = true for all i, then the system is in a safe state.
4. Resource-request algorithm
1. If Requesti <= Needi, go to step 2. Otherwise, raise an error condition,
since the process has exceeded its maximum claim.
2. If Requesti <= Available, go to step 3. Otherwise, Pi must wait, since the
resources are not available.
3. Have the system pretend to have allocated the requested resources to
process Pi by modifying the state as follows:
   a. Available := Available - Requesti
   b. Allocationi := Allocationi + Requesti
   c. Needi := Needi - Requesti
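The safety algorithm above can be written directly in code. The data below is
a commonly used textbook example, not taken from these notes, and the
`is_safe` helper is an illustrative sketch:

```python
def is_safe(available, max_demand, allocation):
    """Banker's safety algorithm. Returns (safe?, safe sequence).
    available: length-m vector; max_demand, allocation: n x m matrices."""
    n, m = len(max_demand), len(available)
    need = [[max_demand[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)            # Work := Available
    finish = [False] * n              # Finish[i] := false
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):    # Work := Work + Allocation_i
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progress = True
    return all(finish), sequence

# Classic five-process, three-resource example (assumed data)
available  = [3, 3, 2]
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
safe, seq = is_safe(available, max_demand, allocation)
print(safe, seq)  # True [1, 3, 4, 0, 2] -- safe sequence P1, P3, P4, P0, P2
```

The resource-request algorithm would tentatively apply a request, run this
check, and roll back if the resulting state is unsafe.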
3. Deadlock Detection
If a system employs neither a deadlock-prevention nor a deadlock-avoidance
algorithm, then it requires: an algorithm that examines the state of the
system to determine whether a deadlock has occurred, and an algorithm to
recover from the deadlock.
Single instance of each resource type: maintain a wait-for graph whose nodes
are processes; an edge Pi -> Pj exists if Pi is waiting for Pj. Periodically
invoke an algorithm that searches for a cycle in the graph. An algorithm to
detect a cycle in a graph requires an order of n^2 operations, where n is the
number of vertices in the graph.
Several instances of each resource type:
The wait-for graph scheme is not applicable to a resource-allocation system
with multiple instances of each resource type.

Available: a vector of length m indicating the number of available resources
of each type.
Allocation: an n*m matrix defining the number of resources of each type
currently allocated to each process.

Request: an n*m matrix indicating the current request of each process. If
Request[i,j] = k, then process Pi is requesting k more instances of resource
type Rj.
Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize
Work := Available; for i = 1, 2, ..., n, if Allocationi != 0 then
Finish[i] := false, otherwise Finish[i] := true.
2. Find an index i such that both
   a. Finish[i] = false
   b. Requesti <= Work
If no such i exists, go to step 4.
3. Work := Work + Allocationi; Finish[i] := true; go to step 2.
4. If Finish[i] = false for some i, 1 <= i <= n, then the system is in a
deadlock state. Moreover, if Finish[i] = false, then process Pi is
deadlocked.
Recovery from deadlock
When a detection algorithm determines that a deadlock exists, one possibility
is to inform the operator that a deadlock has occurred and let the operator
deal with it manually. The other possibility is to let the system recover
from the deadlock automatically. There are two options for breaking a
deadlock:
Process termination
To eliminate a deadlock by aborting processes, we use one of two methods:
 Abort all deadlocked processes
 Abort one process at a time until the deadlock cycle is eliminated
Resource preemption:
 Selecting a victim: minimize cost.
 Rollback: return to some safe state and restart the process from that
state.
 Starvation: the same process may always be picked as the victim; include
the number of rollbacks in the cost factor.

Concurrent Processes in Operating System


Concurrent processing is a computing model in which multiple processors
execute instructions simultaneously for better performance. Concurrent means
occurring at the same time as something else. Tasks are broken into subtasks,
which are then assigned to different processors to be performed
simultaneously, instead of sequentially as they would be if performed by one
processor. Concurrent processing is sometimes used as a synonym for parallel
processing.

The terms real and virtual concurrency in concurrent processing:

Multiprogramming Environment:

In a multiprogramming environment, multiple tasks are shared by one
processor. Only virtual concurrency can be achieved: the operating system
switches the processor among the tasks, so each task behaves as if it had a
dedicated processor. The multiprogramming environment is shown in the figure.
Multiprocessing Environment:

In a multiprocessing environment, two or more processors are used with shared
memory. Only one virtual address space is used, which is common to all
processors. All tasks reside in shared memory. In this environment,
concurrency is supported in the form of concurrently executing processes. The
tasks executed on different processors communicate with each other through
shared memory. The multiprocessing environment is shown in the figure.

Distributed Processing Environment:

In a distributed processing environment, two or more computers are connected
to each other by a communication network or high-speed bus. There is no
shared memory between the processors, and each computer has its own local
memory. Hence a distributed application consists of concurrent tasks,
distributed over the network, communicating via messages. The distributed
processing environment is shown in the figure.

Process:
A process is a program in execution. The execution of a process must progress
in a sequential fashion. A process is more than the program code, which is
known as the text section. A process generally also includes the process
stack, containing temporary data, etc. A program is a passive entity, such as
the contents of a file stored on disk, whereas a process is an active entity,
with a program counter.
Process state:
As a process executes, it changes state. The state of a process is defined in
part by the current activity of that process. Each process may be in one of
the following states:

New: the process is being created.
Running: instructions are being executed.
Waiting: the process is waiting for some event to occur (such as I/O
completion or reception of a signal).
Ready: the process is waiting to be assigned to a processor.
Terminated: the process has finished execution.

Process control block (PCB):

Each process is represented in the OS by a PCB, also called a "task control
block".

Pointer: indicates the address of the process.
Program counter: the counter indicates the address of the next instruction to
be executed for this process.
CPU registers: these include accumulators, index registers, and any
condition-code information.
CPU scheduling information: this information includes the process priority,
pointers to scheduling queues, and any other scheduling parameters.
Memory management information: this includes the information in the base and
limit registers, the page tables or the segment tables, etc.
Accounting information: this information includes the amount of CPU and real
time used, time limits, etc.
I/O status information: this information includes the list of I/O devices
allocated to the process.
Context switch: saving the state of the old process and loading the saved
state of the new process is known as a "context switch". Its speed varies
from machine to machine, depending on the memory speed and the number of
registers that must be copied. Context-switch times are highly dependent on
hardware support.

Process synchronization:
Process synchronization means sharing system resources among processes in
such a way that concurrent access to shared data is handled, thereby
minimizing the chance of inconsistent data. Maintaining data consistency
demands mechanisms to ensure synchronized execution of cooperating processes.

Critical section problem


Consider a system consisting of n processes. Each process has a segment of
code called a critical section, in which the process may be changing shared
variables, writing a file, and so on. The important feature of the system is
that when one process is executing in its critical section, no other process
is allowed to execute in its critical section. Each process must request
permission to enter its critical section; the section of code implementing
this request is the entry section. The critical section may be followed by an
exit section. The remaining code is the remainder section.
Solution to the critical section problem:
A solution to the critical section problem must satisfy the following three
requirements.
Mutual exclusion: if process Pi is executing in its critical section, then no
other process can be executing in its critical section.
Progress: if no process is in its critical section and one or more processes
want to execute their critical sections, then one of those processes must be
allowed to enter its critical section.
Bounded waiting: after a process makes a request to enter its critical
section, there is a limit on how many other processes can enter their
critical sections before this process's request is granted; after the limit
is reached, the system must grant the process permission to enter its
critical section.
Two-process solutions:
These solutions apply to only two processes at a time. The processes are
numbered P0 and P1; Pi denotes one process and Pj denotes the other, i.e.,
j = 1 - i.

Algorithm 1:
Here, the processes share a common integer variable turn, initialized to 0 or
1. If turn = i, process Pi is allowed to execute in its critical section. The
structure of process Pi is shown below.

This solution ensures that only one process at a time can be in its critical
section. However, it does not satisfy the progress requirement, because it
requires strict alternation: if turn = 0 and only P1 is ready to enter its
critical section, P1 still cannot do so.
Algorithm 2:
The problem with algorithm 1 is that it does not retain sufficient
information about the state of each process; it remembers only which process
is allowed to enter its critical section. To remedy this problem, we can
replace the variable turn with the following array:
Var flag: array[0..1] of Boolean;

The elements of the array are initialized to false. If flag[i] is true, Pi is
ready to enter its critical section. The structure of process Pi is shown
below.
In this solution, mutual exclusion is satisfied, but the progress and
bounded-waiting conditions are not satisfied.
Algorithm 3:
By combining the key ideas of algorithms 1 and 2, we obtain a correct
solution to the critical section problem, where the processes share two
variables:

Var flag: array[0..1] of Boolean;
Turn: 0..1;

Initially flag[0] = flag[1] = false, and the value of turn is immaterial (but
it is either 0 or 1). The structure of process Pi is shown below.

To prove that the solution is correct, we need to show that:

1. Mutual exclusion is preserved.
2. The progress requirement is satisfied.
3. The bounded-waiting requirement is met.
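This combined solution is known as Peterson's algorithm. The sketch below
runs it with two Python threads; note that it behaves correctly here only
because CPython's interpreter lock makes the interleaving effectively
sequentially consistent, so this is a demonstration, not production code
(real Python code would use `threading.Lock`):

```python
import threading

# Shared variables of the combined solution (Peterson's algorithm)
flag = [False, False]   # flag[i]: Pi is ready to enter its critical section
turn = 0                # which process must defer
counter = 0             # shared data the critical section protects

def process(i, iterations):
    global turn, counter
    j = 1 - i
    for _ in range(iterations):
        flag[i] = True          # entry section: announce intent
        turn = j                # give the other process priority
        while flag[j] and turn == j:
            pass                # busy-wait while Pj is inside or goes first
        counter += 1            # critical section
        flag[i] = False         # exit section

t0 = threading.Thread(target=process, args=(0, 10000))
t1 = threading.Thread(target=process, args=(1, 10000))
t0.start(); t1.start(); t0.join(); t1.join()
print(counter)  # 20000 -- no updates were lost
```

Setting `turn = j` before spinning is what provides progress and bounded
waiting: each process yields priority to the other, so neither can starve.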

Semaphores:
The solutions to the critical section problem above are not suitable for more
complex problems. To overcome this difficulty, we can use a tool called a
semaphore. A semaphore is a synchronization tool. In 1965, Dijkstra proposed
a new and very significant technique for managing concurrent processes and
synchronizing the progress of interacting processes; this new technique is
called the semaphore. A semaphore S is an integer variable that, apart from
initialization, is accessed only through two standard atomic operations
called wait and signal, designated by P() and V() respectively.
The classical definitions of wait and signal are:
Wait: decrement the value of its argument S as soon as the result would be
non-negative.
Wait(S): while S <= 0 do no-op; S := S - 1;
Signal: increment the value of its argument S as an indivisible operation.
Signal(S): S := S + 1;
Use of semaphores:
Semaphores can be used to deal with the n-process critical section problem.
The n processes share a semaphore mutex (mutual exclusion), which is
initialized to 1. Each process Pi is organized as follows:
Repeat
  Wait(mutex);
    critical section
  Signal(mutex);
    remainder section
Until false;
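The same repeat/wait/signal pattern can be expressed with a binary semaphore
from Python's standard library (the counter and thread counts are
illustrative):

```python
import threading

mutex = threading.Semaphore(1)   # binary semaphore, initialized to 1
balance = 0

def deposit(times):
    global balance
    for _ in range(times):
        mutex.acquire()          # Wait(mutex)
        balance += 1             # critical section
        mutex.release()          # Signal(mutex)
        # remainder section

workers = [threading.Thread(target=deposit, args=(5000,)) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(balance)  # 20000 -- every increment happened under mutual exclusion
```

Because only one thread at a time can pass `acquire()`, the read-modify-write
on `balance` can never be interleaved, so no updates are lost.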
Properties of semaphores:

1. Simple.
2. Works with many processes.
3. There can be many different critical sections guarded by different
semaphores.
4. Each critical section has a unique semaphore.

Types of semaphores:
Semaphores are mainly of two types:
Binary semaphores:
1. A binary semaphore is a special form of semaphore used for implementing
mutual exclusion; hence it is often called a mutex.
2. A binary semaphore is initialized to 1 and takes only the values 0 and 1
during execution of a program.
Counting semaphores:
These are used to implement bounded concurrency.
Implementation:
The main disadvantage of the mutual exclusion solutions above is that they
require busy waiting. While a process is in its critical section, any other
process that tries to enter its critical section must loop continuously in
the entry code. This continual looping is clearly a problem in a real
multiprogramming system, where a single CPU is shared among many processes.
Busy waiting wastes CPU cycles that some other process might be able to use
productively. This type of semaphore is also called a "spinlock". Spinlocks
are useful in multiprocessor systems. To avoid busy waiting, the semaphore
operations can be defined with a waiting list as:
Wait(S): S.value := S.value - 1;
  if S.value < 0
  then begin
    add this process to S.L (the list of waiting processes);
    block;
  end;
Signal(S): S.value := S.value + 1;
  if S.value <= 0
  then begin
    remove a process P from S.L;
    wakeup(P);
  end;
Limitations of semaphores:
1. Priority inversion is a big limitation of semaphores.
2. With improper use, a process may block indefinitely; such a situation is
called deadlock.

What is Inter Process Communication?


In general, inter-process communication is a type of mechanism usually
provided by the operating system (or OS). The main aim or goal of this
mechanism is to provide communication between several processes. In short,
inter-process communication allows a process to let another process know that
some event has occurred.
Let us now look at the general definition of inter-process communication,
which will explain the same thing that we have discussed above.
Definition
"Inter-process communication is used for exchanging useful information
between numerous threads in one or more processes (or programs)."
To understand inter process communication, you can consider the following
given diagram that illustrates the importance of inter-process communication:

Role of Synchronization in Inter Process Communication


It is one of the essential parts of inter-process communication. Typically,
it is provided by inter-process communication control mechanisms, but
sometimes it can also be controlled by the communicating processes
themselves.
These are the following methods that used to provide the synchronization:
1. Mutual Exclusion
2. Semaphore
3. Barrier
4. Spinlock
Mutual Exclusion:-
It is generally required that only one process thread can enter the critical
section at a time. This also helps in synchronization and creates a stable state to
avoid the race condition.
Semaphore:-
Semaphore is a type of variable that usually controls the access to the shared
resources by several processes. Semaphore is further divided into two types which
are as follows:
1. Binary Semaphore
2. Counting Semaphore
Barrier:-
A barrier does not allow an individual process to proceed until all the processes reach it. It is used by many parallel languages, and collective routines impose barriers.
Spinlock:-
Spinlock is a type of lock, as its name implies. A process trying to acquire a spinlock waits in a loop, repeatedly checking whether the lock is available. This is known as busy waiting because even though the process is active, it does not perform any useful work.
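As a rough sketch of the busy-waiting idea (an assumption for illustration, since a real spinlock relies on an atomic test-and-set instruction in hardware), a spinlock can be imitated in Python with a non-blocking try-acquire in a loop:

```python
import threading

class SpinLock:
    """Toy spinlock: busy-waits using a non-blocking try-acquire.
    A real spinlock uses an atomic test-and-set at the hardware level."""
    def __init__(self):
        self._flag = threading.Lock()

    def acquire(self):
        # busy wait: keep checking whether the lock is available
        while not self._flag.acquire(blocking=False):
            pass

    def release(self):
        self._flag.release()

lock = SpinLock()
total = 0

def worker():
    global total
    for _ in range(1000):
        lock.acquire()
        total += 1          # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)  # 2000
```

The spinning loop is exactly the "busy waiting" the text describes: the thread stays runnable and burns CPU cycles instead of sleeping.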
Approaches to Inter process Communication
We will now discuss some different approaches to inter-process communication
which are as follows:

These are a few different approaches for Inter- Process Communication:

1. Pipes
2. Shared Memory
3. Message Queue
4. Direct Communication
5. Indirect communication
6. Message Passing
7. FIFO
To understand them in more detail, we will discuss each of them individually.
Pipe:-
The pipe is a type of data channel that is unidirectional in nature. It means that the data in this type of channel can be moved in only a single direction at a time. Still, one can use two channels of this type, so that data can be sent and received between two processes. Typically, it uses the standard methods for input and output. Pipes are used in all types of POSIX systems and in different versions of the Windows operating system as well.
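As a minimal sketch, Python's os.pipe exposes this OS facility directly. Normally the two ends would be split across two processes (for example across a fork); here both ends live in one process purely for brevity:

```python
import os

r, w = os.pipe()                      # create the unidirectional channel: read end, write end
os.write(w, b"hello through a pipe")  # the writing process would use the write end ...
os.close(w)                           # close the write end to signal end of data
data = os.read(r, 1024)               # ... and the reading process would use the read end
os.close(r)
print(data.decode())  # hello through a pipe
```

For bidirectional communication, two such pipes are created, one for each direction, exactly as the text suggests.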
Shared Memory:-
It can be referred to as a type of memory that can be used or accessed by
multiple processes simultaneously. It is primarily used so that the processes can
communicate with each other. Therefore the shared memory is used by almost all
POSIX and Windows operating systems as well.
Message Queue:-
In general, several processes are allowed to read and write messages to the message queue. Messages are stored in the queue until their recipients retrieve them. In short, the message queue is very helpful for inter-process communication and is used by all operating systems.
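As an in-process stand-in for an OS message queue (an assumption for illustration: Python's queue.Queue connects threads rather than separate processes, but the store-until-retrieved behaviour is the same), a sender and a receiver can be sketched as:

```python
import threading
import queue

mq = queue.Queue()               # stand-in for an OS-level message queue

received = []

def sender():
    for msg in ("ping", "pong", "done"):
        mq.put(msg)              # messages stay queued until retrieved

def receiver():
    while True:
        msg = mq.get()           # blocks until a message arrives
        received.append(msg)
        if msg == "done":
            break

t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # ['ping', 'pong', 'done']
```

A true inter-process queue (e.g. a POSIX message queue) behaves analogously, with the kernel holding the undelivered messages.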

Message Passing:-
It is a type of mechanism that allows processes to synchronize and communicate with each other. By using message passing, processes can communicate with each other without resorting to shared variables.
Usually, the inter-process communication mechanism provides two operations that are as follows:
1. send(message)
2. receive(message)
Direct Communication:-
In this type of communication process, usually, a link is created or established
between two communicating processes. However, in every pair of communicating
processes, only one link can exist.
Indirect Communication
Indirect communication can only be established when processes share a common mailbox, and each pair of communicating processes may share several communication links. These shared links can be unidirectional or bi-directional.
FIFO:-
It is a type of general communication between two unrelated processes. It
can also be considered as full-duplex, which means that one process can
communicate with another process and vice versa.

Some other different approaches


Socket:-
It acts as a type of endpoint for sending or receiving data in a network. It works both for data sent between processes on the same computer and for data sent between different computers on the same network. Hence, it is used by several types of operating systems.
File:-
A file is a type of data record or a document stored on the disk and can be
acquired on demand by the file server. Another most important thing is that several
processes can access that file as required or needed.
Signal:-
As the name implies, signals are used in inter-process communication in a minimal way. Typically, they are system messages sent by one process to another. Therefore, they are not used for sending data but for remote commands between multiple processes.
Why do we need inter-process communication?
There are numerous reasons to use inter-process communication for sharing data. Here are some of the most important ones:
1. It helps to speed up modularity
2. Computational speedup
3. Privilege separation
4. Convenience
5. It helps cooperating processes to communicate with each other and synchronize their actions.
Process synchronization:
Process synchronization means sharing system resources among processes in such a way that concurrent access to shared data is handled properly, thereby minimizing the chance of inconsistent data. Maintaining data consistency demands mechanisms to ensure synchronized execution of cooperating processes.
Classical problems of synchronization:
These are examples of a large class of concurrency-control problems. Semaphores are used for synchronization in their solutions. Some of the problems are:
1. Bounded-buffer problem
2. Readers-writers problem
3. Dining-philosophers problem
The bounded - buffer problem:
We assume that the pool consists of n buffers, each capable of holding one item. The mutex semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the value 1. The empty and full semaphores count the number of empty and full buffers, respectively. The semaphore empty is initialized to the value n; the semaphore full is initialized to the value 0.
The code for the producer process is below:

repeat
    ...
    produce an item in nextp
    ...
    wait(empty);
    wait(mutex);
    ...
    add nextp to buffer
    ...
    signal(mutex);
    signal(full);
until false;

The structure of the producer process
The code for the consumer process is shown below:

repeat
    wait(full);
    wait(mutex);
    ...
    remove an item from buffer to nextc
    ...
    signal(mutex);
    signal(empty);
    ...
    consume the item in nextc
    ...
until false;

The structure of the consumer process
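The producer and consumer structures above can be sketched directly with library semaphores. This is a minimal single-producer, single-consumer version; the buffer size n = 5 and the ten produced items are illustrative assumptions:

```python
import threading
from collections import deque

N = 5
buffer = deque()
empty = threading.Semaphore(N)   # counts empty slots, initialized to n
full = threading.Semaphore(0)    # counts full slots, initialized to 0
mutex = threading.Lock()         # mutual exclusion on the buffer pool

items = list(range(10))
consumed = []

def producer():
    for nextp in items:
        empty.acquire()          # wait(empty)
        with mutex:              # wait(mutex) ... signal(mutex)
            buffer.append(nextp) # add nextp to buffer
        full.release()           # signal(full)

def consumer():
    for _ in range(len(items)):
        full.acquire()           # wait(full)
        with mutex:
            consumed.append(buffer.popleft())  # remove an item to nextc
        empty.release()          # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The empty semaphore blocks the producer when all n slots are in use, and the full semaphore blocks the consumer when the buffer is empty, exactly as in the pseudocode.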
The readers and writers problem:
A data object (such as a file or record) is to be shared among several concurrent processes. Some of these processes may want only to read the content of the shared object, whereas others may want to update it. We distinguish between these two types of processes by referring to those processes that are interested only in reading as readers, and to the rest as writers.
The code for the writer process is:

wait(wrt);
...
writing is performed
...
signal(wrt);

The structure of the writer process

The code for the reader process is:

wait(mutex);
readcount := readcount + 1;
if readcount = 1 then wait(wrt);
signal(mutex);
...
reading is performed
...
wait(mutex);
readcount := readcount - 1;
if readcount = 0 then signal(wrt);
signal(mutex);

The structure of a reader process
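The reader and writer structures above translate naturally into Python, with mutex protecting readcount and wrt playing the role of the writers' semaphore. The small sequential demo at the end is illustrative only:

```python
import threading

read_count = 0
mutex = threading.Lock()   # protects read_count
wrt = threading.Lock()     # the 'wrt' semaphore: excludes writers
shared = []                # the shared data object

def writer(value):
    with wrt:                      # wait(wrt) ... signal(wrt)
        shared.append(value)       # writing is performed

def reader(out):
    global read_count
    with mutex:
        read_count += 1
        if read_count == 1:
            wrt.acquire()          # first reader locks out writers
    out.append(list(shared))       # reading is performed
    with mutex:
        read_count -= 1
        if read_count == 0:
            wrt.release()          # last reader lets writers back in

writer(7)
writer(8)
out = []
reader(out)
print(out[0])  # [7, 8]
```

Any number of readers may hold the object at once, but a writer waits until the last reader releases wrt.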
The Dining - Philosophers Problem :
Consider five philosophers who spend their lives thinking and eating. The philosophers share a common circular table surrounded by five chairs, each belonging to one philosopher. In the centre of the table there is a bowl of rice, and the table is laid with five single chopsticks.
A philosopher may pick up only one chopstick at a time. When a hungry philosopher has both her chopsticks at the same time, she eats without releasing her chopsticks. When she is finished eating, she puts down both of her chopsticks and starts thinking again.
One simple solution is to represent each chopstick by a semaphore. A philosopher tries to grab a chopstick by executing a wait operation on that semaphore; she releases her chopsticks by executing the signal operation on the appropriate semaphores. Thus, the shared data are
var chopstick: array [0..4] of semaphore;

repeat
    wait(chopstick[i]);
    wait(chopstick[(i+1) mod 5]);
    ...
    eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i+1) mod 5]);
    ...
    think
    ...
until false;

The structure of philosopher i
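The structure above can deadlock if all five philosophers pick up their left chopstick at the same moment. The sketch below keeps the same semaphore-per-chopstick idea but, as a deadlock-avoidance assumption not present in the original pseudocode, makes every philosopher acquire her lower-numbered chopstick first (a global resource ordering):

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]  # one semaphore per chopstick
meals = [0] * N

def philosopher(i, rounds=3):
    # acquire chopsticks in global index order to rule out circular wait
    first, second = sorted((i, (i + 1) % N))
    for _ in range(rounds):
        chopstick[first].acquire()    # wait(chopstick[...])
        chopstick[second].acquire()
        meals[i] += 1                 # eat
        chopstick[second].release()   # signal(chopstick[...])
        chopstick[first].release()
        # think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # [3, 3, 3, 3, 3]: every philosopher ate all her rounds
```

Because no cycle of hold-and-wait can form under the global ordering, all threads terminate; with the naive left-then-right order the program could hang forever.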

Unit-4
Memory Management
Logical VS physical address space:
*The concept of a logical address space that is bound to a separate physical address
space is central to proper memory management.
- Logical address: generated by the CPU; also referred to as a virtual address.
- Physical address: address seen by the memory unit.
*Logical and physical addresses are the same in compile-time and load-time address-
binding schemes; logical (virtual) and physical addresses differ in the execution-time
address-binding scheme.
*The set of all physical addresses corresponding to these logical addresses is referred to
as the physical address space.
*The run-time mapping from virtual to physical addresses is done by the memory-
management unit (MMU), which is a hardware device.
*There are a number of different schemes for accomplishing such mapping, as follows:
*The base register is now called a relocation register.
*The value in the relocation register is added to every address generated by a user
process at the time it is sent to memory.
*The user program never sees the real physical addresses.
*The program can create a pointer to location 346, store it in memory, manipulate it,
and compare it to other addresses - all as the number 346.
*The user program deals with logical addresses.
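The relocation-register scheme can be sketched in a few lines. The base value 14000 and the limit 5000 are illustrative assumptions; the MMU would perform this addition (and the limit check) in hardware for every memory reference:

```python
RELOCATION = 14000   # hypothetical value loaded into the relocation register
LIMIT = 5000         # hypothetical limit register: size of the logical address space

def translate(logical):
    """Map a logical (virtual) address to a physical address, as the MMU would."""
    if logical >= LIMIT:
        raise MemoryError("addressing error: logical address beyond limit")
    return RELOCATION + logical

print(translate(346))  # 14346: the program still thinks in terms of address 346
```

The user program only ever manipulates the number 346; the physical location 14346 remains invisible to it.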
Memory Allocation Methods :
*The direct-access nature of disks gives us flexibility in the implementation of files.
*In almost every case, many files will be stored on the same disk.
*The main problem is how to allocate space to these files so that disk space is utilized
effectively and files can be accessed quickly.
*Three major methods of allocating disk space are in wide use; they are
1. Contiguous allocation
2. Linked allocation
3. Indexed allocation
Contiguous Allocation:
*The contiguous allocation method requires each file to occupy a set of contiguous
blocks on the disk.
*Disk addresses define a linear ordering on the disk.
*Notice that with this ordering, accessing block b+1 after block b normally requires
no head movement.
*When head movement is needed, it is only one track.
*Thus, the number of disk seeks required for accessing contiguously allocated files is
minimal, as is seek time when a seek is finally needed. Characteristics:
- Random access
- wasteful of space
- files cannot grow
Linked allocation of disk space
*Each block contains a pointer to the next block.
*To create a new file, we simply create a new entry in a directory.
*With linked allocation, each directory entry has a pointer to the first disk block of the file.
*This pointer is initialized to nil (the end-of-list pointer value) to signify an empty
file.
*The size field is also set to 0.
*There is no external fragmentation with linked allocation, and any free block on the
free-space list can be used to satisfy a request.
*The major problem is that it can be used effectively only for sequential-access files.
*To find the i'th block of a file, we must start at the beginning of the file and follow the
pointers until we get to the i'th block.
*Another disadvantage of linked allocation is the space required for the pointers.
*The usual solution to this problem is to collect blocks into multiples, called clusters,
and to allocate clusters rather than blocks.
Indexed Allocation:
*Linked allocation solves the external-fragmentation and size-declaration problems of
contiguous allocation.
*Linked allocation cannot support efficient direct access.
*Indexed allocation solves this problem by bringing all the pointers together into one
location: the index block.
*Each file has its own index block, which is an array of disk-block addresses.
*The i'th entry in the index block points to the i'th block of the file.
*The directory contains the address of the index block.

*To read the i'th block, we use the pointer in the i'th index-block entry to find and read the
desired block.
*Indexed allocation supports direct access, without suffering from external
fragmentation, because any free block on the disk may satisfy a request for more
space.
*Indexed allocation does suffer from wasted space.

*The pointer overhead of the index block is generally greater than the pointer overhead
of linked allocation.
Paging:
*Logical address space of a process can be non-contiguous; a process is allocated physical
memory whenever the latter is available.
*Divide physical memory into fixed-size blocks called frames; the size is a power of 2,
between 512 bytes and 8192 bytes.
*Divide logical memory into blocks of the same size, called pages.
*Keep track of all free frames.
*To run a program of size n pages, we need to find n free frames and load the program.
*Set up a page table to translate logical to physical addresses.
*Paging suffers from internal fragmentation.
Address translation scheme:
 Address generated by the CPU is divided into:
 Page number (p): used as an index into a page table, which contains the base address of
each page in physical memory.
 Page offset (o): combined with the base address to define the physical memory address
that is sent to the memory unit.

Implementation of page table


 Page table is kept in main memory.
 Page-table base register (PTBR) points to the page table.
 Page-table length register (PTLR) indicates the size of the page table.
 In this scheme every data/instruction access requires two memory accesses: one for
the page table and one for the data/instruction.
 The two-memory-access problem can be solved by the use of a special fast-lookup
hardware cache called associative memory or translation look-aside buffer (TLB).
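The page-number/offset split and the page-table lookup can be sketched in a few lines. The 1K page size and the page-to-frame mapping here are illustrative assumptions:

```python
PAGE_SIZE = 1024                       # page/frame size: a power of two (1K words)
page_table = [5, 6, 1, 2]              # hypothetical: page p is held in frame page_table[p]

def translate(logical):
    """Split a logical address into (page number, offset) and map it to a physical address."""
    p, o = divmod(logical, PAGE_SIZE)  # p indexes the page table, o stays unchanged
    frame = page_table[p]
    return frame * PAGE_SIZE + o

print(translate(2 * PAGE_SIZE + 100))  # page 2 -> frame 1, so 1*1024 + 100 = 1124
```

A TLB would cache recent (p, frame) pairs so that most translations avoid the extra memory access to the page table.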

Segmentation:
*Memory-management scheme that supports the user view of memory.
*A program is a collection of segments. A segment is a logical unit such as:
 program
 procedure
 function
 method
 object
 local variables, global variables
 stack
 symbol table , arrays
Segmentation Architecture :
 Logical address consists of a two-tuple:
 < segment-number, offset >
 Segment table - maps two-dimensional logical addresses to one-dimensional physical addresses; each table entry has:
 base - contains the starting physical address where the segment resides in memory.
 limit - specifies the length of the segment.
 Segment-table base register (STBR) points to the segment table's location in
memory.
 Segment-table length register (STLR) indicates the number of segments used by a
program; segment number s is legal if s < STLR.
 Relocation
 dynamic
 by segment table
 Sharing
 shared segments
 same segment number
 Allocation
 first fit / best fit
 external fragmentation

 The MULTICS system solved the problems of external fragmentation and lengthy search
times by paging the segments.
 This solution differs from pure segmentation in that the segment-table entry contains not
the base address of the segment, but rather the base address of a page table for
this segment.
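The base/limit lookup described above can be sketched as follows. The segment table contents are illustrative assumptions; the hardware would trap to the operating system on an out-of-range segment number or offset:

```python
# hypothetical segment table: one (base, limit) pair per segment
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]
STLR = len(segment_table)              # segment-table length register

def translate(s, offset):
    """Map a two-dimensional <segment, offset> address to a physical address."""
    if s >= STLR:
        raise MemoryError("illegal segment number")   # s is legal only if s < STLR
    base, limit = segment_table[s]
    if offset >= limit:
        raise MemoryError("offset beyond segment limit")  # hardware trap
    return base + offset

print(translate(2, 53))  # segment 2 starts at 4300, so 4300 + 53 = 4353
```

The limit check is what makes segmentation a protection mechanism as well as a translation mechanism.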

Thrashing:
 If the number of frames allocated to a low-priority process falls below the minimum
number required by the computer architecture, we must suspend that process's
execution.
 We should then page out its remaining pages, freeing all its allocated frames.
 This provision introduces a swap-in, swap-out level of intermediate CPU scheduling.
 Although it is technically possible to reduce the number of allocated frames to the
minimum, there is some (larger) number of pages that are in active use.
 If the process does not have this number of frames, it will very quickly page fault.
 The process continues to fault, replacing pages for which it will then fault and bring
back in right away.
 This high paging activity is called thrashing.
 A process is thrashing if it is spending more time paging than executing.

Virtual memory:
Virtual memory is a concept used in some large computer systems that
permits the user to construct programs as though a large memory space were
available, equal to the totality of auxiliary memory.
Virtual memory is used to give programmers the illusion that they have a
very large memory at their disposal, even though the computer actually has a
relatively small main memory. A virtual memory system provides a mechanism for
translating program-generated addresses into correct main memory locations. This is
done dynamically, while programs are being executed in the CPU. The translation or
mapping is handled automatically by the hardware by means of a mapping table.
Address space and memory space:
An address used by a programmer will be called a virtual address, and the
set of such addresses the address space. An address in main memory is called a
physical address, and the set of such addresses the memory space.
The address space is allowed to be larger than the memory space in
computers with virtual memory.
In a multiprogram computer system, programs and data are transferred to
and from auxiliary memory and main memory based on demands imposed by the
CPU. Suppose that program 1 is currently being executed in the CPU; program 1 and
a portion of its associated data are moved from auxiliary memory into main memory.
Suppose that, in a virtual memory system, the address field of an instruction code
consists of 20 bits but physical memory addresses must be specified with only 15 bits.
Then the CPU will reference instructions and data with a 20-bit address, but the
information at this address must be taken from physical memory.

Relation b/w address and memory space in a virtual memory system.


We must map a virtual address of 20 bits to a physical address of 15 bits. The
mapping is a dynamic operation, which means that every address is translated
immediately as a word is referenced by the CPU.
The mapping table may be stored in a separate memory or in main memory.
Address mapping using pages:
The table implementation of the address mapping is simplified if the
information in the address space and the memory space is each divided into groups
of fixed size.
For example, if a page or block consists of 1K words, then, using the
previous example, the address space is divided into 1024 pages and main memory is
divided into 32 blocks. Although both a page and a block are split into groups of 1K
words, a page refers to the organization of address space, while a block refers to
the organization of memory space. The programs are also considered to be split into
pages; portions of programs are moved from auxiliary memory to main memory in
records equal to the size of a page, and the term block is used to denote the
corresponding unit of memory space.
Consider a computer with an address space of 8k and a memory space of 4k .
if we split each into groups of 1k words we obtain eight pages and four blocks.
The mapping from address space to memory space is facilitated if each virtual
address is considered to be represented by two numbers: a page number address
and a line with in the page.
The line address in address space and memory space is the same , the only
mapping required is from a page number to a block number.
The memory-page table consists of eight words, one for each page. The
address in the page table denotes the page number, and the content of the word
gives the block number where that page is stored in main memory. The table shows
that pages 1, 2, 5 and 6 are now available in main memory in blocks 3, 0, 1 and 2,
respectively. A presence bit in each location indicates whether the page has been
transferred from auxiliary memory into main memory.
The CPU references a word in memory with a virtual address of 13 bits. The
three high-order bits of the virtual address specify a page number and also an
address into the memory-page table.
The content of the word in the memory-page table at the page-number
address is read out into the memory-table buffer register. If the presence bit is a 1,
the block number thus read is transferred to the two high-order bits of the
main memory address register. The line number from the virtual address is
transferred into the 10 low-order bits of the memory address register.
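The mapping just described, with the same 13-bit virtual address, the same resident pages (1, 2, 5, 6 in blocks 3, 0, 1, 2) and a presence check, can be sketched as:

```python
# 13-bit virtual address = 3-bit page number + 10-bit line number (1K words per page)
# pages 1, 2, 5, 6 are resident in blocks 3, 0, 1, 2, as in the table above
page_table = {1: 3, 2: 0, 5: 1, 6: 2}   # only pages with presence bit = 1 appear

def map_address(virtual):
    """Translate a 13-bit virtual address to a 12-bit main memory address."""
    page, line = virtual >> 10, virtual & 0x3FF   # high 3 bits / low 10 bits
    if page not in page_table:                    # presence bit = 0
        raise LookupError("page fault: page not in main memory")
    return (page_table[page] << 10) | line        # block number + line number

print(map_address((5 << 10) | 20))  # page 5 -> block 1: 1*1024 + 20 = 1044
```

When the presence bit is 0, real hardware would raise a page-fault interrupt so the operating system can bring the page in from auxiliary memory.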

Unit-5
File and I/O Management ,OS Security

Directory Structure
What is a directory?
Directory can be defined as the listing of the related files on the disk. The
directory may store some or the entire file attributes.
To get the benefit of different file systems on the different operating systems,
A hard disk can be divided into the number of partitions of different sizes. The
partitions are also called volumes or mini disks.
Each partition must have at least one directory in which, all the files of the
partition can be listed. A directory entry is maintained for each file in the directory
which stores all the information related to that file.
A directory can be viewed as a file which contains the metadata of a bunch
of files. Every directory supports a number of common operations on files:
1. File Creation
2. Search for the file
3. File deletion
4. Renaming the file
5. Traversing Files
6. Listing of files
Single Level Directory
The simplest method is to have one big list of all the files on the disk. The
entire system will contain only one directory which is supposed to mention all the
files present in the file system. The directory contains one entry per each file present
on the file system.

This type of directories can be used for a simple system.


Advantages
1. Implementation is very simple.
2. If the sizes of the files are very small then the searching becomes faster.
3. File creation, searching, deletion is very simple since we have only one directory.
Disadvantages
1. We cannot have two files with the same name.
2. The directory may be very big, therefore searching for a file may take much time.
3. Protection cannot be implemented for multiple users.
4. There is no way to group the same kind of files.
5. Choosing a unique name for every file is a bit complex and limits the number of
files in the system, because most operating systems limit the number of
characters used to construct a file name.
Two Level Directory
In two level directory systems, we can create a separate directory for each
user. There is one master directory which contains separate directories dedicated to
each user. For each user, there is a different directory present at the second level,
containing group of user's file. The system doesn't let a user to enter in the other
user's directory without permission.
Characteristics of two level directory system
1. Each file has a path name of the form /user-name/file-name.
2. Different users can have the same file name.
3. Searching becomes more efficient as only one user's list needs to be traversed.
4. The same kind of files cannot be grouped into a single directory for a particular user.
Every Operating System maintains a variable as PWD which contains the present
directory name (present user name) so that the searching can be done
appropriately.
Tree Structured Directory
In Tree structured directory system, any directory entry can either be a file or
sub directory. Tree structured directory system overcomes the drawbacks of two
level directory system. The similar kind of files can now be grouped in one directory.
Each user has its own directory and it cannot enter in the other user's
directory. However, the user has the permission to read the root's data but he
cannot write or modify this. Only administrator of the system has the complete
access of root directory.
Searching is more efficient in this directory structure. The concept of current working
directory is used. A file can be accessed by two types of path, either relative or
absolute.
Absolute path is the path of the file with respect to the root directory of the
system while relative path is the path with respect to the current working directory
of the system. In tree structured directory systems, the user is given the privilege to
create the files as well as directories.

Permissions on the file and directory


A tree-structured directory system may consist of various levels, therefore
there is a set of permissions assigned to each file and directory.
The permissions are R, W and X, which stand for reading, writing and
execution of the file or directory. The permissions are assigned to three types of
users: owner, group and others.
There is an identification bit which differentiates between a directory and a file. For
a directory it is d, and for a regular file it is a hyphen (-).
For example, in a Linux listing such as drwxr-xr-x, the initial bit d represents
that the entry is a directory.

Acyclic-Graph Structured Directories


The tree-structured directory system doesn't allow the same file to exist in
multiple directories, therefore sharing is a major concern in tree-structured directory
systems. We can provide sharing by making the directory an acyclic graph. In this
system, two or more directory entries can point to the same file or sub-directory. That
file or sub-directory is then shared between the two directory entries.
These kinds of directory graphs can be made using links or aliases. We can
have multiple paths for a same file. Links can either be symbolic (logical) or hard link
(physical).
If a file gets deleted in acyclic graph structured directory system, then
1. In the case of soft link, the file just gets deleted and we are left with a dangling
pointer.
2. In the case of hard link, the actual file will be deleted only if all the references to it
gets deleted.
Operations on the File
A file is a collection of logically related data that is recorded on secondary
storage in the form of a sequence of bits, bytes, lines or records. The content of a file
is defined by its creator. The various operations which can be performed
on a file, such as read, write, open and close, are called file operations. These
operations are performed by the user by using the commands provided by the
operating system. Some common operations are as follows:
1.Create operation:
This operation is used to create a file in the file system. It is the most widely
used operation performed on the file system. To create a new file of a particular type
the associated application program calls the file system. This file system allocates
space to the file. As the file system knows the format of directory structure, so entry
of this new file is made into the appropriate directory.
2. Open operation:
This operation is the common operation performed on the file. Once the file is
created, it must be opened before performing the file processing operations. When
the user wants to open a file, it provides a file name to open the particular file in the
file system. It tells the operating system to invoke the open system call and passes
the file name to the file system.
3. Write operation:
This operation is used to write information into a file. A write system call
is issued that specifies the name of the file and the data to be
written to the file. The file length is increased by the specified value and the
file pointer is repositioned after the last byte written.
4. Read operation:
This operation reads the contents from a file. A Read pointer is maintained by
the OS, pointing to the position up to which the data has been read.
5. Re-position or Seek operation:
The seek system call re-positions the file pointers from the current position to
a specific place in the file i.e. forward or backward depending upon the user's
requirement. This operation is generally performed with those file management
systems that support direct access files.
6. Delete operation:
Deleting the file not only deletes all the data stored inside the file; it also
frees the disk space occupied by it. In order to delete the specified file,
the directory is searched. When the directory entry is located, all the associated file
space and the directory entry are released.
7. Truncate operation:
Truncating deletes a file's contents while keeping its attributes. The file is not
completely deleted; instead, the information stored inside it is discarded.

8. Close operation:
When the processing of the file is complete, it should be closed so that all the
changes made become permanent and all the resources occupied are released. On
closing, the system de-allocates all the internal descriptors that were created when the file was
opened.
9. Append operation:
This operation adds data to the end of the file.
10. Rename operation:
This operation is used to rename the existing file.
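Most of the operations above map onto ordinary system calls. As a minimal sketch (the file name demo.txt is an illustrative assumption), Python's standard library exercises create, write, append, seek, read, rename and delete in order:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")  # hypothetical file name

with open(path, "w") as f:          # create + open + write
    f.write("hello operating systems")

with open(path, "a") as f:          # append: add data at the end of the file
    f.write("!")

with open(path, "r") as f:          # open for reading
    f.seek(6)                       # re-position (seek) the read pointer past "hello "
    data = f.read()                 # read from there to the end

new_path = path.replace("demo", "renamed")
os.rename(path, new_path)           # rename the existing file
os.remove(new_path)                 # delete: directory entry and disk space released
print(data)  # operating systems!
```

Close happens implicitly when each with block ends, which is when the OS releases the file's internal descriptors.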
Allocation Methods
There are various methods which can be used to allocate disk space to the
files. Selection of an appropriate allocation method will significantly affect the
performance and efficiency of the system. Allocation method provides a way in which
the disk will be utilized and the files will be accessed.
There are following methods which can be used for allocation.
1. Contiguous Allocation
2. Extents
3. Linked Allocation
4. Clustering
5. FAT
6. Indexed Allocation
7. Linked Indexed Allocation
8. Multilevel Indexed Allocation
9. Inode
We will discuss three of the most used methods in detail.
Contiguous Allocation
If the blocks are allocated to the file in such a way that all the logical blocks
of the file get the contiguous physical block in the hard disk then such allocation
scheme is known as contiguous allocation.
In the image shown below, there are three files in the directory. The starting
block and the length of each file are mentioned in the table. We can check in the
table that the contiguous blocks are assigned to each file as per its need.
Advantages
1. It is simple to implement.
2. We will get Excellent read performance.
3. Supports Random Access into files.
Disadvantages
1. The disk will become fragmented.
2. It may be difficult to have a file grow.

Linked List Allocation


Linked list allocation solves all problems of contiguous allocation. In linked
list allocation, each file is considered as a linked list of disk blocks. However, the
disk blocks allocated to a particular file need not be contiguous on the disk. Each
disk block allocated to a file contains a pointer which points to the next disk block
allocated to the same file.

Advantages
1. There is no external fragmentation with linked allocation.
2. Any free block can be utilized in order to satisfy the file block requests.
3. File can continue to grow as long as the free blocks are available.
4. Directory entry will only contain the starting block address.
Disadvantages
1. Random access is not provided.
2. Pointers require some space in the disk blocks.
3. If any pointer in the linked list is broken, the file gets corrupted.
4. Sequential access requires traversing each block.
File Allocation Table
The main disadvantage of linked list allocation is that the Random access to a
particular block is not provided. In order to access a block, we need to access all its
previous blocks.
File Allocation Table overcomes this drawback of linked list allocation. In this
scheme, a file allocation table is maintained, which gathers all the disk block links.
The table has one entry for each disk block and is indexed by block number.
File allocation table needs to be cached in order to reduce the number of head seeks.
Now the head doesn't need to traverse all the disk blocks in order to access one
successive block.
It simply accesses the file allocation table, reads the desired block entry from
there and accesses that block. This is the way random access is
accomplished by using FAT. It is used by MS-DOS and pre-NT Windows versions.

Advantages
1. Uses the whole disk block for data.
2. A bad disk block doesn't cause all successive blocks to be lost.
3. Random access is provided, although it's not too fast.
4. Only FAT needs to be traversed in each file operation.
Disadvantages
1. Each Disk block needs a FAT entry.
2. FAT size may be very big depending upon the number of FAT entries.
3. Number of FAT entries can be reduced by increasing the block size but it will also
increase Internal Fragmentation.
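A FAT lookup chain can be sketched with a simple table mapping each block number to the next block of the same file. The block numbers below are illustrative assumptions:

```python
# hypothetical FAT: key = block number, value = next block of the file (-1 = end of file)
FAT = {217: 618, 618: 339, 339: -1}

def file_blocks(start):
    """Follow the FAT chain from a file's starting block to its last block."""
    blocks, b = [], start
    while b != -1:
        blocks.append(b)
        b = FAT[b]          # one cached table lookup replaces a disk head traversal
    return blocks

print(file_blocks(217))  # [217, 618, 339]
```

Because the whole table can be cached in memory, finding the i'th block means i table lookups rather than i disk seeks, which is how FAT recovers a usable form of random access.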
Indexed Allocation(Limitation of FAT)
Limitation in the existing technology causes the evolution of a new
technology. Till now, we have seen various allocation methods; each of them was
carrying several advantages and disadvantages.
The file allocation table tries to solve as many problems as possible but leads to a
drawback: the more blocks there are, the bigger the FAT. Therefore, we need
to allocate more space to the file allocation table. Since the file
allocation table needs to be cached, it is impossible to have that much space
in the cache. Here we need a new technique which can solve such problems.
Indexed Allocation Scheme
Instead of maintaining a file allocation table of all the disk pointers, the indexed
allocation scheme stores all the disk pointers for a file in one block called the index
block. The index block doesn't hold the file data; it holds the pointers to all the
disk blocks allocated to that particular file. The directory entry contains only the
index block address.
Advantages
1. Supports direct access.
2. A bad data block causes the loss of only that block.
Disadvantages
1. A bad index block can cause the loss of the entire file.
2. The maximum size of a file depends on the number of pointers an index block can hold.
3. Having an index block for a small file wastes space.
4. More pointer overhead.
Linked Index Allocation(Single level linked Index Allocation)
In indexed allocation, the maximum file size depends on the size of a disk block,
since a single index block can hold only so many pointers. To allow large files,
several index blocks are linked together. In linked index allocation, each index
block contains:
o A small header giving the name of the file
o A set of the first 100 block addresses
o A pointer to another index block
For larger files, the last entry of the index block is a pointer which points to
another index block. This is also called the linked scheme.
Advantage: It removes file size limitations
Disadvantage: Random Access becomes a bit harder
Multilevel Index Allocation
In Multilevel index allocation, we have various levels of indices. There are outer
level index blocks which contain the pointers to the inner level index blocks and the
inner level index blocks contain the pointers to the file data.
o The outer level index is used to find the inner level index.
o The inner level index is used to find the desired data block.
Advantage: Random access becomes more efficient for large files.
Disadvantage: Access time for a file is higher, since each access goes through an
extra level of index.
Inode
In UNIX-based operating systems, each file is indexed by an inode. Inodes are
special disk structures created when the file system is created. The number of
files or directories a file system can hold depends on the number of inodes in
the file system.
An Inode includes the following information
1. Attributes (permissions, timestamps, ownership details, etc.) of the file.
2. A number of direct pointers, which point to the first 12 blocks of the file.
3. A single indirect pointer, which points to an index block. It is used if the file
cannot be indexed entirely by the direct pointers.
4. A double indirect pointer, which points to a disk block containing pointers to
index blocks. It is used if the file is too big to be indexed entirely by the direct
pointers and the single indirect pointer.
5. A triple indirect pointer, which points to a disk block of pointers, each of which
points to another block of pointers, each of which in turn points to an index
block holding the pointers to the file blocks.
Free Space Management
A file system is responsible for allocating free blocks to files, so it has to keep
track of all the free blocks present on the disk. There are mainly two approaches
by which the free blocks on the disk are managed.
1. Bit Vector
In this approach, the free space list is implemented as a bit map vector. It
contains a number of bits, where each bit represents one block.
If the block is free, the bit is 1; otherwise it is 0. Initially, all the blocks are
free, so each bit in the bit map vector contains 1.
As space allocation proceeds, the file system starts allocating blocks to
the files and setting the respective bits to 0.
2. Linked List
It is another approach to free space management. This approach links
together all the free blocks and keeps a pointer in the cache which points to
the first free block.
All the free blocks on the disk are thus linked together with pointers.
Whenever a block gets allocated, its previous free block is linked to its
next free block.
Device Management in Operating System:
Device management in an operating system means controlling the
input/output devices like disks, microphones, keyboards, printers, magnetic tapes,
USB ports, camcorders, scanners, and other accessories, along with supporting
units such as control channels. A process may require various resources, including
main memory, file access, and access to disk drives, among others. If resources
are available, they can be allocated and control returned to the CPU.
Otherwise, the process has to be postponed until adequate resources become
available. The system has multiple devices, and in order to handle these physical
or virtual devices, the operating system requires a separate program known as a
device controller. It also determines whether the requested device is available.
The fundamentals of I/O devices may be divided into three categories:
1. Block Device
2. Character Device
3. Network Device
Block Device: It stores data in fixed-size blocks, each with its own unique address. For
example, disks.
Character Device: It transmits or accepts a stream of characters, none of which can be
addressed individually. For instance, keyboards, printers, etc.
Network Device: It is used for transmitting the data packets.
Functions of the device management in the operating system
The operating system (OS) handles communication with the devices via their
drivers. The OS component gives a uniform interface for accessing devices with
various physical features. There are various functions of device management in the
operating system. Some of them are as follows:
1. It keeps track of the data, status, location, and use of each device. (For files, the
group of facilities that maintains this information is the file system.)
2. It enforces the pre-determined policies and decides which process receives the
device when and for how long.
3. It improves the performance of specific devices.
4. It monitors the status of every device, including printers, storage drivers, and other
devices.
5. It allocates and deallocates devices effectively. Deallocation happens at two
levels: first, when an I/O command completes, the device is temporarily freed;
second, when the job finishes, the device is permanently released.
Types of devices
There are three types of Operating system peripheral devices: dedicated,
shared, and virtual. These are as follows:
1. Dedicated Device
In device management, some devices are allocated or assigned to only one
task at a time until that job releases them. Devices such as plotters, printers, tape
drivers, and other similar devices necessitate such an allocation mechanism because
it will be inconvenient if multiple people share them simultaneously. The
disadvantage of such devices is the inefficiency caused by allocating the device to a
single user for the whole duration of task execution, even if the device is not used
100% of the time.
2. Shared Devices
These devices can be assigned to several processes. By interleaving their
requests, a disk (DASD) can be shared by multiple processes simultaneously. The
device manager carefully controls the interleaving, and all conflicts must be
resolved by pre-determined policies.
3. Virtual Devices
Virtual devices are a hybrid of the two devices, and they are dedicated
devices that have been transformed into shared devices. For example, a printer can
be transformed into a shareable device by using a spooling program that redirects all
print requests to a disk. A print job is not sent directly to the printer; instead, it is
routed to the disk until it is fully prepared with all of the required sequences and
formatting, at which point it is transmitted to the printer. This approach can
transform a single printer into numerous virtual printers, improving performance and
ease of use.
Features of Device Management
Here, you will learn the features of device management in the operating system.
Various features of the device management are as follows:
1. The OS interacts with the device controllers via the device drivers while allocating
the device to the multiple processes executing on the system.
2. Device drivers can also be thought of as system software programs that bridge
processes and device controllers.
3. Another key job of the device management function is to implement the device API.
4. Device drivers are software programs that allow an operating system to control the
operation of numerous devices effectively.
5. The device controller used in device management operations mainly contains three
registers: command, status, and data.
Pipelining
The term Pipelining refers to a technique of decomposing a sequential process into
sub-operations, with each sub-operation being executed in a dedicated segment that
operates concurrently with all other segments.
The most important characteristic of a pipeline technique is that several
computations can be in progress in distinct segments at the same time. The overlapping
of computation is made possible by associating a register with each segment in the
pipeline. The registers provide isolation between each segment so that each can operate
on distinct data simultaneously.
The structure of a pipeline organization can be represented simply by including an
input register for each segment followed by a combinational circuit.
Let us consider an example of a combined multiplication and addition operation to get
a better understanding of pipeline organization.
The combined multiplication and addition operation is done with a stream of
numbers such as:
Ai * Bi + Ci for i = 1, 2, 3, ..., 7
The operation to be performed on the numbers is decomposed into sub-operations
with each sub-operation to be implemented in a segment within a pipeline.
The sub-operations performed in each segment of the pipeline are defined as:
R1 ← Ai, R2 ← Bi (input Ai and Bi)
R3 ← R1 * R2, R4 ← Ci (multiply, and input Ci)
R5 ← R3 + R4 (add Ci to the product)
The following block diagram represents the combined as well as the sub-operations
performed in each segment of the pipeline.
Registers R1, R2, R3, and R4 hold the data and the combinational circuits
operate in a particular segment.
The output generated by the combinational circuit in a given segment is
applied as an input register of the next segment. For instance, from the block
diagram, we can see that the register R3 is used as one of the input registers for the
combinational adder circuit.
In general, pipeline organization is applicable to two areas of computer
design, which include:
1. Arithmetic Pipeline
2. Instruction Pipeline
Arithmetic Pipeline
Arithmetic Pipelines are mostly used in high-speed computers. They are used
to implement floating-point operations, multiplication of fixed-point numbers, and
similar computations encountered in scientific problems.
To understand the concepts of arithmetic pipeline in a more convenient way,
let us consider an example of a pipeline unit for floating-point addition and
subtraction.
The inputs to the floating-point adder pipeline are two normalized floating-point
binary numbers defined as:
X = A * 10^a = 0.9504 * 10^3
Y = B * 10^b = 0.8200 * 10^2
where A and B are fractions that represent the mantissas, and a and b are
the exponents (a decimal base is used in this example for readability).
The combined operation of floating-point addition and subtraction is divided into
four segments. Each segment contains the corresponding suboperation to be
performed in the given pipeline. The sub operations that are shown in the four
segments are:
1. Compare the exponents by subtraction.
2. Align the mantissas.
3. Add or subtract the mantissas.
4. Normalize the result.
We will discuss each sub operation in a more detailed manner later in this section.
The following block diagram represents the sub operations performed in each segment
of the pipeline.
1. Compare exponents by subtraction:
The exponents are compared by subtracting them to determine their
difference. The larger exponent is chosen as the exponent of the result.
The difference of the exponents, i.e., 3 - 2 = 1 determines how many times
the mantissa associated with the smaller exponent must be shifted to the right.
2. Align the mantissas:
The mantissa associated with the smaller exponent is shifted according to the
difference of exponents determined in segment one.
X = 0.9504 * 10^3
Y = 0.08200 * 10^3
3. Add mantissas:
The two mantissas are added in segment three.
Z = X + Y = 1.0324 * 10^3
4. Normalize the result:
After normalization, the result is written as: Z = 0.10324 * 10^4
Instruction Pipeline
Pipeline processing can occur not only in the data stream but in the
instruction stream as well.
Most digital computers with complex instructions require an instruction
pipeline to carry out operations like fetching, decoding, and executing instructions.
In general, the computer needs to process each instruction with the following
sequence of steps.
1. Fetch instruction from memory.
2. Decode the instruction.
3. Calculate the effective address.
4. Fetch the operands from memory.
5. Execute the instruction.
6. Store the result in the proper place.
Each step is executed in a particular segment, and there are times when different
segments may take different times to operate on the incoming information.
Moreover, there are times when two or more segments may require memory access
at the same time, causing one segment to wait until another is finished with the
memory.
The organization of an instruction pipeline will be more efficient if the instruction
cycle is divided into segments of equal duration. One of the most common examples
of this type of organization is a Four-segment instruction pipeline.
A four-segment instruction pipeline combines two or more of the steps above
into a single segment. For instance, decoding the instruction can be
combined with the calculation of the effective address into one segment.
The following block diagram shows a typical example of a four-segment
instruction pipeline. The instruction cycle is completed in four segments.
Segment 1: The instruction fetch segment can be implemented using a first-in, first-out
(FIFO) buffer.
Segment 2:The instruction fetched from memory is decoded in the second segment, and
eventually, the effective address is calculated in a separate arithmetic circuit.
Segment 3:An operand from memory is fetched in the third segment.
Segment 4:The instructions are finally executed in the last segment of the pipeline
organization.
Buffering in Operating System
The buffer is an area in the main memory used to store or hold the
data temporarily. In other words, buffer temporarily stores data transmitted from
one place to another, either between two devices or an application. The act of storing
data temporarily in the buffer is called buffering.
A buffer may be used when moving data between processes within a
computer. Buffers can be implemented in a fixed memory location in hardware or by
using a virtual data buffer in software, pointing at a location in the physical memory.
In all cases, the data in a data buffer are stored on a physical storage medium.
Most buffers are implemented in software, which typically uses the faster RAM
to store temporary data due to the much faster access time than hard disk drives.
Buffers are typically used when there is a difference between the rate of received
data and the rate of processed data, for example, in a printer spooler or online video
streaming.
A buffer often adjusts timing by implementing a queue or FIFO algorithm in
memory, simultaneously writing data into the queue at one rate and reading it at
another rate.
Purpose of Buffering
You encounter buffering while watching videos on YouTube or live streams. In a video
stream, a buffer represents the amount of data that must be downloaded before
the video can play for the viewer in real time. A buffer in a computing environment
means that a set amount of data is stored to preload the required data before it
gets used by the CPU.
Computers have many different devices that operate at varying speeds, and a
buffer is needed to act as a temporary placeholder for everything interacting. This is
done to keep everything running efficiently and without issues between all the
devices, programs, and processes running at that time. There are three reasons
behind buffering of data,
1. It helps in matching speed between two devices in which the data is transmitted.
For example, a hard disk has to store a file received from a modem. The
transmission speed of the modem is slow compared to the hard disk, so the bytes
coming from the modem are accumulated in the buffer space, and when all the bytes
of the file have arrived at the buffer, the entire data is written to the hard disk in a
single operation.
2. It helps devices with different data-transfer sizes adapt to each other, and it lets
devices manipulate data before sending or receiving it. In computer networking,
a large message is fragmented into small fragments and sent over the network.
The fragments are accumulated in a buffer at the receiving end and reassembled
to form the complete large message.
3. It also supports copy semantics. With copy semantics, the version of data in the
buffer is guaranteed to be the version of data at the time of system call, irrespective
of any subsequent change to data in the buffer. Buffering increases the performance
of the device. It overlaps the I/O of one job with the computation of the same job.
Types of Buffering
There are three main types of buffering in the operating system, such as:
1. Single Buffer
In single buffering, only one buffer is used to transfer data between two
devices. The producer produces one block of data into the buffer, after which the
consumer consumes it. Only when the buffer is empty does the producer fill it
again.
Block oriented device: The following operations are performed in the block-oriented
device,
o The system buffer takes the input.
o After taking the input, the block gets transferred to user space, and then another
block is requested.
o Two blocks work simultaneously: while the user processes one block of data, the
next block is being read in.
o The OS can swap the processes.
o The OS can move the data of the system buffer to user processes.
Stream oriented device: It performs the following operations:
o Line-at-a-time operation is used for scroll-mode terminals. The user inputs one line
at a time, with a carriage return signaling the end of the line.
o Byte-at-a-time operation is used for forms-mode terminals, where each keystroke is
significant.
2. Double Buffer
In double buffering, two buffers are used in place of one. The producer fills
one buffer while the consumer simultaneously consumes the other, so the
producer does not need to wait for the buffer to be emptied. Double buffering is
also known as buffer swapping.
Block oriented: This is how a double buffer works. There are two buffers in the system.
o The driver or controller uses one buffer to store data while waiting for it to be taken
by a higher hierarchy level.
o Another buffer is used to store data from the lower-level module.
o A major disadvantage of double buffering is that the complexity of the process gets
increased.
o If the process performs rapid bursts of I/O, then double buffering may be
insufficient.
Stream oriented: It performs the following operations:
o For line-at-a-time I/O, the user process does not need to be suspended for input or
output unless the process runs ahead of the double buffer.
o For byte-at-a-time operations, the double buffer offers no advantage over a single
buffer of twice the length.
3. Circular Buffer
When more than two buffers are used, the collection of buffers is called
a circular buffer, with each buffer being one unit in it. The data transfer rate
increases using a circular buffer rather than double buffering.
o In this scheme, data does not pass directly from the producer to the consumer,
because otherwise the data would be overwritten in the buffers before being
consumed.
o The producer can only fill up to buffer x-1 while the data in buffer x is waiting to be
consumed.
How Buffering Works
In an operating system, buffer works in the following way:
o Buffering is done to deal effectively with a speed mismatch between the producer
and consumer of the data stream.
o A buffer is created in main memory to accumulate the bytes received from the
modem.
o After receiving the data in the buffer, the data get transferred to a disk from the
buffer in a single operation.
o This process of data transfer is not instantaneous. Therefore the modem needs
another buffer to store additional incoming data.
o When the first buffer fills up, a request is made to transfer its data to disk.
o The modem then fills the additional incoming data in the second buffer while the
data in the first buffer gets transferred to the disk.
o When both buffers have completed their tasks, the modem switches back to the first
buffer while the data from the second buffer is transferred to the disk.
o The two buffers decouple the producer from the consumer of the data, thus
minimizing the waiting time between them.
o Buffering also provides variations for devices that have different data transfer sizes.
Advantages of Buffer
Buffering plays a very important role in any operating system during the
execution of any process or task. It has the following advantages.
o The use of buffers allows uniform disk access. It simplifies system design.
o The system places no data alignment restrictions on user processes doing I/O. By
copying data from user buffers to system buffers and vice versa, the kernel
eliminates the need for special alignment of user buffers, making user programs
simpler and more portable.
o The use of the buffer can reduce the amount of disk traffic, thereby increasing
overall system throughput and decreasing response time.
o The buffer algorithms help ensure file system integrity.
Disadvantages of Buffer
Buffers are not better in all respects. Therefore, there are a few disadvantages as
follows, such as:
o It is costly and impractical to have the buffer be the exact size required to hold the
number of elements. Thus, the buffer is slightly larger most of the time, with the rest
of the space being wasted.
o Buffers have a fixed size at any point in time. When the buffer is full, it must be
reallocated with a larger size, and its elements must be moved. Similarly, when the
number of valid elements in the buffer is significantly smaller than its size, the buffer
must be reallocated with a smaller size and elements be moved to avoid too much
waste.
o Use of the buffer requires an extra data copy when reading and writing to and from
user processes. When transmitting large amounts of data, the extra copy slows down
performance.
IPC through Shared Memory
Shared memory is memory shared between two or more processes. Each
process has its own address space; if a process wants to communicate information
from its own address space to other processes, it can only do so through IPC
(inter-process communication) techniques.
Shared memory is the fastest inter-process communication mechanism. The
operating system maps a memory segment into the address space of several
processes, so they can read and write in that memory segment without calling
operating system functions.
For applications that exchange large amounts of data, shared memory is far
superior to message passing techniques like message queues, which require system
calls for every data exchange. To use shared memory, we have to perform two basic
steps:
1. Request a memory segment that can be shared between processes to the operating
system.
2. Associate a part of that memory or the whole memory with the address space of the
calling process.
A shared memory segment is a portion of physical memory that is shared by
multiple processes. In this region, processes can set up structures, and others may
read/write on them. When a shared memory region is established in two or more
processes, there is no guarantee that the regions will be placed at the same base
address. Semaphores can be used when synchronization is required.
For example, one process might have the shared region starting at address
0x60000 while the other process uses 0x70000. It is critical to understand that these
two addresses refer to the exact same piece of data. So storing the number 1 in the
first process's address 0x60000 means the second process has the value of 1 at
0x70000. The two different addresses refer to the exact same location.
Usually, communication between related processes is performed using pipes or
named pipes, while unrelated processes can communicate using named pipes or
through the popular IPC techniques of shared memory and message queues.
But the problem with pipes, FIFO, and message queue is that the information
exchange between two processes goes through the kernel, and it works as follows.
o The server reads from the input file.
o The server writes this data in a message using pipe, FIFO, or message queue.
o The client reads the data from the IPC channel, again requiring the data to be copied
from the kernel's IPC buffer to the client's buffer.
o Finally, the data is copied from the client's buffer to the output file.
A total of four copies of data are required (2 read and 2 write). So, shared
memory provides a way by letting two or more processes share a memory segment.
With Shared Memory, the data is only copied twice, from the input file into shared
memory and from shared memory to the output file.
Functions of IPC Using Shared Memory
Two functions shmget() and shmat() are used for IPC using shared
memory. shmget() function is used to create the shared memory segment, while the
shmat() function is used to attach the shared segment with the process's address
space.
1. shmget() Function
The first parameter specifies the unique number (called key) identifying the shared
segment. The second parameter is the size of the shared segment, e.g., 1024 bytes
or 2048 bytes. The third parameter specifies the permissions on the shared segment.
On success, the shmget() function returns a valid identifier, while on failure, it
returns -1.
Syntax
#include <sys/ipc.h>
#include <sys/shm.h>
int shmget (key_t key, size_t size, int shmflg);
2. shmat() Function:
shmat() function is used to attach the created shared memory segment
associated with the shared memory identifier specified by shmid to the calling
process's address space. The first parameter here is the identifier that the
shmget() function returns on success. The second parameter is the address at
which to attach the segment in the calling process; a NULL value means that the
system will automatically choose a suitable address. The third parameter is a set
of flags; it is normally 0, and SHM_RND may be specified (with a non-NULL
address) to round the address to a suitable boundary.
Syntax
#include <sys/types.h>
#include <sys/shm.h>
void *shmat(int shmid, const void *shmaddr, int shmflg);
A process creates a shared memory segment using shmget(). The original
owner of a shared memory segment can assign ownership to another user
with shmctl(). It can also revoke this assignment. Other processes with proper
permission can perform various control functions on the shared memory segment
using shmctl().
Once created, a shared segment can be attached to a process address space
using shmat(). It can be detached using shmdt(). The attaching process must have
the appropriate permissions for shmat(). Once attached, the process can read or
write to the segment, as the permission requested in the attach operation allows. A
shared segment can be attached multiple times by the same process.
A shared memory segment is described by a control structure with a unique
ID that points to an area of physical memory. The identifier of the segment is called
the shmid. The structure definition for the shared memory segment control
structures and prototypes can be found in <sys/shm.h>.
Examples
We will write two programs for IPC using shared memory as an example. Program 1 will
create the shared segment, attach it, and then write some content in it.
Program 1: This program creates a shared memory segment, attaches itself to it, and
then writes some content into the shared memory segment
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <string.h>

int main()
{
    void *shared_memory;
    char buff[100];
    int shmid;
    ssize_t n;

    /* Create a shared memory segment with key 2345, size 1024 bytes.
       IPC_CREAT creates the segment if it does not exist; 0666 sets
       the permissions on the shared segment. */
    shmid = shmget((key_t)2345, 1024, 0666 | IPC_CREAT);
    printf("Key of shared memory is %d\n", shmid);

    /* Attach this process to the segment; NULL lets the OS pick a
       suitable address, which is then printed. */
    shared_memory = shmat(shmid, NULL, 0);
    printf("Process attached at %p\n", shared_memory);

    printf("Enter some data to write to shared memory\n");
    n = read(0, buff, sizeof(buff) - 1);   /* get some input from the user */
    if (n < 0)
        n = 0;
    buff[n] = '\0';                        /* read() does not null-terminate */
    strcpy(shared_memory, buff);           /* data written to shared memory */
    printf("You wrote : %s\n", (char *)shared_memory);
    return 0;
}
Output
Key of shared memory is 0
Process attached at 0x7ffe040fb000
Enter some data to write to shared memory
Hello World
You wrote: Hello World
How does it work?
In the above program, the shmget() function creates a segment with key
2345, size 1024 bytes, and read and write permissions for all users. It returns the
identifier of the segment, which gets stored in shmid. This identifier is used
in shmat() to attach the shared segment to the process's address space.
NULL in shmat() means that the OS will itself attach the shared segment at a
suitable address of this process. Then some data is read from the user using
the read() system call, and it is finally written to the shared segment using
the strcpy() function.
What is Operating System Security?
The process of ensuring OS availability, confidentiality, and integrity is known as
operating system security. OS security refers to the processes or measures taken to
protect the operating system from dangers, including viruses, worms, malware, and
remote hacker intrusions. Operating system security comprises all preventive-control
procedures that protect any system assets that could be stolen, modified, or deleted
if OS security is breached.
Security refers to providing safety for computer system resources like
software, CPU, memory, disks, etc. It protects against threats, including viruses
and unauthorized access, and is enforced by assuring the operating system's
integrity, confidentiality, and availability. If an unauthorized user runs a program
on the computer, the computer or the data stored on it may be seriously damaged.
The goal of Security System
There are several goals of system security. Some of them are as follows:
1. Integrity
Unauthorized users must not be allowed to access the system's objects, and
users with insufficient rights should not modify the system's critical files and
resources.
2. Secrecy
The system's objects must only be available to a small number of authorized
users. The system files should not be accessible to everyone.
3. Availability
All system resources must be accessible to all authorized users, i.e., no single
user/process should be able to consume all system resources. If such a situation
arises, denial of service may occur, in which malware restricts system resources
and prevents legitimate processes from accessing them.
Operating System Security Policies and Procedures
Various operating system security policies may be implemented based on the
organization that you are working in. In general, an OS security policy is a document
that specifies the procedures for ensuring that the operating system maintains a specific
level of integrity, confidentiality, and availability.
OS security protects systems and data from worms, malware, threats,
ransomware, backdoor intrusions, viruses, etc. Security policies cover all
preventative activities and procedures that ensure an operating system's
protection, including protection against data being stolen, edited, or deleted.
As OS security policies and procedures cover a large area, there are various
techniques for addressing them. Some of them are as follows:
1. Installing and updating anti-virus software.
2. Ensuring that systems are patched and updated regularly.
3. Implementing user management policies to protect user accounts and privileges.
4. Installing a firewall and ensuring that it is properly configured to monitor all
incoming and outgoing traffic.
Before OS security policies and procedures can be developed and implemented,
you must first determine which assets, systems, hardware, and data are the most
vital to your organization. Once that is done, a policy can be developed to secure
and safeguard them properly.

Authentication and Internal Access Authorization

The process of identifying every system user and associating the programs executing
with those users is known as authentication. The operating system is responsible for
implementing a security system that ensures the authenticity of a user who is
executing a specific program. In general, operating systems identify and
authenticate users in three ways.
1. Username/Password
Every user has a unique username and password that must be entered correctly
before the system can be accessed.
2. User Attribution
These techniques usually include biometric verification, such as fingerprints, retina
scans, etc. This authentication is based on the uniqueness of users and is compared
with database samples already stored in the system. Users are granted access only
if there is a match.
3. User card and Key
To login into the system, the user must punch a card into a card slot or enter a key
produced by a key generator into an option provided by the operating system.
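A minimal sketch of the username/password scheme, assuming a simple in-memory user table (the class name and helper are hypothetical): the system stores only a hash of each password, never the plaintext. Production systems would also add a per-user salt and use a deliberately slow hash such as bcrypt or scrypt.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

// Sketch of username/password authentication with hashed storage.
public class PasswordAuth {
    private final Map<String, String> userTable = new HashMap<>();

    // Hash a password with SHA-256 and return it as a hex string.
    private static String hash(String password) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(password.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public void register(String user, String password) {
        userTable.put(user, hash(password));
    }

    // Login succeeds only when the hash of the supplied password
    // matches the stored hash for that username.
    public boolean login(String user, String password) {
        String stored = userTable.get(user);
        return stored != null && stored.equals(hash(password));
    }
}
```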

 Authentication is used by a server when the server needs to know exactly who is
accessing their information or site.
 Authentication is used by a client when the client needs to know that the server is
the system it claims to be.
 In authentication, the user or computer has to prove its identity to the server or
client.
 Usually, authentication by a server entails the use of a user name and password.
Other ways to authenticate can be through cards, retina scans, voice recognition,
and fingerprints.
 Authentication by a client usually involves the server giving a certificate to the client in
which a trusted third party such as Verisign or Thawte states that the server belongs to the
entity (such as a bank) that the client expects it to.
 Authentication does not determine what tasks the individual can do or what files the
individual can see. Authentication merely identifies and verifies who the person or
system is.

Authorization

 Authorization is a process by which a server determines if the client has permission to
use a resource or access a file.
 Authorization is usually coupled with authentication so that the server has some
concept of who the client is that is requesting access.
 The type of authentication required for authorization may vary; passwords may be
required in some cases but not in others.
 In some cases, there is no authorization; any user may use a resource or access a
file simply by asking for it. Most of the web pages on the Internet require no
authentication or authorization.
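The split between the two steps can be sketched as follows (the user names and resources are made up for illustration): authentication only answers "who is this?", while authorization consults a permission table before granting access to a resource.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of a server that authenticates first, then authorizes.
public class ResourceServer {
    private final Set<String> knownUsers = new HashSet<>();
    private final Map<String, Set<String>> permissions = new HashMap<>();

    public void addUser(String user) {
        knownUsers.add(user);
    }

    public void permit(String user, String resource) {
        permissions.computeIfAbsent(user, k -> new HashSet<>()).add(resource);
    }

    // Authentication merely verifies identity.
    public boolean authenticate(String user) {
        return knownUsers.contains(user);
    }

    // Authorization decides whether that identity may access the resource.
    public boolean authorize(String user, String resource) {
        return authenticate(user)
                && permissions.getOrDefault(user, Set.of()).contains(resource);
    }
}
```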
Android Operating System
Android is a mobile operating system based on a modified version of the Linux
kernel and other open-source software, designed primarily for touchscreen mobile
devices such as smartphones and tablets. Android is developed by a consortium of
developers known as the Open Handset Alliance and commercially sponsored by
Google. It was unveiled in November 2007, with the first commercial Android
device, the HTC Dream, launched in September 2008.
It is free and open-source software. Its source code is known as the Android Open
Source Project (AOSP), primarily licensed under the Apache License. However, most
Android devices ship with additional proprietary software pre-installed, mainly Google
Mobile Services (GMS), which includes core apps such as Google Chrome, the digital
distribution platform Google Play, and the associated Google Play Services
development platform.
o About 70% of Android smartphones run Google's ecosystem, some with a vendor-
customized user interface and software suite, such as TouchWiz and later One UI
by Samsung, and HTC Sense.
o Competing Android ecosystems and forks include Fire OS (developed by Amazon) and
LineageOS. However, the "Android" name and logo are trademarks of Google, which
imposes standards to restrict "uncertified" devices outside its ecosystem from using
Android branding.
Features of Android Operating System
Below are the following unique features and characteristics of the android operating
system, such as:
1. Near Field Communication (NFC)
Most Android devices support NFC, which allows electronic devices to interact across
short distances easily. The main goal here is to create a payment option that is
simpler than carrying cash or credit cards, and while the market hasn't exploded as
many experts had predicted, there may be an alternative in the works, in the form of
Bluetooth Low Energy (BLE).
2. Infrared Transmission
The Android operating system supports a built-in infrared transmitter that allows you to
use your phone or tablet as a remote control.
3. Automation
The Tasker app allows control of app permissions and also automates them.
4. Wireless App Downloads
You can download apps on your PC by using the Android Market or third-party options
like AppBrain. Then it automatically syncs them to your Droid, and no plugging is
required.
5. Storage and Battery Swap
Android phones also have unique hardware capabilities. Google's OS makes it possible to
upgrade, replace, or remove a battery that no longer holds a charge. In
addition, Android phones come with SD card slots for expandable storage.
6. Custom Home Screens
While it's possible to hack certain phones to customize the home screen, Android comes
with this capability from the get-go. Download a third-party launcher like Apex,
Nova, and you can add gestures, new shortcuts, or even performance
enhancements for older-model devices.
7. Widgets
Apps are versatile, but sometimes you want information at a glance instead of having to
open an app and wait for it to load. Android widgets let you display just about any
feature you choose on the home screen, including weather apps, music widgets, or
productivity tools that helpfully remind you of upcoming meetings or approaching
deadlines.
8. Custom ROMs
Because the Android operating system is open-source, developers can tweak the current
OS and build their own versions, which users can download and install in place of the
stock OS. Some are filled with features, while others change the look and feel of a
device. Chances are, if there's a feature you want, someone has already built a
custom ROM for it.
Architecture of Android OS
The Android architecture contains a number of different components to support any
Android device's needs. Android software contains an open-source Linux kernel with
many C/C++ libraries exposed through application framework services.
Among all the components, Linux Kernel provides the main operating system functions
to Smartphone and Dalvik Virtual Machine (DVM) to provide a platform for running
an android application. An android operating system is a stack of software
components roughly divided into five sections and four main layers, as shown in the
below architecture diagram.
o Applications
o Application Framework
o Android Runtime
o Platform Libraries
o Linux Kernel

1. Applications
An application is the top layer of the android architecture. The pre-installed applications
like camera, gallery, home, contacts, etc., and third-party applications downloaded
from the play store like games, chat applications, etc., will be installed on this layer.
It runs within the Android run time with the help of the classes and services provided by
the application framework.
2. Application framework
Application Framework provides several important classes used to create an Android
application. It provides a generic abstraction for hardware access and helps in
managing the user interface with application resources. Generally, it provides the
services with the help of which we can create a particular class and make that class
helpful for the Applications creation.
It includes different types of services, such as activity manager, notification manager,
view system, package manager etc., which are helpful for the development of our
application according to the prerequisite.
The Application Framework layer provides many higher-level services to applications in
the form of Java classes. Application developers are allowed to make use of these
services in their applications. The Android framework includes the following key
services:
o Activity Manager: Controls all aspects of the application lifecycle and activity stack.
o Content Providers: Allows applications to publish and share data with other
applications.
o Resource Manager: Provides access to non-code embedded resources such as
strings, colour settings and user interface layouts.
o Notifications Manager: Allows applications to display alerts and notifications to the
user.
o View System: An extensible set of views used to create application user interfaces.
3. Android Runtime
Android Runtime environment contains components like core libraries and the Dalvik
virtual machine (DVM). It provides the base for the application framework and
powers our application with the help of the core libraries.
Like the Java Virtual Machine (JVM), the Dalvik Virtual Machine (DVM) runs
bytecode, but it is register-based and is designed and optimized for Android to
ensure that a device can run multiple instances efficiently.
It depends on the Linux kernel layer for threading and low-level memory management.
The core libraries enable us to implement Android applications using the
standard Java or Kotlin programming languages.
4. Platform libraries
The Platform Libraries include various C/C++ core libraries and Java-based libraries such
as Media, Graphics, Surface Manager, OpenGL, etc., to support Android
development.
o app: Provides access to the application model and is the cornerstone of all Android
applications.
o content: Facilitates content access, publishing and messaging between applications
and application components.
o database: Used to access data published by content providers and includes SQLite
database, management classes.
o OpenGL: A Java interface to the OpenGL ES 3D graphics rendering API.
o os: Provides applications with access to standard operating system services,
including messages, system services and inter-process communication.
o text: Used to render and manipulate text on a device display.
o view: The fundamental building blocks of application user interfaces.
o widget: A rich collection of pre-built user interface components such as buttons,
labels, list views, layout managers, radio buttons etc.
o WebKit: A set of classes intended to allow web-browsing capabilities to be built into
applications.
o media: Media library provides support to play and record an audio and video format.
o surface manager: It is responsible for managing access to the display subsystem.
o SQLite: It provides database support, and FreeType provides font support.
o SSL: Secure Sockets Layer is a security technology to establish an encrypted link
between a web server and a web browser.
5. Linux Kernel
Linux Kernel is the heart of the android architecture. It manages all the available drivers
such as display, camera, Bluetooth, audio, memory, etc., required during the
runtime.
The Linux Kernel will provide an abstraction layer between the device hardware and the
other android architecture components. It is responsible for the management of
memory, power, devices etc. The features of the Linux kernel are:
o Security: The Linux kernel handles the security between the application and the
system.
o Memory Management: It efficiently handles memory management, thereby
providing the freedom to develop our apps.
o Process Management: It manages processes well, allocating resources to
processes whenever they need them.
o Network Stack: It effectively handles network communication.
o Driver Model: It ensures that the application works properly on the device;
hardware manufacturers are responsible for building their drivers into the Linux build.
Android Applications
Android applications are usually developed in the Java language using the Android
Software Development Kit. Once developed, Android applications can be packaged
easily and distributed through a store such as Google Play, SlideME, Opera
Mobile Store, Mobango, F-Droid, or the Amazon Appstore.
Android powers hundreds of millions of mobile devices in more than 190 countries
around the world. It's the largest installed base of any mobile platform and growing
fast. Every day more than 1 million new Android devices are activated worldwide.

Android Emulator
The Android Emulator is a tool used to develop and test Android applications
without using any physical device.
The Android emulator has all of the hardware and software features of a real mobile
device, except that it cannot make actual phone calls. It provides a variety of
navigation and control keys, as well as a screen to display your application.
Emulators utilize Android virtual device configurations. Once your application is
running on an emulator, it can use the services of the Android platform to invoke
other applications, access the network, play audio and video, and store and retrieve
data.
Advantages of Android Operating System
We have considered the elements on which Android compares favourably with other
platforms. Below are some important advantages of Android OS, such as:
o Android Google Developer: The greatest advantage of Android is Google. Google
owns the Android operating system and is one of the most trusted and renowned
names on the web. The Google name gives users the confidence to purchase
Android devices.
o Android Users: Android is the most widely used mobile operating system, with
more than a billion users, and it is also the fastest-growing operating system in the
world. This large user base keeps increasing the number of applications and
programs available under the Android name.
o Android Multitasking: Most of us admire this feature of Android. Users can
perform many tasks at once, opening several applications simultaneously and
managing them easily. Android has an excellent UI, which makes it simple for users
to multitask.
o Google Play Store App: The best part of Android is the availability of many
applications. The Google Play store is reported to be the world's largest mobile app
store. It has practically everything, from movies to games and much more, and
these can be easily downloaded and accessed through an Android phone.
o Android Notification and Easy Access: One can easily access notifications of any
SMS, email, or call on the home screen or the notification bar of the Android phone.
The user can view all the notifications on the top bar, and the UI makes it simple for
the user to view more than five Android notifications at once.
o Android Widget: The Android operating system has a lot of widgets. Widgets
improve the user experience considerably and help with multitasking. You can add
any widget to your home screen depending on the features you need, and view
alerts, messages, and a great deal more without opening applications.
Disadvantages of Android Operating System
We know that the Android operating system holds considerable appeal for users
nowadays, but at the same time it has a few weaknesses. Below are the main
disadvantages of the Android operating system, such as:
o Android Advertisement pop-ups: Applications are freely available in the Google
Play store, yet many of these applications display numerous advertisements on the
notification bar and over the application. These advertisements are extremely
annoying and create a real problem in managing your Android phone.
o Android requires a Gmail ID: You cannot access an Android device without an
email ID and password. A Google ID is also very useful for unlocking Android
phones.
o Android Battery Drain: Android handsets are considered among the most battery-
consuming devices. In the Android operating system, many processes run in the
background, which results in the battery draining. It is difficult to stop these
applications, as the majority of them are system applications.
o Android Malware/Virus/Security: Android devices are not considered as safe as
some other platforms. Hackers keep attempting to steal your data. It is easy to
target any Android phone, and millions of attacks are made on Android phones
every day.
Application Framework
The Android OS exposes the underlying libraries and features of the Android device
through a Java API. This is what is known as the Android framework. The framework
exposes a safe and uniform means of utilizing Android device resources.

1) Activity Manager
Applications use the Android activity component for presenting an entry point to the app. Android
Activities are the components that house the user interface that app users interact with. As end-
users interact with the Android device, they start, stop, and jump back and forth across many
applications. Each navigation event triggers activation and deactivation of many activities in
respective applications.
The Android ActivityManager is responsible for predictable and consistent behavior during
application transitions. The ActivityManager provides a slot for app creators to have their apps
react when the Android OS performs global actions. Applications can listen to events such as
device rotation, app destruction due to memory shortage, an app being shifted out of focus, and
so on.
Some examples of the ways applications can react to these transitions include pausing
activity in a game or stopping music playback during a phone call.
2) Window Manager
Android uses screen information to determine the requirements needed to create
windows for applications. Windows are the slots where we can view our app user
interface.
Android uses the Window manager to provide this information to the apps and the system as they
run so that they can adapt to the mode the device is running on.
The Window Manager helps in delivering a customized app experience. Apps can fill the complete
screen for an immersive experience or share the screen with other apps. Android enables this by
allowing multi-windows for each app.
3) Location Manager
Most Android devices are equipped with GPS receivers that can obtain the user's
location from satellite information with accuracy down to a few meters. Programmers
can prompt users for location permission and then deliver location-aware experiences.
Android can also utilize wireless technologies to further enrich location details and
increase coverage when devices are in enclosed spaces. Android provides these
features under the umbrella of the LocationManager.
4) Telephony Manager
Most Android devices serve a primary role in telephony. Android uses the
TelephonyManager to combine hardware and software components to deliver telephony
features. The hardware components include external parts such as the SIM card and
device parts such as the microphone, camera, and speakers. The software components
include native components such as the dial pad, phone book, and ringtone profiles.
Using the TelephonyManager, a developer can extend or fine-tune the default calling
functionality.
5) Resource Manager
Android apps usually come with more than just code. They also have other resources such as icons,
audio and video files, animations, text files, and the like. Android helps in making sure that there
is efficient, responsive access to these resources. It also ensures that the right resources are
delivered to the end-users. For example, the proper language text files are used when populating
fields in the apps.
6) View System
Android also provides a means to easily create common visual components needed for app
interaction. These components include widgets like buttons, image holders such as ImageView,
components to display a list of items such as ListView, and many more. The components are
premade but are also customizable to fit app developer needs and branding.
7) Notification Manager
The Notification Manager is responsible for informing Android users of application events. It does
this by giving users visual, audio or vibration signals or a combination of them when an event
occurs. These events have external and internal triggers. Some examples of internal triggers are
low-battery status events that trigger a notification to show low battery. Another example is user-
specified events like an alarm. Some examples of external triggers include new messages or new
wifi networks detected.
Android provides a means for programmers and end-users to fine-tune the notification
system. This helps guarantee that they can send and receive notification events in a
way that best suits them and their current environments.
8) Package Manager
Android also provides access to information about installed applications. Android keeps track of
application information such as installation and uninstallation events, permissions the app
requests, and resource utilization such as memory consumption.
This information can enable developers to make their applications activate or deactivate
functionality depending on new features presented by companion apps.
9) Content Provider
Android has a standardized way to share data between applications on the device using the
content provider. Developers can use the content provider to expose data to other applications.
For example, they can make the app data searchable from external search applications. Android
itself exposes data such as calendar data, contact data, and the like using the same system.

Android Process Management and File System


File system:
Most Android users use their Android phones just for calls, SMS, browsing,
and basic apps, but from a development perspective, we should know about Android's
internal structure. Android uses several partitions (such as boot, system, recovery,
and data) to organize files and folders on the device, just like Windows OS. Each of
these partitions has its own functionality, but most of us don't know the significance
of each partition and its contents.
There are mainly six partitions in Android phones, tablets, and other Android devices.
Note that there might be other partitions available; this differs from model to model,
but logically the six partitions below can be found in any Android device:
/boot, /system, /recovery, /data, /cache, /misc
Below are the SD card file system partitions:
/sdcard, /sd-ext

You can find out which partitions are available, along with the size of each
partition, by running the appropriate adb shell command on your Android device.
/boot:
This is the boot partition of your Android device, as the name suggests. It includes the
android kernel and the ramdisk. The device will not boot without this partition. Wiping this
partition from recovery should only be done if absolutely required and once done, the device
must NOT be rebooted before installing a new one, which can be done by installing a ROM that
includes a /boot partition.
/system
As the name suggests, this partition contains the entire Android OS. This includes the
Android GUI and all the system applications that come pre-installed on the device. Wiping this
partition will remove Android from the device without rendering it unbootable, and you will still
be able to put the phone into recovery or bootloader mode to install a new ROM
/recovery
This is specially designed for backup. The recovery partition can be considered as an
alternative boot partition, that lets the device boot into a recovery console for performing
advanced recovery and maintenance operations on it.
/data
It is also called the userdata partition. This partition contains the user's data, like
your contacts, SMS, settings, and all Android applications that you have installed.
When you perform a factory reset on your device, this partition is wiped out,
returning the device to the state it was in when you first used it, or the way it was
after the last official or custom ROM installation.
/cache
This is the partition where Android stores frequently accessed data and app
components. Wiping the cache doesn't affect your personal data but simply gets rid
of the existing data there, which is automatically rebuilt as you continue using the
device.
/misc
This partition contains miscellaneous system settings in the form of on/off switches.
These settings may include the CID (Carrier or Region ID), USB configuration,
certain hardware settings, etc. This is an important partition, and if it is corrupt or
missing, several of the device's features will not function normally.
/sdcard
This is not a partition on the internal memory of the device but rather the SD card.
In terms of usage, this is your storage space to use as you see fit: to store your
media, documents, ROMs, etc. Wiping it is perfectly safe as long as you first back up
all the data you require from it to your computer. Note, however, that several
user-installed apps save their data and settings on the SD card, and wiping this
partition will make you lose that data.
/sd-ext
This is not a standard Android partition, but has become popular in the custom ROM
scene. It is basically an additional partition on your SD card that acts as the /data partition. It
is especially useful on devices with little internal memory allotted to the /data partition. Thus,
users who want to install more programs than the internal memory allows can make this
partition and use it for installing their apps.
Sometimes you might prefer to use the traditional file system to store your data. For
example, you might want to store the text of poems you want to display in your applications.
In Android, you can use the classes in the java.io package to do so.
Saving to Internal Storage
To save text into a file, you use the FileOutputStream class. The openFileOutput()
method opens a named file for writing, with the mode specified. In this example, the
MODE_WORLD_READABLE constant indicates that the file is readable by all other
applications. The other available modes are MODE_PRIVATE, MODE_APPEND, and
MODE_WORLD_WRITEABLE.
To convert a character stream into a byte stream, you use an instance of the
OutputStreamWriter class, passing it an instance of the FileOutputStream object. You
then use its write() method to write the string to the file. To ensure that all the
bytes are written to the file, use the flush() method. Finally, use the close() method
to close the file.
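The write-flush-close sequence described above can be sketched in plain Java. Note that on Android the FileOutputStream would come from Context.openFileOutput() with one of the mode constants, rather than being opened by path as this sketch does; the class name here is illustrative.

```java
import java.io.BufferedReader;
import java.io.FileOutputStream;
import java.io.FileReader;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.UncheckedIOException;

// Plain-Java sketch of the stream pattern described above.
public class SaveText {
    public static void save(String path, String text) {
        try {
            FileOutputStream fos = new FileOutputStream(path);     // byte stream
            OutputStreamWriter osw = new OutputStreamWriter(fos);  // character-to-byte bridge
            osw.write(text);   // write the string
            osw.flush();       // ensure all bytes reach the file
            osw.close();       // also closes the underlying FileOutputStream
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static String load(String path) {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader br = new BufferedReader(new FileReader(path))) {
            int c;
            while ((c = br.read()) != -1) sb.append((char) c);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return sb.toString();
    }
}
```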
Small Application Development using Android Development Framework
