
Operating System

21CST404

VINUTHA M S, DEPT OF CSE, Dr AIT


SYLLABUS:



TEXT BOOK:
1. Operating System Concepts, Tenth Edition, Abraham (Avi) Silberschatz, Peter Baer Galvin, Greg Gagne, John Wiley
& Sons, Inc. ISBN 978-1-118-06333-0

REFERENCE BOOKS:
1. D.M Dhamdhere: Operating systems - A concept-based Approach, 3rd Edition, Tata McGraw- Hill, 2012.
ISBN13: 9781259005589
2. P.C.P. Bhatt: Introduction to Operating Systems Concepts and Practice, 3rd Edition, PHI, 2010. ISBN-10:
9788120348363
3. Harvey M. Deitel: Operating Systems, 3rd Edition, Pearson Education, 2011. ISBN: 9788131712894



UNIT I 8 hours
• Introduction: What Operating Systems Do, Computer-System Organization, Computer-System Architecture,
Operating-System Structure, Resource Management, Virtualization, Distributed Systems
• Operating-System Structures: Operating-System Services, System Calls, Operating-System Design and
Implementation, Operating-System Structure.



INTRODUCTION TO OPERATING SYSTEM
What is an Operating System?
An operating system is system software that acts as an intermediary between a user of a computer and the computer
hardware. It is software that manages the computer hardware and allows the user to execute programs in a convenient and
efficient manner.
It handles all the interactions between the software and the hardware. All the working of a computer system depends on the
OS at the base level.
An operating system (OS) manages all other applications and programs in a computer, and it is loaded into the computer by a
boot program. It enables applications to interact with the computer's hardware: application programs request services from
the operating system through a designated application program interface (API).
Windows, Linux, and Android are examples of operating systems that enable the user to use programs like MS Office, Notepad,
and games on the computer or mobile phone. It is necessary to have at least one operating system installed in the computer to
run basic programs like browsers.
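As a small illustration of how an application requests an OS service through an API, the C sketch below (assuming a POSIX system, where the library's write() wraps the kernel's write system call) asks the OS to copy bytes to standard output; the function name greet is invented for illustration:

```c
#include <unistd.h>   /* POSIX API: write() wraps the kernel's write system call */

/* Request a service from the OS through the API: copy bytes to
   standard output. The library call traps into the kernel, and the
   OS drives the actual device. Returns the byte count the kernel wrote. */
long greet(void)
{
    const char msg[] = "hello\n";
    return (long)write(STDOUT_FILENO, msg, sizeof msg - 1);
}
```

The program never touches the display hardware itself; it only asks, and the OS performs the I/O on its behalf.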



Objectives/goals of OS
The primary goals of an operating system are as follows:

• Convenience – An operating system makes a computer easier to use. It lets users get started on the tasks they
wish to complete quickly, without the stress of first configuring the system, and it hides the difficulty of
managing the hardware.

• Efficiency – An operating system enables the efficient use of resources, partly because less time is spent
configuring the system. Resources and programs should be managed so that no resource is kept idle and no
memory is wasted.

• Ability to evolve – An operating system should be designed in such a way that it allows for the effective
development, testing, and introduction of new features without interfering with service.
This makes the operating system more reliable.

• Management of system resources – The OS guarantees that resources are shared fairly among the various
processes and users.
It is a resource allocator.

• Security – An operating system provides safety and security of data between the user and the hardware.
The OS enables multiple users to securely share a computer, keeping their files, processes, memory, and
devices separate, and it keeps the system and applications safe through authorization.

What are the functions of an Operating System?
• Security – For security, modern operating systems employ a firewall. A firewall is a type of security system that monitors all
computer activity and blocks it if it detects a threat.

• Job accounting – The operating system keeps track of all the functions of a computer system, so it makes a record of
all the activities taking place on the system. It keeps an account of information about memory, resources, errors,
etc., which can be used as and when required.

• Control over system performance – The operating system will collect consumption statistics for various resources and
monitor performance indicators such as reaction time, which is the time between requesting a service and receiving a
response from the system.

• Error-detecting aids – A variety of errors can occur while a computer system is running. The operating system
continuously monitors the system to locate or recognize problems and protects the system from them; error detection
also helps guarantee that data is delivered reliably across susceptible networks.

• Coordination between other software and users – The operating system (OS) allows hardware components to be
coordinated and directs and allocates assemblers, interpreters, compilers, and other software to different users of
the computer system.

• Booting process – The process of starting or restarting a computer is referred to as booting. Cold booting occurs when a
computer is turned on after being totally switched off; warm booting occurs when a running computer is restarted. The
operating system (OS) is in charge of booting the computer.



The operating system is used as a communication channel between the computer hardware and the user; it works as an
intermediary between the system hardware and the end user. The operating system handles the following responsibilities:

 It controls all the computer resources.
 It provides valuable services to user programs.
 It coordinates the execution of user programs.
 It provides resources for user programs.
 It provides an interface to the user.
 It hides the complexity.
 It supports multiple execution modes.
 It monitors the execution of user programs to prevent errors.
In short, the operating system provides the environment within which programs run.

Operating systems are everywhere, from cars and home appliances that include “Internet of Things” devices, to smart
phones, personal computers, enterprise computers, and cloud computing environments.



Computer System Structure (Components of Computer System)

In order to explore the role of an operating system in a modern computing environment, it is important first to
understand the organization and architecture of computer hardware.
A computer system consists mainly of four components:
• Hardware – Provides basic computing resources CPU, memory, I/O
devices.
• Operating system - Controls and coordinates use of hardware among
various applications and users.
• Application programs – Define the ways in which the system resources
are used to solve the computing problems of the users, Word processors,
compilers, web browsers, database systems, video games.
• Users - People, Machines, other Computers.

Abstract View of Components of Computer system



DIFFERENT VIEWS OF OS
We can also view a computer system as consisting of hardware, software, and data. The operating system provides the means for
proper use of these resources in the operation of the computer system. An operating system is similar to a government. Like a
government, it performs no useful function by itself. It simply provides an environment within which other programs can do useful
work.
To understand the operating system's role more fully, we next explore operating systems from two viewpoints. The
operating system may be observed from the point of view of the user or of the system, known respectively as the
 User View
 System View



USER VIEW
The user’s view of the operating system depends on the type of user.
• If the user is using a standalone system, the OS is designed for ease of use and high performance; here resource
utilization is not given much importance.
• If users are at different terminals connected to a mainframe or minicomputer, sharing information and resources,
the OS is designed to maximize resource utilization. The OS ensures that CPU time, memory, and I/O are used
efficiently and that no single user takes more than the resources allotted to them.
• If users are at workstations connected to networks and servers, each user has a system unit of their own and shares
resources and files with other systems. Here the OS is designed for both ease of use and resource availability (files).
• Embedded systems used in home devices (such as washing machines) and automobiles have little or no user interaction;
a few LEDs may show the status of their work.
• Users of hand-held systems expect the OS to be designed for ease of use and for performance per amount of battery life.



System Views
The operating system can be viewed as a resource allocator and a control program.
• Resource allocator –
 The OS acts as a manager of hardware and software resources.
 CPU time, memory space, file-storage space, I/O devices, shared files etc. are the different resources required during
execution of a program.
 There can be conflicting requests for these resources from different programs running in the same system. The OS
assigns the resources to the requesting programs depending on priority.

• Control Program –
The OS is a control program that manages the execution of user programs to prevent errors and improper use of the
computer.
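The resource-allocator view above can be sketched in miniature: a hypothetical allocate() that resolves conflicting requests for one resource by granting it to the highest-priority requester. The struct and the numeric priority scheme here are illustrative assumptions, not a real OS interface:

```c
#include <stddef.h>

/* Illustrative sketch of the OS as resource allocator: among several
   conflicting requests for a single resource, grant it to the
   highest-priority requester. Higher number = higher priority. */
struct request { int pid; int priority; };

/* Returns the pid of the winning request, or -1 if there are none. */
int allocate(const struct request *reqs, size_t n)
{
    int winner = -1, best = -1;
    for (size_t i = 0; i < n; i++) {
        if (reqs[i].priority > best) {   /* found a higher-priority request */
            best = reqs[i].priority;
            winner = reqs[i].pid;
        }
    }
    return winner;
}
```

Real allocators also weigh fairness and efficiency, but the core decision, arbitrating conflicting requests, has this shape.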



Defining Operating System
An operating system (OS) is the program that, after being initially loaded into the computer by a boot program, manages all of the
other application programs in a computer. The application programs make use of the operating system by making requests for
services through a defined application program interface (API). In addition, users can interact directly with the operating system
through a user interface.
• “An Operating System is the low-level software that supports a computer's basic functions, such as file management, memory
management, process management, handling input and output, and controlling peripheral devices such as disk drives and
printers.”
• “An Operating system is a program that acts as an interface between the user and the computer hardware and controls the
execution of all kinds of programs”.
• “An Operating system (OS) is system software that manages computer hardware, software resources, and provides common
services for computer programs.”
• “The operating system is the one program running at all times on the computer—usually called the kernel. Along with the kernel,
there are two other types of programs: system programs, which are associated with the operating system but are not
necessarily part of the kernel, and application programs, which include all programs not associated with the operation of the
system.”



Computer System Organization
Architecture of a computer system can be considered as a catalogue of tools or attributes that are visible to the user such as
instruction sets, number of bits used for data, addressing techniques, etc. These attributes have direct impact on logical
execution of program.

Organization of a computer system defines the way system is structured and how features are implemented. The significant
components of Computer organization are ALU, CPU, memory and memory organization.

Organization is done on the basis of the architecture. Computer organization deals with low-level design issues;
organization expresses the realization of the architecture.



Von-Neumann Model [ias architecture]
• Von Neumann proposed his computer architecture design in
1945; it later became known as the Von Neumann architecture. It
consists of a Control Unit, an Arithmetic and Logic Unit (ALU),
Memory, Registers, and Input/Output.
• Von Neumann architecture is based on the stored-program
computer concept, where instruction data and program data
are stored in the same memory. This design is still used in most
computers produced today. Executes programs following the
fetch-decode-execute cycle [Instruction Cycle/Machine Cycle]
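The fetch-decode-execute cycle can be sketched with a toy stored-program machine in C. The three-instruction set (LOAD/ADD/HALT) and the single accumulator register are invented for illustration; real ISAs are far richer, but the loop has the same shape:

```c
#include <stddef.h>

/* A toy stored-program machine: instructions live in a memory array,
   and the CPU loops fetch -> decode -> execute until HALT. */
enum op { LOAD, ADD, HALT };

struct insn { enum op code; int arg; };

int run(const struct insn mem[], size_t n)
{
    int acc = 0;                         /* accumulator register */
    for (size_t pc = 0; pc < n; ) {      /* pc: program counter */
        struct insn ir = mem[pc++];      /* FETCH into instruction register */
        switch (ir.code) {               /* DECODE the opcode */
        case LOAD: acc = ir.arg;  break; /* EXECUTE */
        case ADD:  acc += ir.arg; break;
        case HALT: return acc;
        }
    }
    return acc;
}
```

Because instructions and data share one memory, a program here is just data in the same array the machine reads, which is exactly the stored-program idea.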



Harvard Architecture
Harvard Architecture is the computer architecture that contains
separate storage and separate buses (signal path) for instruction and data. It
was basically developed to overcome the bottleneck of Von Neumann’s
Architecture. The main advantage of having separate buses for instruction and
data is that the CPU can access instructions and read/write data at the same
time.



Computer - System Operation
A modern, general-purpose computer consists of one or more CPUs and a number of device controllers connected through a
common bus that provides access to shared memory. Each device controller is in charge of a specific type of device. To ensure
orderly access to the shared memory, a memory controller is provided whose function is to synchronize access to the
memory. The CPU and the other devices execute concurrently, competing for memory cycles.
When the system is switched on, the 'bootstrap' program is executed. It is the initial program to run in the system and is
stored in read-only memory (ROM) or in electrically erasable programmable read-only memory (EEPROM). It initializes the
CPU registers, memory, device controllers, and other initial setup. The bootstrap program also locates the OS kernel and loads
it into memory. The OS then starts the first process to be executed (e.g., the 'init' process) and waits for interrupts from
the user.



Interrupt
• An interrupt is a signal emitted by a device attached to a computer or by a program within the computer. It requires the operating
system to stop and figure out what to do next.
• An interrupt temporarily stops or terminates a service or a current process. Most I/O devices signal interrupts to the CPU over a
bus control line called the interrupt-request line; the code that handles an interrupt is the interrupt service routine. An interrupt
signal might be planned or unplanned.
• Interrupts are of two types:
 Hardware interrupt: occurs via an interrupt-request signal sent from peripheral circuits to the CPU.
 Software interrupt: occurs by executing a dedicated instruction called a system call or monitor call.



Hardware interrupt

For example, pressing a key on the keyboard generates a signal that is delivered to the processor so it can act on the
keystroke; such interrupts are called hardware interrupts.

Software interrupt

A software interrupt, by contrast, is raised by the running program itself, by executing a dedicated instruction such as a
system call.


Interrupt handling
• When the CPU is interrupted, it stops what it is doing and immediately transfers execution to a fixed location. The CPU then
determines the kind of interrupt and its interrupt number. Each interrupt number and the start address of its handler are
stored in a table known as the IVT, or Interrupt Vector Table.
• Using the interrupt number and the IVT, the CPU finds the address of the routine that is needed to handle the respective
interrupt. Control is then given to that routine, which is known as the Interrupt Service Routine
(ISR).
• Interrupts are an important part of computer architecture. Each computer design has its own interrupt mechanism, but several functions
are common.
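The IVT lookup described above can be sketched as an array of function pointers indexed by interrupt number. The eight-entry table and the handlers here are made-up placeholders; a real IVT holds hardware-defined entries, but the dispatch idea is the same:

```c
#include <stddef.h>

/* A minimal interrupt vector table: the interrupt number indexes an
   array of handler (ISR) addresses, so dispatch is a table lookup. */
#define NVECTORS 8

typedef int (*isr_t)(void);          /* an ISR; returns a status code here */

static int timer_isr(void)    { return 1; }   /* hypothetical handlers */
static int keyboard_isr(void) { return 2; }
static int spurious_isr(void) { return -1; }  /* default: unexpected interrupt */

static isr_t ivt[NVECTORS];          /* the interrupt vector table */

void ivt_init(void)
{
    for (size_t i = 0; i < NVECTORS; i++)
        ivt[i] = spurious_isr;
    ivt[0] = timer_isr;              /* install device handlers by number */
    ivt[1] = keyboard_isr;
}

/* What the CPU does on an interrupt: use the number as an index into
   the vector and jump to (here, call) the handler stored there. */
int dispatch(unsigned irq)
{
    return ivt[irq % NVECTORS]();
}
```

A table lookup makes dispatch constant-time, which is why vectored interrupts beat scanning every possible source.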

VINUTHA M S,DEPT OF CSE,Dr AIT 22


Implementation: Interrupt-Driven I/O Cycle

The basic interrupt mechanism works as follows.
1. The CPU hardware has a wire called the interrupt-request line that
the CPU senses after executing every instruction.
2. When the CPU detects that a controller has asserted a signal on the
interrupt-request line, it reads the interrupt number and jumps to
the interrupt-handler routine by using that interrupt number as an
index into the interrupt vector. It then starts execution at the
address associated with that index.
3. The interrupt handler saves any state it will be changing during its
operation, determines the cause of the interrupt, performs the
necessary processing, performs a state restore, and executes a
return from interrupt instruction to return the CPU to the
execution state prior to the interrupt.

We say that the device controller raises an interrupt by asserting a
signal on the interrupt-request line, the CPU catches the interrupt and
dispatches it to the interrupt handler, and the handler clears the
interrupt by servicing the device. The figure summarizes the interrupt-
driven I/O cycle.

VINUTHA M S,DEPT OF CSE,Dr AIT 23


Most CPUs have two interrupt request lines.
• One is the nonmaskable interrupt, which is reserved for
events such as unrecoverable memory errors.
• The second interrupt line is maskable: it can be turned
off by the CPU before the execution of critical instruction
sequences that must not be interrupted. The maskable
interrupt is used by device controllers to request service.
The purpose of a vectored interrupt mechanism is to reduce the
need for a single interrupt handler to search all possible sources
of interrupts to determine which one needs service. In
practice, however, computers have more devices (and,
hence, interrupt handlers) than they have address elements
in the interrupt vector.
A common way to solve this problem is to use interrupt
chaining, in which each element in the interrupt vector
points to the head of a list of interrupt handlers. When an
interrupt is raised, the handlers on the corresponding list
are called one by one, until one is found that can service the
request. This structure is a compromise between the
overhead of a huge interrupt table and the inefficiency of
dispatching to a single interrupt handler. Figure illustrates
the design of the interrupt vector for Intel processors. The
events from 0 to 31, which are nonmaskable, are used to
signal various error conditions. The events from 32 to 255,
which are maskable, are used for purposes such as device-
generated interrupts
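Interrupt chaining can be sketched as a per-vector linked list that is walked until one handler claims the interrupt. The struct and the device-id matching rule are illustrative assumptions; real handlers typically query their device's status register to decide whether the interrupt is theirs:

```c
#include <stddef.h>

/* Interrupt-chaining sketch: one interrupt-vector entry heads a list
   of handlers that are polled in order until one services the request. */
struct handler {
    int owner;                  /* device id this handler services (illustrative) */
    struct handler *next;       /* next handler sharing the same vector */
};

/* Walk the chain; return the device id that was serviced, or -1 if no
   handler on the list claims the pending interrupt (spurious). */
int service_chain(const struct handler *head, int pending)
{
    for (const struct handler *h = head; h != NULL; h = h->next)
        if (h->owner == pending)
            return h->owner;
    return -1;
}
```

This is the compromise described above: one table slot, several handlers, and a short list walk instead of either a huge table or a search over every device.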

VINUTHA M S,DEPT OF CSE,Dr AIT 24


Storage Structure

• Computer programs must be in main memory (RAM) to be executed. Main memory is the large memory that the processor
can access directly. It commonly is implemented in a semiconductor technology called dynamic random-access memory
(DRAM). Computers provide Read Only Memory (ROM), whose data cannot be changed.
• All forms of memory provide an array of memory words. Each word has its own address. Interaction is achieved through a
sequence of load or store instructions to specific memory addresses.
• A typical instruction-execution cycle, as executed on a system with a Von Neumann architecture, first fetches an instruction
from memory and stores that instruction in the instruction register. The instruction is then decoded and may cause operands
to be fetched from memory and stored in some internal register. After the instruction on the operands has been executed,
the result may be stored back in memory.
• Ideally, we want the programs and data to reside in main memory permanently. This arrangement usually is not possible for
the following two reasons:
1. Main memory is usually too small to store all needed programs and data permanently.
2. Main memory is a volatile storage device that loses its contents when power is turned off.

VINUTHA M S,DEPT OF CSE,Dr AIT 25


Storage Device Hierarchy
• Thus, most computer systems provide secondary storage as an extension of
main memory. The main requirement for secondary storage is that it will be
able to hold large quantities of data permanently.
• The most common secondary-storage device is a magnetic disk, which
provides storage for both programs and data. Most programs are stored on a
disk until they are loaded into memory. Many programs then use the disk as
both a source and a destination of the information for their processing.
• The wide variety of storage systems in a computer system can be organized in
a hierarchy as shown in the figure, according to speed, cost and capacity. The
higher levels are expensive, but they are fast. As we move down the
hierarchy, the cost per bit generally decreases, whereas the access time and
the capacity of storage generally increase.
• In addition to differing in speed and cost, the various storage systems are
either volatile or nonvolatile. Volatile storage loses its contents when the
power to the device is removed. In the absence of expensive battery and
generator backup systems, data must be written to nonvolatile storage for
safekeeping. In the hierarchy shown in figure, the storage systems above the
electronic disk are volatile, whereas those below are nonvolatile.
• An electronic disk can be designed to be either volatile or nonvolatile. During
normal operation, the electronic disk stores data in a large DRAM array,
which is volatile. But many electronic-disk devices contain a hidden magnetic
hard disk and a battery for backup power. If external power is interrupted,
the electronic-disk controller copies the data from RAM to the magnetic disk.
Another form of electronic disk is flash memory.



Performance of Various Levels of Storage



I/O Structure
The method that is used to transfer information between internal storage and external I/O devices is known as I/O interface or
structure. The CPU is interfaced using special communication links by the peripherals connected to any computer system. These
communication links are used to resolve the differences between CPU and peripheral. There exists special hardware
components between CPU and peripherals to supervise and synchronize all the input and output transfers that are called
interface units.
A large portion of operating system code is dedicated to managing I/O, both because of its importance to the reliability and
performance of a system and because of the varying nature of the devices.
Every device has a device controller, which maintains a local buffer and a set of special-purpose registers. The device controller
is responsible for moving data between the peripheral device it controls and its local buffer. The operating system has a device
driver for each device controller.
Data transfer between CPU and the I/O devices may be done in different modes. Data transfer to and from the peripherals may
be done in any of the three possible ways
1. Programmed I/O.
2. Interrupt- initiated I/O.
3. Direct memory access (DMA).



Programmed I/O:
Programmed I/O is one of the three I/O techniques, alongside interrupt-driven I/O and direct memory access (DMA). It is the
simplest technique for exchanging data between the processor and external devices. With programmed I/O, data are exchanged
between the processor and the I/O module: the processor executes a program that gives it direct control of the I/O operation,
including sensing device status, sending a read or write command, and transferring the data.
When the processor issues a command to the I/O module, it must wait until the I/O operation is complete. If the processor is
faster than the I/O module, this wastes processor time. The overall operation of programmed I/O can be summarized as follows:

1. The processor is executing a program and encounters an instruction relating to I/O operation.
2. The processor then executes that instruction by issuing a command to the appropriate I/O module.
3. The I/O module will perform the requested action based on the I/O command issued by the processor (READ/WRITE) and set the
appropriate bits in the I/O status register.
4. The processor periodically checks the status of the I/O module until it finds that the operation is complete.
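Step 4's busy-wait can be sketched as follows. The status register is simulated by a counter here, since real device registers live at hardware-specific addresses; the point is that the CPU itself loops, doing no useful work until the device reports completion:

```c
/* Programmed I/O sketch: the CPU polls a (simulated) status register
   until the device reports that the operation is complete. */
enum { STATUS_BUSY = 0, STATUS_DONE = 1 };

/* Simulated device: reports DONE after a fixed number of reads. */
static int polls_remaining = 3;

static int read_status_register(void)
{
    return (--polls_remaining <= 0) ? STATUS_DONE : STATUS_BUSY;
}

/* The busy-wait from step 4; returns how many status reads it took. */
int programmed_io_wait(void)
{
    int polls = 1;
    while (read_status_register() != STATUS_DONE)
        polls++;                 /* CPU does nothing useful while waiting */
    return polls;
}
```

Every iteration of that loop is a processor cycle spent waiting, which is exactly the waste that interrupt-driven I/O (next) avoids.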



Interrupt-initiated I/O:
As seen above, programmed I/O keeps the CPU busy unnecessarily. This can be avoided by using an interrupt-driven method
of data transfer: using the interrupt facility, special commands inform the interface to issue an interrupt-request signal
whenever data is available from a device. In the meantime the CPU can proceed with executing another program, while the
interface keeps monitoring the device. Whenever the interface determines that the device is ready for data transfer, it issues
an interrupt-request signal to the computer. Upon detecting an external interrupt signal, the CPU momentarily stops the task
it was performing, branches to the service program to process the I/O transfer, and then returns to the task it was originally
performing.

DMA Structure:
Direct Memory Access (DMA) is a method of handling I/O in which the device controller communicates with memory directly,
without CPU involvement. After the CPU sets up the resources for the I/O device (buffers, pointers, and counters), the device
controller transfers blocks of data directly to or from memory without CPU intervention. DMA is generally used for high-speed
I/O devices: the data transfer between a fast storage medium such as a magnetic disk and the memory unit would otherwise be
limited by the speed of the CPU. By letting the peripherals communicate directly over the memory buses, the CPU is removed
from the transfer; the DMA controller takes over the buses to manage the transfer directly between the I/O devices and the
memory unit.



Interrupt-driven I/O is well suited for moving small amounts of
data but can produce high overhead when used for bulk data
movement such as disk I/O.
To solve this problem, direct memory access (DMA) is used. After
setting up buffers, pointers, and counters for the I/O device, the
device controller transfers an entire block of data directly to or
from its own buffer storage to memory, with no intervention by
the CPU. Only one interrupt is generated per block, to tell the
device driver that the operation has completed. While the device
controller is performing these operations, the CPU is available to
accomplish other work.
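The DMA idea can be sketched under simulation: the CPU supplies a destination, a source, and a count; the "controller" (here just memcpy standing in for the bus transfer) moves the whole block; and exactly one completion interrupt is counted for the block. Names and the interrupt counter are illustrative:

```c
#include <string.h>

/* DMA sketch: the CPU programs buffer, pointer, and count, then the
   (simulated) controller moves the whole block with no per-byte CPU
   involvement and raises a single completion interrupt. */
enum { BLOCK = 512 };

static int interrupts_raised;        /* completion interrupts observed */

void dma_transfer(unsigned char *dst, const unsigned char *src, size_t count)
{
    memcpy(dst, src, count);         /* stands in for the bus transfer */
    interrupts_raised++;             /* one interrupt per block, not per byte */
}
```

Contrast this with interrupt-driven I/O of the same block one byte at a time, which would raise hundreds of interrupts instead of one.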
Some high-end systems use a switch rather than a bus architecture. On these systems, multiple components can talk to other
components concurrently, rather than competing for cycles on a shared bus.
**The main difference between the two is that in a bus architecture the paths to different components of the network are
shared, so the response time is usually slow, especially when a large number of users are present, because there is a single
path to a particular resource (shared memory). In a switched network, by contrast, there are concurrent point-to-point paths
to a resource; the throughput of a switch-based architecture is therefore higher than that of a bus-based one, since multiple
paths to a shared resource are available.



Computer System Architecture
Computer system architectures are categorized roughly according to the number of general-purpose processors used:
1. Single-Processor Systems
2. Multi -Processor Systems
3. Clustered Systems
Single-Processor Systems :
A single-processor system contains only one processing core, so only one process can be executed at a time; the next process
is then selected from the ready queue. Most general-purpose computers are single-processor systems, as they are commonly
in use. Single-processor systems range from PDAs (personal digital assistants) to mainframes. Such a system may also contain
special-purpose processors, in the form of device-specific processors, for devices such as disk, keyboard, and graphics
controllers. As in the diagram, there may be multiple applications that need to be executed; however, the system contains a
single processor, and only one process can be executed at a time.

 The cost of single-processor systems is higher because every processor requires its own separate resources.
 Single-processor systems are easy to design.
 A single-processor system is less reliable because a failure in the one processor results in failure of the
entire system.
 The throughput of single-processor systems is lower than that of multiprocessor systems because every
task is performed by the same processor.
 Co-processors are used in single-processor systems: multiple controllers designed to handle special tasks,
which can execute limited instruction sets (for example, a DMA controller).



Multi-Processor Systems
Multiprocessor systems (also known as parallel systems or tightly coupled systems, implying more interdependency,
coordination, and information flow) are systems that have two or more processors in close communication, sharing the
computer bus, the clock, memory, and peripheral devices.
Multiprocessor systems have three main advantages:
1. Increased throughput - In a multiprocessor system, different programs execute simultaneously on the multiple
processors. However, increasing the number of processors does not increase performance proportionally, due to the
overhead incurred in keeping all the parts working correctly and the contention for shared resources. The speed-up
ratio with N processors is not N; rather, it is less than N. Thus the speed of the system is not as expected.
2. Economy of scale - Multiprocessor systems can cost less than equivalent number of many single-processor systems. As the
multiprocessor systems share peripherals, mass storage, and power supplies, the cost of implementing this system is economical. If
several processes are working on the same data, the data can also be shared among them.
3. Increased reliability- In multiprocessor systems functions are shared among several processors. If one processor fails, the system is
not halted, it only slows down. The job of the failed processor is taken up, by other processors.
Two techniques to maintain ‘Increased Reliability’ - graceful degradation & fault tolerant
1. Graceful degradation – With multiple processors, when one processor fails the other processors take up its work, and
the system slows down gradually rather than failing outright.
2. Fault tolerance – When one processor fails, its operations are stopped; the system failure is then detected, diagnosed,
and corrected.

Throughput is a measure of how many units of information a system can process in a given amount of time.



Different types of multi-processor systems
1) Asymmetric multiprocessing – (Master/Slave architecture) Here each processor is assigned a specific task, by the master
processor. A master processor controls the other processors in the system. It schedules and allocates work to the slave
processors.
2) Symmetric multiprocessing (SMP) – All the processors are considered peers; there is no master-slave relationship. Each
processor has its own registers and cache, while memory is shared. The benefit of this model is that many processes
can run simultaneously: N processes can run if there are N CPUs, without causing a significant deterioration of
performance. Operating systems like Windows, Windows XP, Mac OS X, and Linux provide support for SMP.
3) Multicore – A recent trend in CPU design is to include multiple compute cores on a single chip. Communication
between cores within a chip is faster than communication between two separate processors. However, once too many
CPUs are added, contention for the system bus becomes a bottleneck and performance begins to degrade.

Figures: asymmetric multiprocessing (master/slave), symmetric multiprocessing, and dual-core design



4) NUMA [non-uniform memory access ]: An alternative approach is instead to provide each CPU with its own local memory that is
accessed via a small, fast local bus. The CPUs are connected by a shared system interconnect, so that all CPUs share one physical
address space. This approach—known as non-uniform memory access, or NUMA—is illustrated in the figure. The advantage is that,
when a CPU accesses its local memory, not only is it fast, but there is also no contention over the system interconnect. Thus, NUMA
systems can scale more effectively as more processors are added.

5) Blade servers : Are systems in which multiple processor boards, I/O boards, and networking boards are placed in the same
chassis. The difference between these and traditional multiprocessor systems is that each blade processor board boots
independently and runs its own operating system. Some blade-server boards are multiprocessor as well, which blurs the lines
between types of computers. In essence, these servers consist of multiple independent multiprocessor systems.
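The scheduling contrast between the asymmetric (master/slave) and symmetric models described above can be sketched as a toy simulation. This is an illustrative Python sketch only, not real dispatcher code; the job names and CPU numbering are invented for the example.

```python
from collections import deque

def asymmetric_schedule(jobs, n_slaves):
    """Master/slave: the master (CPU 0) assigns every job to a slave CPU;
    slaves never make scheduling decisions themselves."""
    assignment = {cpu: [] for cpu in range(1, n_slaves + 1)}
    for i, job in enumerate(jobs):
        assignment[1 + i % n_slaves].append(job)   # master picks the slave
    return assignment

def symmetric_schedule(jobs, n_cpus):
    """SMP: all CPUs are peers; each takes its next job from a shared
    ready queue (modelled here by round-robin turns on a deque)."""
    ready = deque(jobs)
    assignment = {cpu: [] for cpu in range(n_cpus)}
    cpu = 0
    while ready:
        assignment[cpu].append(ready.popleft())    # any idle peer takes the next job
        cpu = (cpu + 1) % n_cpus
    return assignment

# Master assigns J1..J3 to slaves 1 and 2; peers share a common queue.
print(asymmetric_schedule(["J1", "J2", "J3"], n_slaves=2))
print(symmetric_schedule(["J1", "J2", "J3", "J4"], n_cpus=2))
```

Note that in the asymmetric model only CPU 0 runs the scheduling logic, while in the symmetric model the shared ready queue is the point of coordination between peers.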

[Figures: NUMA architecture; blade server]



Comparison: Single Processor Systems vs Multiprocessor Systems

Description
• Single processor: a system that contains only one processor for processing.
• Multiprocessor: a system that contains two or more processors for processing.

Use of co-processors
• Single processor: yes — single processor systems use multiple controllers that are designed to handle special tasks and can execute limited instruction sets, e.g. the DMA controller and North/South Bridge.
• Multiprocessor: two approaches are used: 1) symmetric multiprocessing (SMP), in which each processor performs all the tasks within the operating system, and 2) asymmetric multiprocessing, in which one processor works as master and the others act as slaves.

Throughput
• Single processor: lower, because each and every task is performed by the same processor.
• Multiprocessor: higher; if a system contains N processors, its throughput will be slightly less than N times that of a single processor, because synchronization must be maintained between processors and they share resources, which adds a certain amount of overhead.

Cost economics
• Single processor: multiple single processor systems cost more, because each processor requires its own resources (mass storage, peripherals, power supplies, etc.).
• Multiprocessor: costs less than equivalent multiple single processor systems, because the processors use the same resources on a sharing basis.

Design process
• Single processor: easy to design.
• Multiprocessor: difficult to design, because synchronization must be maintained between processors; otherwise one processor may be overloaded while another remains idle at the same time.

Reliability
• Single processor: less reliable, because failure of the one processor results in failure of the entire system.
• Multiprocessor: more reliable, because failure of one processor does not halt the entire system; only its speed slows down.

Examples
• Single processor: most modern PCs.
• Multiprocessor: blade servers.


Asymmetric Multiprocessing vs Symmetric Multiprocessing

1. In asymmetric multiprocessing, the processors are not treated equally; in symmetric multiprocessing, all the processors are treated equally.
2. In ASMP, the tasks of the operating system are done by the master processor; in SMP, they are done by each individual processor.
3. In ASMP, there is no communication between processors, as they are controlled by the master processor; in SMP, all processors communicate with one another through shared memory.
4. In ASMP, the process-scheduling approach used is master-slave; in SMP, each processor takes the next process from the common ready queue.
5. Asymmetric multiprocessing systems are cheaper; symmetric multiprocessing systems are costlier.
6. Asymmetric multiprocessing systems are easier to design; symmetric multiprocessing systems are complex to design.
7. In ASMP, the processors can exhibit different architectures; in SMP, the architecture of each processor is the same.
8. ASMP is simple, as only the master processor has access to the shared data; SMP is complex, as the processors must be synchronized in order to maintain the load balance.
9. In ASMP, if the master processor malfunctions, a slave processor is turned into the master and continues the execution, and when a slave processor fails, the other processors take over its task; in SMP, a processor failure reduces the system's computing capacity.
10. ASMP is suitable for homogeneous or heterogeneous cores; SMP is suitable for homogeneous cores.
Clustered Systems
Clustered systems are two or more individual systems connected together via a network and sharing software resources.
• Clustering provides high availability of resources and services. The service will continue even if one or more systems in the
cluster fail. High availability is generally obtained by storing a copy of files (s/w resources) in the system.
• All the systems of the clustered operating system have independent processing power and capacity i.e. they have their CPUs and
shared storage media. These systems work together with a shared storage media to complete all the tasks. The below diagram
illustrates the meaning of the clustered operating system.
• A cluster operating system is a combination of hardware and software clusters. The hardware clusters help in sharing high-
performance disks between all the computer systems or the nodes. Whereas the software cluster ensures and manages the
working of all the systems together. Each node in the cluster system contains the cluster software. This software keeps an eye on
the entire cluster system and ensures it works properly. If any of the nodes in the cluster system fails, then the rest of the nodes of
the system take control of their resources and try to restart.
Clustered operating systems can be divided into three types:
• Asymmetric Clustering Systems
• Symmetric Clustering Systems
• Parallel Cluster Systems.

General structure of a clustered system



1. Asymmetric clustering – one system is in hot-standby mode while the others are running the applications. The hot-standby host machine does nothing but monitor the active server. If that server fails, the hot-standby host becomes the active server.

2. Symmetric clustering – two or more systems are running applications, and are monitoring each other. This mode is more efficient, as it uses all of the available hardware. If any system fails, its job is taken up by the monitoring system.

3. Other forms of clusters include parallel clusters and clustering over a wide-area network (WAN). Parallel clusters allow multiple hosts to access the same data on the shared storage. Cluster technology is changing rapidly with the help of storage-area networks (SANs). Using a SAN, resources can be shared with dozens of systems in a cluster that are separated by miles.
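The hot-standby behaviour of asymmetric clustering can be illustrated with a few lines of toy Python. This is a simplified sketch, assuming a simple periodic heartbeat; the server names are invented for the example, and real cluster software handles far more failure cases.

```python
def active_after(heartbeats, active="server-A", standby="server-B"):
    """Asymmetric clustering sketch: the hot-standby host does nothing but
    watch the active server's heartbeat; on the first missed beat it
    promotes itself and becomes the active server."""
    for beat_received in heartbeats:      # one entry per monitoring interval
        if not beat_received:             # active server missed a heartbeat
            return standby                # standby takes over
    return active

print(active_after([True, True, True]))   # active server stays up
print(active_after([True, False, True]))  # standby takes over after the miss
```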



Advantages of Cluster Operating System
•Superior Availability: The failure of a single node does not mean the loss of a service or task. Since every node in the cluster runs on its own CPU, a failed node can be pulled down for maintenance while the remaining nodes take over its load, so the service is not interrupted.
•Efficiency: The cluster operating systems are more cost-effective as compared to highly reliable and larger storage mainframe
computers.
•Error Tolerance: If any error or fault occurs in any of the nodes of the system then the system does not halt. Because the failed
node can be swapped with the hot standby node in such situations.
•Performance: Cluster operating systems deliver high performance because two or more merged nodes are available; these nodes work as a parallel unit and hence produce better results.
•Scalability: Cluster systems are scalable, as it is easy to add more nodes to the system. Doing so can improve the system's performance, fault tolerance, and overall speed.
•Speed of processing: The cluster operating system offers great availability and performance speed over single computer
systems.

Disadvantages of Cluster Operating System


•High Cost: The major disadvantage of the cluster operating system is the extra cost of the hardware and software required to create a cluster.
•Maintenance: The cluster resources are challenging to maintain and manage, and improving the system therefore carries a high cost.



Operating system OPERATION
Multitasking Systems [Time Sharing System]
Multitasking refers to the execution of multiple jobs by the CPU by switching between them. In modern operating systems we are able to play MP3 music, edit documents in Microsoft Word, and browse in Google Chrome all simultaneously; this is accomplished by means of multitasking. Multitasking is a logical extension of multiprogramming. The major way in which multitasking differs from multiprogramming is that multiprogramming works solely on the concept of context switching, whereas multitasking is based on time sharing alongside the concept of context switching.
In Time sharing systems, a single CPU executes multiple jobs by switching among them, but the switches occur so frequently that the users can interact with
each program while it is running. The user feels that all the programs are being executed at the same time.

A time-shared operating system allows many users to share the computer simultaneously. As the system switches rapidly from one user to the next, each
user is given the impression that the entire computer system is dedicated to his use only, even though it is being shared among many users.
Here also basically a context switch is occurring but it is occurring so fast that the user is able to interact with each program separately while it is running.
This way, the user is given the illusion that multiple processes/ tasks are executing simultaneously. But actually only one process/ task is executing at a
particular instant of time. In multitasking, time sharing is best manifested because each running process takes only a fair quantum of the CPU time.
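The "fair quantum" idea above is essentially round-robin scheduling, which can be sketched in a few lines. This is a toy Python simulation under stated assumptions (every process is always ready, burst times are in whole quanta); the process names are invented for the example.

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate time sharing: each process runs for at most one quantum,
    then the CPU switches to the next, so every user gets frequent turns.
    Returns the order in which processes occupy the CPU."""
    ready = deque(burst_times.items())
    schedule = []
    while ready:
        name, remaining = ready.popleft()
        schedule.append(name)                 # this process gets one quantum
        remaining -= quantum
        if remaining > 0:
            ready.append((name, remaining))   # not finished: back of the queue
    return schedule

# P1 needs 3 quanta, P2 needs 1, P3 needs 2: each gets frequent short turns.
print(round_robin({"P1": 3, "P2": 1, "P3": 2}, quantum=1))
```

Because the switches are so frequent, each of P1, P2, and P3 appears to make continuous progress, which is exactly the illusion time sharing gives interactive users.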



Multiprogramming
One of the most important aspects of operating systems is the ability to multiprogram. A single user cannot keep either the CPU or the I/O devices busy at all times. Multiprogramming increases CPU utilization by organizing jobs so that the CPU always has one to execute. The figure shows the memory layout for a multiprogramming system.
• The operating system keeps several jobs in memory simultaneously, as shown in the figure. This set of jobs is a subset of the jobs kept in the job pool, since the number of jobs that can be kept simultaneously in memory is usually smaller than the number of jobs that can be kept in the job pool. The operating system picks and begins to execute one of the jobs in memory. Eventually, the job may have to wait for some task, such as an I/O operation, to complete. In a non-multiprogrammed system, the CPU would sit idle.
• In a multiprogrammed system, the operating system simply switches to, and executes, another job. When that job needs to wait, the CPU is switched to another job, and so on.
• Eventually, the first job finishes waiting and gets the CPU back. Thus, the CPU is never idle.
• Multiprogrammed systems provide an environment in which the various system resources (for example, CPU, memory, and peripheral devices) are utilized effectively, but they do not provide for user interaction with the computer system.
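The utilization gain from switching during I/O waits can be quantified with a deliberately idealized toy model. This Python sketch assumes that under multiprogramming there is always another ready job to fill every I/O gap (the best case); the job mixes are invented for the example.

```python
def cpu_utilization(jobs, multiprogrammed):
    """Toy model: total work is fixed. In a uniprogrammed run the CPU idles
    during every I/O wait; in an (idealised) multiprogrammed run another
    ready job always fills that gap, so the CPU is never idle."""
    cpu_time = sum(d for job in jobs for phase, d in job if phase == "cpu")
    io_time = sum(d for job in jobs for phase, d in job if phase == "io")
    elapsed = cpu_time if multiprogrammed else cpu_time + io_time
    return cpu_time / elapsed

# Each job alternates CPU bursts and I/O waits (durations in time units).
jobs = [[("cpu", 2), ("io", 3), ("cpu", 2)],
        [("cpu", 3), ("io", 1), ("cpu", 1)]]
print(cpu_utilization(jobs, multiprogrammed=False))  # 8/12 ≈ 0.67: CPU idles during I/O
print(cpu_utilization(jobs, multiprogrammed=True))   # 1.0: idealised, never idle
```

Real systems fall between these two extremes, since ready jobs are not always available to cover every wait.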



Uniprogramming vs Multiprogramming



Dual-Mode and Multimode Operation
DUAL MODE
The dual-mode operations in the operating system protect the operating system from illegal users. We accomplish this protection by designating the machine instructions that can cause harm as privileged instructions. The operating system needs to function in dual mode because kernel-level programs perform all the bottom-level functions of the OS, such as process management and memory management. If a user could alter these, the entire system could fail. So, to restrict users to only the tasks of their use, dual mode is necessary for an operating system.
Certain types of processes must be hidden from the user, while certain other tasks do not require any hardware support; using the dual mode of the OS, these can be dealt with separately. The hardware allows privileged instructions to execute only in kernel mode. An example of a privileged instruction is the command to switch to user mode. Other examples include monitoring of I/O, controlling timers, and handling interrupts.
To ensure proper operating-system execution, we must differentiate between the execution of operating-system code and user-defined code. We have two modes of operation: user mode and kernel mode. A mode bit is required to identify in which particular mode the current instruction is executing.
Types of Dual Mode in Operating System
The operating system has two modes of operation to ensure it works correctly:
• User mode: if the mode bit is 1, the system is running user applications.
• Kernel mode: when a user application requests a hardware service, a transition from user mode to kernel mode occurs; this is done by changing the mode bit from 1 to 0.



1. User Mode[Restricted mode]
When the computer system runs user applications, such as file creation or any other application program, it is in User Mode, and this mode does not have direct access to the computer's hardware. For hardware-related tasks—for example, when a user application requests a service from the operating system, or when an interrupt occurs—the system must switch to Kernel Mode. If the mode bit of the system's processor is 1, the system is in User Mode.
2. Kernel Mode [supervisor mode, system mode, or privileged mode]
All the bottom-level tasks of the operating system are performed in Kernel Mode. As kernel space has direct access to the system's hardware, Kernel Mode handles all the processes that require hardware support. Apart from this, the main functionality of Kernel Mode is to execute privileged instructions.
These privileged instructions are not available to user code, which is why they cannot be executed in User Mode. So, all the processes and instructions that the user is restricted from interfering with are executed in Kernel Mode. The mode bit for Kernel Mode is 0; when the mode bit is 0, the system functions in Kernel Mode.
• Example: With the mode bit, we can distinguish between a task executed on behalf of the operating system and one executed
on behalf of the user.
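The mode-bit check described above is enforced by hardware, but its logic can be mimicked in a short Python sketch. The instruction names and return strings here are invented for illustration; a real CPU traps via the interrupt mechanism rather than returning a string.

```python
KERNEL, USER = 0, 1   # mode-bit convention used in the text: 0 = kernel, 1 = user
PRIVILEGED = {"io_control", "set_timer", "switch_to_user_mode"}  # illustrative set

def execute(instruction, mode):
    """Sketch of the hardware check: privileged instructions run only when
    the mode bit is 0 (kernel mode); otherwise the hardware traps to the OS."""
    if instruction in PRIVILEGED and mode == USER:
        return "trap: illegal instruction"   # OS handles it (often: kill the program)
    return "executed"

print(execute("add", USER))          # ordinary instruction: fine in user mode
print(execute("set_timer", USER))    # privileged in user mode: traps
print(execute("set_timer", KERNEL))  # same instruction in kernel mode: allowed
```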



Advantages:
1. Protection: Dual-mode operation provides a layer of protection between user programs and the operating
system. In user mode, programs are restricted from accessing privileged resources, such as hardware
devices or sensitive system data. In kernel mode, the operating system has full access to these resources,
allowing it to protect the system from malicious or unauthorized access.
2. Stability: Dual-mode operation helps to ensure system stability by preventing user programs from
interfering with system-level operations. By restricting access to privileged resources in user mode, the
operating system can prevent programs from accidentally or maliciously causing system crashes or other
errors.
3. Flexibility: Dual-mode operation allows the operating system to support a wide range of applications and
hardware devices. By providing a well-defined interface between user programs and the operating system,
it is easier to develop and deploy new applications and hardware.
4. Debugging: By switching between user mode and kernel mode, developers can identify and fix issues more
quickly and easily.
5. Security: Dual-mode operation enhances system security by preventing unauthorized access to critical
system resources. User programs running in user mode cannot modify system data or perform privileged
operations, reducing the risk of malware attacks or other security threats.
6. Reliability: Dual-mode operation enhances system reliability by preventing crashes and other errors caused
by user programs. By restricting access to critical system resources, the operating system can ensure that
system-level operations are performed correctly and reliably.
7. Efficiency: Dual-mode operation can improve system performance by reducing overhead associated with
system-level operations. By allowing user programs to access resources directly in user mode, the operating
system can avoid unnecessary context switches and other performance penalties.
8. Compatibility: Dual-mode operation ensures backward compatibility with legacy applications and hardware
devices. By providing a standard interface for user programs to interact with the operating system, it is
easier to maintain compatibility with older software and hardware.
9. Isolation: Dual-mode operation provides isolation between user programs, preventing one program from
interfering with another. By running each program in its own protected memory space, the operating system
can prevent programs from accessing each other’s data or causing conflicts.



Multi Mode
The concept of modes of operation in an operating system can be extended beyond dual mode; this is known as a multimode system. In such cases, more than one bit is used by the CPU to set and handle the mode.
• An example of a multimode system is a CPU that supports virtualisation. Such CPUs have a separate mode that specifies when the virtual machine manager (VMM) and its virtualisation management software are in control of the system.
• In these systems, the virtual mode has more privileges than user mode but fewer than kernel mode.



Life cycle of instruction execution in a computer system

Initial control resides in the operating system, where instructions are executed in kernel mode. When control is given to a user application, the mode is set to user mode. Eventually, control is switched back to the operating system via an interrupt, a trap, or a system call. Most contemporary operating systems—such as Microsoft Windows, Unix, and Linux—take advantage of this dual-mode feature and provide greater protection for the operating system.

System calls provide the means for a user program to ask the operating system to perform tasks reserved for the operating system on the user program's behalf. A system call is invoked in a variety of ways, depending on the functionality provided by the underlying processor. A system call usually takes the form of a trap to a specific location in the interrupt vector. This trap can be executed by a generic trap instruction, although some systems have a specific system-call instruction to invoke a system call.

When a system call is executed, it is typically treated by the hardware as a software interrupt. Control passes through the interrupt vector to a service routine in the operating system, and the mode bit is set to kernel mode. The kernel examines the interrupting instruction to determine what system call has occurred; a parameter indicates what type of service the user program is requesting. Additional information needed for the request may be passed in registers, on the stack, or in memory (with pointers to the memory locations passed in registers). The kernel verifies that the parameters are correct and legal, executes the request, and returns control to the instruction following the system call.

Once hardware protection is in place, it detects errors that violate modes. These errors are normally handled by the operating system. If a user program fails in some way—such as by making an attempt either to execute an illegal instruction or to access memory that is not in the user's address space—then the hardware traps to the operating system. The trap transfers control through the interrupt vector to the operating system, just as an interrupt does. When a program error occurs, the operating system must terminate the program abnormally. This situation is handled by the same code as a user-requested abnormal termination.
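The trap-and-dispatch path described above can be sketched as a table lookup. This Python sketch is purely illustrative: the trap numbers, handler names, and return values are invented, and a real kernel validates parameters and switches the mode bit in hardware rather than in application code.

```python
def sys_getpid():
    """Illustrative service routine: return the caller's process id."""
    return 4242                               # made-up pid for the sketch

def sys_read(fd, n):
    """Illustrative service routine: 'read' n bytes from a file descriptor."""
    return f"read {n} bytes from fd {fd}"

# Trap numbers and their kernel service routines (numbers are invented).
SYSCALL_TABLE = {0: sys_getpid, 3: sys_read}

def trap(number, *args):
    """Sketch of the path in the text: the trap indexes a dispatch table,
    the kernel rejects illegal requests, runs the service routine (in kernel
    mode), and returns to the instruction following the system call."""
    handler = SYSCALL_TABLE.get(number)
    if handler is None:
        return "error: unknown system call"   # kernel verifies the request first
    return handler(*args)

print(trap(0))           # dispatches to sys_getpid
print(trap(3, 5, 100))   # dispatches to sys_read with its parameters
print(trap(99))          # illegal request: rejected by the kernel
```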
Timer
We must ensure that the operating system maintains control over the CPU. We must prevent a user program from getting stuck in an
infinite loop or not calling system services and never returning control to the operating system. To accomplish this goal, we can use a
timer. A timer can be set to interrupt the computer after a specified period. The period may be fixed (for example, 1/60 second) or
variable (for example, from 1 millisecond to 1 second). A variable timer is generally implemented by a fixed-rate clock and a counter.

The operating system sets the counter. Every time the clock ticks, the counter is decremented. When the counter reaches 0, an
interrupt occurs.

For instance, a 10-bit counter with a 1-millisecond clock allows interrupts at intervals from 1 millisecond to 1,024 milliseconds, in steps
of 1 millisecond. Before turning over control to the user, the operating system ensures that the timer is set to interrupt. If the timer
interrupts, control transfers automatically to the operating system, which may treat the interrupt as a fatal error or may give the
program more time. Clearly, instructions that modify the content of the timer are privileged. Thus, we can use the timer to prevent a
user program from running too long. A simple technique is to initialize a counter with the amount of time that a program is allowed to
run. A program with a 7-minute time limit, for example, would have its counter initialized to 420.

Every second, the timer interrupts and the counter is decremented by 1. As long as the counter is positive, control is returned to the
user program. When the counter becomes negative, the operating system terminates the program for exceeding the assigned time limit.
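The counter scheme just described—including the 7-minute (420-second) example—can be sketched directly. This Python sketch is a toy model: the loop stands in for the once-per-second timer interrupt, and the message strings are invented.

```python
def run_with_timer(limit_seconds, program_seconds):
    """Sketch of the timer mechanism above: the OS loads a counter before
    dispatching the program; each timer interrupt (one per second here)
    decrements it, and the program is terminated once the counter goes
    negative."""
    counter = limit_seconds
    for second in range(1, program_seconds + 1):
        counter -= 1                          # timer interrupt fires
        if counter < 0:                       # counter went negative: time's up
            return f"terminated after {second} s (time limit exceeded)"
    return "completed"

print(run_with_timer(420, 400))   # finishes within its 7-minute limit
print(run_with_timer(420, 500))   # killed one interrupt after the limit
```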



RESOURCE MANAGEMENT
The system’s CPU, memory space, file-storage space, and I/O devices are among the resources that the operating system
must manage.

Process Management
• A program under execution is a process. A process needs resources like CPU time, memory, files, and I/O devices for its execution. These resources are given to the process when it is created or at run time. When the process terminates, the operating system reclaims the resources.
• The program stored on a disk is a passive entity, and the program under execution is an active entity. A single-threaded process [a single sequential flow of activities being executed in a process] has one program counter specifying the next instruction to execute. The CPU executes one instruction of the process after another, until the process completes. A multithreaded process has multiple program counters, each pointing to the next instruction to execute for a given thread.
• The operating system is responsible for the following activities in connection with process management:
 Scheduling processes and threads on the CPUs
 Creating and deleting both user and system processes
 Suspending and resuming processes
 Providing mechanisms for process synchronization
 Providing mechanisms for process communication


Memory Management
Memory is an important part of the computer that is used to store data. Its management is critical to the computer system because the amount of main memory available is very limited and, at any time, many processes compete for it. Moreover, to increase performance, several processes are executed simultaneously; for this we must keep several processes in main memory, so it is even more important to manage it effectively.
Main memory is a large array of words or bytes, and each word or byte has its own address. Main memory is the storage device that can be easily and directly accessed by the CPU.
As a program executes, the central processor reads instructions and also reads and writes data from main memory. To improve both the utilization of the CPU and the speed of the computer's response to its users, general-purpose computers must keep several programs in memory, creating a need for memory management.
The operating system is responsible for the following activities in connection with memory management:
 Deciding which processes and data to move into and out of memory.
 Allocating and deallocating memory space as needed.
 Keeping track of the status of each memory location, whether free or allocated.
 Permitting computers with a small amount of main memory to execute programs larger than the amount of available memory, by moving information back and forth between primary and secondary memory using the concept of swapping.
 Protecting the memory allocated to each process from being corrupted by another process; if this is not ensured, the system may exhibit unpredictable behavior.
 Enabling sharing of memory space between processes.
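The bookkeeping side of memory management—tracking which locations are free or allocated, and reclaiming them—can be sketched with a tiny frame allocator. This Python sketch is illustrative only: the frame granularity, process names, and the "return None when full" policy are invented simplifications (a real OS might swap instead).

```python
class MemoryManager:
    """Minimal sketch of memory bookkeeping: track which frames are free,
    allocate them to processes, and reclaim them on release."""
    def __init__(self, n_frames):
        self.owner = {f: None for f in range(n_frames)}   # frame -> owning process

    def allocate(self, pid, n):
        free = [f for f, p in self.owner.items() if p is None]
        if len(free) < n:
            return None                  # not enough memory (a real OS might swap)
        for f in free[:n]:
            self.owner[f] = pid
        return free[:n]

    def release(self, pid):
        """Reclaim every frame owned by a terminating process."""
        for f, p in self.owner.items():
            if p == pid:
                self.owner[f] = None

mm = MemoryManager(8)
print(mm.allocate("P1", 5))   # P1 gets five frames
print(mm.allocate("P2", 5))   # only 3 frames left: request denied
mm.release("P1")              # P1 terminates; its frames are reclaimed
print(mm.allocate("P2", 5))   # now P2's request succeeds
```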



Storage Management
Storage management refers to the management of the data-storage equipment used to store user- and computer-generated data. It is a tool, or set of processes, used by an administrator to keep data and storage equipment safe.
Storage management is a process for users to optimize the use of storage devices and to protect the integrity of data on any media on which it resides; the category generally contains subcategories covering aspects such as security, virtualization, and more. The purpose of storage management is to help organizations find a balance between costs, performance, and storage capacity.
There are three types of storage management:
i) File-system management
ii) Mass-storage management
iii) Cache management
File-System Management: File management is one of the most visible components of an operating system. Computers can store information on several different types of physical media; magnetic disk, optical disk, and magnetic tape are the most common. Each of these media has its own characteristics and physical organization.
• Each medium is controlled by a device, such as a disk drive or tape drive, that also has its own unique characteristics.
• A file is a collection of related information defined by its creator. Commonly, files represent programs and data. Data files may be numeric, alphabetic, alphanumeric, or binary. Files may be free-form (for example, text files), or they may be formatted rigidly (for example, fixed fields).
• The operating system implements the abstract concept of a file by managing mass-storage media. Files are normally organized into directories to make them easier to use. When multiple users have access to files, it may be desirable to control by whom and in what ways (read, write, execute) files may be accessed.
The operating system is responsible for the following activities in connection with file management:
• Creating and deleting files
• Creating and deleting directories to organize files
• Supporting primitives for manipulating files and directories
• Mapping files onto secondary storage
• Backing up files on stable (nonvolatile) storage media


Mass-Storage Management
• As the main memory is too small to accommodate all data and programs, and as the data that it holds are erased
when power is lost, the computer system must provide secondary storage to back up main memory. Most modern
computer systems use disks as the storage medium for both programs and data.
• Most programs—including compilers, assemblers, word processors, editors, and formatters— are stored on a disk
until loaded into memory and then use the disk as both the source and destination of their processing. Hence, the
proper management of disk storage is of central importance to a computer system.
The operating system is responsible for the following activities in connection with disk management:
• Free-space management
• Storage allocation
• Disk scheduling
As the secondary storage is used frequently, it must be used efficiently. The entire speed of operation of a computer
may depend on the speeds of the disk. Magnetic tape drives and their tapes, CD, DVD drives and platters are tertiary
storage devices. The functions that operating systems provides include mounting and unmounting media in devices,
allocating and freeing the devices for exclusive use by processes, and migrating data from secondary to tertiary
storage.



Caching
Caching is an important principle of computer systems. Information is normally kept in some storage system (such as main memory). As it is used, it is copied into a faster storage system—the cache—as temporary data. When a particular piece of information is required, we first check whether it is in the cache. If it is, we use the information directly from the cache; if it is not, we use the information from the source, putting a copy in the cache under the assumption that we will need it again soon.
Because caches have limited size, cache management is an important design problem. Careful selection of the cache size and replacement policy can result in greatly increased performance.
The movement of information between levels of a storage hierarchy may be either explicit or implicit, depending on the hardware design and the controlling operating-system software. For instance, data transfer from cache to CPU and registers is usually a hardware function, with no operating-system intervention. In contrast, transfer of data from disk to memory is usually controlled by the operating system.
In a hierarchical storage structure, the same data may appear in different levels of the storage system. For example, suppose an integer A is to be retrieved from magnetic disk for processing. The operation proceeds by first issuing an I/O operation to copy the disk block on which A resides to main memory. This operation is followed by copying A to the cache and to an internal register. Thus, a copy of A appears in several places: on the magnetic disk, in main memory, in the cache, and in an internal register.
In a multiprocessor environment, in addition to maintaining internal registers, each of the CPUs also contains a local cache. In such an environment, a copy of A may exist simultaneously in several caches. Since the various CPUs can all execute concurrently, any update done to the value of A in one cache must be immediately reflected in all other caches where A resides. This situation is called cache coherency, and it is usually a hardware problem.
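The check-cache-first rule above can be sketched with a dictionary standing in for the fast level and another standing in for the slower backing store. This Python sketch is illustrative: the block names and contents are invented, and there is no eviction policy (a real cache is bounded).

```python
class Cache:
    """Sketch of caching: serve from the fast copy if present, otherwise
    fetch from the slower backing store and keep a copy for next time."""
    def __init__(self, backing_store):
        self.backing = backing_store     # the slower level (e.g. disk blocks)
        self.cache = {}                  # the faster level
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.cache:            # fast path: already cached
            self.hits += 1
        else:                            # slow path: fetch, then keep a copy
            self.misses += 1
            self.cache[key] = self.backing[key]
        return self.cache[key]

disk = {"block7": "A=42"}                # invented backing-store contents
c = Cache(disk)
first, second = c.read("block7"), c.read("block7")
print(first, second)                     # same data both times
print(c.hits, c.misses)                  # the second read is a cache hit
```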



I/O System management
One of the purposes of an operating system is to hide the peculiarities of specific hardware devices from the user.
The I/O subsystem consists of several components:
• A memory-management component that includes buffering, caching, and spooling.
Buffering: The main memory has an area called buffer that is used to store or hold the data temporarily that is being transmitted
either between two devices or between a device or an application. Buffering is an act of storing data temporarily in the buffer. It
helps in matching the speed of the data stream between the sender and the receiver. If the speed of the sender’s transmission is
slower than the receiver, then a buffer is created in the main memory of the receiver, and it accumulates the bytes received from
the sender and vice versa.
Spooling: stands for "Simultaneous Peripheral Operation On-Line". It is the process of placing data in a temporary working area for another program to process. Spooling is useful for devices that have differing data-access rates, and it is used mainly when processes share a resource and need synchronization. It can handle large amounts of data, since spooled data is stored on disk or other external storage.
Caching: Caching transparently stores data in component called Cache, so that future request for that data can be served faster. A
special high-speed storage mechanism. It can be either a reserved section of main memory or an independent high-speed storage
device.
• A general device-driver interface
• Drivers for specific hardware devices
Only the device driver knows the peculiarities of the specific device to which it is assigned.
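The speed-matching role of buffering can be sketched as a fixed-size ring buffer: the sender deposits bytes with one call and the receiver drains them with another, so the two sides need not run at the same rate. This is an illustrative single-threaded sketch only; a real kernel buffer would add locking and would block or signal when full or empty.

```c
#include <stddef.h>

#define BUF_CAP 8

/* A minimal ring buffer for speed matching between a producer and a
 * consumer. head is the next write slot, tail the next read slot. */
struct ring {
    unsigned char data[BUF_CAP];
    size_t head, tail, count;
};

static void buf_init(struct ring *r) { r->head = r->tail = r->count = 0; }

/* Deposit one byte. Returns 0 on success, -1 if the buffer is full. */
static int buf_put(struct ring *r, unsigned char b) {
    if (r->count == BUF_CAP) return -1;
    r->data[r->head] = b;
    r->head = (r->head + 1) % BUF_CAP;
    r->count++;
    return 0;
}

/* Drain one byte. Returns the byte, or -1 if the buffer is empty. */
static int buf_get(struct ring *r) {
    int b;
    if (r->count == 0) return -1;
    b = r->data[r->tail];
    r->tail = (r->tail + 1) % BUF_CAP;
    r->count--;
    return b;
}

/* Tiny self-check: returns 1 if FIFO order is preserved. */
static int ring_demo(void) {
    struct ring r;
    buf_init(&r);
    if (buf_put(&r, 'a') != 0 || buf_put(&r, 'b') != 0) return 0;
    return buf_get(&r) == 'a' && buf_get(&r) == 'b' && buf_get(&r) == -1;
}
```

Because the producer writes through buf_put() and the consumer reads through buf_get(), each side only waits when the buffer is completely full or completely empty, which is exactly the decoupling buffering provides.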



Distributed Systems
A distributed system is a collection of systems that are networked to
provide the users with access to the various resources in the network.
Access to a shared resource increases computation speed,
functionality, data availability, and reliability.
A network is a communication path between two or more systems. Networks vary by the protocols used (TCP/IP, UDP, FTP, etc.), the distances between nodes, and the transport media (copper wires, fiber optics, wireless).
TCP/IP is the most common network protocol. The operating systems
support of protocols also varies. Most operating systems support TCP/IP,
including the Windows and UNIX operating systems.
Networks are characterized based on the distances between their nodes.
A local-area network (LAN) connects computers within a room, a floor,
or a building. A wide-area network (WAN) usually links buildings, cities,
or countries. A global company may have a WAN to connect its offices
worldwide. These networks may run one protocol or several protocols. A
metropolitan-area network (MAN) connects buildings within a city. Bluetooth and 802.11 devices use wireless technology to communicate over a distance of several feet, in essence creating a small-area network such as might be found in a home.
The transport media that carry networks are also varied. They include copper wires, fiber strands, and wireless transmissions between satellites, microwave dishes, and radios. When computing devices are connected to cellular phones, they create a network.
Distributed System Software: This software enables computers to coordinate their activities and to share resources such as hardware, software, data, etc.
Database: It is used to store the processed data produced by each node/system of the distributed system connected to the centralized network.
Characteristics of Distributed System
• Resource Sharing: the ability to use any hardware, software, or data anywhere in the system.
• Openness: concerned with extensions and improvements to the system (i.e., how openly the software is developed and shared with others).
• Concurrency: naturally present in distributed systems, where the same activity or functionality can be performed by separate users in remote locations. Every local system has its own independent operating system and resources.
• Scalability: the scale of the system can increase as more processors communicate with more users, accommodating them to improve the responsiveness of the system.
• Fault tolerance: concerns the reliability of the system; if there is a failure in hardware or software, the system continues to operate properly without degraded performance.
• Transparency: hides the complexity of the distributed system from users and application programs, as there should be privacy in every system.
• Heterogeneity: networks, computer hardware, operating systems, programming languages, and developer implementations can all vary and differ among distributed system components.

Advantages of Distributed System:
• Applications in distributed systems are inherently distributed applications.
• Information in distributed systems is shared among geographically distributed users.
• Resource sharing (autonomous systems can share resources from remote locations).
• It has a better price/performance ratio and flexibility.
• It has shorter response time and higher throughput.
• It has higher reliability and availability against component failure.
• It has extensibility, so that systems can be extended to more remote locations, with incremental growth.

Disadvantages of Distributed System:
• Relevant software for distributed systems does not exist currently.
• Security poses a problem, because the resources are shared among multiple systems and data is easy to access.
• Network saturation may hinder data transfer; if there is lag in the network, the user will face problems accessing data.
• In comparison to a single-user system, the database associated with a distributed system is much more complex and challenging to manage.
• If every node in a distributed system tries to send data at once, the network may become overloaded.
VIRTUALIZATION
• Virtualization is the process of running a virtual instance of a computer system in a layer abstracted from the actual hardware. Most commonly, it refers to running multiple operating systems on a computer system simultaneously. To the applications running on top of the virtualized machine, it can appear as if they are on their own dedicated machine, where the operating system, libraries, and other programs are unique to the guest virtualized system and unconnected to the host operating system which sits below it.

• Virtualization is a technology that provides a way for a machine (the VM Host) to run another operating system (a VM Guest) on top of the host operating system. The primary component of the VM Host that enables virtualization is a hypervisor.

• A hypervisor is a layer of software that runs directly on the VM Host hardware. It controls platform resources, sharing them among multiple VM Guests and their operating systems by presenting virtualized hardware interfaces to each VM Guest.

• Virtualization is technology that you can use to create virtual representations of servers, storage, networks, and other physical machines. A user of a virtual machine can switch among the various operating systems in the same way a user can switch among the various processes running concurrently in a single operating system.

• Host Machine: the machine on which the virtual machine is built is known as the Host Machine.
• Guest Machine: the virtual machine itself is referred to as the Guest Machine.



• Virtualization is a technology that allows us to abstract the hardware of a single computer (the CPU, memory, disk drives, network interface cards, and so forth) into several different execution environments, thereby creating the illusion that each separate environment is running on its own private computer.

• These environments can be viewed as different individual operating systems (for example, Windows and UNIX) that may be running at the same time and may interact with each other.

• A user of a virtual machine can switch among the various operating systems in the same way a user can switch among the various processes running concurrently in a single operating system. The virtualization industry is vast and growing, which is a testament to its utility and importance.

• Broadly speaking, virtualization software is one member of a class that also includes emulation. Emulation, in a software context, is the use of an application program or device to imitate the behavior of another program or device. Emulation, which involves simulating computer hardware in software, is typically used when the source CPU type is different from the target CPU type. For example, when Apple switched from the IBM Power CPU to the Intel x86 CPU for its desktop and laptop computers, it included an emulation facility called "Rosetta," which allowed applications compiled for the IBM CPU to run on the Intel CPU. That same concept can be extended to allow an entire operating system written for one platform to run on another.

• Emulation comes at a heavy price, however. Every machine-level instruction that runs natively on the source system must be translated to the equivalent function on the target system, frequently resulting in several target instructions. Even if the source and target CPUs have similar performance levels, the emulated code may run much more slowly than the native code.



Virtualization first came about on IBM mainframes as a method for multiple
users to run tasks concurrently. Running multiple virtual machines allowed
(and still allows) many users to run tasks on a system designed for a single
user. Later, in response to problems with running multiple Microsoft Windows
applications on the Intel x86 CPU, VMware created a new virtualization
technology in the form of an application that ran on Windows.

Windows was the host operating system, and the VMware application was
the virtual machine manager (VMM). The VMM runs the guest operating
systems, manages their resource use, and protects each guest from the
others. Even though modern operating systems are fully capable of running
multiple applications reliably, the use of virtualization continues to grow.

On laptops and desktops, a VMM allows the user to install multiple operating
systems for exploration or to run applications written for operating systems
other than the native host. For example, an Apple laptop running macOS on
the x86 CPU can run a Windows 10 guest to allow execution of Windows
applications.

Companies writing software for multiple operating systems can use virtualization to run all of those operating systems on a single physical server for development, testing, and debugging. Within data centers, virtualization has become a common method of executing and managing computing environments.

The VMware application ran one or more guest copies of Windows or other native x86 operating systems, each running its own applications. The figure shows a computer running (a) a single operating system and (b) three virtual machines.
OPERATING SYSTEM SERVICES
An operating system provides an environment for the execution of programs. It provides certain services to programs and to the users of those programs.
The OS provides services for the users of the system, including:
• User Interfaces – Almost all operating systems have a user interface (UI). This interface can take several forms. Most commonly, a graphical user interface (GUI) is used. Here, the interface is a window system with a mouse that serves as a pointing device to direct I/O, choose from menus, and make selections, and a keyboard to enter text. Mobile systems such as phones and tablets provide a touch-screen interface, enabling users to slide their fingers across the screen or press buttons on the screen to select choices. Another option is a command-line interface (CLI), which uses text commands and a method for entering them (say, a keyboard for typing in commands in a specific format with specific options); in this case, commands are typed to the system. Each time you type a command, a program called a "shell" accepts and interprets what you have typed, and sends your command on to be performed by the operating system. There are four shells available: the "C" shell (csh), the "TC" shell (tcsh), the "Bourne" shell (sh, the original UNIX shell), and the "Korn" shell (ksh). The TC shell is like the C shell (so called because its syntax is similar to the C language).
• Program Execution – The OS must be able to load a program into RAM, run the program, and terminate the program, either normally or abnormally.
• I/O Operations – The OS is responsible for transferring data to and from I/O devices, including keyboards, terminals, printers, and files. For specific devices, special functions (device drivers) are provided by the OS.
• File-System Manipulation – Programs need to read and write files or directories. The services required to create or delete files, search for a file, list the contents of a file, and change file permissions are provided by the OS.

A view of operating system services
The OS also provides services for the efficient operation of the system itself, including:
• Communications – Inter-process communication (IPC), either between processes running on the same processor or between processes running on separate processors or separate machines. It may be implemented using OS services such as shared memory or message passing.
• Error Detection – Both hardware and software errors must be detected and handled appropriately by the OS. Errors may occur in the CPU and memory hardware (such as a power failure or a memory error), in I/O devices (such as a parity error on tape, a connection failure on a network, or lack of paper in the printer), and in the user program (such as an arithmetic overflow or an attempt to access an illegal memory location).
• Resource Allocation – Resources like CPU cycles, main memory, storage space, and I/O devices must be allocated to multiple users and multiple jobs at the same time.
• Accounting/Logging – There are services in the OS to keep track of system activity and resource usage, either for billing purposes or for statistical record keeping that can be used to optimize future performance.
• Protection and Security – The owners of information (files) in a multiuser or networked computer system may want to control the use of that information. When several separate processes execute concurrently, one process should not interfere with the others or with the OS. Protection involves ensuring that all access to system resources is controlled. The system must also be secured from outsiders, by means such as passwords.


System Calls
System calls provide an interface to the services of the operating system. They are generally written in C or C++, although some are written in assembly for optimal performance.
The figure below illustrates the sequence of system calls required to copy the contents of one file (the input/source file) to another file (the output/destination file).

An example to illustrate how system calls are used: writing a simple program to read data from one file and copy it to another file.
A number of system calls are used to finish this task. The first system call writes a message on the screen (monitor). The next accepts the input filename. Then another system call writes a message on the screen, and another accepts the output filename.
When the program tries to open the input file, it may find that there is no file of that name or that the file is protected against access. In these cases, the program should print a message on the console (another system call) and then terminate abnormally (another system call). If the output file already exists, the program may delete it (another system call) and create a new one (another system call).
Now that both files are set up, we enter a loop that reads from the input file (another system call) and writes to the output file (another system call).
Finally, after the entire file is copied, the program may close both files (another system call), write a message to the console or window (system call), and finally terminate normally (final system call).
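The copy loop described above can be sketched directly with the POSIX system calls open, read, write, and close. This is a minimal sketch, not the textbook's program; the helper and the file paths used below are invented for illustration, and a real program would report errors to the user.

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Copy src to dst using raw POSIX system calls, mirroring the sequence
 * described above: open both files, loop read()/write(), then close().
 * Returns the number of bytes copied, or -1 on any error. */
static long copy_file(const char *src, const char *dst) {
    char buf[4096];
    long total = 0;
    ssize_t n;
    int in = open(src, O_RDONLY);                       /* system call */
    if (in < 0) return -1;                              /* no such file? */
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) { close(in); return -1; }
    while ((n = read(in, buf, sizeof buf)) > 0) {       /* read loop */
        if (write(out, buf, (size_t)n) != n) { n = -1; break; }
        total += n;
    }
    close(in);                                          /* release fds */
    close(out);
    return n < 0 ? -1 : total;
}

/* Helper (illustrative only) so the sketch can be exercised:
 * creates a small source file and returns the bytes written. */
static long make_file(const char *path, const char *text) {
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) return -1;
    ssize_t n = write(fd, text, strlen(text));
    close(fd);
    return n;
}
```

Each open, read, write, and close above is one of the system calls counted in the walkthrough; the on-screen prompts would be further write calls to the terminal.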



• Most programmers do not use the low-level system calls directly, but instead use an Application Programming Interface (API).
• Using the API instead of direct system calls provides greater program portability between different systems. The API then makes the appropriate system calls through the system-call interface, using a system-call table to access specific numbered system calls.
• Each system call has a specific number. The system-call table (consisting of system-call numbers and the addresses of the corresponding service routines) invokes a particular service routine for a specific system call.
• The caller need know nothing about how the system call is implemented or what it does during execution.
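The system-call table can be sketched as an array of function pointers indexed by system-call number. This is an illustrative user-space model only; the service names and numbers below are invented, and a real kernel dispatcher also switches privilege modes and validates arguments.

```c
#include <stddef.h>

/* A toy "kernel" with two services, dispatched through a numbered table
 * the way a real system-call table maps numbers to service routines. */
typedef long (*syscall_fn)(long arg);

static long sys_double(long arg) { return 2 * arg; }   /* service #0 */
static long sys_negate(long arg) { return -arg;    }   /* service #1 */

/* The table: index = system-call number, entry = service routine. */
static syscall_fn syscall_table[] = { sys_double, sys_negate };

/* Dispatcher: look up the numbered entry and invoke it. Returns -1 for
 * an out-of-range system-call number. */
static long do_syscall(size_t num, long arg) {
    if (num >= sizeof syscall_table / sizeof syscall_table[0]) return -1;
    return syscall_table[num](arg);
}
```

The caller of do_syscall() passes only a number and arguments, which is why it need know nothing about how each service routine is implemented.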



• Three general methods are used to pass parameters to the OS:
i) Pass the parameters in registers.
ii) If the parameters are large blocks, the address of the block (where the parameters are stored in memory) is passed to the OS in a register (Linux and Solaris).
iii) Parameters can be pushed onto the stack by the program and popped off the stack by the OS.

Types of System Calls:


System calls can be grouped into six major categories:
1. Process Control
2. File management
3. Device management
4. Information management
5. Communications
6. Protection



Process Control
• Process-control system calls include end, abort, load, execute, create process, terminate process, get/set process attributes, wait for time or event, signal event, and allocate and free memory.
• Processes must be created, launched, monitored, paused, resumed, and eventually stopped.
• When one process pauses or stops, another must be launched or resumed.
• Process attributes like process priority, maximum allowable execution time, etc. are set and retrieved by the OS.
• After creating a new process, the parent process may have to wait (wait time), or wait for an event to occur (wait event). The process sends back a signal when the event has occurred (signal event).

File Management
The file-management functions of the OS are:
• File-management system calls include create file, delete file, open, close, read, write, reposition, get file attributes, and set file attributes.
• After creating a file, the file is opened. Data is read from or written to the file.
• The file pointer may need to be repositioned to a given point.
• File attributes like filename, file type, permissions, etc. are set and retrieved using system calls.
• These operations may also be supported for directories as well as ordinary files.

Device Management
• Device-management system calls include request device, release device, read, write, reposition, get/set device attributes, and logically attach or detach devices.
• When a process needs a resource, it makes a request for the resource; control is then granted to the process. If the requested resource is already attached to some other process, the requesting process has to wait.
• In multiprogramming systems, after a process uses a device, it has to be returned to the OS so that another process can use it.
• Devices may be physical (e.g., disk drives) or virtual/abstract (e.g., files, partitions, and RAM disks).

Information Maintenance
• Information-maintenance system calls include calls to get/set the time, date, system data, and process, file, or device attributes.
• These system calls transfer information between the user and the OS. Information like the current time and date, the number of current users, the version number of the OS, the amount of free memory or disk space, etc. is passed from the OS to the user.


Communication
Communication system calls create/delete a communication connection, send/receive messages, transfer status information, and attach/detach remote devices.
• The message-passing model must support calls to:
 Identify a remote process and/or host with which to communicate.
 Establish a connection between the two processes.
 Open and close the connection as needed.
 Transmit messages along the connection.
 Wait for incoming messages, in either a blocking or non-blocking state.
 Delete the connection when no longer needed.
• The shared-memory model must support calls to:
o Create and access memory that is shared amongst processes (and threads).
o Free up shared memory and/or dynamically allocate it as needed.
• Message passing is simpler and easier (particularly for inter-computer communication) and is generally appropriate for small amounts of data. It is easy to implement, but there are system calls for each read and write operation.
• Shared memory is faster and is generally the better approach where large amounts of data are to be shared. This model is more difficult to implement, and it requires only a few system calls.

Protection
• Protection provides mechanisms for controlling which users/processes have access to which system resources.
• System calls allow the access mechanisms to be adjusted as needed, and allow non-privileged users to be granted elevated access permissions under carefully controlled temporary circumstances.
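The two IPC models can be contrasted in a few lines of POSIX code: a pipe carries a discrete message between related processes (message passing), while an mmap'd MAP_SHARED region lets a child update memory the parent then reads directly (shared memory). This is a minimal sketch with invented function names, and it omits most error handling.

```c
#define _DEFAULT_SOURCE
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Message passing: send a short message through a pipe and read it back.
 * Every transfer is a system call (write, then read). Returns 0 if the
 * received bytes match what was sent. */
static int pipe_roundtrip(void) {
    int fds[2];
    char in[] = "ping", out[8] = {0};
    if (pipe(fds) != 0) return -1;
    if (write(fds[1], in, sizeof in) != (ssize_t)sizeof in) return -1;
    if (read(fds[0], out, sizeof out) != (ssize_t)sizeof in) return -1;
    close(fds[0]); close(fds[1]);
    return strcmp(in, out) == 0 ? 0 : -1;
}

/* Shared memory: parent and child share one int via an anonymous
 * MAP_SHARED mapping. After setup, the child's plain store is visible
 * to the parent with no further system calls. Returns the shared value. */
static int shared_counter(void) {
    int *p = mmap(NULL, sizeof *p, PROT_READ | PROT_WRITE,
                  MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return -1;
    *p = 0;
    pid_t pid = fork();
    if (pid == 0) { *p = 42; _exit(0); }   /* child writes directly */
    waitpid(pid, NULL, 0);                 /* parent waits, then reads */
    int v = *p;
    munmap(p, sizeof *p);
    return v;
}
```

Note the trade-off from the text in miniature: the pipe needs a system call per transfer, while the shared mapping needs system calls only for setup and teardown.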



Operating-System Design and Implementation
STEP 1: Define goals and specifications

• The first problem in designing a system is to define goals and specifications. Design goals are the objectives of the operating system: they must be met to fulfill design requirements, and they can be used to evaluate the design.
• These goals may not always be technical, but they often have a direct impact on how users perceive their experience with an operating system.
• While designers need to identify all design goals and prioritize them, they also need to ensure that these goals are compatible with each other as well as with user expectations and expert advice.
• At the highest level, the design of the system will be affected by the choice of hardware and the type of system: batch, time-shared, single-user, multiuser, distributed, real-time, or general-purpose.
• Beyond this highest design level, the requirements may be much harder to specify. The requirements can, however, be divided into two basic groups:
1. User goals (user requirements)
2. System goals (system requirements)
• User requirements are the features that users care about and understand: the system should be convenient to use, easy to learn, reliable, safe, and fast.
• System requirements are written for the developers, i.e., the people who design the OS. Their requirements include: easy to design, implement, and maintain; flexible; reliable; error-free; and efficient.
• The process of identifying design goals, conflicts, and priorities is often referred to as "goal-driven design." The goal of this approach is to ensure that each design decision is made with the best interests of users and other stakeholders in mind.


STEP 2: Mechanisms and Policies
An operating system is a set of software components that manage a computer's resources and provide overall system management.
• Mechanisms and policies are the two main components of an operating system. Mechanisms handle low-level functions such as scheduling, memory management, and interrupt handling; policies handle higher-level functions such as resource management, security, and reliability. A well-designed OS should provide both mechanisms and policies for each component in order for it to be successful at its task.
• Policies determine what is to be done. Mechanisms determine how it is to be implemented. Examples:
 In a timer, the counter and the decrementing of the counter are the mechanism; deciding how long the timer is to be set is the policy.
 Granting a resource to a process using the first-come, first-served algorithm is a policy. This policy can be implemented using a queue (the mechanism).
 Thread scheduling, or answering the question "which thread should be given the chance to run next?", is a policy. For example, is it priority-based, or just round-robin? Implementing context switching is the corresponding mechanism.
• Policy decisions are important for all resource allocation. Whenever it is necessary to decide whether or not to allocate a resource, a policy decision must be made.
• Policies change over time. In the worst case, each change in policy would require a change in the underlying mechanism.
• If policy and mechanism are properly separated and implemented, policy changes can be made without rewriting the code, just by adjusting parameters or possibly loading new data/configuration files.
• Separation of policy and mechanism is a design principle for achieving flexibility: adopting a certain mechanism should not restrict the policies that can be used. The idea behind this concept is to require the least amount of implementation change if we decide to change the way a particular feature is used.
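The separation can be made concrete with a tiny scheduler sketch: the mechanism (scan the ready list and return the chosen thread's index) stays fixed, while the policy is a pluggable comparison function. All names and structures here are invented for illustration, not a real kernel API.

```c
#include <stddef.h>

/* Mechanism vs. policy in miniature. pick_next() is the mechanism;
 * the policy is whichever comparison function is passed in, and
 * swapping policies requires no change to the mechanism. */
struct thread { int priority; long arrival; };

typedef int (*policy_fn)(const struct thread *a, const struct thread *b);

/* Policy 1: higher priority wins. */
static int by_priority(const struct thread *a, const struct thread *b) {
    return a->priority > b->priority;
}

/* Policy 2: earlier arrival wins (first come, first served). */
static int by_arrival(const struct thread *a, const struct thread *b) {
    return a->arrival < b->arrival;
}

/* Mechanism: return the index of the thread the policy prefers. */
static size_t pick_next(const struct thread *ready, size_t n, policy_fn better) {
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (better(&ready[i], &ready[best])) best = i;
    return best;
}

/* Self-check: with these three threads, the priority policy picks
 * index 2 (priority 9) and FCFS picks index 0 (arrival 10). */
static int policy_demo(void) {
    struct thread ready[3] = { {2, 10}, {5, 30}, {9, 20} };
    return pick_next(ready, 3, by_priority) == 2
        && pick_next(ready, 3, by_arrival) == 0;
}
```

Changing the scheduling policy here means passing a different function pointer, which is exactly the "adjust parameters, not code" flexibility the text describes.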



STEP 3: Implementation

Once an operating system is designed, it must be implemented. Because operating systems are collections of many programs, written by many people over a long period of time, it is difficult to make general statements about how they are implemented.
• Traditionally, operating systems were written in assembly language. In recent years, they are written in C or C++. Critical sections of code are still written in assembly language.
• The lowest levels of the kernel might be written in assembly language and C. Higher-level routines might be written in C and C++, and system libraries might be written in C++ or even higher-level languages.
• Android provides a nice example: its kernel is written mostly in C with some assembly language. Most Android system libraries are written in C or C++, and its application frameworks, which provide the developer interface to the system, are written mostly in Java.
• The advantages of using a higher-level language for implementing operating systems are that the code can be written faster, is more compact, is easier to port to other systems, and is easier to understand and debug.
• An operating system is far easier to port to other hardware if it is written in a higher-level language. This is particularly important for operating systems that are intended to run on several different hardware systems, such as small embedded devices, Intel x86 systems, and ARM chips running on phones and tablets.
• The only disadvantages of implementing an operating system in a higher-level language are reduced speed and increased storage requirements.
• Major performance improvements in operating systems are more likely to be the result of better data structures and algorithms than of excellent assembly-language code.
• In addition, although operating systems are large, only a small amount of the code is critical to high performance; the interrupt handlers, I/O manager, memory manager, and CPU scheduler are probably the most critical routines.
• After the system is written and is working correctly, bottlenecks can be identified and refactored to operate more efficiently.
Operating-System Structure
An operating system can be implemented with the help of various structures. The structure of the OS depends mainly on how the various common components of the operating system are interconnected and melded into the kernel. Depending on this, we have the following structures of the operating system:
Simple Structure
• Many operating systems do not have well-defined structures. They started as small, simple, and limited systems and then grew beyond their original scope. E.g.: MS-DOS.
• In MS-DOS, the interfaces and levels of functionality are not well separated. Application programs can access basic I/O routines to write directly to the display and disk drives. Such freedom leaves MS-DOS vulnerable, and the entire system can crash when user programs fail.
• UNIX consists of two separable parts: the kernel and the system programs. The kernel is further separated into a series of interfaces and device drivers. The kernel provides the file system, CPU scheduling, memory management, and other operating-system functions through system calls.

Figure: MS-DOS layer structure.

Advantages of Simple structure:
• It delivers better application performance because of the few interfaces between the application program and the hardware.
• It is easy for kernel developers to develop such an operating system.

Disadvantages of Simple structure:
• The structure has no clear boundaries between modules.
• It does not enforce data hiding in the operating system.


Layered Approach
• The OS is broken into number of layers (levels). Each layer
rests on the layer below it, and relies on the services provided
by the next lower layer.
• Bottom layer (layer 0) is the hardware and the topmost layer
is the user interface.
• A typical layer, consists of data structure and routines that can
be invoked by higher-level layer.
• The main advantage of the layered approach is simplicity of construction and debugging.
• The layers are selected so that each uses the functions and services of only lower-level layers. This simplifies debugging and system verification: the layers are debugged one by one from the lowest, and if a layer doesn't work, the error must be in that layer, as the lower layers are already debugged. Thus, the design and implementation are simplified.
• A layer need not know how its lower-level layers are implemented; this hides those operations from higher layers.

Disadvantages of layered approach:
• The various layers must be appropriately defined, as a layer can use only lower-level layers.
• It is less efficient than other approaches, because any interaction with layer 0 requested from the top layer must pass through all the intermediate layers before reaching layer 0. This is an overhead.



Monolithic Structure
• The monolithic operating system is a very basic operating system in
which file management, memory management, device management,
and process management are directly controlled within the kernel. The
kernel can access all the resources present in the system. In
monolithic systems, each component of the operating system is
contained within the kernel. Operating systems that use monolithic
architecture were first time used in the 1970s.

• The monolithic operating system is also known as the monolithic


kernel. This is an old operating system used to perform small tasks like
batch processing and time-sharing tasks in banks. The monolithic
kernel acts as a virtual machine that controls all hardware parts.
Monolithic kernel
• A monolithic kernel is an operating-system architecture where the entire operating system works in kernel space.
• The monolithic model differs from other operating-system architectures, such as the microkernel architecture, in that it alone defines a high-level virtual interface over the computer hardware.
• A set of primitives or system calls implements all operating-system services such as process management, concurrency, and memory management. Device drivers can be added to the kernel as modules.

Advantages of Monolithic Kernel
• Execution of the monolithic kernel is quite fast, as services such as memory management, file management, and process scheduling are implemented under the same address space.
• A process runs completely within a single address space in the monolithic kernel.

Disadvantages of Monolithic Kernel
• If any service fails in the monolithic kernel, it leads to the failure of the entire system.
• The entire operating system needs to be modified to add any new service.


Microkernels
This method structures the operating system by removing all nonessential components from the kernel and implementing them as system-level and user-level programs, thus making the kernel as small and efficient as possible.

The microkernel is entirely responsible for the operating system's most significant services, which are as follows:
1. Inter-Process Communication
2. Memory Management
3. CPU Scheduling

• In the above figure, the microkernel includes basic needs like process scheduling mechanisms, memory, and interprocess communication. It is the only program that executes at the privileged level, i.e., in kernel mode. The OS's other functions are moved out of kernel mode and execute in user mode.

• The microkernel ensures that the code can be easily managed, because the services are split out into user space. Only a small amount of code runs in kernel mode, resulting in improved security and stability.

• Since the kernel is the most crucial OS component, it is responsible for the essential services. As a result, under this design, only the most significant services are present inside the kernel; the rest of the operating-system services are provided by system application software.



Advantages
1. Microkernels are more secure, since only the components essential to the system's functioning run inside the kernel.
2. Microkernels are modular: the various modules may be swapped, reloaded, and modified without affecting the kernel.
3. The microkernel architecture is compact and isolated, so it may perform better.
4. System expansion is easier, since new services can be introduced as application software without disrupting the kernel.
5. Compared to monolithic systems, microkernels suffer fewer system crashes. Furthermore, due to the modular structure of microkernels, any crashes that do occur are more easily handled.
6. New features can be added without recompiling the kernel.

Disadvantages
1. Providing services is more costly than in a traditional monolithic system, because each request must pass through message-based communication between user-space components.
2. The performance of a microkernel system can be inconsistent because of this communication overhead, which may cause issues.



Modules
• Modern OS development is object-oriented, with a relatively small core kernel and a set of modules that can be linked in dynamically.

• Modules are similar to layers in that each subsystem has clearly defined tasks and interfaces, but any module is free to contact any other module, eliminating the problem of going through multiple intermediary layers.

• The kernel is relatively small in this architecture, similar to microkernels, but the kernel does not have to implement message passing, since modules are free to contact each other directly. Examples: Solaris, Linux, and macOS.

• Perhaps the best current methodology for operating-system design involves using loadable kernel modules (LKMs). Here, the kernel has a set of core components and can link in additional services via modules, either at boot time or during run time. This type of design is common in modern implementations of UNIX, such as Linux, macOS, and Solaris, as well as Windows.

• The macOS architecture relies on the Mach microkernel for basic system-management services and the BSD kernel for additional services. Application services and dynamically loadable modules (kernel extensions) provide the rest of the OS functionality.

The overall result resembles a layered system in that each kernel section has defined, protected interfaces; but it is more flexible than a layered system, because any module can call any other module.
The approach is also similar to the microkernel approach in that the primary module has only core functions and knowledge of how to load and communicate with other modules; but it is more efficient, because modules do not need to invoke message passing in order to communicate.
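As a loose analogy (not kernel code), run-time linking of services can be sketched with Python's `importlib`. The `Core` class and `load_module` name are invented for this sketch; `importlib.import_module` plays the role that `modprobe`/`insmod` play for Linux LKMs:

```python
import importlib
import types

class Core:
    """A small 'core' that links in extra services by name at run time,
    without the core itself being recompiled."""
    def __init__(self):
        self.services = {}

    def load_module(self, name):
        # Dynamic linking step: resolve and attach the module on demand.
        self.services[name] = importlib.import_module(name)

core = Core()
core.load_module("json")            # service chosen at run time
mod = core.services["json"]
assert isinstance(mod, types.ModuleType)
print(mod.dumps({"loaded": True}))  # → {"loaded": true}
```

Once loaded, the module is called directly (an ordinary function call), mirroring why the LKM approach avoids the message-passing cost of microkernels.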
Hybrid Systems
• In practice, very few operating systems adopt a single, strictly defined structure. Instead, they combine different structures, resulting in hybrid systems that address performance, security, and usability issues.

• For example, both Linux and Solaris are monolithic, because having the operating system in a single address space provides very efficient performance. However, they are also modular, so that new functionality can be dynamically added to the kernel.

• Windows is largely monolithic as well (again primarily for performance reasons), but it retains some behavior typical of microkernel systems, including support for separate subsystems (known as operating-system personalities) that run as user-mode processes. Windows systems also provide support for dynamically loadable kernel modules.

• macOS: The Apple macOS operating system uses a hybrid structure. As shown in the figure, it is a layered system built around a microkernel. The top layers include the Aqua user interface and a set of application environments and services. Notably, the Cocoa environment specifies an API for the Objective-C programming language, which is used for writing macOS applications.

• Below these layers is the kernel environment, which consists primarily of the Mach microkernel and the BSD UNIX kernel.

• Mach provides memory management; support for remote procedure calls (RPCs) and interprocess communication (IPC) facilities, including message passing; and thread scheduling.

• The BSD component provides a BSD command-line interface, support for networking and file systems, and an implementation of POSIX APIs, including Pthreads.

• In addition to Mach and BSD, the kernel environment provides an I/O Kit for development of device drivers and dynamically loadable modules (which macOS refers to as kernel extensions).



Example of Hybrid Structure:
1. macOS and iOS:
• Apple's macOS operating system is designed to run primarily on desktop and laptop computer systems, whereas iOS is a mobile operating system designed for the iPhone smartphone and iPad tablet computer.
• The general architecture of these two systems is shown in the figure. Highlights of the various layers include the following:
• User experience layer: This layer defines the software interface that allows users to interact with the computing devices. macOS uses the Aqua user interface, which is designed for a mouse or trackpad, whereas iOS uses the Springboard user interface, which is designed for touch devices.
• Application frameworks layer: This layer includes the Cocoa and Cocoa Touch frameworks, which provide an API for the Objective-C and Swift programming languages. The primary difference between Cocoa and Cocoa Touch is that the former is used for developing macOS applications, while the latter is used by iOS and provides support for hardware features unique to mobile devices, such as touch screens.
• Core frameworks: This layer defines frameworks that support graphics and media, including QuickTime and OpenGL.
• Kernel environment: This environment, also known as Darwin, includes the Mach microkernel and the BSD UNIX kernel.

As shown in the figure, applications can be designed to take advantage of user-experience features or to bypass them and interact directly with either the application framework or the core frameworks. Additionally, an application can forgo frameworks entirely and communicate directly with the kernel environment.



2. Android:
The Android operating system was designed by the Open Handset Alliance (led primarily by Google) and was developed for Android smartphones and tablet computers.

1. Applications: The application layer is the top layer of the Android architecture. Pre-installed applications such as camera, gallery, home, and contacts, as well as third-party applications downloaded from the Play Store (games, chat applications, etc.), are installed on this layer.
2. Application Framework: Provides several important classes used to create an Android application. It includes different types of services, such as the activity manager, notification manager, view system, and package manager, which are helpful for developing an application according to its requirements. The Application Framework layer provides many higher-level services to applications in the form of Java classes.
3. Android Runtime: The Android runtime environment contains components like the core libraries and the Dalvik virtual machine (DVM). It provides the base for the Application Framework and powers applications with the help of the core libraries.
4. Platform Libraries: The platform libraries include various C/C++ core libraries and Java-based libraries such as Media, Graphics, Surface Manager, and OpenGL to support Android development.
• OpenGL: A Java interface to the OpenGL ES 3D graphics rendering API.
• WebKit: A set of classes intended to allow web-browsing capabilities to be built into applications.
• Media: The media library provides support for playing and recording audio and video formats.
• Surface Manager: Responsible for managing access to the display subsystem.
• SQLite: Provides database support.
5. Linux Kernel: The Linux kernel is the heart of the Android architecture. It manages all the available drivers, such as display, camera, Bluetooth, audio, and memory, required during run time. The Linux kernel provides an abstraction layer between the device hardware and the other Android architecture components, and it is responsible for the management of memory, power, devices, etc.

