Unit 1 OS


OPERATING SYSTEM (UNIT 1)

1.Operating System and Functions


Why Study Operating Systems?

 Abstraction — how do you give the users the illusion of infinite resources (CPU time,
memory, file space)?
 Primary intersection point — OS is the point where hardware, software, programming
languages, data structures, and algorithms all come together.

OPERATING SYSTEM
“An OS is a program that controls the execution of application programs and acts as an
interface between the user of the computer and the computer hardware.”

Figure 1.1: Operating system interaction with user and hardware of the system

 An OS provides standard services (an interface) which are implemented on the hardware,
including:
 Processes, CPU scheduling, memory management, file systems, networking.
 The OS coordinates multiple applications and users (multiple processes) in a fair and
efficient manner.
 The goal in OS development is to make the machine convenient to use and efficient.
 Operating System provides:
 Memory management — allocate memory to processes, move processes between
disk and memory.
 File system — allocate space for storage of programs and data on disk.
 Networks and distributed computing — allow computers to work together.
 Provides information protection.
 Gives each user a slice of the resources.
 Acts as a control program.

Functions of Operating System


OS can be thought of as having three objectives:
1. Convenience: An OS makes a computer more convenient to use.
2. Efficiency: An OS allows the computer system resources to be used in an efficient manner.

3. Ability to evolve: An OS should be constructed in such a way as to permit the effective
development, testing, and introduction of new system functions without interfering with service.

Operating System as a User/Computer interface


A computer system can be divided roughly into four components: the hardware, the operating
system, the application programs, and the users (Figure 1.2).
The hardware (the CPU, the memory, and the input/output devices) provides the basic
computing resources for the system. The application programs (such as word processors,
spreadsheets, compilers, and Web browsers) define the ways in which these resources are used to
solve the users' computing problems. The operating system controls the hardware and coordinates its
use among the various application programs for the various users.

Figure 1.2: Abstract view of the components of the computer system

The user's view of the computer varies according to the interface being used. Most computer
users sit in front of a PC, consisting of a monitor, keyboard, mouse, and system unit. Such a
system is designed for one user. In this case, the operating system is designed mostly for ease of
use, with some attention paid to performance and none paid to resource utilization.
Performance is important to the user, but such systems are optimized for the single-user
experience rather than for the requirements of multiple users.

In other cases, a user sits at a terminal connected to a mainframe or minicomputer.


Other users are accessing the same computer through other terminals. These users share
resources and may exchange information. The operating system in such cases is designed to
maximize resource utilization, ensuring that all available CPU time, memory, and I/O are used
efficiently.

In still other cases, users sit at workstations connected to networks of other workstations
and servers. These users have dedicated resources at their disposal, but they also share resources
such as networking and servers-file, compute, and print servers. Therefore, their operating
system is designed to compromise between individual usability and resource utilization.

Operating System as a Resource Manager


A computer is a set of resources for the movement, storage, and processing of data and for the
control of these functions. The OS is responsible for managing these resources.
The OS is unusual as a control mechanism in two respects:

 The OS functions in the same way as ordinary computer software; that is, it is a program
or suite of programs executed by the processor.
 The OS frequently relinquishes control and must depend on the processor to allow it to
regain control. (The OS directs the processor in the use of the other system resources and
in the timing of its execution of other programs. But in order for the processor to do any
of these things, it must cease executing the OS program and execute other programs.)

Figure 1.3: OS as a Resource Manager

Figure 1.3 suggests the main resources that are managed by the OS. A portion of the OS is in
main memory. This includes the kernel, which contains the most frequently used functions in the
OS and, at a given time, other portions of the OS currently in use. The remainder of main
memory contains user programs and data. The allocation of this resource (main memory) is
controlled jointly by the OS and memory management hardware in the processor. The OS
decides when an I/O device can be used by a program in execution and controls access to and use
of files. The processor itself is a resource, and the OS must determine how much processor time
is to be devoted to the execution of a particular user program.

2.Classification of Operating System


2.1 Simple Batch System
The main problem with early systems was excessive setup time. This was reduced by processing
jobs in batches, an approach known as the batch processing system, in which similar jobs were
submitted to the CPU together and run as a group.

In the early job processing systems, the jobs were placed in a job queue and the memory
allocator managed the primary memory space. When space was available in main memory, a
job was selected from the job queue and loaded into memory. Once the job was loaded into
primary memory, it competed for the processor. When the processor became available, the
processor scheduler selected a job that was loaded in memory and executed it.

The batch strategy is implemented to provide batch file processing: files belonging to the same
batch are processed together to speed up the task.

Figure 2.1: Traditional job processing vs. batch file processing

Memory management in a batch system is very simple. Memory is usually divided into two areas:
Operating System and user program area.

[Diagram: main memory divided into an Operating System area holding the resident portion (the batch monitor) and a user program area holding the transient program.]

Figure 2.2: Memory layout for a simple batch system

The main function of a batch processing system is to automatically keep executing the jobs in a
batch. This important task is performed by the 'batch monitor', which resides in the resident
portion of main memory.

The monitor controls the following sequence of events:

 The monitor reads in jobs one at a time.
(A job is a predefined sequence of commands, programs, and data combined into a single unit
called a job.)
 The current job is placed in the user program area.
 Control is passed to this job.
 When the job is completed, it returns control to the monitor.
 The monitor reads the next job, and so on.

In this technique the jobs could be stored on disk to create a pool of jobs for execution as a
batch. The pooled jobs are read and grouped by placing identical jobs (jobs with similar needs) in
the same batch, and the batch monitor then executes the batched jobs automatically one after
another. Activities such as loading the compiler are performed only once per batch, which saves
time and results in improved system utilization due to reduced turnaround time.

In early batch processing systems, the jobs were scheduled in the order of their arrival, i.e. First
Come First Served (FCFS). Although this scheduling method was easy and simple to implement,
it was unfair in situations where long jobs were queued ahead of short jobs. To overcome this
problem, another scheduling method named 'Shortest Job First' (SJF) was used, as the sketch
below illustrates.

Though batch processing was an improved technique for reducing system setup time, it still had
some limitations, such as under-utilization of CPU time and the lack of interactivity between the
user and the running jobs. In a batch processing system the jobs of a batch were executed one
after another, but while these jobs were performing I/O operations the CPU sat idle, resulting in a
low degree of resource utilization.

An example of batch processing is the way that credit card companies process billing. The
customer does not receive a bill for each separate credit card purchase but one monthly bill for
all of that month's purchases. The bill is created through batch processing, where all of the data
are collected and held until the bill is processed as a batch at the end of the billing cycle.

Advantages of Batch System

1. Moves much of the work of the operator to the computer.


2. Increased performance, since it was possible for a job to start as soon as the previous job
finished.

Disadvantages of Batch System

1. Turnaround time can be large from the user's standpoint.


2. A job could enter an infinite loop.
3. Due to lack of protection scheme, one batch job can affect pending jobs.

2.1.2 Spooling
Spooling refers to the process of putting jobs in a buffer (a spool), a temporary storage area in
memory or on disk where a device can access them when it is ready. Spooling is useful because
devices access data at different rates.

[Diagram: the card reader and the printer exchange data with the CPU through a spool area on disk.]

Figure 2.3: Spooling

The buffer provides a waiting station where data can rest while the slower device catches up.
However, unlike a spool of thread, the first jobs sent to the spool are the first ones to be
processed (FIFO, not LIFO).

The most common spooling application is print spooling. In print spooling, documents are
loaded into a buffer (usually an area on a disk), and then the printer pulls them off the buffer at
its own rate. Because the documents are in a buffer where they can be accessed by the printer,
you can perform other operations on the computer while the printing takes place in the
background.

Spooling also lets you place a number of print jobs on a queue instead of waiting for each one to
finish before specifying the next one.
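To make the FIFO behaviour concrete, here is a minimal, hypothetical C sketch of a print-spool queue: applications enqueue job IDs quickly, and the printer dequeues them at its own pace in the same order (the job IDs and queue size are invented for illustration; a real spooler stores whole documents and handles concurrency):

#include <stdio.h>

#define MAX_JOBS 16

/* A tiny FIFO queue of print-job IDs, standing in for the spool area on disk. */
static int spool[MAX_JOBS];
static int head = 0, count = 0;

static void submit_job(int job_id)        /* application side: returns quickly */
{
    if (count < MAX_JOBS) {
        spool[(head + count) % MAX_JOBS] = job_id;
        count++;
    }
}

static int next_job(void)                 /* printer side: runs at its own rate */
{
    if (count == 0)
        return -1;                        /* spool is empty */
    int id = spool[head];
    head = (head + 1) % MAX_JOBS;
    count--;
    return id;
}

int main(void)
{
    submit_job(101);                      /* documents queued while the user  */
    submit_job(102);                      /* keeps working in the foreground  */
    submit_job(103);

    int id;
    while ((id = next_job()) != -1)
        printf("printing job %d\n", id);  /* printed in FIFO order: 101 102 103 */
    return 0;
}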

2.2 Interactive System


The user has to be present, and the program cannot proceed until there is some input from the
user. The user interacts directly with the computer (via keyboard and display terminal) to request
the execution of a job (program).
Interactive computer systems are programs that allow users to enter data or commands. Most
popular programs, such as word processors and spreadsheet applications, are interactive.

An example of interactive processing is the ATM machine.


A non-interactive program is one that, when started, continues without requiring human contact.
A compiler is a non-interactive program, as are all batch processing applications.

Single-user, single task operating system


As the name implies, this operating system is designed to manage the computer so that one user
can effectively do one thing at a time. DOS and the Palm OS for Palm handheld computers are
good examples of single-user, single-task operating systems.

Single User – Multi-Tasking operating system


This OS allows a single user to simultaneously run multiple applications on their computer. This
is the type of operating system found on most personal desktop and laptop computers. The
personal editions of Windows (Microsoft) and Macintosh (Apple) platforms are the most popular
single-user, multi-tasking OS. For example, it’s entirely possible for a Windows user to be
writing a note in a word processor while downloading a file from the Internet while printing the
text of an e-mail message.
Note: Multiprogramming System - When two or more programs are in memory at the same time and share the
processor, this is referred to as a multiprogramming operating system. Multiprogramming assumes a single processor
that is being shared. This system ensures that the CPU is never idle unless there are no jobs. Different forms of
the multiprogramming operating system are multitasking, multi-process and multi-user operating systems.

2.3 Time Sharing System

 A time sharing system allows many users to share the computer resources
simultaneously.
 A multiprogramming environment that’s also interactive.

 Logical extension of multiprogramming.

In other words, time sharing refers to the allocation of computer resources in time slots to several
programs simultaneously. For example, consider a mainframe computer that has many users logged
on to it. Each user uses the resources of the mainframe (memory, CPU, etc.). The users feel that
they are the exclusive user of the CPU, even though in reality the single CPU is shared among the
different users.

The time sharing systems were developed to provide an interactive use of the computer system.
A time shared system uses CPU scheduling and multiprogramming to provide each user with a
small portion of a time-shared computer. It allows many users to share the computer resources
simultaneously. As the system switches rapidly from one user to the other, a short time slot is
given to each user for their executions.

The time sharing system provides direct access to a large number of users, where CPU time is
divided among all the users on a scheduled basis. The OS allocates a slot of time to each user.
When this time expires, it passes control to the next user on the system. The time allowed is
extremely small, so the users are given the impression that they each have their own CPU and
are the sole owner of the CPU. This short period of time during which a user gets the attention of
the CPU is known as a time slice or a quantum. The concept of a time sharing system is shown in
the figure.

Figure 2.4: Time sharing with 6 users

In the above figure, user 5 is active while user 1, user 2, user 3, and user 4 are in the waiting state
and user 6 is in the ready state.

As soon as the time slice of user 5 is completed, control moves on to the next ready user, i.e.
user 6. In this state user 2, user 3, user 4, and user 5 are in the waiting state and user 1 is in the
ready state. The process continues in the same way, and so on.

The time-shared systems are more complex than the multi-programming systems. In time-shared
systems multiple processes are managed simultaneously which requires an adequate management
of main memory so that the processes can be swapped in or swapped out within a short time.

Note: The term 'Time Sharing' is no longer commonly used, it has been replaced by 'Multitasking System'.
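The effect of a quantum can be sketched with a small, hypothetical round-robin simulation in C (the number of users, the quantum, and the remaining CPU demands are invented for illustration):

#include <stdio.h>

#define USERS   6
#define QUANTUM 2      /* hypothetical time slice, in time units */

int main(void)
{
    /* Hypothetical remaining CPU demand of each user's job. */
    int remaining[USERS] = {5, 3, 8, 2, 6, 4};
    int unfinished = USERS;

    while (unfinished > 0) {
        for (int u = 0; u < USERS; u++) {
            if (remaining[u] == 0)
                continue;                            /* this user is finished */
            int run = remaining[u] < QUANTUM ? remaining[u] : QUANTUM;
            remaining[u] -= run;                     /* user u gets the CPU   */
            printf("user %d runs for %d unit(s), %d left\n",
                   u + 1, run, remaining[u]);
            if (remaining[u] == 0)
                unfinished--;
        }
    }
    return 0;
}

Because the quantum is short, every user gets the CPU at regular, closely spaced intervals, which is what creates the impression of having the machine to oneself.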

2.4 Real Time Systems

 Inputs immediately affect the outputs. Timing is critical (the response time is fixed in advance).
 A real time system is one that must react to inputs and respond to them quickly.
 A real time system has well defined, fixed time constraints.
A primary objective of a real-time system is to provide quick response times. User convenience
and resource utilization are of secondary concern to a real time system. Real time systems are
used in places where a fast and timely response is required.
Example: control of nuclear power plants, oil refining, telephone switching system, traffic light
system, chemical processing and air traffic control systems etc.
There are two types of real time system:

A Hard Real Time System guarantees that critical tasks complete on time. This requires that all
delays in the system be bounded.

Example: nuclear reactor control system, Missile, etc.

A Soft Real Time System is a less restrictive type. In this a critical real-time task gets priority
over other tasks and retains that priority until it completes.

Example: Live video streaming, Windows CE, RTlinux, etc.

2.5 Multiprocessor Systems


"An Operating System capable of supporting and utilizing more than one computer processor is
called a Multiprocessor Operating System."
At any given time there is a technological limit on the speed with which a single processor can
operate. If the system workload cannot be handled satisfactorily by a single processor, the
response is to apply multiple processors to the problem; this is known as a multiprocessor
environment. It also provides increased reliability and economy of scale.
Multiprocessor systems share the computer bus, system clock, and input-output devices, and
sometimes memory. In a multiprocessing system, it is possible for two processes to run in parallel.
Multiprocessor systems are broadly divided into:
1. Tightly-coupled 2. Loosely-coupled

Tightly-coupled: Consists of a set of processors that share a common main memory and are
under the integrated control of an operating system. Processors work in close association. This is
the typical multiprocessor model.

The tightly-coupled multiple-processor systems in use today are of two types.

1) Asymmetric (Master/Slave) 2) Symmetric.

Asymmetric (Master/Slave): A master processor controls the OS, and the other processors look to
the master for instruction. This scheme defines a master-slave relationship. The master processor
schedules and allocates work to the slave processors.

This arrangement allows the parallel execution of a single task by allocating several subtasks to
multiple processors concurrently. Since the operating system is executed only by the master
processor, this system is relatively simple to develop and efficient to use.

Figure 2.5: Asymmetric multiprocessing architecture

The problem with this model is that with many CPUs, the master will become a bottleneck. Thus
this model is simple and workable for small multiprocessors, but for large ones it fails.
Figure 2.6: Symmetric multiprocessing architecture

Symmetric: In symmetric multiprocessing all processors are peers; no master-slave relationship
exists between processors. Each processor runs an identical copy of the OS, and the processors
communicate with one another as needed. Since there is no master, this approach introduces its
own problems. In particular, if two or more CPUs are running operating system code at the same
time, disaster will result. Imagine two CPUs simultaneously picking the same process to run or
claiming the same free memory page. The simplest way around these problems is to associate a
mutex (i.e., lock) with the operating system, making the whole system one big critical region.
When a CPU wants to run operating system code, it must first acquire the mutex. If the mutex is
locked, it just waits. In this way, any CPU can run the operating system, but only one at a time.
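The "one big lock" idea can be sketched with POSIX threads, where each thread stands in for a CPU that must acquire a global mutex before touching (simulated) operating-system data. This is only an illustration of the locking discipline, not real kernel code; compile with -lpthread:

#include <pthread.h>
#include <stdio.h>

/* One global lock protecting every "operating system" code path. */
static pthread_mutex_t kernel_lock = PTHREAD_MUTEX_INITIALIZER;
static int next_free_page = 0;                /* shared kernel data */

static void *cpu(void *arg)
{
    int id = *(int *)arg;

    pthread_mutex_lock(&kernel_lock);         /* enter the big critical region */
    int page = next_free_page++;              /* safe: only one CPU is in here */
    printf("CPU %d allocated page %d\n", id, page);
    pthread_mutex_unlock(&kernel_lock);       /* leave "kernel mode" */

    return NULL;
}

int main(void)
{
    pthread_t t[4];
    int ids[4] = {0, 1, 2, 3};

    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, cpu, &ids[i]);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}

Without the mutex, two "CPUs" could claim the same free page; with it, the whole simulated kernel becomes one big critical region, exactly as described above.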
Loosely-coupled: Consists of a collection of relatively autonomous systems, each processor
having its own main memory and I/O channels.

The loosely-coupled multiple-processor systems can be classified as:

1) Multi-computer (cluster) 2) distributed systems (wide area Multi-computer)

Multi-computer (cluster): A cluster consists of two or more computers working together to


provide a higher level of availability, reliability, and scalability than can be obtained by using a
single computer. To the end users it appears as a single computer.

Clustering can be structured asymmetrically or symmetrically.

In asymmetric clustering, one machine is in hot-standby mode while the other is running the
applications. The hot-standby mode machine does nothing but monitor and record the active
server state. If that server fails, the hot-standby host becomes the active server.

In symmetric mode, two or more hosts are running applications, and are monitoring each other.
This mode is obviously more efficient, as it uses all of the available hardware.

Distributed systems (wide area multi-computer): These are similar to multicomputers in that
each node has its own private memory, with no shared physical memory in the system. However,
distributed systems are even more loosely coupled than multicomputers.

Finally, all the nodes of a multicomputer run the same operating system, share a single file
system, and are under a common administration, whereas the nodes of a distributed system may
run different operating systems, each have their own file system, and be under different
administrators.

2.6 Multi-user Systems


A multi-user operating system allows multiple users on different computers or terminals to
access a single system with one OS on it. The users will typically be at terminals or computers
that give them access to the system through a network, as well as other machines on the system
such as printers. Multi-user means the operating system has clear distinctions between users.
Users cannot destroy each other's files, and unprivileged users cannot make changes to the
system itself, like install new software.
A multi-user operating system allows many different users to take advantage of the computer’s
resources simultaneously. The operating system must make sure that the requirements of the
various users are balanced, and that each of the programs they are using has sufficient and
separate resources so that a problem with one user doesn’t affect the entire community of users.
Unix, Solaris, Linux, Windows NT are examples of multi-user operating systems.

Note: A dedicated transaction processing system, such as a railway reservation system that runs
hundreds of terminals under the control of a single program, is an example of a multi-user
operating system. On the other hand, general purpose time sharing systems incorporate features
of both multi-user and multiprogramming operating systems.

2.7 Multi-process Systems


A multi-process system increases CPU utilization by organizing jobs in such a way that the CPU
always has a job to execute for as long as jobs remain.
The operating system keeps several jobs in memory simultaneously. Since the main memory is
too small to accommodate all jobs, the jobs are kept initially on the disk in the job pool, waiting
to acquire main memory on the basis of scheduling.
The operating system picks and begins to execute one of the jobs in memory. Eventually, the job
has to wait for some task, such as an I/O operation, to complete. The OS then simply switches to,
and executes, another job, and so on.

2.8 Multi-threaded Systems


Thread:

 A separate path of execution, because each thread has its own call stack.
 The basic unit of CPU utilization; it contains a thread ID, program counter, register set, and
stack.
 Threads of the same process share the code section, data section, and other OS resources
such as open files.
A Web browser might have one thread display images or text while another thread retrieves data
from the network. In certain situations, a single application may be required to perform several
similar tasks. For example, a Web server accepts client requests for web pages, images, sound,
and so forth. A busy Web server may have several clients concurrently accessing it. If the Web
server ran as a traditional single-threaded process, it would be able to service only one client at a
time, and a client might have to wait a very long time for its request to be serviced.
As another example, a word processor may have threads for:

 Displaying graphics
 Responding to key stroke from user
 Performing spelling and grammar checking.

Figure 2.7: Single threaded and multi threaded process

If the Web-server process is multithreaded, the server will create a separate thread that listens for
client requests. When a request is made, rather than creating another process, the server will
create a new thread to service the request and resume listening for additional requests.

Figure 2.8: Multi threaded server architecture
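A minimal POSIX-threads sketch of the pattern in Figure 2.8 is shown below. The "requests" are simulated by a fixed array rather than real network sockets, and the request IDs are invented; the point is only that the main thread dispatches each request to a freshly created worker thread and immediately goes back to listening (compile with -lpthread):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Worker thread: services one client request. */
static void *service_request(void *arg)
{
    int request_id = *(int *)arg;
    printf("servicing request %d\n", request_id);
    sleep(1);                                   /* pretend to do some I/O */
    printf("request %d done\n", request_id);
    return NULL;
}

int main(void)
{
    int requests[3] = {1, 2, 3};                /* stand-in for arriving requests */
    pthread_t workers[3];

    for (int i = 0; i < 3; i++) {
        /* The "listener" hands the request to a new thread and resumes listening. */
        pthread_create(&workers[i], NULL, service_request, &requests[i]);
    }
    for (int i = 0; i < 3; i++)
        pthread_join(workers[i], NULL);         /* wait for all workers to finish */
    return 0;
}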

3.Kernel

 The kernel is a computer program that manages input/output requests from software and
translates them into data processing instructions for the central processing unit and other
electronic components of a computer. The kernel is a fundamental part of a modern
computer's operating system.
 The kernel is the heart of the OS: it manages the core features of the OS, and when useful
applications and utilities are added on top of the kernel, the complete package becomes an
OS. So it can easily be said that an operating system consists of a kernel space and a
user space.
 The central module of an operating system. It is the part of the operating system that
loads first, and it remains in main memory. Because it stays in memory, it is important
for the kernel to be as small as possible while still providing all the essential services
required by other parts of the operating system and applications.
 The layer between the hardware and the software is called the HAL [hardware abstraction
layer] in modern operating systems, and is just one part of the kernel (kernel's interface).
 Kernels also usually provide methods for synchronization and communication between
processes called inter-process communication (IPC).
 The main tasks of the kernel are :

 Process management
 Device management
 Memory management
 Interrupt handling
 I/O communication
 File system...etc.

A kernel connects the application software to the hardware of a computer.

Figure 3.1: Kernel architecture

Kernel basic facilities

The kernel's primary function is to manage the computer's hardware and resources and allow
other programs to run and use these resources. Typically, the resources consist of:

 The CPU. This is the most central part of a computer system, responsible for running or
executing programs. The kernel takes responsibility for deciding at any time which of the
many running programs should be allocated to the processor or processors (each of which
can usually run only one program at a time).
 The computer's memory. Memory is used to store both program instructions and data.
Typically, both need to be present in memory in order for a program to execute. Often
multiple programs will want access to memory, frequently demanding more memory than
the computer has available. The kernel is responsible for deciding which memory each
process can use, and determining what to do when not enough is available.
 Any input/output (I/O) devices present in the computer, such as keyboard, mouse, disk
drives, USB devices, printers, displays, network adapters, etc. The kernel allocates
requests from applications to perform I/O to an appropriate device (or subsection of a
device, in the case of files on a disk or windows on a display) and provides convenient
methods for using the device (typically abstracted to the point where the application does
not need to know implementation details of the device).

Types of Kernels

When a computer program (process) makes requests of the kernel, the request is called a system
call. Various kernel designs differ in how they manage system calls (time-sharing) and resources.
Kernels may be classified mainly in three categories:

1. Monolithic
2. Micro Kernel
3. Reentrant Kernel

3.1. Monolithic Kernels

"A monolithic kernel executes all the operating system instructions in the same address space to
improve the performance of the system."

Monolithic kernels, which have traditionally been used by Unix-like operating systems, contain
all the operating system core functions and the device drivers.

Earlier, in this type of kernel architecture, all the basic system services like process and memory
management, interrupt handling, etc. were packaged into a single module in kernel space. This
type of architecture led to some serious drawbacks:
1) Size of kernel, which was huge.
2) Poor maintainability.

In the modern approach to monolithic architecture, the kernel consists of different modules
which can be dynamically loaded and unloaded. This modular approach allows easy extension of
the OS's capabilities. With this approach, maintenance of the kernel becomes very easy, as only
the concerned module needs to be loaded or unloaded; there is no need to bring down and
recompile the whole kernel for the smallest change.

Linux follows the monolithic modular approach.
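As a concrete illustration of the modular approach, a Linux loadable kernel module can be inserted into and removed from a running kernel without recompiling it. The sketch below is the classic minimal "hello" module (the module name and messages are arbitrary); it is built against the kernel headers, loaded with insmod, and removed with rmmod:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

static int __init hello_init(void)
{
    printk(KERN_INFO "hello: module loaded\n");
    return 0;                        /* 0 means the module loaded successfully */
}

static void __exit hello_exit(void)
{
    printk(KERN_INFO "hello: module unloaded\n");
}

module_init(hello_init);             /* called when the module is inserted */
module_exit(hello_exit);             /* called when the module is removed  */

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example of a dynamically loadable kernel module");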

Advantages of Monolithic Kernel

1) Simple to design and implement.


2) Simplicity provides speed on simple hardware.
3) Can be expanded using module systems.
4) Time tested and design well known.

Disadvantages of Monolithic Kernel

1) Module system may not provide runtime loading and unloading.


2) Lower fault tolerance: if a core module or section of the kernel fails, the whole system does.

3.2. Microkernels

"A microkernel runs most of the operating system's background processes in user space, to make
the operating system more modular and, therefore, easier to maintain."

Microkernel is a small operating system core that provides the foundation for modular
extensions. The main function of a microkernel is to provide a communication facility between the
client program and the various services that also run in user space.

Figure 3.2: Microkernel architecture

This architecture mainly addresses the problem of the ever-growing size of kernel code, which
could not be controlled in the monolithic approach. It allows some basic services like device
driver management, the protocol stack, and the file system to run in user space. This reduces the
kernel code size and also increases the security and stability of the OS, since only the bare
minimum of code runs in kernel mode.

In this architecture, all the basic OS services which are made part of user space are made to run
as servers which are used by other programs in the system through inter process communication
(IPC).

Apart from

 Managing memory protection


 Process scheduling
 Inter Process communication (IPC)

all other basic services can be made part of user space and can be run in the form of servers.

Advantages of Microkernel

1) Microkernel allows the addition of new services.


2) It is flexible because existing features can be subtracted to produce a smaller & more
efficient architecture.
3) Modular design helps to enhance reliability.
4) Microkernel supports object oriented operating system.

Comparison between Monolithic and Microkernel

Sr. No. | Monolithic kernel | Microkernel
1. | Kernel size is large. | Kernel size is small.
2. | OS is complex to design. | OS is easy to design, implement, and install.
3. | Requests are serviced faster. | Requests are serviced slower than in a monolithic kernel.
4. | All the OS services are included in the kernel. | The kernel provides only IPC and low-level device management services.
5. | No message passing and no context switching are required while the kernel is performing the job. | A microkernel requires message passing and context switching.
3.3. Reentrant Kernel

A reentrant kernel is one where many processes or threads can execute in the same kernel
program concurrently without affecting one another. A reentrant kernel enables processes to give
away the CPU while in kernel mode, without hindering other processes from entering kernel mode.

If the kernel is not reentrant, a process can only be suspended while it is in user mode (to be
more precise, it could also be suspended in kernel mode, but that would block kernel-mode
execution for all other processes). If the kernel is reentrant, several processes may be executing
in kernel mode at the same time: while one process is suspended inside the kernel, others can
still enter kernel mode.

Example: a typical use case is an I/O wait, say when a process wants to read a file.
When the kernel is not reentrant: the process calls a kernel function for this. Inside the kernel
function, the disk controller is asked for the data. Getting the data takes some time, and the
kernel is blocked during that time.
When the kernel is reentrant: the scheduler will assign the CPU to another process until an
interrupt from the disk controller indicates that the data is available and our thread can be
resumed. Hence, with this concept, the throughput of the system increases considerably. A kernel
that is not reentrant needs to use a lock to make sure that no two processes are executing in
kernel mode at the same time.

Normally the OS is composed of reentrant and non-reentrant functions. The reentrant functions
may modify local data, but they do not modify any global data.
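The rule "reentrant functions may modify local data but not global data" can be illustrated with two ordinary C functions (a user-space sketch, not actual kernel code; the function names are invented):

#include <string.h>

/* NOT reentrant: it writes into a shared global buffer. If two processes (or
   threads) executed this inside the kernel at the same time, they would
   corrupt each other's result. */
static char shared_buf[64];

char *to_upper_unsafe(const char *s)
{
    strncpy(shared_buf, s, sizeof(shared_buf) - 1);
    shared_buf[sizeof(shared_buf) - 1] = '\0';
    for (char *p = shared_buf; *p; p++)
        if (*p >= 'a' && *p <= 'z')
            *p -= 'a' - 'A';
    return shared_buf;               /* returns global state: not reentrant */
}

/* Reentrant: it touches only its parameters and local variables, so any
   number of concurrent executions are independent of one another. */
void to_upper_safe(const char *s, char *out, size_t out_len)
{
    size_t i;
    if (out_len == 0)
        return;
    for (i = 0; i + 1 < out_len && s[i]; i++)
        out[i] = (s[i] >= 'a' && s[i] <= 'z') ? s[i] - ('a' - 'A') : s[i];
    out[i] = '\0';                   /* caller-supplied buffer: reentrant */
}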

All UNIX kernels are reentrant. This means that several processes may be executing in kernel
mode at the same time.

4.Operating System Structure


Modern operating systems are large and complex. An operating system consists of different types
of components. These components are interconnected and melded into a kernel. For designing the
system, different types of system structures are used.
1) Simple Structure
2) Layered approach
3) Microkernels

4.1. Simple Structure


Simple structure operating systems are small, simple and limited systems.
Many commercial operating systems do not have well-defined structures. Frequently, such
systems started as small, simple, and limited systems and then grew beyond their original scope.
MS-DOS is an example of such a system. It was originally designed and implemented by a few
people who had no idea that it would become so popular. It was written to provide the most
functionality in the least space, so it was not divided into modules carefully.

Figure 4.1: MS-DOS layer structure

In MS-DOS, the interfaces and levels of functionality are not well separated. For instance,
application programs are able to access the basic I/O routines to write directly to the display and
disk drives. Such freedom leaves MS-DOS vulnerable to errant (or malicious) programs, causing
entire system crashes when user programs fail. MS-DOS was also limited by the hardware of its
era. Another example of limited structuring is the original UNIX operating system; like MS-DOS,
UNIX initially was limited by hardware functionality.

Note: A resident program, upon termination, does not return all memory back to DOS. Instead, a
portion of the program remains resident, ready to be reactivated by some other program at a
future time.
4.2. Layered Structure
Layered operating systems are those systems in which functions are organized hierarchically and
interaction only takes place between adjacent layers. In the layered approach, most or all of the
layers execute in kernel mode.
In the layered approach, the operating system is broken into a number of layers (levels). The bottom
layer (layer 0) is the hardware; the highest (layer N) is the user interface.

Figure 4.2: A layered Operating System Structure

A layered structure provides good modularity. Each layer of the operating system forms a module
with clearly defined functionality and a clearly defined interface to the rest of the operating
system. This approach simplifies debugging and system verification. It also provides information
hiding.
The lowest layer is debugged first. While debugging a given layer, if any error is found, the error
must be in that layer, because the layers below it have already been debugged. Thus, the design
and implementation of the system are simplified.
Each layer is implemented with only those operations provided by lower level layers. A layer
does not need to know how these operations are implemented; it needs to know only what these
operations do. Hence, each layer hides the existence of certain data structures, operations, and
hardware from higher-level layers.

A final problem with layered implementations is that they tend to be less efficient than other
types. For instance, when a user program executes an I/O operation, it executes a system call that
is trapped to the I/O layer, which calls the memory-management layer, which in turn calls the
CPU-scheduling layer, which is then passed to the hardware. At each layer, the parameters may
be modified, data may need to be passed, and so on. Each layer adds overhead to the system call;
the net result is a system call that takes longer than one on a non-layered system.

4.3. Microkernels: (discussed in section 3.2 above)

5. System Components
Modern operating systems share the goal of supporting the system components. The system
components are:
1) Process management
2) Main memory management
3) Secondary storage management
4) I/O system management
5) File management
6) Protection system
7) Networking
8) Command interpreter system
Process Management :

 A process refers to a program in execution.


 A program or a fraction of a program that is loaded in main memory.
Motivation: We do not need the whole program code at once. To process instructions, the CPU
fetches and executes one instruction of a process after another (i.e., the execution of a process
progresses in a sequential fashion) in main memory.

Tasks of Process Management of an OS:


 Create, load, execute, suspend, resume, and terminate processes
 Switch system among multiple processes in main memory (process scheduling)
 Provide communication mechanisms so that processes can send (or receive) data to (or
from) each other (process communication).
 Control concurrent access to shared data to keep shared data consistent (process
synchronization).
 Allocate/de-allocate resources properly to prevent or avoid deadlock situations.

Main memory management :

 Memory is a large array of words or bytes, each with its own address; it is a quickly
accessible storage repository shared by the CPU and I/O devices.
 Processes must be loaded into main memory to be executed.

Motivations:
 Increase system performance by increasing the "hit" ratio (ideally, whenever the CPU reads
data or an instruction, it is already in main memory).
 Maximize memory utilization

Tasks of Main Memory Management of OS:


 Keeping track of which parts of memory are currently being used and by whom
 Deciding which processes to load when memory space becomes available
 Allocating and de-allocating memory space as needed

Secondary storage management :

 Because main memory (primary storage) is volatile and too small to accommodate all data
and programs, the computer system must provide non-volatile secondary storage to back up
main memory.
Motivations:
 Increase data availability of system
 Maximize memory utilization

Tasks of Secondary Memory Management of OS:
 Free space management
 Storage allocation
 Disk scheduling

I/O system management :


Motivations:
 Provide an abstraction level over hardware devices and keep the details away from
applications, to ensure proper use of devices, to prevent errors, and to provide users with a
convenient and efficient programming environment.

Tasks of I/O System Management of OS:


 Hide the details of H/W devices
 Manage main memory for the devices using cache, buffer, and spooling
 Maintain and provide device driver interfaces

File management :

 A file is a collection of related information defined by its creator; files represent


programs and data.
 Files are grouped into directories; directories, as well as files, form a hierarchy.

Motivations:
 Almost everything is stored in the secondary storage. Therefore, secondary storage
accesses must be efficient (i.e., performance) and convenient (i.e., easy to program I/O
function in application level).
 Important data are duplicated and/or stored in tertiary storage.
Tasks of File Management of OS:
 File creation and deletion
 Directory creation and deletion
 Support of primitives for manipulating files and directories
 Mapping files onto secondary storage
 File backup on stable (nonvolatile) storage media

Protection system :

 Protection refers to a mechanism for controlling access by programs, processes, or users


to both system and user resources.
Tasks of Protection System of OS:
 Distinguish between authorized and unauthorized usage
 Specify the controls to be imposed
 Provide a means of enforcement

 Protect hardware resources, Kernel code, processes, files, and data from erroneous
programs and malicious programs.

Networking :

 Networking enables computer users to share resources and speed up computation.


 The processors in the system are connected through a communication network; for
example, a distributed system.
Tasks of Networking of OS:
 Computation speed-up, Increased data availability, Enhanced reliability
 Connection/Routing strategies
 "Circuit" management --- circuit, message, packet switching
 Communication mechanism
 Data/Process migration

Command interpreter system :

 Command interpreter is the interface between user and the operating system.
 It is a system program for an operating system.
Motivations:
 Allow human users to interact with OS
 Provide convenient programming environment to users
Tasks of Command Interpreter System of OS:
 Execute a user command by calling one or more underlying system programs or system
calls, as the sketch below illustrates.
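A command interpreter is, at its core, a loop that reads a command line and asks the OS to run it through system calls such as fork(), execvp(), and waitpid(). The following bare-bones POSIX sketch (prompt name, buffer sizes, and the lack of built-in commands are all simplifications for illustration) shows the idea:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    char line[256];

    for (;;) {
        printf("mysh> ");
        fflush(stdout);
        if (fgets(line, sizeof(line), stdin) == NULL)
            break;                                 /* end of input */
        line[strcspn(line, "\n")] = '\0';          /* strip the newline */
        if (line[0] == '\0')
            continue;
        if (strcmp(line, "exit") == 0)
            break;

        /* Split the line into whitespace-separated arguments. */
        char *argv[32];
        int argc = 0;
        for (char *tok = strtok(line, " \t");
             tok != NULL && argc < 31;
             tok = strtok(NULL, " \t"))
            argv[argc++] = tok;
        argv[argc] = NULL;

        pid_t pid = fork();                        /* system call: create a process */
        if (pid == 0) {
            execvp(argv[0], argv);                 /* system call: run the program  */
            perror("execvp");                      /* reached only if exec failed   */
            _exit(127);
        } else if (pid > 0) {
            int status;
            waitpid(pid, &status, 0);              /* system call: wait for the child */
        } else {
            perror("fork");
        }
    }
    return 0;
}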

6. Operating System Services


An operating system provides an environment for the execution of programs. It provides certain
services to programs and to the users of those programs. The services provided by one operating
system differ from those provided by another. These operating-system services are provided for
the convenience of the programmer, to make the programming task easier.

The common services provided by the operating system are listed below:

Figure 6.1: A View of Operating System Services

 User Interface: Almost all operating systems have a user interface (UI). This interface
can take several forms.
Command-Line Interface (CLI): It uses text commands and a method for entering them.
Batch Interface: Batch interfaces are non-interactive user interfaces, where the user
specifies all the details of the batch job in advance of processing,
and receives the output when all the processing is done. The computer
does not prompt for further input after the processing has started.
Graphical User Interface: This interface is a window system with a pointing device to
direct I/O, choose from menus, and make selections, and a keyboard to
enter text.
 Program Execution: The system must be able to load a program into memory and to run
that program. The program must be able to end its execution, either normally or
abnormally (indicating error).
 I/O Operations: A running program may require I/O, which may involve a file or an I/O
device. For efficiency and protection, users usually cannot control I/O devices directly.
Therefore, the operating system must provide a means to do I/O.
 File-System Manipulation: Programs need to read and write files and directories. They
also need to create and delete them by name, search for a given file, and list file
information. The operating system provides permission management to allow or deny
access to files or directories.
 Communication: Communication may occur between processes that are executing on
the same computer or between processes that are executing on different computer
systems tied together by a computer network. Communications may be implemented via
shared memory or through message passing.

Note: In the shared-memory model, a region of memory that is shared by cooperating processes is
established. In the message passing model, communication takes place by means of messages exchanged
between the cooperating processes (via kernel).

 Error Detection: Error may occur in CPU, in I/O devices or in the memory hardware.
The operating system constantly needs to be aware of possible errors. It should take the
appropriate action to ensure correct and consistent computing.
An operating system with multiple users provides the following services:
1. Resource allocation 2. Accounting 3. Protection

 Resource Allocation: When there are multiple users or multiple jobs running at the same
time, resources must be allocated to each of them. Many different types of resources are
managed by the operating system. Some resources require special allocation code, i.e.,
main memory, CPU cycles and file storage.
There are some resources which require only general request and release code. For
allocating CPU, CPU scheduling algorithms are used for better utilization of CPU. There
may also be routines to allocate printers, modems, USB storage drives, and other
peripheral devices.
 Accounting: We want to keep track of which users use how much and what kinds of
computer resources. This record keeping may be used for accounting purposes. This
accounting data can be used for statistics or for billing. It can also be used to improve system
efficiency.
 Protection: When several separate processes execute concurrently, it should not be
possible for one process to interfere with the others or with the operating system itself.
Protection involves ensuring that all access to system resources is controlled. Such
security starts with requiring each user to authenticate himself or herself to the system,
usually by means of a password, to gain access to system resources.

System Calls
System calls provide the interface between a running program and the operating system. A
system call instruction is an instruction that generates a trap (a software interrupt).

[Diagram: a user process executing in user mode (mode bit = 1) makes a system call; a trap
switches the hardware to kernel mode (mode bit = 0), the kernel executes the system call, and
control then returns to the user process in user mode (mode bit = 1).]

Figure: General Structure of a System Call

A mode bit is added to the hardware of the computer to indicate the current mode: kernel (0) or
user (1). Whenever a trap (interrupt) occurs, the hardware switches from user mode to kernel
mode (changing the mode bit to 0).
System Call Parameters: Three general methods exist for passing parameters to the OS:
1. Parameters can be passed in registers.
2. When there are more parameters than registers, the parameters can be stored in a block and the
block address can be passed as a parameter in a register.
3. Parameters can also be pushed onto, or popped off, the stack by the operating system.
A small sketch of the first method follows the figure below.

Figure: Passing of Parameters as a Table
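On Linux, the C library's generic syscall() wrapper makes the first method visible: the system-call number and its parameters are handed to the kernel (on most architectures, in registers) and the kernel is entered via a trap. A hedged sketch, assuming a Linux system with glibc:

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const char msg[] = "hello via a raw system call\n";

    /* SYS_write takes three parameters: file descriptor, buffer address,
       and byte count; syscall() passes them the way the kernel expects
       (in registers on most architectures). */
    syscall(SYS_write, 1, msg, sizeof(msg) - 1);

    /* A parameterless example: ask the kernel for our process ID. */
    long pid = syscall(SYS_getpid);
    (void)pid;                        /* unused here; just illustrates the call */
    return 0;
}

Library routines such as write() and getpid() are normally thin wrappers around exactly these system calls.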

Types of System Calls: There are 5 different categories of system calls:

1. Process control 2. File manipulation 3. Device manipulation


4. Information maintenance 5. Communication

Process Control
A running program needs to be able to stop execution either normally or abnormally. When
execution is stopped abnormally, often a dump of memory is taken and can be examined with a
debugger.
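A small POSIX sketch of the basic process-control calls (the exit status value is arbitrary): the parent creates a child with fork(), the child ends its execution normally with exit(), and the parent learns whether termination was normal or abnormal through wait():

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                  /* process-control call: create a process */

    if (pid == 0) {
        printf("child %d running\n", (int)getpid());
        exit(7);                         /* end execution normally with status 7 */
    }

    int status;
    wait(&status);                       /* wait for the child to finish */
    if (WIFEXITED(status))
        printf("child ended normally, status %d\n", WEXITSTATUS(status));
    else if (WIFSIGNALED(status))
        printf("child ended abnormally (signal %d)\n", WTERMSIG(status));
    return 0;
}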

File Management
Some common system calls are create, delete, read, write, reposition, and close. There is also a
need to determine the file attributes (get and set file attributes). Many times the OS provides an
API to make these system calls.
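A minimal POSIX sketch of these file-manipulation calls (the file name notes.txt is hypothetical): open a file, read some bytes, copy them to standard output, and close the descriptor:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[128];

    int fd = open("notes.txt", O_RDONLY);       /* returns a descriptor or -1 */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    ssize_t n = read(fd, buf, sizeof(buf));     /* read up to 128 bytes  */
    if (n > 0)
        write(1, buf, (size_t)n);               /* write them to stdout  */

    close(fd);                                  /* release the descriptor */
    return 0;
}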

Device Management
A process usually requires several resources to execute; if these resources are available, they will
be granted and control returned to the user process. These resources can also be thought of as
devices. Some are physical, such as a video card, and others are abstract, such as a file.

User programs request the device, and when finished they release the device. Similar to files, we
can read, write, and reposition the device.

Information Management
Some system calls exist purely for transferring information between the user program and the
operating system. An example of this is time, or date.

The OS also keeps information about all its processes and provides system calls to report this
information.

Communication
There are two models of inter-process communication, the message-passing model and the
shared memory model.
In the message-passing model, processes exchange messages with one another to transfer
information. The order of system calls during message passing in a networking scenario can be:
a) Open connection().
b) Get host Id(), get process Id().
c) Wait for connection().
d) Read message(), Write message().
e) Close connection().

In the shared-memory model, we generally use the shared memory create() and shared memory
attach() system calls, as sketched below.
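A minimal System V sketch of the "shared memory create" and "shared memory attach" calls mentioned above (the segment size and message text are arbitrary): the parent and child communicate through the shared region without exchanging messages:

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* "Shared memory create": allocate a 4 KB shared segment. */
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (shmid < 0) { perror("shmget"); return 1; }

    /* "Shared memory attach": map the segment into our address space. */
    char *region = shmat(shmid, NULL, 0);
    if (region == (char *)-1) { perror("shmat"); return 1; }

    if (fork() == 0) {                      /* child: writes into the region */
        strcpy(region, "hello through shared memory");
        shmdt(region);
        return 0;
    }

    wait(NULL);                             /* parent: waits, then reads */
    printf("parent read: %s\n", region);

    shmdt(region);                          /* detach the segment ...      */
    shmctl(shmid, IPC_RMID, NULL);          /* ... and mark it for removal */
    return 0;
}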
