
Types

Real-time
A real-time operating system is a multitasking operating system intended for executing real-time applications. Real-time operating systems often use specialized scheduling algorithms to achieve deterministic behavior. The main objective of a real-time operating system is quick, predictable response to events. Such systems have either an event-driven or a time-sharing design: an event-driven system switches between tasks based on their priorities, while a time-sharing system switches tasks on clock interrupts.
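
As a minimal sketch (in C, with hypothetical task and field names, not any particular kernel's API), the dispatch decision of an event-driven real-time kernel reduces to picking the highest-priority ready task:

    #include <stddef.h>

    /* One entry per task; in a real RTOS this lives in the kernel's task table. */
    struct task {
        const char *name;
        int priority;   /* higher value = more urgent */
        int ready;      /* nonzero if the task is runnable */
    };

    /* Dispatch decision: return the highest-priority ready task, or NULL. */
    struct task *pick_next(struct task *tasks, size_t n)
    {
        struct task *best = NULL;
        for (size_t i = 0; i < n; i++)
            if (tasks[i].ready &&
                (best == NULL || tasks[i].priority > best->priority))
                best = &tasks[i];
        return best;
    }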
Multi-user vs. Single-user
A multi-user operating system allows multiple users to access a computer system concurrently. Time-sharing systems can be classified as multi-user systems, as they enable multiple users to access a computer through the sharing of time. Single-user operating systems, by contrast, are usable by only one user at a time. Being able to have multiple accounts on a Windows operating system does not make it a multi-user system; rather, only the network administrator is the real user. On a Unix-like operating system, however, two users can be logged in at the same time, and this capability makes it a multi-user operating system.
Multi-tasking vs. Single-tasking
When only a single program is allowed to run at a time, the system is classed as a single-tasking system; when the operating system allows multiple tasks to execute at once, it is classified as a multi-tasking operating system. Multi-tasking can be of two types: pre-emptive or cooperative. In pre-emptive multitasking, the operating system slices the CPU time and dedicates one slot to each of the programs. Unix-like operating systems such as Solaris and Linux support pre-emptive multitasking. Cooperative multitasking is achieved by relying on each process to give time to the other processes in a defined manner. Versions of MS Windows prior to Windows 95 supported cooperative multitasking.
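
A rough illustration of the cooperative model, sketched in C with hypothetical task functions: each task does a small unit of work and voluntarily returns (yields), so the scheduler loop makes progress only as long as every task cooperates.

    #include <stdio.h>

    /* Each "task" does one small unit of work and then returns, i.e. yields. */
    static void task_a(void) { puts("task A step"); }
    static void task_b(void) { puts("task B step"); }

    int main(void)
    {
        void (*tasks[])(void) = { task_a, task_b };
        size_t ntasks = sizeof tasks / sizeof tasks[0];
        for (int round = 0; round < 3; round++)   /* the scheduler loop */
            for (size_t i = 0; i < ntasks; i++)
                tasks[i]();   /* a task that never returned would stall everyone */
        return 0;
    }

A task that enters an infinite loop instead of returning stalls every other task, which is exactly the weakness pre-emptive multitasking removes.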
Distributed
A distributed operating system manages a
group of independent computers and makes them
appear to be a single computer. The development of networked computers that could be linked and made to communicate with each other gave rise to distributed computing.
Distributed computations are carried out on
more than one machine. When computers in a
group work in cooperation, they make a
distributed system.
Embedded
Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines, such as PDAs, with limited autonomy. They are able to operate with a limited number of resources, and they are very compact and extremely efficient by design. Windows CE and Minix 3 are examples of embedded operating systems.

Multiprogramming

Computer multiprogramming is the allocation of a computer system and its resources to more than one concurrent application, job, or user.

Early computers ran one process at a time. While the process waited for servicing by another device, the CPU was idle. In an I/O-intensive process, the CPU could be idle as much as 80% of the time. Advancements in operating systems led to computers that load several independent processes into memory and switch the CPU from one job to another when the first becomes blocked while waiting for servicing by another device. This idea of multiprogramming reduces the idle time of the CPU. Multiprogramming increases the throughput of the system by using the CPU time efficiently.
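
A common textbook approximation, added here as an illustration (it is not part of the original text): if each process spends a fraction p of its time waiting for I/O, then with n processes in memory the CPU is idle only when all n are waiting at once, giving a utilization of roughly 1 - p^n. A small C program makes the effect concrete:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double p = 0.80;                  /* each job waits on I/O 80% of the time */
        for (int n = 1; n <= 5; n++)      /* n = number of jobs kept in memory */
            printf("n=%d  CPU utilization = %.0f%%\n",
                   n, 100.0 * (1.0 - pow(p, n)));
        return 0;
    }

With p = 0.80, one resident job keeps the CPU only 20% busy, while five jobs raise utilization to about 67%.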

Programs in a multiprogrammed environment appear to run at the same time. Processes running in a multiprogrammed environment are called concurrent processes. In actuality, the CPU processes one instruction at a time, but it can execute instructions from any active process.
Batch processing is the execution of a series of programs ("jobs") on a computer without manual intervention.
Batch jobs are set up so they can be run to
completion without manual intervention, so
all input data is preselected through scripts
or command-line parameters. This is in
contrast to "online" or interactive programs
which prompt the user for such input. A
program takes a set of data files as input,
processes the data, and produces a set of
output data files. This operating environment is termed "batch processing" because the input data are collected into batches of files and are processed in batches by the program.
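
For illustration, here is a batch-style program in C (the file names are hypothetical and passed as command-line parameters): all input is preselected up front and the program never prompts.

    #include <stdio.h>

    /* Usage: job input.dat output.dat -- input is named up front, no prompting. */
    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s input output\n", argv[0]);
            return 1;
        }
        FILE *in = fopen(argv[1], "r");
        FILE *out = fopen(argv[2], "w");
        if (!in || !out) { perror("fopen"); return 1; }
        int c;
        while ((c = fgetc(in)) != EOF)   /* "process the data": here, just copy it */
            fputc(c, out);
        fclose(in);
        fclose(out);
        return 0;
    }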

Benefits
Batch processing has these benefits:
It allows sharing of computer resources among
many users and programs,
It shifts the time of job processing to when
the computing resources are less busy,
It avoids idling the computing resources with
minute-by-minute manual intervention and
supervision,
By keeping the overall rate of utilization high, it better amortizes the cost of a computer, especially an expensive one.
Time-sharing is the sharing of a computing
resource among many users by means of
multiprogramming and multi-tasking.
By allowing a large number of users to
interact concurrently with a single computer,
time-sharing dramatically lowered the cost of
providing computing capability, made it
possible for individuals and organizations to
use a computer without owning one, and
promoted the interactive use of computers and
the development of new interactive
applications.

Batch processing
The earliest computers were extremely
expensive devices, and very slow. Machines
were typically dedicated to a particular set
of tasks and operated by control panel, the
operator manually entering small programs via
switches in order to load and run other
programs. These programs might take hours,
even weeks, to run. As computers grew in
speed, run times dropped, and suddenly the
time taken to start up the next program
became a concern. The batch processing
methodologies evolved to decrease these dead
times, queuing up programs so that as soon as
one completed, the next would start.
To support a batch processing operation, a
number of card punch or paper tape writers
would be used by programmers, who would use
these inexpensive machines to write their
programs "offline". When a program was complete, it was submitted to the operations team, who would schedule it to be run. Important programs would be run quickly; when less important ones would run was unpredictable. When the program was finally
run, the output, generally printed, would be
returned to the programmer. The complete
process might take days, during which the
programmer might never see the computer.
The alternative, allowing the user to operate
the computer directly, was generally far too
expensive to consider, because the user had long stretches of simply sitting and entering code while the machine sat idle. This limited
developments in direct interactivity to
organizations that could afford to waste
computing cycles, large universities for the
most part. Programmers at the universities decried the inhumane conditions that batch processing imposed, to the point that Stanford students made a short film humorously critiquing it. They experimented
with new ways to directly interact with the
computer, a field today known as human-
computer interaction.
Time-sharing
Time-sharing developed out of the realization
that while any single user was inefficient, a
large group of users together were not. This
was due to the pattern of interaction; in
most cases users entered bursts of information followed by a long pause, but a group of users working at the same time would mean that the pauses of one user would be filled by the activity of the others. Given
an optimal group size, the overall process
could be very efficient. Similarly, small
slices of time spent waiting for disk, tape,
or network input could be granted to other
users.
Implementing a system able to take advantage
of this would be difficult. Batch processing
was really a methodological development on
top of the earliest systems; computers still
ran single programs for single users at any one time; all that batch processing changed was the time delay between one program and the next. Developing a system that supported
multiple users at the same time was a
completely different concept; the "state" of
each user and their programs would have to be
kept in the machine, and then switched
between quickly. This would take up computer
cycles, and on the slow machines of the era
this was a concern. However, as computers
rapidly improved in speed, and especially in
size of core memory in which users' states
were retained, the relative overhead of time-sharing continually decreased.
The concept was first described publicly in
early 1957 by Bob Bemer as part of an article
in Automatic Control Magazine. The first
project to implement a time-sharing system
was initiated by John McCarthy in late 1957,
on a modified IBM 704, and later on an
additionally modified IBM 7090 computer.
Although he left to work on Project MAC and
other projects, one of the results of the
project, known as the Compatible Time-Sharing
System or CTSS, was demonstrated in November
1961. CTSS has a good claim to be the first
time-sharing system and remained in use until
1973. Another contender for the first
demonstrated time-sharing system was PLATO
II, created by Donald Bitzer at a public
demonstration at Robert Allerton Park near
the University of Illinois in early 1961.
Bitzer has long said that the PLATO project would have gotten the patent on time-sharing if only the University of Illinois had known how to process patent applications faster; at the time, university patents were so few and far between that they took a long time to be submitted. The first commercially
successful time-sharing system was the
Dartmouth Time Sharing System.

System Components

Even though not all systems have the same structure, many modern operating systems share the goal of supporting the following types of system components.

Process Management

The operating system manages many kinds of activities, ranging from user programs to system programs such as the printer spooler, name servers, file servers, etc. Each of these activities is encapsulated in a process. A process includes the complete execution context (code, data, program counter, registers, OS resources in use, etc.).

It is important to note that a process is not a program. A process is only one instance of a program in execution; many processes can be running the same program. The five major activities of an operating system with regard to process management are:

 Creation and deletion of user and system processes.
 Suspension and resumption of processes.
 A mechanism for process synchronization.
 A mechanism for process communication.
 A mechanism for deadlock handling.
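
On a UNIX-like system, for example, process creation and deletion surface to programmers through system calls such as fork(), exec(), and wait(); a minimal sketch (the program run by the child, ls, is arbitrary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();              /* create a new process */
        if (pid < 0) { perror("fork"); exit(1); }
        if (pid == 0) {                  /* child: replace itself with another program */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");            /* reached only if exec failed */
            _exit(1);
        }
        int status;
        waitpid(pid, &status, 0);        /* parent: suspend until the child exits */
        printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
        return 0;
    }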

Main-Memory Management

Primary memory, or main memory, is a large array of words or bytes, where each word or byte has its own address. Main memory provides storage that can be accessed directly by the CPU; that is to say, for a program to be executed, it must be in main memory.

The major activities of an operating system with regard to memory management are:

 Keep track of which parts of memory are currently being used, and by whom.
 Decide which processes are loaded into memory when memory space becomes available.
 Allocate and deallocate memory space as needed.
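
As a toy illustration of the first and third activities (a deliberately simplified scheme, not how a real kernel manages memory), memory can be tracked as a table of fixed-size units searched first-fit:

    #include <stddef.h>

    #define MEM_SIZE 1024                  /* memory modeled as 1024 allocation units */

    static unsigned char used[MEM_SIZE];   /* 0 = free, 1 = in use */

    /* First-fit allocation: find the first free run of len units, mark it used,
       and return its starting index; return -1 if no hole is large enough. */
    int mem_alloc(size_t len)
    {
        size_t run = 0;
        if (len == 0 || len > MEM_SIZE)
            return -1;
        for (size_t i = 0; i < MEM_SIZE; i++) {
            run = used[i] ? 0 : run + 1;
            if (run == len) {
                size_t start = i + 1 - len;
                for (size_t j = start; j <= i; j++)
                    used[j] = 1;
                return (int)start;
            }
        }
        return -1;
    }

    /* Deallocation: mark the units free again. */
    void mem_free(int start, size_t len)
    {
        for (size_t j = (size_t)start; j < (size_t)start + len; j++)
            used[j] = 0;
    }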

File Management

A file is a collection of related information defined by its creator. Computers can store files on disk (secondary storage), which provides long-term storage. Some examples of storage media are magnetic tape, magnetic disk, and optical disk. Each of these media has its own properties, such as speed, capacity, data transfer rate, and access method.

Files are normally organized into directories to ease their use. These directories may contain files and other directories.

The five major activities of an operating system with regard to file management are:

1. The creation and deletion of files.
2. The creation and deletion of directories.
3. The support of primitives for manipulating files and directories.
4. The mapping of files onto secondary storage.
5. The backing up of files on stable storage media.
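
On a POSIX system, for instance, the first three activities surface through system calls such as mkdir(), open(), unlink(), and rmdir(); the file and directory names below are made up for illustration:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        mkdir("reports", 0755);                      /* create a directory */
        int fd = open("reports/log.txt",             /* create a file inside it */
                      O_CREAT | O_WRONLY, 0644);
        if (fd < 0) { perror("open"); return 1; }
        write(fd, "hello\n", 6);                     /* manipulate the file */
        close(fd);
        unlink("reports/log.txt");                   /* delete the file */
        rmdir("reports");                            /* delete the directory */
        return 0;
    }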

I/O System Management

The I/O subsystem hides the peculiarities of specific hardware devices from the user. Only the device driver knows the peculiarities of the specific device to which it is assigned.
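
One common way this hiding is achieved, sketched here in C with hypothetical names, is a uniform operations table: the rest of the system calls the same entry points for every device, and only the driver behind the table knows the hardware details.

    #include <stddef.h>

    /* Uniform interface the I/O subsystem sees for every device. */
    struct device_ops {
        int (*read)(void *buf, size_t len);
        int (*write)(const void *buf, size_t len);
    };

    /* A driver supplies its device-specific routines behind the table. */
    static int console_read(void *buf, size_t len)
    {
        (void)buf; (void)len;       /* hardware-specific work would go here */
        return 0;
    }

    static int console_write(const void *buf, size_t len)
    {
        (void)buf;                  /* hardware-specific work would go here */
        return (int)len;
    }

    static const struct device_ops console_driver = { console_read, console_write };

    /* Callers never see the hardware, only the uniform table. */
    int device_write(const struct device_ops *dev, const void *buf, size_t len)
    {
        return dev->write(buf, len);
    }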

Secondary-Storage Management

Generally speaking, systems have several levels of storage, including primary storage, secondary storage, and cache storage. Instructions and data must be placed in primary storage or cache to be referenced by a running program. Because main memory is too small to accommodate all data and programs, and because its data are lost when power is lost, the computer system must provide secondary storage to back up main memory. Secondary storage consists of tapes, disks, and other media designed to hold information that will eventually be accessed in primary storage. Storage at each level (primary, secondary, cache) is ordinarily divided into bytes, or into words consisting of a fixed number of bytes. Each location in storage has an address; the set of all addresses available to a program is called an address space.

The three major activities of an operating system in regard to secondary storage management are:

1. Managing the free space available on the secondary-storage device.
2. Allocating storage space when new files have to be written.
3. Scheduling requests for access to the storage device.
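
Free-space management, for example, is often implemented with a bitmap holding one bit per disk block; a simplified sketch in C (the block count and helper names are illustrative):

    #include <stdint.h>

    #define NBLOCKS 4096                      /* disk modeled as 4096 blocks */

    static uint8_t bitmap[NBLOCKS / 8];       /* one bit per block: 1 = in use */

    /* Find a free block, mark it used, and return its number; -1 if disk full. */
    int block_alloc(void)
    {
        for (int b = 0; b < NBLOCKS; b++)
            if (!(bitmap[b / 8] & (1u << (b % 8)))) {
                bitmap[b / 8] |= (uint8_t)(1u << (b % 8));
                return b;
            }
        return -1;
    }

    /* Return a block to the free pool, e.g. when a file is deleted. */
    void block_free(int b)
    {
        bitmap[b / 8] &= (uint8_t)~(1u << (b % 8));
    }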

Networking

A distributed system is a collection of processors that do not share memory, peripheral devices, or a clock. The processors communicate with one another through communication lines called a network. The design of the communication network must consider routing and connection strategies, as well as the problems of contention and security.

Protection System

If a computer system has multiple users and allows the concurrent execution of multiple processes, then the various processes must be protected from one another's activities. Protection refers to the mechanisms for controlling the access of programs, processes, or users to the resources defined by a computer system.
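
UNIX-style permission bits are one concrete example of such a mechanism. A simplified check, sketched in C with hypothetical types (real systems also track groups and many more attributes):

    /* Simplified UNIX-style permission bits. */
    #define PERM_READ  04
    #define PERM_WRITE 02
    #define PERM_EXEC  01

    struct resource {
        int owner_uid;        /* user who owns the resource */
        int owner_perms;      /* permission bits for the owner */
        int other_perms;      /* permission bits for everyone else */
    };

    /* Return nonzero if user uid may perform the requested operations. */
    int access_allowed(const struct resource *r, int uid, int want)
    {
        int perms = (uid == r->owner_uid) ? r->owner_perms : r->other_perms;
        return (perms & want) == want;
    }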

Command Interpreter System

A command interpreter is the interface between the operating system and the user. The user gives commands, which are executed by the operating system (usually by turning them into system calls). The main function of a command interpreter is to get and execute the next user-specified command. The command interpreter is usually not part of the kernel, since multiple command interpreters (shells, in UNIX terminology) may be supported by an operating system, and they do not really need to run in kernel mode. There are two main advantages to separating the command interpreter from the kernel.

1. If we want to change the way the command interpreter looks, i.e., change its interface, we can do so only if the command interpreter is separate from the kernel; we cannot change the code of the kernel, so we could not modify the interface otherwise.
2. If the command interpreter is part of the kernel, it is possible for a malicious process to gain access to parts of the kernel that it should not. To avoid this scenario, it is advantageous to have the command interpreter separate from the kernel.
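
The get-and-execute loop can be sketched in a few lines of C: a toy shell that turns each typed command into a fork()/exec() pair (no arguments, pipes, or job control, so it is far from a real shell):

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        char line[256];
        for (;;) {
            printf("> ");                           /* prompt for the next command */
            fflush(stdout);
            if (!fgets(line, sizeof line, stdin))   /* get the next user command */
                break;
            line[strcspn(line, "\n")] = '\0';
            if (line[0] == '\0')
                continue;
            if (fork() == 0) {                      /* child: execute the command */
                execlp(line, line, (char *)NULL);
                perror(line);                       /* reached only if exec failed */
                _exit(1);
            }
            wait(NULL);                             /* parent: wait for completion */
        }
        return 0;
    }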

Operating Systems Services

The following are five services provided by an operating system for the convenience of its users.

Program Execution

The purpose of a computer system is to allow the user to execute programs, so the operating system provides an environment in which the user can conveniently run programs. The user does not have to worry about memory allocation, multitasking, or anything of the sort; these things are taken care of by the operating system.

Running a program involves allocating and deallocating memory and, in a multiprocessing system, CPU scheduling. These functions cannot be given to user-level programs, so user-level programs cannot help the user run programs independently, without help from the operating system.

I/O Operations

Each program requires input and produces output, which involves the use of I/O. The operating system hides from the user the details of the underlying I/O hardware; all the user sees is that the I/O has been performed, without any of the details. By providing I/O, the operating system makes it convenient for users to run programs.

For efficiency and protection, users cannot control I/O directly, so this service cannot be provided by user-level programs.

File System Manipulation

The output of a program may need to be written to new files, or input taken from existing files. The operating system provides this service, so the user does not have to worry about secondary storage management. The user gives a command for reading from or writing to a file and sees the task accomplished. Thus the operating system makes it easier for user programs to accomplish their tasks.

This service involves secondary storage management. The speed of I/O, which depends on secondary storage management, is critical to the speed of many programs, so it is best to leave its management to the operating system rather than give individual users control of it. It would not be difficult for user-level programs to provide these services, but for the reasons mentioned above this service is best left to the operating system.

Communications

There are instances where processes need to communicate with each other to exchange information, whether they are running on the same computer or on different computers. By providing this service, the operating system relieves the user of the worry of passing messages between processes. Where messages need to be passed to processes on other computers through a network, this can also be done by user programs; such a program may be customized to the specifics of the hardware through which the message transits while using the service interface provided by the operating system.
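
For processes on the same computer, a POSIX pipe is one such OS-provided mechanism: the kernel carries the bytes between the two processes, so neither touches the other's memory. A minimal sketch:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        pipe(fd);                  /* fd[0] is the read end, fd[1] the write end */
        if (fork() == 0) {         /* child: receives the message */
            char buf[64];
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            buf[n > 0 ? n : 0] = '\0';
            printf("child received: %s\n", buf);
            _exit(0);
        }
        close(fd[0]);              /* parent: sends the message */
        write(fd[1], "hello", 5);
        close(fd[1]);
        wait(NULL);
        return 0;
    }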

Error Detection

An error in one part of the system may cause the whole system to malfunction. To avoid such a situation, the operating system constantly monitors the system in order to detect errors. This relieves the user of the worry of errors propagating to various parts of the system and causing malfunctions.

This service cannot be left to user programs, because it involves monitoring, and in some cases altering, areas of memory, deallocating the memory of a faulty process, or relinquishing the CPU from a process that has gone into an infinite loop. These tasks are too critical to be handed over to user programs; a user program given these privileges could interfere with the correct (normal) operation of the operating system.

Interrupt

(n.) A signal informing a program that an event has occurred. When a program receives an
interrupt signal, it takes a specified action (which can be to ignore the signal). Interrupt signals
can cause a program to suspend itself temporarily to service the interrupt.

Interrupt signals can come from a variety of sources. For example, every keystroke generates an
interrupt signal. Interrupts can also be generated by other devices, such as a printer, to indicate
that some event has occurred. These are called hardware interrupts. Interrupt signals initiated by
programs are called software interrupts. A software interrupt is also called a trap or an
exception.

PCs support 256 types of software interrupts and 15 hardware interrupts. Each type of software
interrupt is associated with an interrupt handler -- a routine that takes control when the interrupt
occurs. For example, when you press a key on your keyboard, this triggers a specific interrupt
handler. The complete list of interrupts and associated interrupt handlers is stored in a table
called the interrupt vector table, which resides in the first 1 KB of addressable memory.
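
At the application level, UNIX signals behave much like the software interrupts described above: a handler routine is registered, and the running program is briefly suspended to service the event when it arrives. A small C example (the handler name is arbitrary):

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_signal = 0;

    /* The "interrupt handler": runs asynchronously when the signal arrives. */
    static void on_interrupt(int signo)
    {
        (void)signo;
        got_signal = 1;     /* only async-signal-safe work belongs here */
    }

    int main(void)
    {
        signal(SIGINT, on_interrupt);   /* register the handler; Ctrl-C sends SIGINT */
        while (!got_signal)
            pause();                    /* suspend until an interrupt is serviced */
        puts("interrupt handled; resuming normal work");
        return 0;
    }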
