
LECTURE NOTES ON INTRODUCTION TO

OPERATING SYSTEM I

COURSE FACILITATOR: Mr FOTSO ALAIN

Academic Year: 2021/2022


General instructional objectives

At the end of this course the student should be able to:

 Trace the history of operating systems


 Describe the Operating system structure
 Discuss OS terms such as
 Processes, files, system call, the shell
 Process Management
 Process description and control
 Process Interrupts
 Context Switching
 Process scheduling:
 First Come First Served
 Round Robin Scheduling
 Shortest Process Next
 Shortest Remaining Time
 Threads, Symmetric Multiprocessing
 Inter-process Communication & Clock Synchronization
 Mutual exclusion and critical section
 Race Conditions
 Semaphores
 IPC Problems
OPERATING SYSTEM

I. INTRODUCTION TO OPERATING SYSTEM

An Operating system is a collection of programs that


controls the execution of application programs and acts
as an interface between the user of a computer and the
computer hardware. The operating system, along with the
hardware, application and other system software,
and the users, constitutes a computer system. It is the most
important part of any computer system.

At the simplest level, an operating system does two


things:

1. It manages the hardware and software resources


of the system. These resources include such
things as the processor, memory, disk space
and more

2. It acts as an interface between the user and the physical machine; that is, it provides a stable,
consistent way for applications to deal with the hardware without having to know all the
details of the hardware.

II. Functions of Operating System:


Process Management

A process is a program in execution. A process needs certain resources,


including CPU time, memory, files, and I/O devices, to accomplish its task.
The operating system is responsible for the following activities in connection with
process management.
✦ Process creation and deletion.
✦ process suspension and resumption.
✦ Provision of mechanisms for:
• process synchronization
• process communication
Main-Memory Management
Memory is a large array of words or bytes, each with its own address. It is a
repository of quickly accessible data shared by the CPU and I/O devices.
Main memory is a volatile storage device. It loses its contents in the case of system
failure.
The operating system is responsible for the following activities in connections with
memory management:
♦ Keep track of which parts of memory are currently being used and by whom.
♦ Decide which processes to load when memory space becomes available.
♦ Allocate and de-allocate memory space as needed.
File Management
A file is a collection of related information defined by its creator. Commonly, files
represent programs
(both source and object forms) and data.
The operating system is responsible for the following activities in connections with file
management:
✦ File creation and deletion.
✦ Directory creation and deletion.
✦ Support of primitives for manipulating files and directories.
✦ Mapping files onto secondary storage.
✦ File backup on stable (nonvolatile) storage media.
I/O System Management
The I/O system consists of:
✦ A buffer-caching system
✦ A general device-driver interface
✦ Drivers for specific hardware devices
Secondary-Storage Management
Since main memory (primary storage) is volatile and too small to accommodate
all data and programs permanently, the computer system must provide secondary
storage to back up main memory.
Most modern computer systems use disks as the principal on-line storage
medium, for both programs and data. The operating system is responsible for the
following activities in connection with disk management:
✦ Free space management
✦ Storage allocation
✦ Disk scheduling
Networking (Distributed Systems)
♦ A distributed system is a collection of processors that do not share memory or a
clock. Each processor has its own local memory.
♦ The processors in the system are connected through a communication network.
♦ Communication takes place using a protocol.
♦ A distributed system provides user access to various system resources.
♦ Access to a shared resource allows:
✦ Computation speed-up
Error detection: The operating system constantly needs to be aware of possible
errors in the CPU, in I/O devices or in memory hardware. It should take the appropriate action to ensure
correct and consistent computing.
Resource allocation: The operating system manages different types of resources that require special
allocation code, e.g. main memory, CPU cycles and file storage.

III. TYPES OF OPERATING SYSTEM

1.Batch Processing Operating System:


 This type of OS accepts more than one job; jobs are batched/grouped
together according to their similar requirements. This is done by the computer operator.
Whenever the computer becomes available, the batched jobs are sent for execution
and gradually the output is sent back to the user.
 It allows only one program to run at a time.
 This OS is responsible for scheduling the jobs according to priority
and the resources required.
2. Multiprogramming Operating System:
 This type of OS is used to execute more than one job concurrently on a single
processor. It increases CPU utilization by organizing jobs so that the CPU always has
one job to execute.
 The concept of multiprogramming is
described as follows:
All the jobs that enter the system are stored in the job pool (on disk). The operating
system loads a set of jobs from the job pool into main memory and begins to execute them.
 During execution, a job may have to wait for some event, such as an I/O
operation, to complete. In a multiprogramming system, the operating system simply
switches to another job and executes it. When that job needs to wait, the CPU is
switched to another job, and so on.
 When the first job finishes waiting, it gets the CPU back.
 As long as at least one job needs to execute, the CPU is never idle.
Multiprogramming operating systems use the mechanisms of job scheduling
and CPU scheduling.
3. Time-Sharing/multitasking Operating Systems
Time sharing (or multitasking) OS is a logical extension of multiprogramming. It
provides extra facilities such as:
 Faster switching between multiple jobs to make processing faster.
 Allows multiple users to share computer system simultaneously.
 The users can interact with each job while it is running.
These systems use a concept of virtual memory for effective utilization of memory
space. Hence, in this OS, no jobs are discarded. Each one is executed using virtual
memory concept. It uses CPU scheduling, memory management, disc management
and security management. Examples: CTSS, MULTICS, CAL, UNIX etc.
4. Multiprocessor Operating Systems
Multiprocessor operating systems are also known as parallel OS or tightly coupled
OS. Such operating systems have more than one processor in close communication,
sharing the computer bus, the clock and sometimes memory and peripheral
devices. They execute multiple jobs at the same time, which makes processing faster.
Multiprocessor systems have three main advantages:
 Increased throughput: By increasing the number of processors, the system
performs more work in less time.
The speed-up ratio with N processors is, however, less than N, because of the overhead of coordinating the processors.
 Economy of scale: Multiprocessor systems can save more money than multiple
single-processor systems,
because they can share peripherals, mass storage, and power supplies.
 Increased reliability: If one processor fails, then each of the
remaining processors must pick up a share of the work of the failed processor. The
failure of one processor will not halt the system, only slow it down.

The ability to continue providing service proportional to the level of surviving


hardware is called graceful degradation. Systems designed for graceful
degradation are called fault tolerant.
The multiprocessor operating systems are classified into two categories
1. Symmetric multiprocessing system
2. Asymmetric multiprocessing system
 In symmetric multiprocessing system, each processor runs an identical copy of
the operating system, and these copies communicate with one another as needed.
 In an asymmetric multiprocessing system, one processor, called the master,
controls the other processors, called slaves. This establishes a master-slave
relationship. The master processor schedules the jobs and manages the memory for the
entire system.
5. Distributed Operating Systems
 In distributed system, the different machines are connected in a network and
each machine has its own processor and own local memory.
 In this system, the operating systems on all the machines work together to
manage the collective network resource.
 It can be classified into two categories:
1. Client-Server systems
2. Peer-to-Peer systems
Advantages of distributed systems.
 Resources Sharing
 Computation speed up – load sharing
 Reliability
 Communications
 Requires networking infrastructure.
 Local area networks (LAN) or Wide area networks (WAN)
6. Desktop Systems/Personal Computer Systems
 The PC operating system is designed for maximizing user convenience and
responsiveness. This system is neither multi-user nor multitasking.
 These systems include PCs running Microsoft Windows and the Apple
Macintosh. The MS-DOS operating system from Microsoft has been superseded
by multiple flavors of Microsoft Windows and IBM has upgraded MS-DOS to
the OS/2 multitasking system.
 The Apple Macintosh operating system has been ported to more advanced
hardware, and now includes new
features such as virtual memory and multitasking.
7. Real-Time Operating Systems (RTOS)
 A real-time operating system (RTOS) is a multitasking operating system intended
for applications with fixed deadlines (real-time computing). Such applications are
characterized by their timeliness in response to the user. They include some small
embedded systems, automobile engine controllers, industrial robots, spacecraft,
industrial control, and some large-scale computing systems.
 The real time operating system can be classified into two categories:
1. hard real time system and 2. soft real time system.
 A hard real-time system guarantees that critical tasks be completed on time.
This goal requires that all delays in the system be bounded, from the retrieval of
stored data to the time that it takes the operating system to finish any request made
of it. Such time constraints dictate the facilities that are available in hard real-time
systems.
A soft real-time system is a less restrictive type of real-time system. Here, a critical
real-time task gets
priority over other tasks and retains that priority until it completes. Soft real time
system can be mixed with other types of systems. Due to less restriction, they are
risky to use for industrial control and robotics.
IV. EVOLUTION OF OS

Historically operating systems have been tightly related to the computer architecture. Operating
systems have evolved through a number of distinct phases or generations which corresponds roughly
to the decades.

III.1 The 1940's - First Generations: Serial Processing


The earliest electronic digital computers had no operating systems. Machines of the time were so
primitive that programs were often entered one bit at a time on rows of mechanical switches (plug
boards). Programming languages were unknown (not even assembly languages).

III.2 The 1950's - Second Generation:


In the early 1950's, the General Motors Research Laboratories implemented the first operating
system for their IBM 701. The systems of the 50's generally ran one job at a time.
These were called single-stream batch processing systems because programs and data were
submitted in groups or batches.

III.3 The 1960's - Third Generation


The systems of the 1960's were also batch processing systems, but they were able to take better
advantage of the computer's resources by running several jobs at once. So operating systems designers
developed the concept of multiprogramming in which several jobs are in main memory at once; a
processor is switched from job to job as needed to keep several jobs advancing while keeping the
peripheral devices in use. Another feature present in this generation was time-sharing technique, a
variant of multiprogramming technique, in which each user has an on-line (i.e., directly connected)
terminal.

III.4 Fourth Generation


Microprocessor technology evolved to the point that it became possible to build desktop computers
as powerful as the mainframes of the 1970s. Two operating systems have dominated the personal
computer scene: MS-DOS, written by Microsoft, Inc. for the IBM PC and other machines using the
Intel 8088 CPU and its successors, and UNIX, which is dominant on the large personal computers
using the Motorola 68000 CPU family.

V. OPERATING SYSTEM INTERFACES

Almost all operating systems have a user interface (UI). This interface can take several forms.

IV.1 command-line interface(CLI)


With a command line interface the user interacts with the
computer by typing commands. An interaction with a
computer using a command line interface usually follows
these two steps :
- The user types a command e.g. "dir".
- The computer carries out the command and
displays its results.
Examples of systems which use a command line interface
are: MS-DOS, BBC Micro

IV.2 Menu-Driven Interface


With a menu driven interface the user interacts with
the computer by selecting options from a menu. A
typical program will have many menus which the user
can access. Menus can be either full screen or pull-
down. Pull-down menus are accessed by selecting the
menu from a menu bar. Pop-up menus are activated
by pressing a button on the mouse.

IV.3 Graphical User Interface (GUI)


With a Graphical User Interface (GUI) the user interacts with
the computer by using a pointing device such as a mouse or
trackball. The most popular form of GUI is a Windows, Icon,
Menu and Pointer (WIMP) system. The important features of a
WIMP system are :
1. Window : An area of the screen which is used to display a
particular program or piece of work. Many windows can
be displayed on the screen at the same time.
2. Icon : An informative picture / symbol displayed on the
screen which the user chooses to select an action.
3. Menu : A list of options which the user can pick from. Menus can be pull-down (selected
from a menu bar at the top of the screen) or pop-up (selected by pressing a mouse button).
4. Pointer : A symbol such as an arrow which is moved by a pointing device and can be used to
select objects.
When you use a program such as a word processor that has a WIMP interface it is often the case that
the document you are creating looks exactly the same on the screen as it will when it is printed out.
If this is the case then the program is described as being WYSIWYG. This stands for “What You
See Is What You Get”.

VI. OPERATING SYSTEM COMPONENTS

OS has two parts. (1)Kernel. (2)Shell.


(1) The kernel is the active part of an OS, i.e., it is the part of the OS running at all times. It is a
program which interacts with the hardware and creates the relationship between the
hardware and the software. Ex: device drivers, dll files, system files etc.
(2) The shell is called the command interpreter. It is a set of programs used to interact
with the application programs. It is responsible for the execution of instructions given to the OS (called
commands).
Operating systems can be explored from two viewpoints: the user and the system.
User View: From the user's point of view, the OS is designed for one user to monopolize its
resources, to maximize the work that the user is performing, and for ease of use.
System View: From the computer's point of view, an operating system is a control program that
manages the execution of user programs to prevent errors and improper use of the computer.
It is concerned with the operation and control of I/O devices.
Operating System Services
Following are the five services provided by operating systems to the convenience of
the users.
1. Program Execution
The purpose of computer systems is to allow the user to execute
programs. So the operating system provides an environment where the user can
conveniently run programs. Running a program involves allocating and
deallocating memory, and CPU scheduling in the case of multiprocessing.
2. I/O Operations
Each program requires input and produces output. This involves the
use of I/O. The operating system provides I/O operations, making it convenient for
users to run programs.
3. File System Manipulation
The output of a program may need to be written into new files or input
taken from some files. The operating system provides this service.
4. Communications
The processes need to communicate with each other to exchange
information during execution. It may be between processes running on the
same computer or on different computers. Communication can
occur in two ways: (i) shared memory or (ii) message passing.
5. Error Detection
An error in one part of the system may cause malfunctioning of the
complete system. To avoid such situations, the operating system constantly
monitors the system to detect errors. This relieves the user of the worry
of errors propagating to various parts of the system and causing malfunctions.
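As a brief illustration of the communications service above (item 4), here is a minimal sketch of message passing between a parent and a child process using a POSIX pipe; the message text is purely illustrative.

    /* Minimal sketch of inter-process communication by message passing,
     * using a POSIX pipe between a parent and a child process. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];                      /* fd[0] = read end, fd[1] = write end */
        if (pipe(fd) == -1) { perror("pipe"); return 1; }

        pid_t pid = fork();             /* create a child process */
        if (pid == 0) {                 /* child: reads the message */
            char buf[64];
            close(fd[1]);               /* child does not write */
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
            close(fd[0]);
            return 0;
        }
        /* parent: writes the message */
        close(fd[0]);                   /* parent does not read */
        const char *msg = "hello from parent";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);                     /* wait for the child to finish */
        return 0;
    }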
Following are the three services provided by operating systems for
ensuring the efficient operation of the system itself.
1. Resource allocation
When multiple users are logged on the system or multiple jobs are
running at the same time, resources must be allocated to each of them. Many
different types of resources are managed by the operating system.
2. Accounting
The operating systems keep track of which users use how many and
which kinds of computer resources. This record keeping may be used for
accounting (so that users can be billed) or simply for accumulating usage
statistics.
3. Protection
When several disjointed processes execute concurrently, it should not be
possible for one process to interfere with the others, or with the operating
system itself. Protection involves ensuring that all access to system resources
is controlled. Security of the system from outsiders is also important. Such
security starts with each user having to authenticate himself to the system, usually
by means of a password, to be allowed access to the resources.

V.3 System call


In computing, a system call is how a program requests a service from an operating system's kernel
that it does not normally have permission to run. System calls provide the interface between a process
and the operating system. Application developers often do not have direct access to the system calls,
but can access them through an application-programming interface (API). The functions that are
included in the API invoke the actual system calls.
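As a hedged illustration on a Linux system with glibc, the sketch below contrasts calling the write() API function with invoking the same kernel service through the generic syscall() wrapper; either path ends up executing the write system call.

    /* The write() library function (API) ultimately issues the write system
     * call; the same request can also be made through syscall(). */
    #include <unistd.h>
    #include <string.h>
    #include <sys/syscall.h>

    int main(void) {
        const char *msg = "via the C library API\n";
        write(STDOUT_FILENO, msg, strlen(msg));               /* API wrapper around the syscall */

        const char *raw = "via syscall() directly\n";
        syscall(SYS_write, STDOUT_FILENO, raw, strlen(raw));  /* invoke the kernel service by number */
        return 0;
    }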

V.4 Interrupt
An interrupt is an interruption of the normal execution of a program. When the CPU is interrupted,
it stops its current activity, such as execution of the program, and transfers control to an interrupt
handler that services the interrupting device or event. The three types of interrupts are

 software interrupts or trap (syscall) - invoked by software


 external interrupts - invoked by external devices
 exceptions - invoked by the processor when errors occur
Interrupt Handling
The code that is installed at the target address for interrupts is called an interrupt handler. The first
thing that it has to do is save the state of the currently executing process. Then it calls a subprogram
to deal with the specific type of interrupt. When that subprogram returns, the interrupt handler restores
the state of the process that was executing when the interrupt occurred.
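The sketch below is only a user-space analogy, assuming a POSIX system: a signal handler mimics the save-handle-resume pattern described above, although a real interrupt handler runs inside the kernel.

    /* User-space analogy only: a POSIX signal handler behaves somewhat like an
     * interrupt handler -- normal execution is suspended, a handler runs, and
     * execution then resumes where it left off. This is a sketch, not kernel code. */
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t interrupted = 0;

    static void on_sigint(int signum) {    /* the "handler" for the interrupt */
        (void)signum;
        interrupted = 1;                   /* only set a flag; keep handlers short */
    }

    int main(void) {
        struct sigaction sa = {0};
        sa.sa_handler = on_sigint;
        sigaction(SIGINT, &sa, NULL);      /* install the handler for Ctrl-C */

        while (!interrupted) {
            puts("working...");            /* normal execution of the program */
            sleep(1);
        }
        puts("interrupt handled, resuming cleanup and exiting");
        return 0;
    }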

VII. TYPES OF OPERATING SYSTEMS

Following are a few of the important types of operating systems which are most commonly used.

VI.1 Real-time operating system (RTOS)

Abbreviated as RTOS, a real-time operating system or embedded operating system is a computer


operating system designed to handle events as they occur. Real-time systems are used in places
that require a fast and timely response. These types of systems are used in
reservation systems. They are also found in robotics, communications, and various military and
government applications.

VI.2 Single-user, single task


As the name implies, this operating system is designed to manage the computer so that one user can
effectively do one thing at a time. The Palm OS for Palm handheld computers is a good example of
a modern single-user, single-task operating system.

VI.3 Single-user, multi-tasking

This is the type of operating system most people use on their desktop and laptop computers today.
Microsoft's Windows and Apple's MacOS platforms are both examples of operating systems that will
let a single user have several programs in operation at the same time.

VI.4 Multi-user

A multi-user operating system allows many different users to take advantage of the computer's
resources simultaneously. Unix, VMS and mainframe operating systems, such as MVS, are examples
of multi-user operating systems.

VI.5 Multiprocessing OS
A multiprocessing OS has two or more processors for a single running process. Processing
takes place in parallel and is also called parallel processing. Each processor works on different
parts of the same task, or, on two or more different tasks. Linux, UNIX and Windows 7 are examples
of multiprocessing OS.

VI.6 Time sharing Operating System:

Multitasking or time sharing refers to a technique where multiple jobs are executed by the CPU
simultaneously by switching between them. Switches occur so frequently that the users may interact
with each program while it is running.

VI.7 Distributed operating System


Distributed means “data is stored and processed on multiple locations”. A distributed operating
system manages a collection of independent computers and makes them appear to the users of the
system as a single computer. Users are not aware of the multiplicity of machines. Access to remote
resources is similar to access to local resources.

VI.8 Network operating System


A Network Operating System (NOS) runs on a server and provides the server with the capability to manage
data, users, groups, security, applications, and other networking functions. The primary purpose of
the network operating system is to allow shared file and printer access among multiple computers in
a network, typically a LAN, a private network or to other networks. Examples of network operating
systems are Microsoft Windows Server 2003, Microsoft Windows Server 2008, UNIX, Linux, Mac
OS X, Novell NetWare, and BSD.

VI.9 Embedded Operating System:


An embedded operating system refers to the operating system that is self-contained in the device and
resident in the read-only memory (ROM).

VII PROCESS MANAGEMENT

VII. 1 What is a process?


A process is a name given to a program instance that has been loaded into memory and managed by
the operating system. Shortly, a process is a program in execution. (Program = static file (image).
Process = executing program = program + execution state)

What is the difference between process and program?

1) Both are the same entity under different names: when it is not executing it is
called a program, and when it is executing it becomes a process.
2) A program is a static object whereas a process is a dynamic object.
3) A program resides in secondary storage whereas a process resides in main memory.
4) The life span of a program is unlimited but the life span of a process is limited.
5) A process is an 'active' entity whereas a program is a 'passive' entity.
6) A program is an algorithm expressed in a programming language, whereas a
process is that program loaded into memory and being executed as machine instructions.

VII.2 Process States


A process in execution will have to pass through different states:

→ New: The process is being created.


→ Ready: The process is ready to be assigned to the processor. It has all the resources required to
be executed but needs the CPU. A process in the ready state is runnable but is temporarily
stopped to let another process run.
→ Running: The process is currently using the CPU to execute its instructions.
→ Waiting: The process is waiting for a signal, a resource or an event before running.
→ Terminated: A process that finishes its execution is terminated.
Steps followed in the Process execution:

 A process when first created will be in the New state; then, while it is waiting for CPU time in
order to get executed, it will be in the Ready state. In the Ready state the process waits for CPU
time in the Ready Queue (a queue where processes wait for the CPU).
 Once the CPU becomes free and the process acquires it, the process enters the Running state,
where it executes its instructions. If during this stage an interrupt occurs, the process moves
back to the Ready state and waits for the scheduler/dispatcher. If it has an I/O operation, it
moves to the Waiting state, where it performs the I/O operation, then moves back to the Ready state
and waits for the scheduler/dispatcher.
 After the process has finished executing, it terminates.
The change of a process from one state to another is called a state transition; saving the state of the
running process and restoring that of the next one to run is known as context switching.

Fig: Process states

PROCESS CONTROL BLOCK

The Process Control Block is a data structure that contains information related to a process.
The process control block is also known as a task control block, an entry of the process
table, etc.
The following is stored inside the PCB
Process State - It can be running, waiting etc.
Process ID and parent process ID.
CPU registers and Program Counter.
Program Counter holds the address of the next instruction to be executed for that
process.
CPU Scheduling information - Such as priority information and pointers to scheduling
queues.
Memory Management information - Eg. page tables or segment tables.
Accounting information - user and kernel CPU time consumed, account numbers, limits, etc.
I/O Status information - Devices allocated, open file tables, etc.
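A simplified sketch of how the fields listed above might be grouped into a C structure is shown below; the field names and sizes are illustrative assumptions, and real kernels (for example, Linux's task_struct) hold far more information.

    /* An illustrative, simplified sketch of a PCB as a C structure. */
    #include <stdint.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int             pid;             /* process ID */
        int             ppid;            /* parent process ID */
        enum proc_state state;           /* current process state */
        uint64_t        program_counter; /* address of the next instruction */
        uint64_t        registers[16];   /* saved CPU registers */
        int             priority;        /* CPU-scheduling information */
        void           *page_table;      /* memory-management information */
        uint64_t        user_time;       /* accounting: CPU time in user mode */
        uint64_t        kernel_time;     /* accounting: CPU time in kernel mode */
        int             open_files[32];  /* I/O status: open file descriptors */
        struct pcb     *next;            /* link to the next PCB in a queue */
    };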
Process Scheduling Queues

 Job Queue: This queue consists of all processes in the system; processes
enter the system through this queue as new processes.
 Ready Queue: This queue consists of the processes that are residing in
main memory and are ready and
waiting to be executed by the CPU. This queue is generally stored as a linked list.
A ready-queue header contains pointers to the first and final PCBs in the list.
Each PCB includes a pointer field that points to the next PCB in the ready
queue.
 Device Queue: This queue consists of the processes that are waiting
for a particular I/O device. Each device has its own device queue.

Schedulers

A scheduler is a decision maker that selects the processes from one


scheduling queue to another or allocates CPU for execution. The Operating
System has three types of scheduler:
1. Long-term scheduler or Job scheduler
2. Short-term scheduler or CPU scheduler
3. Medium-term scheduler

Long-term scheduler or Job scheduler

 The long-term scheduler or job scheduler selects processes from disk and
loads them into main memory for execution. It executes much less
frequently.
 It controls the degree of multiprogramming (i.e., the number of processes in
memory).
 Because of the longer interval between executions, the long-term
scheduler can afford to take more time to select a process for execution.

Short-term scheduler or CPU scheduler

 The short-term scheduler or CPU scheduler selects a process from


among the processes that are ready to execute and allocates the CPU.

Medium-term scheduler

The medium-term scheduler provides an intermediate level of scheduling: it temporarily removes
(swaps out) processes from main memory and later reintroduces them, reducing the degree of
multiprogramming when necessary.
Processes can be described as either:
✦I/O-bound process – spends more time doing I/O than computations, many short
CPU bursts.
✦CPU-bound process – spends more time doing computations; few very long CPU
bursts.

Context Switch

• In computing, a context switch is the process of storing the state of
a process or thread so that it can be restored and resume execution
at a later point.

Dispatch latency – time it takes for the dispatcher to stop one process and start
another running.
VII.3 Process Scheduling
In a multiprogrammed system, at any given time, several processes will be competing for the CPU
time. Thus, a choice has to be made which process to allocate the CPU next. This procedure of
determining the next process to be executed on the CPU is called process scheduling and the module
of operating system that makes this decision is called a scheduler.

VII.3.1 Preemptive and Non-preemptive Scheduling

Preemptive scheduling allows a process to be interrupted in the midst of its execution, taking the
CPU away and allocating it to another process. Nonpreemptive scheduling ensures that a process
relinquishes control of the CPU only when it finishes with its current
CPU burst.

CPU-scheduling decisions may take place under the following four circumstances:

1. When a process switches from the running state to the waiting state (for example, as the
result of an I/O request,…).
2. When a process switches from the running state to the ready state (for example, when an
interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example, at
completion of I/O, on a semaphore, or for some other reason).
4. When a process terminates.

For situations 1 and 4, there is no choice in terms of scheduling. A new process (if one exists in the
ready queue) must be selected for execution.

There is a choice, however, for situations 2 and 3. When scheduling takes place only under
circumstances 1 and 4, we say that the scheduling scheme is nonpreemptive or cooperative;
otherwise, it is pre-emptive.

VII.3.2 Algorithm to select process to execute

The scheduler uses some scheduling procedure to carry out the selection of a process for execution.
The efficiency of each algorithm is judged according to the average waiting time and the average
turnaround time.

1. Arrival Time: the time at which a process enters the ready queue.
2. Burst Time: the amount of CPU time required by a process to execute.
3. Completion Time: the time at which a process completes its execution.
4. Turnaround Time: the total time spent by a process in the system.
Turnaround Time = Completion Time – Arrival Time = Burst Time + Waiting Time
5. Waiting Time: the amount of time a process spends waiting in the ready queue for the CPU.
Waiting Time = Turnaround Time – Burst Time
1) First-Come-First-Served:
As the name suggests, in FCFS scheduling, the processes are executed in the order of their arrival in
the ready queue. To implement FCFS scheduling procedure, the ready queue is managed as a first-in
first-out (FIFO) queue. It is a nonpreemptive scheduling.

Example:

Process   Arrival time   Burst time   Service Time   Wait Time = Service Time – Arrival Time
P0        0              5            0              0 – 0 = 0
P1        1              3            5              5 – 1 = 4
P2        2              8            8              8 – 2 = 6
P3        3              6            16             16 – 3 = 13

Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
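A minimal sketch in C of how these FCFS figures can be computed is shown below; it assumes the processes are already sorted by arrival time and simply walks through them, reproducing the table above.

    /* Sketch that reproduces the FCFS table: service (start) time, wait time
     * and the average wait, for processes sorted by arrival time. */
    #include <stdio.h>

    struct proc { const char *name; int arrival; int burst; };

    int main(void) {
        struct proc p[] = { {"P0",0,5}, {"P1",1,3}, {"P2",2,8}, {"P3",3,6} };
        int n = 4, clock = 0, total_wait = 0;

        printf("Proc  Arrival  Burst  Service  Wait\n");
        for (int i = 0; i < n; i++) {
            if (clock < p[i].arrival)        /* CPU idle until the job arrives */
                clock = p[i].arrival;
            int service = clock;             /* time the job first gets the CPU */
            int wait = service - p[i].arrival;
            total_wait += wait;
            printf("%-5s %7d %6d %8d %5d\n",
                   p[i].name, p[i].arrival, p[i].burst, service, wait);
            clock += p[i].burst;             /* runs to completion (non-preemptive) */
        }
        printf("Average wait time = %.2f\n", (double)total_wait / n);
        return 0;
    }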

2) Round Robin:
Here each process is provided a fixed time to execute, called a quantum or time slice. After this time has
elapsed, the process is preempted and added to the end of the ready queue, and another process executes
for the given time period. Context switching is used to save the states of preempted processes.

Example: Quantum = 3

Gantt chart: P0 | P1 | P2 | P3 | P0 | P2 | P3 | P2
             0   3   6   9   12  14  17  20  22

Process   Arrival time   Burst Time   Wait Time
P0        0              5            (0 – 0) + (12 – 3) = 9
P1        1              3            (3 – 1) = 2
P2        2              8            (6 – 2) + (14 – 9) + (20 – 17) = 12
P3        3              6            (9 – 3) + (17 – 12) = 11

Average Wait Time: (9 + 2 + 12 + 11) / 4 = 8.5
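The following sketch simulates the same round-robin example in C with a quantum of 3, assuming that processes arriving during a time slice are enqueued before the preempted process and that the CPU is never idle; it reproduces the wait times above.

    /* Sketch of round-robin scheduling (quantum = 3) for the example above.
     * Waiting time = turnaround time - burst time. Assumes the ready queue
     * is never empty before all jobs finish (no CPU idle time). */
    #include <stdio.h>

    struct proc { const char *name; int arrival, burst, remaining, finish; };

    int main(void) {
        struct proc p[] = { {"P0",0,5,5,0}, {"P1",1,3,3,0},
                            {"P2",2,8,8,0}, {"P3",3,6,6,0} };
        const int n = 4, quantum = 3;
        int queue[64], head = 0, tail = 0, in_queue[4] = {0};
        int time = 0, done = 0;

        queue[tail++] = 0; in_queue[0] = 1;         /* P0 has arrived at t = 0 */
        while (done < n) {
            int i = queue[head++];                  /* dequeue the next process */
            int slice = p[i].remaining < quantum ? p[i].remaining : quantum;
            time += slice;                          /* run it for one time slice */
            p[i].remaining -= slice;

            for (int j = 0; j < n; j++)             /* enqueue new arrivals first */
                if (!in_queue[j] && p[j].arrival <= time) {
                    queue[tail++] = j; in_queue[j] = 1;
                }
            if (p[i].remaining > 0)
                queue[tail++] = i;                  /* preempted: back of the queue */
            else {
                p[i].finish = time;                 /* completed */
                done++;
            }
        }
        int total_wait = 0;
        for (int i = 0; i < n; i++) {
            int wait = (p[i].finish - p[i].arrival) - p[i].burst;
            total_wait += wait;
            printf("%s wait = %d\n", p[i].name, wait);
        }
        printf("Average wait = %.2f\n", (double)total_wait / n);
        return 0;
    }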

3) Shortest-Job-First Scheduling, SJF


The idea behind the SJF algorithm is to pick the quickest, smallest job that needs to be done, get it
out of the way first, and then pick the next smallest job to do next. (Technically this algorithm
picks a process based on the next shortest CPU burst, not the overall process time.) For example,
the Gantt chart below is based upon the following arrival and CPU burst times.

Gantt chart: P0 | P1 | P3 | P2
             0   5   8   14  22

Process   Arrival time   Burst time   Service Time   Wait Time = Service Time – Arrival Time
P0        0              5            0              0 – 0 = 0
P1        1              3            5              5 – 1 = 4
P2        2              8            14             14 – 2 = 12
P3        3              6            8              8 – 3 = 5

Average Wait Time: (0 + 4 + 12 + 5) / 4 = 5.25

4) Priority Scheduling
Each process is assigned a priority. The process with the highest priority is executed first, and so on.
Processes with the same priority are executed on a first come first served basis. Priority can be decided based
on memory requirements, time requirements or any other resource requirement. A major problem with
priority-scheduling algorithms is indefinite blocking of one or more processes (called starvation).

Gantt chart: P0 | P3 | P1 | P2
             0   5   11  14  22   (assuming the same burst times as in the previous examples)

Wait time of each process is as follows:

Process   Wait Time = Service Time – Arrival Time
P0        0 – 0 = 0
P1        11 – 1 = 10
P2        14 – 2 = 12
P3        5 – 3 = 2

Average Wait Time: (0+10+12+2) / 4 = 6

5) Shortest-Remaining-Time (SRT) Scheduling


Shortest remaining time, also known as shortest remaining time first (SRTF), is the preemptive
counterpart of SJF and is useful in a time-sharing environment. The process with the smallest estimated
run-time to completion is run next, including new arrivals: whenever a new process arrives with a
shorter remaining time than the currently running process, the running process is preempted.

VI. Operations on Process


a) Process Creation
Through appropriate system calls, such as fork or spawn, processes may create other processes.
The process which creates another process is termed the parent, while the
created sub-process is termed its child.
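A minimal sketch of process creation on a POSIX system is shown below; it assumes a C environment where fork() is available.

    /* Sketch of process creation with fork(): the parent creates a child,
     * and both continue from the same point in the program. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();                 /* create a child process */

        if (pid < 0) {
            perror("fork failed");
            return 1;
        } else if (pid == 0) {
            /* child process: fork() returned 0 */
            printf("child:  pid=%d, parent=%d\n", getpid(), getppid());
        } else {
            /* parent process: fork() returned the child's pid */
            printf("parent: pid=%d, created child %d\n", getpid(), pid);
            wait(NULL);                     /* wait for the child to terminate */
        }
        return 0;
    }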

b) Process Termination

By making the exit() system call, typically returning an int, processes may request their own
termination. This int is passed along to the parent if it is doing a wait(), and is typically zero on
successful completion and some non-zero code in the event of any problem.

Processes may also be terminated by the system for a variety of reasons, including :

The inability of the system to deliver the necessary system resources.


In response to a KILL command or other unhandled process interrupts.
A parent may kill its children if the task assigned to them is no longer needed, i.e. if the
need for having the child has ended.
If the parent exits, the system may or may not allow the child to continue without a parent
(in UNIX systems, orphaned processes are generally inherited by init, which then
reaps them when they terminate).

When a process ends, all of its system resources are freed up, open files flushed and closed, etc.
The process termination status and execution times are returned to the parent if the parent is
waiting for the child to terminate, or eventually returned to init if the process already became an
orphan.

Processes which have already terminated but whose parent has not yet collected their exit status
(via wait) are termed zombies. If the parent itself exits, they are inherited by init, which reaps them.
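Continuing the POSIX sketch, the example below shows a child terminating with exit() and the parent collecting its status with waitpid(), which is what prevents the child from lingering as a zombie.

    /* Sketch of process termination: the child exits with a status code and
     * the parent collects it with waitpid(), reaping the child. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {
            /* child: request its own termination with a non-zero status */
            exit(42);
        }
        int status;
        waitpid(pid, &status, 0);            /* parent reaps the child */
        if (WIFEXITED(status))               /* did the child exit normally? */
            printf("child %d exited with status %d\n", pid, WEXITSTATUS(status));
        return 0;
    }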

VII. Notion of Threads


Thread is an execution unit which consists of its own program counter, a stack, and a set of
registers. Threads are also known as Lightweight processes. Threads are popular way to improve
application through parallelism. The CPU switches rapidly back and forth among the threads
giving illusion that the threads are running in parallel.

As each thread has its own independent set of execution resources, the work of a process can
be executed in parallel by increasing the number of threads.

Why Multithreading?
A thread is also known as a lightweight process. The idea is to achieve parallelism by dividing a
process into multiple threads. For example, in a browser, multiple tabs can be different threads.
MS Word uses multiple threads: one thread to format the text, another thread to process inputs, etc.
More advantages of multithreading are discussed below.
Process vs Thread?
The typical difference is that threads within the same process run in a shared memory space,
while processes run in separate memory spaces.
Threads are not independent of one another the way processes are; as a result, threads share with other
threads their code section, data section and OS resources such as open files and signals. But, like a process,
a thread has its own program counter (PC), a register set, and a stack space.

Advantages of Thread over Process
1. Responsiveness: If a process is divided into multiple threads, then when one thread completes its
execution, its output can be returned immediately while the other threads continue.

2. Faster context switch: Context switch time between threads is less than that of a process context
switch, which involves more overhead for the CPU.

3. Effective utilization of multiprocessor systems: If we have multiple threads in a single process, then we
can schedule the threads on multiple processors. This makes process execution faster.

4. Resource sharing: Resources like code, data and files can be shared among all threads within a process.
Note: the stack and registers can't be shared among the threads. Each thread has its own stack and registers.

5. Communication: Communication between multiple threads is easier because the threads share a common
address space, while for processes we have to follow some specific communication technique for
communication between two processes.

6. Enhanced throughput of the system: If a process is divided into multiple threads and each thread's
function is considered as one job, then the number of jobs completed per unit time increases, thus
increasing the throughput of the system.
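As a hedged illustration of these points, the POSIX threads sketch below creates two threads that share a global counter (shared address space) while each keeps its own stack; the mutex is needed to avoid a race condition on the shared data. Compile with -lpthread.

    /* Two threads within one process share the same address space (the
     * global counter), while each has its own stack (the loop variable). */
    #include <pthread.h>
    #include <stdio.h>

    static int shared_counter = 0;                    /* shared data section */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        const char *name = arg;
        for (int i = 0; i < 100000; i++) {            /* local variable: own stack */
            pthread_mutex_lock(&lock);                /* protect the shared data */
            shared_counter++;
            pthread_mutex_unlock(&lock);
        }
        printf("%s finished\n", name);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, "thread 1");
        pthread_create(&t2, NULL, worker, "thread 2");
        pthread_join(t1, NULL);                       /* wait for both threads */
        pthread_join(t2, NULL);
        printf("shared_counter = %d\n", shared_counter);  /* 200000 */
        return 0;
    }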

VIII DEADLOCK

VIII.1 Definition and examples of deadlock


Let's consider a situation where two utility programs A and B each want to copy a file from tape
to disk and print the file on a printer:
- A holds the tape and disk, then requests the printer.
- B holds the printer, then requests the tape and disk.
- A tries to get ownership of the printer, but is told to wait for B to
release it.
- B tries to get ownership of the tape, but is told to wait for A to release
it.
Neither process will be able to proceed, and the two processes will remain
in deadlock.
Deadlock is a situation where a set of processes are blocked because each process is holding a resource and
waiting for another resource acquired by some other process. Consider an
example where two trains are coming toward each other on the same track and there is only one track: neither
train can move once they are in front of each other. A similar situation occurs in operating systems when two
or more processes hold some resources and wait for resources held by the other(s). For example, in the
diagram below, Process 1 is holding Resource 1 and waiting for Resource 2, which is acquired by Process 2,
and Process 2 is waiting for Resource 1.
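The same A/B scenario can be reproduced in miniature with POSIX threads and two mutexes standing in for the devices; the sketch below is deliberately broken and will normally hang, which is exactly the deadlock being described.

    /* Deliberately broken sketch: two threads acquire two locks in opposite
     * order, reproducing the A/B deadlock. Running it will usually hang. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t tape_and_disk = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t printer       = PTHREAD_MUTEX_INITIALIZER;

    static void *program_a(void *arg) {
        (void)arg;
        pthread_mutex_lock(&tape_and_disk);   /* A holds tape and disk */
        sleep(1);                             /* give B time to grab the printer */
        printf("A: waiting for the printer...\n");
        pthread_mutex_lock(&printer);         /* blocks: B holds the printer */
        pthread_mutex_unlock(&printer);
        pthread_mutex_unlock(&tape_and_disk);
        return NULL;
    }

    static void *program_b(void *arg) {
        (void)arg;
        pthread_mutex_lock(&printer);         /* B holds the printer */
        sleep(1);
        printf("B: waiting for tape and disk...\n");
        pthread_mutex_lock(&tape_and_disk);   /* blocks: A holds tape and disk */
        pthread_mutex_unlock(&tape_and_disk);
        pthread_mutex_unlock(&printer);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, program_a, NULL);
        pthread_create(&b, NULL, program_b, NULL);
        pthread_join(a, NULL);                /* never returns once deadlocked */
        pthread_join(b, NULL);
        return 0;
    }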

Other examples of deadlocks

1) Bridge traffic can only be in one


direction.
 Each entrance of the bridge can be
viewed as a resource.
 Starvation is possible (Processes wait indefinitely).

2) In the automotive world deadlocks are called gridlocks.


 The processes are the cars.
 The resources are the spaces occupied by the cars

VIII.2 Necessary Conditions


The following four conditions (Coffman; Havender) are necessary but not sufficient for deadlock. Repeat: They
are not sufficient.

1. Mutual Exclusion Condition: The resources involved are non-shareable.


Explanation: At least one resource must be held in a non-shareable mode, that is, only one process
at a time claims exclusive control of the resource. If another process requests that resource, the requesting
process must be delayed until the resource has been released.
2. Hold and Wait Condition: A requesting process already holds resources while waiting for
additional requested resources.
Explanation: There must exist a process that is holding a resource already allocated to it while waiting
for additional resources that are currently being held by other processes.
3. No-Preemption Condition: Resources already allocated to a process cannot be
preempted, i.e. a resource cannot be taken from a process unless the process releases the
resource.

Explanation: Resources cannot be removed from a process; they are released only after the process has used
them to completion, or voluntarily by the process holding them.

4. Circular Wait Condition: The processes in the system form a circular list or chain where each
process in the list is waiting for a resource held by the next process in the list. There exists a set
{P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting
for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a
resource that is held by P0.
Note: It is not possible to have a deadlock involving only one single process. Deadlock involves a circular
“hold-and-wait” condition between two or more processes, so one process cannot hold a resource and yet be
waiting for a resource that only it holds.

VIII.3 Resource allocation graph

The deadlock conditions can be modelled using a directed graph called a resource allocation graph (RAG).
Below is the Resource Allocation Graph,

 The processes are circles.


 The resources are squares.
 An arc (directed line) from a process P to a resource R
signifies that process P has requested (but not yet been
allocated) resource R.
 An arc from a resource R to a process P indicates that process
P has been allocated resource R.
If the graph does not contain a cycle, then no deadlock exists. If the graph does contain a cycle, then a deadlock
might exist

Fig: (left) an example of a situation with no deadlock; (right) an example of a deadlock situation

NB: The presence of a cycle in a RAG is a necessary and not a sufficient condition for the deadlock to occur. It
becomes sufficient when there is only one instance of each resource.
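A small sketch of how such a cycle can be detected is given below, using a depth-first search over an adjacency matrix; the node numbering and the specific edges are assumptions taken from the two-process example in section VIII.1 (P1 holds R1 and requests R2; P2 holds R2 and requests R1).

    /* Cycle detection in a resource allocation graph with depth-first search.
     * Nodes 0..1 are processes (P1, P2), nodes 2..3 are resources (R1, R2).
     * With single-instance resources, a cycle means deadlock. */
    #include <stdio.h>
    #include <stdbool.h>

    #define N 4
    enum { WHITE, GREY, BLACK };                 /* unvisited, in progress, done */

    static bool dfs(int u, const int adj[N][N], int colour[N]) {
        colour[u] = GREY;
        for (int v = 0; v < N; v++) {
            if (!adj[u][v]) continue;
            if (colour[v] == GREY) return true;  /* back edge => cycle */
            if (colour[v] == WHITE && dfs(v, adj, colour)) return true;
        }
        colour[u] = BLACK;
        return false;
    }

    int main(void) {
        /* adj[u][v] = 1 means an arc u -> v in the RAG */
        int adj[N][N] = {0};
        adj[0][3] = 1;   /* P1 -> R2 : P1 requests R2 */
        adj[3][1] = 1;   /* R2 -> P2 : R2 is allocated to P2 */
        adj[1][2] = 1;   /* P2 -> R1 : P2 requests R1 */
        adj[2][0] = 1;   /* R1 -> P1 : R1 is allocated to P1 */

        int colour[N] = {WHITE, WHITE, WHITE, WHITE};
        bool cycle = false;
        for (int u = 0; u < N && !cycle; u++)
            if (colour[u] == WHITE)
                cycle = dfs(u, adj, colour);

        printf(cycle ? "cycle found: deadlock possible\n"
                     : "no cycle: no deadlock\n");
        return 0;
    }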

VIII.4 Methods for Handling Deadlocks
Generally speaking, there are three ways of handling deadlocks:
1) Deadlock prevention or avoidance: The idea is to not let the system enter a deadlock state.

2) Deadlock detection and recovery: Let deadlock occur, detect it, and then use preemption to handle it. In
resource preemption, the operator or system preempts some resources from processes and gives these
resources to other processes until the deadlock cycle is broken.

3) Ignore the problem altogether: If deadlock is very rare, then let it happen and reboot the system.
Both Windows and UNIX take this approach.
If preemption is required to deal with deadlocks, then three issues need to be addressed:
1. Selecting a victim: The system or operator selects which resources and which processes are
to be preempted based on cost factor.
2. Rollback: The system or operator must roll back the process to some safe state and restart it from that
state.
3. Starvation: The system or operator should ensure that resources are not always preempted from the
same process, i.e. a process should not be left indefinitely in a blocked state.

X FILE MANAGEMENT

X.1 Notion of file and file management system


A file can be defined as a collection of related information recorded on secondary storage (e.g., disks). Almost all
information stored in a computer must be in a file. A location for storing files on your computer is called a folder
or a directory.

A file management system is the set of system software that provides services to users and applications in the
use of files. The following are objectives for a file management system:
 To meet the data management needs and requirements of the user, which include storage of data and the
ability to perform the aforementioned operations.
 To guarantee, to the extent possible, that the data in the file are valid.
 To optimize performance, both from the system point of view (overall throughput) and from the user's
point of view (response time).
 To provide I/O support for a variety of storage device types.
 To minimize or eliminate the potential for lost or destroyed data.
 To provide a standardized set of I/O interface routines to user processes.
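As a hedged illustration of these services on a POSIX system, the sketch below exercises the basic file-manipulation primitives (create, write, read, delete); the file name demo.txt is purely illustrative.

    /* Sketch of the basic file-manipulation primitives exposed to user
     * programs on a POSIX system. */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        const char *msg = "hello, file management\n";
        char buf[64];

        int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644); /* create */
        if (fd < 0) { perror("open"); return 1; }
        write(fd, msg, strlen(msg));                                   /* write  */
        close(fd);

        fd = open("demo.txt", O_RDONLY);                               /* reopen */
        ssize_t n = read(fd, buf, sizeof(buf) - 1);                    /* read   */
        if (n > 0) { buf[n] = '\0'; printf("read back: %s", buf); }
        close(fd);

        unlink("demo.txt");                                            /* delete */
        return 0;
    }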
X.2 What is a File system?

1) Definition
File system is a system for organizing directories and files, generally in terms of how it is implemented in the
disk operating system. While the memory manager is responsible for the maintenance of primary memory, the
file manager is responsible for the maintenance of the file system.

2) Example of file systems


a) Windows file systems
→ FAT (File Allocation Table): The FAT file system is one of the simplest types of file systems. There
exist different types of FAT: FAT12, FAT16, FAT32. The number in FAT12, FAT16, FAT32 stands
for the number of bits used to enumerate a file system block. This means that FAT12 may use up to 4096
different block references, FAT16 65536, and FAT32 4294967296.
→ NTFS (New Technology File System): This is the default file system for Windows disk partitions and the only
file system supported for disk partitions over 32 GB.
→ ReFS (Resilient File System): ReFS is the latest development from Microsoft, presently available for
Windows 8 Server editions.

b) MacOS file systems


Apple Mac OS operating system applies HFS+ file system, an extension to their own HFS file system that was
used on old Macintosh computers. HFS+ file system is applied to Apple desktop products, including Mac
computers, iPhone, iPod, as well as Apple X Server products. Advanced server products also use Apple Xsan
file system, clustered file system derived from StorNext or CentraVision file systems.

c) Linux file systems


Among the huge number of file system types, the most popular Linux file systems nowadays are:

→ Ext2, Ext3, Ext4 - the 'native' Linux file systems. This family is under active development and
improvement. Ext4 is frequently used as the 'root' file system for most Linux installations.
→ ReiserFS - an alternative Linux file system designed to store huge amounts of small files.
→ XFS - a file system developed by SGI, which initially used it for its IRIX servers. XFS is
now implemented in Linux.
→ JFS - a file system developed by IBM for its powerful computing systems. Currently this file system is
open-source and is implemented in most modern Linux distributions.

d) BSD, Solaris, Unix file systems

→ UFS (Unix File System), also often referred to as FFS (Fast File System – fast compared to a previous file
system used for Unix).
→ ZFS for Solaris,

XI DEVICE MANAGEMENT

Device management in an operating system refers to the process of managing various devices connected to the
computer. The device manager manages the hardware resources and provides an interface to the hardware for
application programs. A device communicates with the computer system by sending signals over a cable. The
device communicates with the machine through a connection point called a port.

Broadly, managing input and output is a matter of managing queues and buffers. A buffer is a temporary storage
area that holds a stream of bits coming from or going to a device, such as a keyboard or a serial communication
port. Buffers hold the bits and then release them to the CPU at a convenient rate so that the CPU can act on them.

Spooling: SPOOL stands for simultaneous peripheral operation on-line. Spooling refers to storing jobs in a
buffer so that CPU can be efficiently utilized. Spooling is useful because devices access data at different rates.
The buffer provides a waiting station where data can rest while the slower device catches up. The most common
spooling application is print spooling. In print spooling, documents are loaded into a buffer, and then the printer
pulls them off from the buffer at its own rate.

XII SECURITY MANAGEMENT

Security in terms of a computer system covers every aspect of its protection in case of a catastrophic event,
corruption of data, loss of confidentiality and so on. Security requires ample protection not only within the system,
but also from the external environment, in which the system operates. Various security techniques employed by
the operating system to secure the information are user authentication and backup of data.

1) User Authentication
The process of authenticating users can be based on a user's possession like a key or card, user information
like the username and password or user attributes like fingerprints and signature. Among these techniques,
user information is often the first and most significant line of defence in a multiuser system. Unfortunately,
passwords can often be guessed, illegally transferred or exposed. To avoid such situations, a user should keep the
following points in mind:
1) Password should be at least six characters in length.
2) The system should keep track of any event about any attempt to break the password.
3) The system should allow limited number of attempts for submitting a password on a particular
system.
4) Passwords based on dictionary words should be discouraged by the system. Alphanumeric
passwords, such as PASS011, should be used instead.

2) Backup of Data
To back up is to copy files to a second medium (a disk or tape) as a precaution in case the first medium fails. One
of the cardinal rules in using computers is: back up your files regularly. The operating system should provide a feature

of backing up of data, for example, from a disk to another storage device such as a floppy disk or an optical disk.
The purpose of keeping backups is to be able to restore individual files or complete file system in case of data
loss.

3) Data encryption
Encryption is the conversion of data (called plaintext) into a form (called ciphertext) that cannot be easily
understood by unauthorised users. Before being able to understand the content of the data, the receiver must be in
possession of a decryption key used to decrypt the ciphertext.

XIII EXAMPLES OF OS

XIII.1 Commonly used OS


Some of the commonly used operating systems are discussed below:

1. DOS (Disk Operating System): MS-DOS was the widely used operating system before the introduction of
the Windows operating system. Even now MS-DOS commands are used for carrying out many jobs like
copying files, deleting files etc. The main functions of DOS are to manage files and allocate system
resources according to requirements. It provides essential features to control hardware devices such as the
keyboard, screen, disk drives, printers, modems etc.
2. Windows: Microsoft launched Windows 1.0 operating system in 1985 and since then Windows has ruled the
world’s software market. Various versions of Windows have been launched like Windows 95, 98, Win NT,
XP, 7 and the latest being Windows 8.
3. Linux: Linux is free and open-source software, which means it is freely available for use and its source code is
also available, so anybody can use it, modify it and redistribute it. It is a very popular operating system used
and supported by many companies.
4. MAC OS (Macintosh operating system): It is the operating system developed by Apple for Mac computers
5. BOSS (Bharat Operating System Solutions): This is an Indian distribution of GNU/Linux. It consists
of Linux operating system kernel, office application suite, Bharateeya OO, Internet browser (Firefox),
multimedia applications and file sharing.
6. UNIX: It is a multitasking, multiuser operating system originally developed in 1969 at Bell Labs. It
was one of the first operating systems developed in a high level language, namely C. Due to its portability,
flexibility and power, UNIX is widely being used in a networked environment.
7. Solaris: It is a free Unix based operating system introduced by Sun Microsystems in 1992. It is now also
known as Oracle Solaris.
XIII.2 Mobile Operating Systems (Mobile OS)
It is the operating system that operates on digital mobile devices like smart phones and tablets. It extends the
features of a normal operating system for personal computers so as to include touch screen, Bluetooth, Wi-
Fi, GPS mobile navigation, camera, music player and many more. The most commonly used mobile
operating systems are Android and Symbian

Android: It is a Linux-derived mobile OS, announced on 5th November 2007, and by 2011 it had more than 50% of
the global smartphone market share. Google's open and free software includes an operating system, middleware

and some key applications for use on mobile devices. Various versions of Android OS have been released like
1.0, 1.5, 1.6, 2.x, 3.0 etc.

Symbian: This mobile OS by Nokia is designed for smartphones. It offers a high level of functional integration
between communication and personal information management. It has an integrated mailbox and it completely
facilitates the usage of all Google applications on your smartphone. Various versions like the S60 series,
S80 series, S90 series, Symbian Anna, etc. have been released.

EXERCISE 1: (25 marks)

1. What is an Operating System? (1 mark)


2. State and explain four functions of an Operating System. (4 marks)
3. What is process management? (1 mark)
4. Draw a graph showing the different states of a process, with the associated events that move a process from one
state to another. (6 marks)
5. What is process scheduling? When is it said to be preemptive or non-preemptive?
(3 marks)
6. State two examples of a preemptive and a non-preemptive algorithm. (2 marks)
7. Write short notes on each of the following scheduling algorithms: (7.5 marks)
i. FCFS;
ii. Round Robin;
iii. SJF;
iv. Priority;
v. SRT

Exercise 2 (20 marks)


1. What is a process? (1 mark)
2. What is deadlock and what are the conditions necessary for a deadlock to occur? (5 marks)
3. State any two examples of deadlock in real life. (2 marks)
4. Let us consider the following process scheduling table:

a. Draw the corresponding Gantt chart for each of the following scheduling methods:
(4 marks)
 FCFS
 SRT
b. Calculate the Turnaround Time and Waiting Time for each process, and the average waiting time for all the
processes. (5 marks)
c. What can you conclude about the two scheduling methods? (1 mark)

Exercise 3 (15 marks)


1. The processes P1 to P5 have resources R1 to R7 allocated to them in the following ways

P1 holds R1 and R7 and needs R2


P2 holds R2 and needs R3 and R4
P3 holds R3 and needs R4
P4 holds R4 and needs R6 and R5
P5 holds R6 and R5 and needs R1
b) Draw the resource allocation graph for the processes above. (5 marks)
c) From your graph, state and explain whether or not deadlock is possible. (1 mark)

2. Discuss the following types of Operating systems, giving examples. (2 marks × 8 = 16 marks)

a. Batch processing OS
b. Online OS
c. Embedded OS
d. Real time OS
e. Mobile OS
f. Network OS
g. Mono tasking OS
h. Multiprocessing OS

