Module 1 - OS
18EC71
Textbook
• OPERATING SYSTEMS CONCEPTS AND DESIGN by Milan
Milenkovic
Contents
• Introduction
• Evolution of Operating Systems
• Types of Operating Systems
• Different Views of the Operating System
• Processes: The Process Concept
• The System Programmer’s View of Processes
• The Operating System's View of Processes
• Scheduling & Scheduling Algorithms.
Introduction
“An operating system may be viewed as an organized collection of
software extensions of hardware, consisting of control routines for
operating a computer and for providing an environment for execution
of programs.”
“An operating system acts as the interface between the user and the
system hardware.”
“An operating system is a set of programs that manage computer
hardware resources and provide common services for application
software. The operating system is the most important type of system
software in a computer system.”
• Other programs rely on facilities provided by the operating system to
gain access to computer system resources such as files, memory and
input/output devices.
• Programs usually invoke the services of the operating system by means of
operating system calls.
• In addition, users may interact with the operating system directly by
means of operating system commands.
• The range and extent of services provided by an operating system
depend on a number of factors.
• Operating Systems are viewed as resource managers.
Resources managed by OS: computer hardware in the form of
processors, memory, input/output devices.
In this role the operating system keeps track of the status of each
resource and decides who gets a resource for how long and when.
Evolution of Operating Systems
• An OS may process its workload serially or concurrently.
• Resources of a computer system may be dedicated to a single
program until its completion or they may be dynamically reassigned
among a collection of active programs in different stages of execution.
• Variations of serial and multiprogrammed operating systems are:
1. Serial Processing
2. Batch Processing
3. Multiprogramming
Serial Processing
• In theory, every computer system can be programmed in its machine
language with no system-software support.
• Programs for the “bare machine” can be developed by manually
translating sequences of instructions into binary.
• Instructions and data are entered into the computer by means of
console switches or perhaps through a hexadecimal keyboard.
• Programs are started by loading the program counter with the address
of the first instruction.
• Results of execution are obtained by examining the contents of the
relevant registers and memory locations.
• Input/output devices, if any, must be controlled by the executing
program directly, say by reading and writing the related I/O ports.
• Evidently, programming of the bare machine results in low productivity
of both users and hardware.
• Next evolutionary step – advent of input/output devices such as punched cards and
paper tape and of language translators
• Programs now coded in a programming language are translated into executable form
by a computer program such as a compiler or an interpreter
• Another program called the loader automates the process of loading executable
programs into memory
• The user places a program and its input data on an input device, and the loader
transfers information from that input device into memory
• After transferring control to the loaded program by manual or automatic means,
execution of the program commences
• The executing program reads its input from the designated input device and may
produce some output on an output device such as printer or display device
• If run-time errors are detected, the state of the machine can be
examined and modified by means of console switches or with the
assistance of a program called a debugger.
• In addition to language translators, system software includes the loader
and possibly editor and debugger programs.
• The mode of operation described here was used in the late fifties.
• Improvement over the bare-machine approach
• Running of the computer system may require frequent manual loading of
programs and data – low utilization of system resources
• Low productivity in multiuser environments
Batch Processing
• Early computers were very expensive, and therefore it was important to
maximize processor utilization.
• The wasted time due to scheduling and setup time in Serial Processing was
unacceptable.
• Housekeeping operations such as mounting of tapes and filling out log forms
take a long time relative to processor and memory speeds.
• To improve utilization, the concept of a batch operating system was developed.
• A batch is defined as a group of jobs with similar needs. The operating system
allows users to form batches. The computer executes each batch sequentially,
processing all jobs of a batch as a single unit; this mode of operation is called
batch processing.
• For example by batching several Fortran compilation jobs together, the
Fortran compiler can be loaded only once to process all of them in a
row.
• Job Control Language (JCL) commands instruct the OS how to treat each
individual job.
• A memory-resident portion of the batch OS, called the batch monitor,
reads, interprets and executes these commands.
• In response to them, batch jobs are executed one at a time.
• When a JOB_END command is encountered, the monitor may look for
another job, which may be identified by a JOB_START command.
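The monitor's read–interpret–execute cycle described above can be sketched as a simple loop. This is a minimal illustration, not a real batch monitor: the command names JOB_START and JOB_END come from the text, while the job-step handling and log format are simplifying assumptions.

```python
# Minimal sketch of a batch monitor loop (illustrative only; the
# commands JOB_START and JOB_END come from the text above, the rest
# is an assumption for demonstration).

def run_batch(commands):
    """Interpret a stream of JCL-like commands, one job at a time."""
    current_job = None
    log = []
    for cmd in commands:
        verb, _, arg = cmd.partition(" ")
        if verb == "JOB_START":
            current_job = arg                    # begin a new job
            log.append(f"loading {current_job}")
        elif verb == "JOB_END":
            log.append(f"finished {current_job}")
            current_job = None                   # look for the next job
        elif current_job is not None:
            log.append(f"{current_job}: {cmd}")  # an ordinary job step
    return log

print(run_batch(["JOB_START payroll", "RUN step1", "JOB_END"]))
```

Note how a JOB_END simply returns the monitor to a state where it waits for the next JOB_START, matching the one-job-at-a-time behavior described above.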
Batch Processing
By reducing component idle time due to slow manual operations, batch processing offers a
greater potential for increased system resource utilization and throughput than simple
serial processing.
Disadvantages:
• With a batch operating system, processor time alternates between execution of user
programs and execution of the monitor. There have been two sacrifices: Some main
memory is now given over to the monitor and some processor time is consumed by the
monitor. Both of these are forms of overhead.
• The turnaround time measured from the time the job is submitted until its output is
received may be quite long in batch systems.
Multiprogramming
• A single program cannot keep either the CPU or the I/O devices busy at all times.
• Multiprogramming increases CPU utilization by organizing jobs in such a
manner that the CPU always has one job to execute.
• If the computer is required to run several programs at the same time, the
processor can be kept busy most of the time by switching its
attention from one program to the next. Additionally, I/O transfers can
overlap processor activity, i.e., while one program is waiting for an I/O
transfer, another program can use the processor. So the CPU never sits idle,
or if it does become idle, it is busy again after a very short time.
Multiprogramming with 2 Programs
Multiprogramming with 3 Programs
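The benefit of keeping more programs resident can be estimated with a classic rough model (an approximation, not from the slides): if each program spends a fraction p of its time waiting for I/O, and the I/O waits of the n resident programs are assumed independent, the CPU is idle only when all n are waiting at once, so utilization is approximately 1 − pⁿ.

```python
# Rough model of multiprogrammed CPU utilization. Assumes each
# program waits for I/O a fraction p of the time and that the n
# programs' I/O waits are independent, so the CPU idles only when
# all n wait simultaneously: utilization ~ 1 - p**n.

def cpu_utilization(p, n):
    return 1 - p ** n

# With p = 0.8 (a heavily I/O-bound mix), utilization rises quickly
# as more programs are kept in memory.
for n in (1, 2, 3):
    print(n, round(cpu_utilization(0.8, n), 2))
```

With one program the CPU is busy only 20% of the time; with three programs resident, utilization roughly reaches 49% under these assumptions.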
Types of Operating Systems
• Batch Operating Systems
• Multiprogramming Operating Systems
• Time Sharing Systems
• Real-Time Systems
• Combination Operating Systems
• Distributed Operating Systems
Batch Operating Systems
• The programs are loaded on to punch cards.
• The programs to be executed are provided to the operator.
• The programs are sorted into batches based on the similarities in the programs.
• The batches are submitted to OS.
• All the jobs in one batch are executed together
Advantages:
• It saves the time that was earlier wasted, for each individual job, on
switching the environment from one job to the next, since jobs with
similar needs are run together.
• No manual intervention is needed.
Disadvantages of Batch Operating Systems
• The jobs cannot be prioritized.
• The process may have to starve for CPU.
• CPU may remain idle for a long time if the jobs in a batch require I/O
operations.
• No interaction between the user and the jobs. This may affect
performance if the jobs require user input.
Multiprogramming Systems
• Sharing the processor, when two or more programs reside in memory at the
same time, is referred to as multiprogramming. Multiprogramming assumes a
single shared processor. Multiprogramming increases CPU utilization by
organizing jobs so that the CPU always has one to execute.
• Instructions can be CPU bound (computation) or I/O bound (input/output
operations). During execution of I/O-bound instructions the CPU is idle and
its time is wasted; to overcome this, other jobs which are in main memory
and which are CPU bound are fetched and executed.
• Benefits: Less execution time, Increased utilization of memory, Increased
throughput (number of jobs completed per unit time).
Multiprogramming
Time-Sharing Systems
• Another mode for delivering computing services is provided by time sharing
operating systems.
• In this environment a computer provides computing services to several or
many users concurrently on-line. Here, the various users are sharing the
central processor, the memory, and other resources of the computer system
in a manner facilitated, controlled, and monitored by the operating system.
• The user, in this environment, has nearly full interaction with the program
during its execution, and the computer’s response time may be expected to
be no more than a few seconds.
• The CPU time is shared across multiple users connected and multiple
programs residing in the main memory.
Real-Time Operating Systems (RTOS)
• An RTOS typically has very little user-interface capability, and no end-user
utilities.
• A very important part of an RTOS is managing the resources of the
computer so that a particular operation executes in precisely the same
amount of time every time it occurs. In a complex machine, having a part
move more quickly just because system resources are available may be
just as catastrophic as having it not move at all because the system is busy.
• An RTOS aims at providing expected results within the deadline.
Example: Guided missile systems, air traffic control systems, anti-lock brake
systems in automobiles etc.
Distributed Operating Systems
• Distributed Operating System is a model where distributed applications are
running on multiple computers linked by communications.
• A distributed operating system is an extension of the network operating
system that supports higher levels of communication and integration of the
machines on the network.
• These systems are referred to as loosely coupled systems, where each
processor has its own local memory and processors communicate with one
another through various communication lines, such as high-speed buses or
telephone lines. By loosely coupled systems, we mean that such computers
possess no hardware connections at the CPU–memory bus level, but are
connected by external interfaces that run under the control of software.
Distributed Operating Systems
Different views of the operating system
• The Command-Language User's View of the Operating System.
• The System-Call User's View of the Operating System
User View of an OS
The user view depends on the system interface that is used by the users. The
different types of user view experiences can be explained as follows:
1. Personal Computers: User friendly interface, need not worry about resource
sharing as the resources are meant for one single user.
2. Mainframe systems: Users are connected through terminals. The CPU and
other resources are shared between users.
3. Workstations: In scenarios of work stations connected to the networks, apart
from the resource utilization of the workstation, sharing of information along
the network should also be taken into account.
4. Handheld computing devices: The resource utilization should be very
efficient to avoid draining of the battery.
System view of an OS
• OS acts as a resource allocator: CPU time, memory space, file storage
space, I/O devices etc. that are required by processes for execution.
• Works as a control program managing all the processes and I/O
devices ensuring smooth operation without errors.
• OS acts as an intermediate providing an easy interface to user to
utilize the hardware resources.
Process
• A process is a program in execution. Formally, we can define a process as an executing program,
including the current values of the program counter, registers, and variables. The subtle difference
between a process and a program is that the program is a group of instructions whereas the process is
the activity.
• Processes can be user programs as well as system activities. Example: A batch job is a process,
similarly a time shared user program is also a process.
• A process needs certain resources such as CPU time, memory, files, and I/O devices to
accomplish its task.
• A program can be made up of one or more processes.
• Processes can be operating system processes that execute system code or user process that execute
the user code.
Process
In multiprogramming systems, processes are performed in a pseudo-parallelism as if each
process has its own processor. In fact, there is only one processor but it switches back and
forth from process to process.
Henceforth, by saying execution of a process, we mean the processor’s operations on the
process like changing its variables, etc. and I/O work means the interaction of the process
with the I/O operations like reading something or writing to somewhere. They may also be
named as “processor (CPU) burst” and “I/O burst” respectively.
Programs can be classified as:
1. Processor bound programs: Program having long processor bursts.
2. I/O bound programs: Program having short processor bursts.
Multiprocessing and Multiprogramming
Multiprocessing: Multiple processes are running concurrently and
multiple hardware processors are available for running these processes.
https://www.youtube.com/watch?v=4hCih9eLc7M&list=PL3-wYxbt4yCjpcfUDz-TgD_ainZ2K3MUZ&index=19
Performance Criteria
• Processor utilization: It refers to the average fraction of the time
during which the processor is busy. It can refer to time spent on
executing the user programs and executing the operating system. It is
also important to note that, with processor utilization approaching
100%, average wait times and average queue lengths tend to grow
excessively.
• Throughput: It refers to the amount of work completed in unit time. It
refers to the number of user jobs executed in a unit of time.
Throughputs can be a measure of scheduling efficiency.
Performance Criteria
• Turnaround time (T): It is defined as the time that elapses from the moment
a program is submitted until it is completed by the system. It is the time
spent in the system that includes the execution time and wait time.
• Waiting time (W): It is the time that a process spends waiting for a resource
allocation due to contention with others in a multiprogramming system. It
is the penalty imposed for sharing resources with others.
W(x) = T(x) - x
W(x) -> waiting time of a job requiring x units of service
x -> service time
T(x) -> the job's turnaround time
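The relation W(x) = T(x) − x can be illustrated with a small worked example under first-come, first-served order. The service times below are made-up values, and all jobs are assumed to arrive together at time 0.

```python
# Worked example of T(x) and W(x) = T(x) - x under FCFS.
# Assumptions: all jobs arrive at time 0; service times are
# illustrative values, not from the slides.

def fcfs_times(service_times):
    clock, results = 0, []
    for x in service_times:
        clock += x                        # job finishes at current clock
        T = clock                         # turnaround = finish - arrival (0)
        results.append((x, T, T - x))     # (service, turnaround, waiting)
    return results

for x, T, W in fcfs_times([3, 5, 2]):
    print(f"x={x}  T={T}  W={W}")
```

The first job waits 0 units, the second waits 3 (the first job's service time), and the third waits 8, showing how waiting time is purely the penalty of contention.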
Performance Criteria
• Response time: In an interactive system, the response time is defined
as the time that elapses from a moment the last character of
command line launching a program or a transaction is entered until
the first result appears on the terminal.
Scheduler Design
• A typical scheduler design process involves selecting one or more primary
performance criteria and ranking them in the relative order of importance.
• The next step is to design a scheduling strategy that maximizes performance for
the specified set of criteria.
• No scheduling strategy can guarantee optimal performance; rather, it can
deliver near-optimal performance. This is due to the overhead that would be
incurred by computing the optimal strategy at run time.
• One of the challenges in designing a scheduler is that the performance criteria
often conflict with each other.
• Example: Increased processor utilization is usually achieved by increasing the
number of active processes but then response time deteriorates.
Scheduling Algorithms
• Scheduling algorithms define the way the processes are executed.
• Scheduling can be of two types: Pre-emptive and Non Pre-emptive.
• Non Pre-emptive: The running process retains the ownership of the
processor and the allocated resources until it voluntarily surrenders
the control to the operating system. No higher priority processes in
the ready queue will be able to pre-empt the currently running
process.
• Pre-emptive: Whenever a higher priority process enters the ready
queue, it would pre-empt or stop the currently running process and
take over the processor.
Scheduling Algorithms
• First-Come, First-Served (FCFS) Scheduling
• No Preemption
• Shortest Remaining Time Next (SRTN)
• Both Preemption and Non Preemption
• Time Sliced Scheduling (Round Robin Scheduling)
• Processor time divided into slices
• Priority based Pre-emptive Scheduling (Event Driven Scheduling)
• Each process is assigned a priority level
• Priorities may be static or dynamic
Asynchronous online learning:
https://www.youtube.com/watch?v=fJEVP91dXaE&list=PL3-wYxbt4yCjpcfUDz-TgD_ainZ2K3MUZ&index=20
• Multiple Level Queues (MLQ) Scheduling
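Time-sliced (round-robin) scheduling from the list above can be sketched as a queue of processes, each receiving one quantum in turn. This is a minimal sketch under assumed inputs: the process names, burst lengths, and quantum below are illustrative, and context-switch cost is ignored.

```python
from collections import deque

# Minimal round-robin (time-sliced) scheduling sketch: each process
# runs for at most one quantum; if it is not finished it rejoins the
# back of the ready queue. Names and burst lengths are assumptions.

def round_robin(bursts, quantum):
    ready = deque(bursts.items())
    order = []                         # order in which slices are granted
    while ready:
        name, remaining = ready.popleft()
        order.append(name)
        remaining -= quantum           # run for one time slice
        if remaining > 0:              # not finished: back of the queue
            ready.append((name, remaining))
    return order

print(round_robin({"A": 4, "B": 2, "C": 3}, quantum=2))
```

With a quantum of 2, process B finishes in its first slice while A and C each return to the queue once, so the slice order is A, B, C, A, C.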
Multiple Level Queues Scheduling
• In systems comprising mixed tasks, such as time-critical events, a
multitude of interactive users, and some very long noninteractive jobs,
the OS processes and device interrupts may be subjected to event-driven
scheduling, interactive programs to round-robin scheduling, and batch
jobs to FCFS or SRTN scheduling.
• This can be implemented by classifying the workload according to its
characteristics and maintaining separate process queues serviced by
different schedulers.
• Implementation of multiple level queues scheduling is as shown in the
next slide.
Multiple Level Queues Scheduling
Multiple-Level Queues with Feedback Scheduling
• To increase the effectiveness of the scheduling, multiple level queues with
feedback can be used.
• Instead of fixed classes being allocated to specific queues, the idea is to make
the process traverse through the system depending on its run-time behavior.
• The process may start with the top level queue. If the process completes
execution within the given time slice, then it departs from the system.
• If a process needs more than one time slice, then it is moved to the next
lower-priority queue, which receives a lower percentage of the processor time.
• If the process still does not complete execution, then it is moved to the
next lower-priority queue.
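The demotion behavior described above can be sketched as follows. This is a simplified illustration: the number of levels, the rule that each lower level's slice is twice as long, and the sample bursts are all assumptions, not mandated by the slides.

```python
from collections import deque

# Sketch of multiple-level queues with feedback: every process starts
# in the highest-priority queue; a process that uses up its full time
# slice is demoted one level. Assumption: each lower level grants a
# slice twice as long as the one above it.

def mlfq(bursts, levels=3, base_quantum=2):
    queues = [deque() for _ in range(levels)]
    for name, burst in bursts.items():
        queues[0].append((name, burst))        # all start at the top
    trace = []                                 # (process, level) per slice
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)
        name, remaining = queues[level].popleft()
        quantum = base_quantum * 2 ** level    # longer slices lower down
        trace.append((name, level))
        remaining -= quantum
        if remaining > 0:                      # exhausted its slice: demote
            dest = min(level + 1, levels - 1)
            queues[dest].append((name, remaining))
    return trace

print(mlfq({"A": 1, "B": 7}))
```

A short job like A departs from the top queue after one slice, while the long job B traverses the system downward, exactly the run-time-behavior-driven classification the text describes.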
Multiple-Level Queues with Feedback Scheduling - Continued