Module 1 - OS


Operating Systems – Module 1

18EC71
Textbook
• OPERATING SYSTEMS CONCEPTS AND DESIGN by Milan
Milenkovic 
Contents
• Introduction
• Evolution of Operating Systems
• Types of Operating Systems
• Different Views of The Operating System
• Processes: The Process Concept
• The System Programmer’s View of Processes
• The Operating System's View of Processes
• Scheduling & Scheduling Algorithms.
Introduction
“An operating system may be viewed as an organized collection of
software extensions of hardware, consisting of control routines for
operating a computer and for providing an environment for execution
of programs.”
“An operating system acts as the interface between the user and the
system hardware.”
“An operating system is a set of programs that manage computer
hardware resources and provide common services for application
software. The operating system is the most important type of system
software in a computer system.”
• Other programs rely on facilities provided by the operating system to
gain access to computer system resources such as files, memory and
input/output devices.
• Programs usually invoke services of operating system by means of
operating system calls.
• In addition, users may interact with the operating system directly by
means of operating system commands.
• The range and extent of services provided by an operating system
depend on a number of factors.
• Operating Systems are viewed as resource managers.
Resources managed by OS: computer hardware in the form of
processors, memory, input/output devices.
In this role the operating system keeps track of the status of each
resource and decides who gets a resource for how long and when.
Evolution of Operating Systems
• An OS may process its workload serially or concurrently.
• Resources of a computer system may be dedicated to a single
program until its completion or they may be dynamically reassigned
among a collection of active programs in different stages of execution.
• Variations of serial and multiprogrammed operating systems are:
1. Serial Processing
2. Batch Processing
3. Multiprogramming
Serial Processing
• In theory, every computer system can be programmed in its machine
language with no system-software support.
• Programs for the “bare machine” can be developed by manually
translating sequences of instructions into binary.
• Instructions and data are entered into the computer by means of
console switches or perhaps through a hexadecimal keyboard.
• Programs are started by loading the program counter with the address
of the first instruction.
• Results of execution are obtained by examining the contents of the
relevant registers and memory locations.
• Input/output devices, if any, must be controlled by the executing
program directly, say by reading and writing the related I/O ports.
• Evidently, programming the bare machine results in low productivity
of both users and hardware.
• Next evolutionary step – advent of input/output devices such as punched cards and
paper tape and of language translators
• Programs now coded in a programming language are translated into executable form
by a computer program such as a compiler or an interpreter
• Another program called the loader automates the process of loading executable
programs into memory
• The user places a program and its input data on an input device, and the loader
transfers information from that input device into memory.
• After transferring control to the loaded program by manual or automatic means,
execution of the program commences
• The executing program reads its input from the designated input device and may
produce some output on an output device such as printer or display device
• If run-time errors are detected, the state of the machine can be
examined and modified by means of console switches or with the
assistance of a program called a debugger.
• In addition to language translators, system software includes the loader
and possibly editor and debugger programs.
• The mode of operation described here was used in the late fifties.
• Improvement over the bare-machine approach.
• Running the computer system may require frequent manual loading of
programs and data – low utilization of system resources.
• Low productivity in multiuser environments.
Batch Processing
• Early computers were very expensive, and therefore it was important to
maximize processor utilization.
• The wasted time due to scheduling and setup time in Serial Processing was
unacceptable.
• Housekeeping operations such as mounting of tapes and filling out log forms
take a long time relative to processor and memory speeds.
• To improve utilization, the concept of a batch operating system was developed.
• A batch is defined as a group of jobs with similar needs. The operating system
allows users to form batches; the computer executes each batch sequentially,
processing all jobs of a batch as a single unit. This is called batch processing.
• For example, by batching several Fortran compilation jobs together, the
Fortran compiler can be loaded only once to process all of them in a
row.
• Job Control Language (JCL) commands instruct the OS how to treat each
individual job.
• A memory-resident portion of the batch OS, called the batch monitor,
reads, interprets and executes these commands.
• In response to them batch jobs are executed one at a time.
• When a JOB_END command is encountered, the monitor may look for
another job, which may be identified by a JOB_START command.
By reducing component idle time due to slow manual operations, batch processing offers a
greater potential for increased system resource utilization and throughput than simple
serial processing.
Disadvantages:
• With a batch operating system, processor time alternates between execution of user
programs and execution of the monitor. There have been two sacrifices: Some main
memory is now given over to the monitor and some processor time is consumed by the
monitor. Both of these are forms of overhead.
• The turnaround time measured from the time the job is submitted until its output is
received may be quite long in batch systems.
Multiprogramming
• A single program cannot keep either CPU or I/O devices busy at all times.
• Multiprogramming increases CPU utilization by organizing jobs in such a
manner that CPU has always one job to execute.
• If the computer is required to run several programs at the same time, the
processor can be kept busy most of the time by switching its
attention from one program to the next. Additionally, I/O transfers can
overlap processor activity, i.e., while one program is awaiting an I/O
transfer, another program can use the processor. Thus the CPU never sits idle,
or if it does become idle, it is busy again after a very short time.
Multiprogramming with 2 Programs
Multiprogramming with 3 Programs
Types of Operating Systems
• Batch Operating Systems
• Multiprogramming Operating Systems
• Time Sharing Systems
• Real-Time Systems
• Combination Operating Systems
• Distributed Operating Systems
Batch Operating Systems
• The programs are loaded on to punch cards.
• The programs to be executed are provided to the operator.
• The programs are sorted into batches based on the similarities in the programs.
• The batches are submitted to OS.
• All the jobs in one batch are executed together

Advantages:
• It saves the time that was earlier being wasted for each individual job in
switching from one environment to another.
• No manual intervention is needed.
Disadvantages of Batch Operating Systems
• The jobs cannot be prioritized.
• The process may have to starve for CPU.
• CPU may remain idle for a long time if the jobs in a batch require I/O
operations.
• No interaction between the user and the jobs. This may affect
performance if the jobs require user input.
Multiprogramming Systems
• Sharing the processor when two or more programs reside in memory at the
same time is referred to as multiprogramming. Multiprogramming assumes a
single shared processor. Multiprogramming increases CPU utilization by
organizing jobs so that the CPU always has one to execute.
• Instructions can be CPU-bound (computation) or I/O-bound (input/output
operations). During execution of I/O-bound instructions the CPU would otherwise
sit idle; to overcome this, other jobs in main memory that are CPU-bound are
fetched and executed.
• Benefits: Less execution time, Increased utilization of memory, Increased
throughput (number of jobs completed per unit time).
Time-Sharing Systems
• Another mode for delivering computing services is provided by time sharing
operating systems.
• In this environment a computer provides computing services to several or
many users concurrently on-line. Here, the various users are sharing the
central processor, the memory, and other resources of the computer system
in a manner facilitated, controlled, and monitored by the operating system.
• The user, in this environment, has nearly full interaction with the program
during its execution, and the computer’s response time may be expected to
be no more than a few seconds.
• The CPU time is shared across multiple users connected and multiple
programs residing in the main memory.
Real-Time Operating Systems (RTOS)
• An RTOS typically has very little user-interface capability, and no end-user
utilities.
• A very important part of an RTOS is managing the resources of the
computer so that a particular operation executes in precisely the same
amount of time every time it occurs. In a complex machine, having a part
move more quickly just because system resources are available may be
just as catastrophic as having it not move at all because the system is busy.
• An RTOS aims at providing expected results within the deadline.
Example: Guided missile systems, air traffic control systems, anti-lock brake
systems in automobiles etc.
Distributed Operating Systems
• A Distributed Operating System is a model where distributed applications
run on multiple computers linked by a communication network.
• A distributed operating system is an extension of the network operating
system that supports higher levels of communication and integration of the
machines on the network.
• These systems are referred to as loosely coupled systems, where each
processor has its own local memory and processors communicate with one
another through various communication lines, such as high-speed buses or
telephone lines. By loosely coupled systems, we mean that such computers
possess no hardware connections at the CPU-memory bus level, but are
connected by external interfaces that run under the control of software.
Different views of the operating system
• The Command-Language User's View of the Operating System.
• The System-Call User's View of the Operating System
User View of an OS
The user view depends on the system interface that is used by the users. The
different types of user view experiences can be explained as follows:
1. Personal Computers: User friendly interface, need not worry about resource
sharing as the resources are meant for one single user.
2. Mainframe systems: Users are connected through terminals. The CPU and
other resources are shared between users.
3. Workstations: In scenarios of work stations connected to the networks, apart
from the resource utilization of the workstation, sharing of information along
the network should also be taken into account.
4. Handheld computing devices: The resource utilization should be very
efficient to avoid draining of the battery.
System view of an OS
• OS acts as a resource allocator: CPU time, memory space, file storage
space, I/O devices etc. that are required by processes for execution.
• Works as a control program managing all the processes and I/O
devices ensuring smooth operation without errors.
• OS acts as an intermediary, providing an easy interface for the user to
utilize the hardware resources.
Process
• A process is a program in execution. Formally, we can define a process as an executing program,
including the current values of the program counter, registers, and variables. The subtle difference
between a process and a program is that the program is a group of instructions whereas the process is
the activity.
• Processes can be user programs as well as system activities. Example: A batch job is a process,
similarly a time shared user program is also a process.
• A process would need certain resources such as CPU time, memory, files, I/O devices etc to
accomplish the task.
• A program can be made up of one or more processes.
• Processes can be operating system processes that execute system code or user process that execute
the user code.
In multiprogramming systems, processes are executed in pseudo-parallelism, as if each
process had its own processor. In fact, there is only one processor, but it switches back and
forth from process to process.
Henceforth, by saying execution of a process, we mean the processor’s operations on the
process like changing its variables, etc. and I/O work means the interaction of the process
with the I/O operations like reading something or writing to somewhere. They may also be
named as “processor (CPU) burst” and “I/O burst” respectively.
Programs can be classified as:
1. Processor bound programs: Program having long processor bursts.
2. I/O bound programs: Program having short processor bursts.
Multiprocessing and Multiprogramming
Multiprocessing: Multiple processes are running concurrently and
multiple hardware processors are available for running these processes.

Multitasking: Concurrent execution of programs on a single processor
without necessarily supporting elaborate forms of memory
management and file management.

Multiprogramming: A more general concept that provides memory and
file management along with concurrent execution of programs.
Example: Text Editor program on
Multiprogrammed Multiuser System
• The user invokes the text editor program > OS loads the editor program into
memory > OS creates an editor process and schedules it for execution.
• The user performs the necessary operations on the files using the terminal. Once
the session is complete, the user issues an EXIT command to the editor process.
• The editor process performs some housekeeping and terminates itself by
calling the OS.
• OS closes the data files, erases the records of that specific instance of
editor process.
• Deletion of editor process does not affect the data stored in the file nor
the editor program.
• If another user invokes the editor program before the first user terminates
the session, the OS responds by creating a separate version of the editor
process.
• Although it is the same editor program file used, each concurrent invocation
results in creation of new and unique editor process.
• This is necessary because each process represents a different thread of
control, accepts commands from a different user, and has a different run-time
state, including the contents of registers, memory buffers and data files.
• The two processes are completely independent from the point of view of the OS
and compete with each other for allocation of the processor and other
system resources.
Process Management by OS
• Creating and removing (destroying) process.
• Controlling the progress of the process.
• Acting on exceptional conditions that arise during execution of the
process.
• Allocation of hardware resources among the processes.
• Providing a means of communication in the form of messages or
signals among the processes.
Implicit Tasking
• Implicit tasking means that processes are defined by the system. It is
commonly encountered in general purpose multiprogramming systems
such as time-sharing.
• In this approach, each program submitted for execution is treated by
the operating system as an independent process.
• In response to the user’s RUN command the system creates a process
to execute the program.
• Batch job – several processes.
• Processes created in this manner are usually transient in the sense that
they are destroyed and disposed of by the system after each run.
Explicit Tasking
• Explicit tasking means that programmers explicitly define each process
and some of its attributes. To improve the performance, a single logical
application is divided into various related processes.
• Explicit tasking is used in situations where high performance is desired.
• System programs, such as parts of the operating system, and real-time
applications are common examples of programs with explicitly defined processes.
• After dividing program into several independent processes, a system
programmer defines the confines of each individual process. A parent
process is then commonly added to create the environment for and to
control execution of individual processes.
Benefits of using explicit tasking
• Faster execution of applications
• Driving I/O devices that have latency: One task waiting for I/O,
another portion of application can make progress
• User convenience: A GUI can allow users to launch several operations
concurrently
• Multiprocessing: A program coded as a collection of tasks can be
relatively easily ported to a multiprocessor.
Process relationships
• There are two fundamental relations among concurrent processes:
• Competition
• Cooperation
• All the concurrent processes compete with each other for allocation
of system resources.
• A collection of processes that collectively represent a single logical
application often cooperate with each other.
• Cooperation among processes is possible due to explicit tasking.
• Cooperating processes exchange data and synchronization signals.
• In case of competition among the processes, careful resource allocation
and protection in terms of isolated address spaces is required.
• Cooperation among processes depends on existence of mechanisms for
controlled usage of shared data and the exchange of synchronization
signals.
• Family of Processes: Cooperating processes share resources and
attributes, and together can be grouped as a family of processes.
Relationships such as parent-child exist, where the child
processes inherit attributes from their parents at the time of process
creation.
System Programmer’s View of Processes
• System Programmers often explicitly deal with processes.
• OS or a systems implementation language in some scenarios provide
them with the facilities for defining the confines of a process, its
attributes, nature of residence in memory (fixed or swappable).
• By defining the processes, the system programmer informs the
operating system which activities may be scheduled for concurrent
execution.
• By defining the values of process attributes, the system programmer
can control many aspects of run-time process behavior and
management.
Multitasking Example
• Data Acquisition System (DAC): Monitors some physical process
continuously, records its behavior, and identifies and reports
significant changes.
• Sequence of activities: COLLECT (from sensor), LOG (to disk), STAT
(statistical processing) and REPORT (print significant changes).
Single-task DAC pseudocode:

begin
  while true do
  begin
    collect_ad;
    log_d;
    stat_p;
    report_pr
  end [while]
end [single task DAC]
• The first step is to identify the operations that can be coded as separate
processes and to determine their relative precedence.
• In the given example, COLLECT, LOG, STAT and REPORT are the processes.
The relative order of LOG and STAT is not specified.
REPORT can run only after STAT is completed, because it prints data
provided by STAT.
• In the first run, COLLECT (C1) runs, it prepares the data for first run of LOG
(L1) and STAT (S1). Both L1 and S1 can execute concurrently. When L1 and
S1 finish, the second run of COLLECT (C2) can execute concurrently with
the first run of REPORT (R1) which will print the changes furnished by
STAT (S1).
Precedence relationships in Multitasking
Interprocess Synchronization
• Signals are one of the interprocess synchronization mechanisms that, as a
group, are among the most important services provided by multitasking
operating systems.
• Let us assume a process waiting on one or more signals would be suspended
and made ineligible for execution by the OS until all required signals arrive.
• Problem split into four processes, each process executes its own sequential
stream of instructions, more or less independently.
• An individual process has no way of knowing the state or progress of other
processes.
• The system programmer can control the aspects of interprocess
synchronization by defining the identity and meaning of the signals exchanged.
• In the previous example, the LOG process has to wait for a signal from
COLLECT process in order to save the data on to the disk.
• The next run of COLLECT may be initialized only after LOG and STAT
have processed the previous batch.
• COLLECT must receive signal from both LOG and STAT.
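The COLLECT/LOG/STAT precedence described above can be sketched with synchronization primitives. The following Python sketch covers a single round, using threading.Event in place of OS signals (the event names and the one-sample data list are illustrative, not from the text):

```python
import threading

collect_done = threading.Event()   # signal from COLLECT to LOG and STAT
log_done = threading.Event()       # signal from LOG back to COLLECT
stat_done = threading.Event()      # signal from STAT back to COLLECT

data = []

def collect():
    data.append("sample")          # collect_ad: produce one batch of data
    collect_done.set()             # wake up LOG and STAT

def log():
    collect_done.wait()            # LOG may not run before COLLECT signals
    log_done.set()

def stat():
    collect_done.wait()            # STAT may not run before COLLECT signals
    stat_done.set()

threads = [threading.Thread(target=f) for f in (log, stat, collect)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The next COLLECT run would wait on both log_done and stat_done.
print(log_done.is_set() and stat_done.is_set())  # True
```

Even though the LOG and STAT threads are started first, they block until COLLECT's signal arrives, exactly as the text requires.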
Operating System’s View of Processes
• From the OS's view, a process is the smallest individually schedulable entity.
• The process would consist of code and data characterized by
attributes and dynamic state.
• The attributes may be assigned by system programmer or OS which
indicate the priority and access rights.
• Operating system views the execution of a typical process in the
course of its activity in the form of progression through a succession
of states.
Process states
• Dormant: The state which is not known to or tracked by the OS. It includes processes
awaiting activation as well as programs not yet submitted to the operating system.
• Ready: The state where a process possesses all resources needed for execution.
Processes usually assume ready state upon creation. The scheduler module picks
processes from ready state to execution.
• Running: The process has been provided with all resources including the processor
for execution. The running process executes the sequence of instructions and may
call OS to perform I/O operation or synchronization.
• Suspended: A process enters the suspended state when it lacks a resource such as a
synchronization signal. A running process becomes suspended when it invokes an I/O
operation whose results are needed in order to proceed, or when it awaits a signal
which is not yet produced.
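The states and legal transitions described above can be summarized as a small table. A minimal Python sketch follows; the state names come from the text, while the transition table is an illustrative reading of it:

```python
from enum import Enum, auto

class State(Enum):
    DORMANT = auto()
    READY = auto()
    RUNNING = auto()
    SUSPENDED = auto()

# Legal transitions (illustrative reading of the text above).
TRANSITIONS = {
    State.DORMANT: {State.READY},        # admitted by the OS
    State.READY: {State.RUNNING},        # picked by the scheduler
    State.RUNNING: {State.READY, State.SUSPENDED, State.DORMANT},
    State.SUSPENDED: {State.READY},      # awaited event/signal arrives
}

def move(current, target):
    """Perform one transition, rejecting anything not in the table."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target

# A typical life cycle: created, dispatched, blocked on I/O, made ready again.
s = State.DORMANT
for nxt in (State.READY, State.RUNNING, State.SUSPENDED, State.READY):
    s = move(s, nxt)
print(s.name)  # READY
```

Note that a suspended process cannot go directly back to running: it must first become ready and be re-selected by the scheduler.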
Implementation of Processes
• Process Control Block (PCB)
• Process Name (ID)
• Priority
• State (ready, running, suspended)
• Hardware state (process registers and flags)
• Scheduling information and usage statistics
• Memory management information (registers, tables)
• I/O status
• File management information
• Accounting information
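A PCB of this kind can be sketched as a record type. The following Python sketch is illustrative only; the field names mirror the list above rather than any particular OS's layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative Process Control Block (field names are assumptions)."""
    pid: int                        # process name (ID)
    priority: int
    state: str = "ready"            # ready, running, suspended
    registers: dict = field(default_factory=dict)   # hardware state
    program_counter: int = 0
    open_files: list = field(default_factory=list)  # file management info
    cpu_time_used: float = 0.0      # accounting information

# The OS would create one PCB per process and update it on every switch.
pcb = PCB(pid=42, priority=5)
pcb.state = "running"
print(pcb.pid, pcb.state)  # 42 running
```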
Process Control Block Structure
Process Switch
• A process switch (also sometimes referred to as a context switch or a
task switch) is the switching of the CPU (central processing unit) from
one process to another. A context is the contents of a CPU's
registers and program counter at any point in time.
• A process switch is sometimes described as the kernel suspending
execution of one process on the CPU and resuming execution of some
other process that had previously been suspended.
• Process switch usually occurs in response to events that change the
system state.
Steps involved in Context Switching
• Saving the state of the first process. The state includes all registers, the
program counter and any operation-specific data. All of this data is
stored in a data structure called the Process Control Block (PCB).
• The saved state can be reloaded when the process is loaded
again for execution.
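The two steps can be illustrated with a toy model, where dictionaries stand in for the CPU and the PCBs (all names are illustrative):

```python
# A toy context switch: save the running process's "hardware state"
# into its PCB, then restore another process's previously saved state.
def context_switch(cpu, old_pcb, new_pcb):
    # Step 1: save the state of the first process into its PCB.
    old_pcb["registers"] = dict(cpu["registers"])
    old_pcb["pc"] = cpu["pc"]
    old_pcb["state"] = "ready"
    # Step 2: reload the saved state of the next process onto the CPU.
    cpu["registers"] = dict(new_pcb["registers"])
    cpu["pc"] = new_pcb["pc"]
    new_pcb["state"] = "running"

cpu = {"registers": {"r0": 7}, "pc": 100}
p1 = {"registers": {}, "pc": 0, "state": "running"}   # currently on the CPU
p2 = {"registers": {"r0": 99}, "pc": 200, "state": "ready"}  # suspended earlier

context_switch(cpu, p1, p2)
print(cpu["pc"], p1["state"], p2["state"])  # 200 ready running
```

After the switch, p1's registers and program counter sit in its PCB, ready to be reloaded the next time p1 is dispatched.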
Process Switching Between Two Processes
Threads
• Threads represent a software approach to improving performance of operating systems by reducing
the overhead of process switching.
• In thread based systems, threads take over the role of processes as the smallest individual unit of
scheduling.
• Threads, sometimes called lightweight processes (LWPs), are independently scheduled parts of a
single program. We say that a task is multithreaded if it is composed of several independent
subprocesses which do work on common data, and if each of those pieces could (at least in principle)
run in parallel.
• If we write a program which uses threads – there is only one program, one executable file, one task
in the normal sense. Threads simply enable us to split up that program into logically separate
pieces, and have the pieces run independently of one another, until they need to communicate.
• Threads are cheaper than normal processes, and they can be scheduled for execution in a
user-dependent way, with less overhead. Threads are cheaper than a whole process because they
do not have a full set of resources each.
Threads
• Whereas the process control block for a heavyweight process is large
and costly to context switch, the PCBs for threads are much smaller,
since each thread has only a stack and some registers to manage. It
has no open file lists or resource lists, no accounting structures to
update.
• Note: No thread can exist outside a process.
• Threads can be used to exploit concurrency within an application.
• Communication between threads is easier due to the accessible shared
memory within an enclosing process.
Why use threads?
• Using threads, it is possible to organize the execution of a program in
such a way that something is always being done whenever the
scheduler gives the heavyweight process CPU time.
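As a sketch of this overlap, the following Python example (names illustrative) starts three threads whose simulated I/O waits overlap, so the total elapsed time is roughly one wait rather than three:

```python
import threading
import time

def io_task(name, results):
    time.sleep(0.1)                # simulate an I/O wait
    results.append(name)           # record completion

results = []
threads = [threading.Thread(target=io_task, args=(f"req{i}", results))
           for i in range(3)]

start = time.perf_counter()
for t in threads:
    t.start()                      # all three waits begin at once
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# Sequential execution would take about 0.3 s; the threads finish in ~0.1 s.
print(len(results))  # 3
```

Because the threads share the enclosing process's memory, they can all append to the same results list with no extra communication machinery.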
Scheduling
• Scheduling refers to a set of policies and mechanisms built into the
OS that govern the order in which the work to be done by a
computer system is completed.
• A scheduler is an OS module that selects the next job to be admitted
into the system and the next process to run.
• The primary objective of scheduling is to optimize the system
performance in accordance with the criteria deemed most important
by the system designers.
Types of Schedulers
• Long term scheduler
• Medium term scheduler
• Short term scheduler
Long term scheduler
• The long-term scheduler works with the batch queue and selects the next
batch job to be executed.
• Batch is usually reserved for resource intensive (processor time,
memory, I/O devices), low priority programs that may be used as
fillers to keep system resources busy during periods of low activity of
interactive jobs.
• Batch jobs also contain estimates of resource needs such as
memory, execution time and device requirements.
• Its primary objective is to provide a balanced mix of jobs, such as
processor-bound and I/O-bound, to the short-term scheduler.
• Long term scheduler acts as the first level in keeping resource
utilization at desired level.
• The long-term scheduler can be invoked to admit jobs when processor
utilization is low, and conversely it may opt to reduce the rate of batch-job
admission when the utilization factor becomes high.
• The long-term scheduler is in charge of dormant-to-ready transitions.
• Ready processes are placed in the ready queue for consideration by the
short-term scheduler.
Medium-term scheduler
• The medium term scheduler controls the suspended-to-ready state transitions of
swapped processes.
• A running process may get suspended by making an I/O request or a system call. A
suspended process would not make any progress until the suspending condition is
removed.
• Suspended processes may be removed from main memory as the presence of too
many suspended processes in the main memory would impair the functioning of
short term scheduler.
• In systems with no virtual memory, the suspended processes are swapped from main
memory to secondary memory.
• Once the suspended condition is removed, the medium term scheduler tries to
allocate the main memory and swap the process in to make it ready for execution.
Short-term scheduler
• Short-term scheduler allocates the processor among the pool of ready
processes resident in the memory.
• The objective is to maximize the system performance with a chosen set
of criteria.
• It is in charge of ready-to-running state transitions.
• The short-term scheduler should be invoked for each process switch to
select the next process to be run, as well as whenever an event changes
the global state of the system.
• These events may determine which process would be scheduled next
for execution.
• Some of the events that could cause rescheduling by virtue of their
ability to change global system state are:
• Clock ticks (time-based interrupts)
• Interrupts and I/O completions
• Operating system calls
• Sending and receiving of signals
• Activation of interactive programs
Whenever one of these events occurs, the OS invokes the short-term scheduler
to determine whether another process should be scheduled for execution.
Scheduling and performance criteria
• Some of the performance measures and optimization criteria that
schedulers use to maximize system performance are as mentioned below:
1. Processor utilization
2. Throughput
3. Turnaround time
4. Waiting time
5. Response time
Asynchronous online learning:
https://www.youtube.com/watch?v=4hCih9eLc7M&list=PL3-wYxbt4yCjpcfUDz-TgD_ainZ2K3MUZ&index=19
Performance Criteria
• Processor utilization: It refers to the average fraction of the time
during which the processor is busy. It can refer to time spent on
executing the user programs and executing the operating system. It is
also important to note that, with processor utilization approaching
100%, average wait times and average queue lengths tend to grow
excessively.
• Throughput: It refers to the amount of work completed in unit time, i.e., the
number of user jobs executed in a unit of time. Throughput can be a measure
of scheduling efficiency.
• Turnaround time (T): It is defined as the time that elapses from the moment
a program is submitted until it is completed by the system. It is the time
spent in the system that includes the execution time and wait time.
• Waiting time (W): It is the time that a process spends waiting for resource
allocation due to contention with others in a multiprogramming system. It
is the penalty imposed for sharing resources with others.

W(x) = T(x) - x

where x is the service time, T(x) is the job's turnaround time, and W(x) is
the waiting time of a job requiring x units of service.
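As a worked example, a job requiring x = 4 units of service that completes 10 units after submission has T(x) = 10 and therefore W(x) = 10 - 4 = 6 units spent waiting. A one-line check:

```python
def waiting_time(turnaround, service):
    # W(x) = T(x) - x
    return turnaround - service

print(waiting_time(10, 4))  # 6
```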
• Response time: In an interactive system, the response time is defined
as the time that elapses from the moment the last character of a
command line launching a program or a transaction is entered until
the first result appears on the terminal.
Scheduler Design
• A typical scheduler design process involves selecting one or more primary
performance criteria and ranking them in the relative order of importance.
• Next step is to design a scheduling strategy that maximize the performance for
the specified set of criteria.
• No scheduling strategy can guarantee optimal performance; at best it can
deliver near-optimal performance, because computing the optimal strategy
at run time would itself incur prohibitive overhead.
• One of the challenges in designing a scheduler is that the performance criteria
often conflict with each other.
• Example: Increased processor utilization is usually achieved by increasing the
number of active processes but then response time deteriorates.
Scheduling Algorithms
• Scheduling algorithms define the way the processes are executed.
• Scheduling can be of two types: Pre-emptive and Non Pre-emptive.
• Non Pre-emptive: The running process retains the ownership of the
processor and the allocated resources until it voluntarily surrenders
the control to the operating system. No higher priority processes in
the ready queue will be able to pre-empt the currently running
process.
• Pre-emptive: Whenever a higher priority process enters the ready
queue, it would pre-empt or stop the currently running process and
take over the processor.
• First-Come, First-Served (FCFS) Scheduling
• No Preemption
• Shortest Remaining Time Next (SRTN)
• Both Preemption and Non Preemption
• Time Sliced Scheduling (Round Robin Scheduling)
• Processor time divided into slices
• Priority based Pre-emptive Scheduling (Event Driven Scheduling)
• Each process is assigned a priority level
• Priorities may be static or dynamic
• Multiple Level Queues (MLQ) Scheduling

Asynchronous online learning:
https://www.youtube.com/watch?v=fJEVP91dXaE&list=PL3-wYxbt4yCjpcfUDz-TgD_ainZ2K3MUZ&index=20
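As an illustration of non-preemptive scheduling, the following Python sketch simulates FCFS and reports each job's turnaround and waiting times using the definitions from the performance criteria above (the job set is made up for illustration):

```python
# Non-preemptive FCFS: jobs run to completion in arrival order.
# Each job is (name, arrival_time, service_time); values are illustrative.
def fcfs(jobs):
    time, results = 0, {}
    for name, arrival, service in sorted(jobs, key=lambda j: j[1]):
        start = max(time, arrival)   # wait for the CPU to become free
        time = start + service       # run to completion, no preemption
        turnaround = time - arrival              # T(x)
        results[name] = (turnaround, turnaround - service)  # (T(x), W(x))
    return results

jobs = [("A", 0, 5), ("B", 1, 2), ("C", 2, 4)]
print(fcfs(jobs))  # {'A': (5, 0), 'B': (6, 4), 'C': (9, 5)}
```

Note how the short job B waits behind the long job A: this penalty for short jobs is exactly what SRTN and round-robin scheduling are designed to reduce.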
Multiple Level Queues Scheduling
• In systems comprising a mixed workload (time-critical events, a multitude
of interactive users, and some very long jobs), the OS processes and device
interrupts may be subjected to event-driven scheduling, interactive programs
to round-robin scheduling, and batch jobs to FCFS or SRTN scheduling.
• This can be implemented by classifying the workload according to its
characteristics and maintaining separate process queues serviced by
different schedulers.
• Implementation of multiple level queues scheduling is as shown in the
next slide.
Multiple Level Queues Scheduling
Multiple-Level Queues with Feedback Scheduling
• To increase the effectiveness of the scheduling, multiple level queues with
feedback can be used.
• Instead of fixed classes being allocated to specific queues, the idea is to make
the process traverse through the system depending on its run-time behavior.
• The process may start with the top level queue. If the process completes
execution within the given time slice, then it departs from the system.
• If a process needs more than one time slice, then it is moved to the next
lower priority queue which would get lower percentage of processor time.
• If the process still does not complete execution, then it is moved to the next
lower priority queue.
• The idea of multiple-level queues with feedback is to give preferential
treatment to short processes and have the resource-consuming
processes slowly sink into the lower-level queues.
• The feedback in multiple-level queue mechanisms tends to rank the
processes dynamically according to the observed amount of attained
service, with a preference for those that have received less.
• The introduction of feedback makes scheduling adaptive and
responsive to the actual measured run-time behavior of processes, as
opposed to fixed classification.
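The demotion scheme described above can be sketched in a few lines of Python; the quantum sizes and jobs are illustrative:

```python
from collections import deque

# Multiple-level queues with feedback: a process that exhausts its time
# slice is demoted to the next lower-priority queue (which typically has
# a larger quantum but runs less often).
def mlfq(jobs, quanta=(2, 4, 8)):
    queues = [deque() for _ in quanta]
    for job in jobs:                 # job = [name, remaining_time]
        queues[0].append(job)        # every process starts at the top level
    order = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest nonempty
        name, remaining = queues[level].popleft()
        run = min(quanta[level], remaining)
        order.append((name, level, run))
        remaining -= run
        if remaining > 0:            # did not finish within the slice: demote
            queues[min(level + 1, len(queues) - 1)].append([name, remaining])
    return order

print(mlfq([["P1", 3], ["P2", 6]]))
# [('P1', 0, 2), ('P2', 0, 2), ('P1', 1, 1), ('P2', 1, 4)]
```

The short process P1 finishes after one small demotion, while the longer P2 sinks further and absorbs more of its service at the lower level, matching the preferential treatment of short processes described above.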
