
CHAPTER 1

OPERATING SYSTEM
OVERVIEW
OBJECTIVES
• Describe the key functions of an operating
system (OS)
• Discuss the evolution of the OS
• Explain each of the major achievements in
OS development
• Discuss the key design areas in the
development of modern OSs
TOPICS
• OS definition, objectives and functions
• Evolution of the OS
• Major achievements
• Characteristics of modern OSs
PART 1 – OS DEFINITION,
OBJECTIVES & FUNCTIONS
WHAT MAKES A COMPUTER?

• Hardware
• Application
WHAT IS AN OPERATING SYSTEM
(OS)?
• A program that controls the execution of application
programs
• An interface between applications and hardware

OS
LAYERS AND VIEWS
APPLICATION PROGRAM?
• Any program written to perform a specific day-to-day function
for the user (office apps, calculator, web browsing, etc.)
UTILITIES?
• A set of system programs that assist in program
creation, management of files, and control of
I/O
OPERATING SYSTEM
OBJECTIVES
• Convenience
• Makes the computer more convenient to use
• Efficiency
• Allows computer system resources to be used in an efficient
manner
• Ability to evolve
• Permits effective development, testing, and introduction of new
system functions without interfering with service
OPERATING SYSTEM OBJECTIVES #1 -
CONVENIENCE
The OS provides services in the following areas:

Program Development
 Editors and debuggers assist program development

Program Execution
 A number of tasks need to be performed in executing a program
 The OS handles the scheduling duties for the user

Access to I/O Devices
 Provides a uniform interface that hides the details, so the
programmer can access devices using simple reads and writes

Controlled Access to Files
 Protection mechanisms

System Access
 In a shared or public system, the OS controls access to the system
and may deny an application's request

Error Detection and Response
 Errors occur while the computer is running:
 Internal and external hardware errors / software errors
 The OS provides a response to the error condition with the least impact on running applications:
 Ending the program that caused the error
 Retrying the operation
 Reporting the error to the application

Accounting
 Collect statistics, monitor performance
 Anticipate future enhancements
 Used for billing users (in a multiuser system)
OPERATING SYSTEM OBJECTIVES #2 -
EFFICIENCY
So what are these RESOURCES?

1.Memory

2. I/O Devices

3.Processor
OPERATING SYSTEM AS RESOURCE MANAGER
• OS MANAGES these resources
 The OS functions the same way as ordinary computer software
 It is a suite of programs that is executed
 The difference is that the OS:
 directs the processor in the use of the other system
resources
 schedules the timing of programs / processes
OPERATING SYSTEM OBJECTIVES #3 – ABILITY TO
EVOLVE

Technology-wise, OSs evolve over time…


1. Hardware upgrades / new hardware emerges
2. New services
3. Fix a flaw from existing / previous versions

This constant need for change places
certain requirements on the current
generation of OS implementations:
 The OS should be modular in its
construction
• Clearly defined interfaces between
modules
• Well documented
PART 2 – OS EVOLUTION
EVOLUTION OF OPERATING
SYSTEMS
Serial Processing

Simple Batch System

Multiprogramming Batch System

Time sharing System


EVOLUTION OF OPERATING
SYSTEMS

• Major OSs will evolve over time because of:

• Hardware upgrades plus new types of hardware


• New services
• Fixes
EVOLUTION OF OPERATING
SYSTEMS
1. Serial processing
 No operating system
 programmers interacted
directly with the computer
hardware
 Computers ran from a console
with display lights, toggle
switches, some form of input
device, and a printer
 Users have access to the
computer in “series”
 One problem/task at a time
• Serial processing problems:
 Scheduling
 Most installations used a hardcopy sign-up sheet
to reserve computer time
 Computer time was wasted if the time reserved was
more than needed

 Setup time
 A large amount of the reserved time was spent just
setting up the hardware in order for the program
to run
 Setting up a program to run could take longer than
running the program itself
2. Simple batch system
 Early computers were very expensive
 important to maximize processor utilization
 The use of a piece of software called a monitor
1. the user no longer has direct access to the processor
2. a job is submitted to a computer operator, who batches jobs together and
places them on an input device
3. the program branches back to the monitor when finished
Monitor Point of View
 Monitor controls the sequence of events
 Resident Monitor is software
 always in memory
 Monitor reads in job and gives control
 Job returns control to monitor
Processor Point of View
 Processor executes instruction from the memory containing the monitor
 Executes the instructions in the user program until it encounters:
 an ending
 error condition

 With each job, instructions are


included in a primitive form of job
control language (JCL)
JOB CONTROL LANGUAGE (JCL)
A special type of programming language used to provide
instructions to the monitor:
 what compiler to use
 what data to use

Example job:
$JOB
$FTN
[FORTRAN program]
$LOAD
$RUN
[data]
$END
HARDWARE FEATURES FOR
SIMPLE BATCH SYSTEM

Memory protection for monitor

• while the user program is executing, it must not alter the memory area containing the
monitor

Timer

• prevents a job from monopolizing the system

Privileged instructions

• can only be executed by the monitor

Interrupts

• gives OS more flexibility in controlling user programs


SIMPLE BATCH
SYSTEMS…
Considerations of memory protection and privileged
instruction lead to the concept of MODES OF OPERATION

User Mode Kernel Mode


• user program executes in user • monitor executes in kernel
mode mode
• certain areas of memory are • privileged instructions may be
protected from user access executed
• certain instructions may not be • protected areas of memory may
executed be accessed
SIMPLE BATCH SYSTEM OVERHEAD
 Processor time alternates between execution of user
programs and execution of the monitor
 Sacrifices
some main memory is now given over to the monitor
some processor time is consumed by the monitor
 Despite overhead, the simple batch system improves
utilization of the computer
3. Multiprogrammed batch system

 How it came to be…


 Processor is often
idle
 even with automatic
job sequencing
 I/O devices are slow
compared to
processor
UNIPROGRAMMING
 The processor spends a certain amount of time executing, until it reaches
an I/O instruction;
 it must then wait until that I/O instruction concludes before proceeding
MULTIPROGRAMMING (MULTI-
PROGRAMMED BATCH SYSTEM)
 There must be enough memory to hold the OS (resident monitor) and one
user program
 When one job needs to wait for I/O, the processor can switch to the other
job, which is likely not waiting for I/O
MULTIPROGRAMMING (MULTI-
PROGRAMMED BATCH SYSTEM)

 Multiprogramming
also known as multitasking
memory is expanded to hold three, four, or
more programs and switch among all of them
Central theme for MODERN OS
EXAMPLE
UTILIZATION
HISTOGRAMS
EFFECTS ON RESOURCE
UTILIZATION
MULTIPROGRAMMING AFTERTHOUGHTS

 To have several jobs ready to run, they must be kept in main memory,
which in turn requires MEMORY MANAGEMENT
 Additionally, the processor must decide which job to run, a task which
requires a SCHEDULING algorithm.
TIME SHARING SYSTEMS
 Although multiprogrammed batch processing was efficient, it was still desirable
to have a system where the user could interact directly with the computer
 In the 60s the idea of having a dedicated personal computer was
non-existent; instead, the concept of time sharing was developed.
TIME SHARING SYSTEM
 Can be used to handle multiple interactive jobs
 Processor time is shared among multiple users
 Multiple users simultaneously access the system through
terminals;
 with the OS
interleaving the
execution of each
user program in a
short burst or
quantum of
computation
BATCH MULTIPROGRAMMING
VERSUS TIME SHARING
COMPATIBLE TIME-SHARING SYSTEMS
(CTSS)
Time Slicing
 System clock generates interrupts at a rate of approximately one every
0.2 seconds
 At each interrupt OS regained control and could assign processor to
another user
 At regular time intervals the current user would be preempted and another
user loaded in
 Old user programs and data were written out to disk
 Old user program code and data were restored in main memory when that
program was next given a turn
TIME SHARING SYSTEM
AFTER THOUGHTS
 The CTSS approach is considered primitive compared to
present day time sharing, but it was effective
 Time sharing and multiprogramming raised some new
problems for the OS.
1. If multiple jobs are in memory, they must be protected from
interfering with each other
2. With multiple users, the file system must be protected so that
only authorized users have access to a particular file.
3. The contention for resources must be handled
PART 3 – MAJOR ACHIEVEMENTS
MAJOR ACHIEVEMENTS
 Operating Systems are among the most complex pieces of
software ever developed
• Processes
• Memory management
• Information protection and security
• Scheduling and resource management
• System structure

• Taken together, these five areas span many of the key


design and implementation issues of modern operating
systems
PART 4 – CHARACTERISTICS OF
MODERN OS
CHARACTERISTICS OF
MODERN OS
 In response to
 New hardware development
 i.e. Multiprocessor machine, High-speed networks, Faster
processor and larger memory
 New software needs
 i.e. Multimedia applications, internet and web access,
client/server applications
THE OS KERNEL
 The kernel is the central module of an operating
system (OS). It is the part of the operating system that
loads first, and it remains in main memory.

 The kernel provides ALL THE ESSENTIAL
SERVICES required by other parts of the operating
system and by applications.

 The kernel code is usually loaded into a protected area


of the main memory to prevent it from being
overwritten by programs or other parts of the operating
system.
THE OS KERNEL
(CONTINUE)
• Typically, the kernel is responsible for:
1. memory management
2. process and task management
3. disk management.

• The kernel connects the


system hardware to the
application software.

• Every operating system has a


kernel.
• Basically, the kernel, just like the rest of the OS, is also
SOFTWARE
• It is a set of code written to do:
1. Memory management
2. Process / task management
3. Disk management
OS Codes

Kernel Codes
• Mem mgmt.
• Process mgmt.
• Disk mgmt

Monolithic Kernel
• Large kernel
• Most OS functionality provided in these large kernel
 scheduling, file system, networking, device drivers,
memory management and more
Operating Systems:
Internals and Design Principles, 6/E
William Stallings

CHAPTER 2
PROCESS DESCRIPTION AND CONTROL
• Define the term process
• Explain the relationship between processes and process
control blocks
• Explain the concept of process state and the state transitions
• Describe the purpose of the data structures and data structure
elements used by an OS to manage processes
• Describe the requirements for process control by the OS
• Describe the key security issues that relate to the OS
TOPICS

• What is a Process?
• Process States
• Process Description
• Process Control
• Security Issues
EARLIER CONCEPTS

 A computer platform
consists of a collection of
hardware resources

 Computer applications
are developed to perform
some task

 The OS was developed to provide a convenient,


feature-rich, secure, and consistent interface for
applications to use
OS MANAGEMENT OF
APPLICATION EXECUTION

1. Resources are made available


to multiple applications
2. The processor is switched
among multiple applications so
all will appear to be
progressing
3. The processor and I/O devices
can be used efficiently
PART 2.1 : WHAT IS PROCESS?
WHAT IS A “PROCESS”?

• A program in execution
• An instance of a program running on a computer
• The entity that can be assigned to and executed on a
processor
• A unit of activity characterized by the execution of a
sequence of instructions, a current state, and an associated
set of system resources
PROCESS ELEMENTS

A process consists of three components:
 Program code
 A set of data
 Execution context

The execution context:
While the program is executing, the process can be uniquely
characterized by a number of elements, including:

 Identifier: a unique identifier associated with this process,
to distinguish it from all other processes.
 State: is the process running / waiting / blocked?
 Priority: priority level relative to other processes.
 Program counter: the address of the next instruction in the
program to be executed.
 Memory pointers: pointers to the program code and data
associated with this process, plus any memory blocks shared
with other processes.
 Context data: data that are present in registers in the
processor while the process is executing.
 I/O status information: outstanding I/O requests, I/O devices
assigned to this process, a list of files in use by the
process, and so on.
 Accounting information: may include the amount of processor
time and clock time used, time limits, account numbers,
and so on.
PROCESS CONTROL BLOCK

 Contains those process elements
 With it, it is possible to interrupt
a running process and later
resume execution as if the
interruption had not occurred
 Created and managed by the
operating system
 The key tool that allows support for
multiple processes
PROCESS TRACE

• The behavior of an individual process is shown by listing


the sequence of instructions that are executed
• This list is called a Trace
• Dispatcher is a small program which switches the
processor from one process to another
PROCESS EXECUTION

• Consider three processes being executed


• All are in memory (plus the dispatcher)
TRACE FROM THE
PROCESSES' POINT OF VIEW:
• Assume the OS allows a process to continue execution for a
maximum of six instruction cycles, after which the process is
interrupted; this prevents a single process from
monopolizing processor time.
• Figure 3.3 shows the interleaved traces resulting from the
52 instruction cycles.
TRACE FROM PROCESSORS
POINT OF VIEW

Timeout
I/O
4 REASONS FOR PROCESS
CREATION

NEW BATCH JOB
 Submission of a batch job
 The OS is provided with a batch job control stream

INTERACTIVE LOGON
 A user logs on to the system

CREATED BY OS TO PROVIDE A SERVICE
 The OS creates a process to perform a
function on behalf of a user program
(e.g. printing)

PROCESS SPAWNING
 A user program can dictate the creation of a
number of processes
 PARENT PROCESS: the original, creating process
 CHILD PROCESS: the newly created process
PROCESS TERMINATION

• There must be a means for a process to indicate


its completion
1. A batch job should include a HALT
instruction or an explicit OS service call for
termination
2. For an interactive application, the action of the
user will indicate when the process is
completed (e.g. log off, quitting an
application)
Table 3.2
Reasons
for Process
Termination
TWO-STATE PROCESS MODEL

• When the OS creates a new process, it enters that
process into the Not Running state.
• The existence of the process is known to the OS, and the
process is waiting for an opportunity to execute.
• The running process will be interrupted from time to
time, and the dispatcher will select a new process to run.
• The former process moves from the Running state to the Not
Running state, and another process moves to the Running
state.
TWO-STATE PROCESS MODEL

• Process may be in one of two states


• Running
• Not-running
ABOUT THE QUEUEING DIAGRAM…

• Processes that are not running must be kept in some sort of
queue, waiting their turn to be executed.
• There is a single queue in which each entry is a pointer to a
particular process.
• The queue is a first-in-first-out (FIFO) list
QUEUING DIAGRAM

Etc … processes are moved by the dispatcher of the OS to the CPU, then back to the
queue, until the task is completed
TWO STATE PROCESS MODEL

• The single queue suggested would be effective if all
processes were always ready to execute
• BUT it is inadequate, because processes in the
Not Running state are either
• ready to execute, or
• blocked, waiting for an I/O operation to complete.
• Using a single queue, the dispatcher has to scan the
queue looking for the process that is not blocked and
has been waiting in the queue the longest.
TWO STATE PROCESS MODEL
(CONT)

• A way to tackle this situation is to split the Not Running state into
two different states, which are:
• Ready state: Ready to execute
• Blocked state: waiting for I/O
• Now, instead of two states we have three states: Ready,
Running, Blocked

FIVE-STATE PROCESS MODEL


Adding two additional states that are useful for
process management:
New, Exit
FIVE-STATE
PROCESS MODEL

NEW

• A process that has just been created but has not yet been
admitted to the pool of executable processes by the OS.
• Typically, a new process has not yet been loaded into main
memory (RAM), although its process control block has been created.
FIVE-STATE
PROCESS MODEL

READY

• A process that is prepared to execute when given the opportunity.


FIVE-STATE
PROCESS MODEL

RUNNING

The process that is currently being executed.


**For this chapter, we will assume a computer with a single processor,
so at most one process at a time can be in this state.
FIVE-STATE
PROCESS MODEL

BLOCKED

A process that cannot execute until some event occurs,


such as the completion of an I/O operation.
FIVE-STATE
PROCESS MODEL

EXIT

A process that has been released from the pool of executable


processes by the OS, either because it halted or because it aborted for
some reason.
PROCESS STATES FOR TRACE OF FIGURE 3.4
USING TWO QUEUES

As each process is admitted to the system, it is placed in the Ready queue. When it is
time for the OS to choose another process to run, it selects one from the Ready
queue.
USING TWO QUEUES

When a running process is removed from execution, it is either terminated or


placed in the Ready or Blocked queue, depending on the circumstances
USING TWO QUEUES

Finally, when an event occurs, any process in the Blocked queue that has been
waiting on that event is moved to the Ready queue.
MULTIPLE BLOCKED QUEUES
SUSPENDED PROCESSES

• When a process is to be executed, all of its
components need to be in main
memory.
• The processor is faster than I/O operations
• so all processes could end up waiting for I/O.
• Thus even with multiprogramming, the processor
could be idle most of the time.
• How to solve it?
• How to solve it?
SUSPENDED PROCESSES

SOLUTION

Expand the size of main memory
• Memory can accommodate more
processes
• Disadvantages:
• Cost
• Tends to result in larger processes, not
more processes

Swapping
• Swap / move some of these processes to
disk to free up more space in main
memory
SWAPPING

• involves moving part or all of a process from
main memory to disk
• when none of the processes in main memory
is in the Ready state, the OS swaps one of
the blocked processes out to disk, into a
suspend queue
CHARACTERISTICS OF A
SUSPENDED PROCESS

1. The process is not immediately available
for execution
 Another process is occupying the processor

2. The process may or may not be waiting on
an event
 Another process's output
 Resource availability

3. The process was placed in a suspended state by
an agent: either itself, a parent process, or the
OS, for the purpose of preventing its execution

4. The process may not be removed from this state
until the agent explicitly orders the removal
ONE SUSPEND STATE

 A suspended process is SWAPPED OUT and


resides in storage
TWO SUSPEND STATES

 The single Suspend state is split into two, depending on
whether the swapped-out process is still waiting on an
event: Blocked/Suspend and Ready/Suspend
REASONS FOR PROCESS SUSPENSION

SWAPPING
The OS needs to release sufficient main
memory to bring in a process that is ready for
execution

OTHER OS REASON
Suspend a process that is suspected of causing
a problem

INTERACTIVE USER REQUEST
A user wishes to suspend a program for
debugging purposes or in connection with the
use of a resource

TIMING
A process is executed periodically and is
suspended while waiting for the next scheduled run

PARENT PROCESS REQUEST
The parent process wishes to examine or
coordinate the activities of its descendant
processes
PROCESSES
AND RESOURCES
OPERATING SYSTEM
CONTROL STRUCTURES

• In order for the OS to be able to manage processes and


resources, it must have information about the current status
of each process and resource.
• Tables are constructed for each entity the operating system
manages
OS CONTROL TABLES
MEMORY TABLES

• Memory tables are used to keep track of both main and


secondary memory.
• Must include this information:
• Allocation of main memory to processes
• Allocation of secondary memory to processes
• Protection attributes for access to shared memory regions
• Information needed to manage virtual memory
I/O TABLES

• Used by the OS to manage the I/O devices and channels of


the computer.
• The OS needs to know
• Whether the I/O device is available or assigned
• The status of I/O operation
• The location in main memory being used as the source or
destination of the I/O transfer
FILE TABLES

• These tables provide information about:


• Existence of files
• Location on secondary memory
• Current Status
• other attributes.
• Sometimes this information is maintained by a file
management system
PROCESS TABLES

• To manage processes the OS needs to know details of each process
• Current state
• Process ID
• Location in memory
• etc.
• Process control block
• The process image is the collection of program, data, stack, and attributes
ROLE OF THE
PROCESS CONTROL BLOCK

• The most important data structure in an OS


• It defines the state of the OS
• Process Control Block requires protection
• A faulty routine could cause damage to the block destroying the
OS’s ability to manage the process
• Any design change to the block could affect many modules of
the OS
MODES OF EXECUTION

• Most processors support at least two modes of execution


• User mode
• Less-privileged mode
• User programs typically execute in this mode
• System mode
• More-privileged mode
• Kernel of the operating system
PROCESS CREATION

• Once the OS decides to create a new process, it:
• Assigns a unique process identifier
• Allocates space for the process
• Initializes the process control block
• Sets up appropriate linkages
• Creates or expands other data structures
SWITCHING PROCESSES

• Several design issues are raised regarding process switching


• What events trigger a process switch?
• We must distinguish between mode switching and process
switching.
• What must the OS do to the various data structures under its
control to achieve a process switch?
WHEN TO SWITCH PROCESSES

A process switch may occur any time that the OS has gained control from the
currently running process. Possible events giving OS control are:

Mechanism Cause Use


Interrupt External to the execution of the Reaction to an asynchronous
current instruction external event

Trap Associated with the execution of Handling of an error or an


the current instruction exception condition
Supervisor call Explicit request Call to an operating system
function

Table 3.8 Mechanisms for Interrupting the Execution of a Process


SECURITY ISSUES

• An OS associates a set of privileges with each process.
• The highest level is administrator, supervisor, or root access.

A key security issue in the design of any OS
is to prevent, or at least detect, attempts by a
user or by malware to gain
unauthorized privileges on the system, and in
particular to gain root access
TWO SYSTEM ACCESS THREATS

• Intruders
 Often referred to as a hacker or cracker (a person)
 Classes:
• Masquerader: not authorized to use the computer;
penetrates a system's access controls to exploit a legitimate user's account
• Misfeasor: a legitimate user who accesses data, programs, or resources
for which that access is not authorized, or who is authorized for such
access but misuses the privileges given
• Clandestine user: an individual who seizes supervisory control of the
system and uses this control to evade auditing and access controls
or to suppress audit collection
 Objective is to gain access to a system or to increase the range of
privileges accessible on a system
 Attempts to acquire information that should have been protected

• Malicious Software
 The most sophisticated type of threat to computer systems (a program)
 Categories:
• those that need a host program (parasitic)
• viruses, logic bombs, backdoors
• those that are independent
• worms, bots
 Can be relatively harmless or very damaging
COUNTERMEASURES:
INTRUSION DETECTION

“A security service that monitors and analyzes system events for the purpose
of finding, and providing real-time or near real-time warning of, attempts
to access system resources in an unauthorized manner” (RFC 2828)
 May be host or network based
• It comprises three logical components:

sensors analyzers user interface

• IDSs are typically designed to detect human intruder behavior as well as


malicious software behavior
COUNTERMEASURES:
AUTHENTICATION

“The process of verifying an identity claimed by or for a system
entity.” (RFC 2828)
• Two Stages:
• Identification
• Verification
• Four Factors:
• something the individual knows
• something the individual possesses
• something the individual is (static biometrics)
• something the individual does (dynamic biometrics),
e.g. voice pattern, handwriting characteristics
COUNTERMEASURES:
ACCESS CONTROL

 A security policy that specifies:


1. who or what may have access to each specific system resource
2. the type of access that is permitted in each instance

 Mediates between a user and system resources


 A security administrator maintains an authorization database
 An auditing function monitors and keeps a record of user accesses to system
resources
COUNTERMEASURES:
FIREWALLS

A dedicated computer that:
• interfaces with computers outside a network
• has special security precautions built into it to protect sensitive
files on computers within the network

Design goals of a firewall:
• all traffic must pass through the firewall
• only authorized traffic will be allowed to pass
• the firewall itself is immune to penetration

Chapter 3
Concurrency: Mutual Exclusion
Topic

• Definition and Principles of Concurrency and


Mutual Exclusion
• Key Terms
• Process Interaction
• Control Problems – Mutual exclusion, Deadlock,
Starvation and Data coherence
Introduction

 Basically we will get to know more about PROCESSES
 In the computer there are many PROCESSES;
each needs RESOURCES to finish!
 MUTUAL EXCLUSION is a method to
ensure each PROCESS gets these
RESOURCES
 We will get to know how the OS provides
MUTUAL EXCLUSION to PROCESSES
Principles of Concurrency

Dining Philosophers problem

Five silent philosophers sit at


a round table with bowls of
spaghetti.
Forks are placed between
each pair of adjacent
philosophers.
Each philosopher must
alternately think and eat.
However, a philosopher can
only eat spaghetti when he has
both left and right forks.
Each fork can be held by only
one philosopher and so a
philosopher can use the fork
only if it's not being used by
another philosopher.
After he finishes eating, he
needs to put down both
forks so they become
available to others.
A philosopher can grab the
fork on his right or the one on
his left as they become
available, but can't start
eating before getting both
of them.
Definition of Concurrency

A property of a system in which several processes
are executing simultaneously, and potentially
interacting with each other.
Key Terms

Atomic Critical
Deadlock
Operation Section

Mutual Race
Live lock
Exclusion Conditions

Starvation
Atomic Operation

• A function / action implemented as a sequence of one or more
instructions that appears to be indivisible (no other process can
see an intermediate state or interrupt the operation)
• The sequence is guaranteed to execute as a group, or not execute
at all.
• Atomicity guarantees isolation from
concurrent processes
• Atomic operations assist in implementing
mutual exclusion
Critical Section

• A section of code within a process that requires access to
shared resources
• Must not be executed while another process is in a
corresponding section of code
• Each process has its own critical section.

“Critical for me to get the fork!”
“Okla, you eat first”
Deadlock

• A situation in which two or more processes are unable to
proceed because each is waiting for one of the others to
do something, and thus neither ever does

“I have 1 fork”
“I am also waiting for a fork”
“I am waiting for a fork”
“So none of you can let go of one fork?”
Live lock

• A situation in which two or more processes continuously
change their states in response to changes in the
others, without doing anything useful

“Blocked from eating.. then prepare to eat.. but I have only one fork”
“Waiting for my forks. Searching for an available fork”
Mutual exclusion

• refers to the requirement of ensuring that no two concurrent
processes are in their critical sections at the same time.
• It is a requirement that prevents simultaneous access to a
shared resource; it is used in concurrency control and to
prevent race conditions.

“Ok, I eat first ya? You guys wait. Later I will put down my forks”
“After this it is my turn”
Race condition

• It becomes a bug when events do not
happen in the order the programmer
intended.
• The term originates with the idea of two
signals racing each other to influence
the output first.

“I want those forks first!”
“I want the forks too!”
Starvation

• A situation in which a runnable program is overlooked
indefinitely by the scheduler; although it is able to
proceed, it is never chosen.

“I haven’t eaten! ”
“Burp.. I am full”
“I am full too!”
“Hmm.. I am full”

Basic characteristic of a
multiprogramming System

The relative speed of execution of


processes depends on :
1. activities of other processes
2. the way the OS handles interrupts
3. scheduling policies of the OS
Difficulties in multiprogramming
system

• Sharing of global variable / resources


• Optimally managing the allocation of resources
• Difficult to locate programming errors as
results are not deterministic and reproducible.
A Simple Example

void echo()
{
chin = getchar();   /* chin and chout are shared global variables */
chout = chin;
putchar(chout);
}
A Simple Example:
On a Multiprocessor

Process P1                 Process P2
.                          .
chin = getchar();          .
.                          chin = getchar();
chout = chin;              chout = chin;
putchar(chout);            .
.                          putchar(chout);
.                          .

With this interleaving, P2's getchar() overwrites the shared chin
before P1 copies it into chout, so the character read by P1 is
lost and both processes echo P2's character.
Enforce Single Access (avoids
concurrency problems)

• If we enforce a rule that only one process may


enter the function at a time then:
• P1 & P2 run on separate processors
• P1 enters echo first,
• P2 tries to enter but is blocked – P2 suspends
• P1 completes execution
• P2 resumes and executes echo
Race Condition

• A race condition occurs when


• Multiple processes or threads read and write data
items
• They do so in a way where the final result depends
on the order of execution of the processes.
• The output depends on who finishes the race
last.
Example of race condition issue

• Two processes P3 and P4 share the global
variables B and C, with initial values
B = 1 and C = 2.
P3  B = B + C;   P4  C = B + C.
• These two processes update different
variables.
• The final values of the two variables depend
on the order in which the two processes
execute these assignments.
Design and management issues raised by the existence of concurrency:

The OS must:
1. Be able to keep track of various processes
2. Allocate and de-allocate resources for each active process
3. Protect the data and physical resources of each process
   against interference by other processes
4. Ensure that processes and their outputs are independent of the
   processing speed relative to the speed of other concurrent
   processes

No matter how fast the processor is, its TIME will be divided
EQUALLY among multiple processes
Reasons for Conflict

Concurrent processes come into conflict when


they are competing for use of the same resource
• for example: I/O devices, memory, processor time, clock

As a result of competing processes, THREE
CONTROL PROBLEMS must be managed:
1. Need for mutual
exclusion
2. Deadlock
3. Starvation
Mutual Exclusion

• Suppose two or more processes require access to a


single non-sharable resource, such as a printer.
• During the course of execution, each process will be
sending commands to the I/O device, receiving status
information, sending data, and/or receiving data.
o Refer to such a resource as a critical resource, and the
portion of the program that uses it a critical section of the
program.
• Only one process at a time is allowed in the critical
section for a resource
Requirements for
Mutual Exclusion

• A process that halts in its noncritical section


must do so without interfering with other
processes
• No deadlock or starvation
Requirements for
Mutual Exclusion
1. Must be enforced
2. A halted process must NOT interfere with other
processes
3. Each process will get its turn eventually (no deadlock /
starvation)
4. A process must be given access to a critical section when
there is no other process using it
5. No assumptions are made about relative process speeds
or number of processes
6. A process remains inside its critical section for a finite
time only (limited time)
Deadlock

• Will be specifically discussed in next chapter.


• Consider two processes, P1 and P2, and two
resources, R1 and R2.
• Suppose that each process needs access to
both resources to perform its function.
Deadlock cont

• It is possible to have the following situation:


• P1 requests R2, receives it
• P2 requests R1, receives it
• P1 requests R1, queues up waiting for P2
to release it
• P2 requests R2, queues up waiting for P1
to release it
• Each process is waiting for the resource
held by the other process.
Starvation

• Suppose that three processes (P1, P2, P3) each


require periodic access to resource R.
• Consider the situation:
• P1 is using the resource, both P2 and P3 are
delayed waiting for the resources.
• When P1 exits its critical section, either P2 or P3
should be allowed to access R.
• Assume the OS grants the request to P3, and P1 again
requires access.
• There is a chance that P2 may be indefinitely denied
access to the resource.
Data Coherence

• Consider an application where various data


items may be updated.
• Suppose two items of data a and b are to be
maintained in the relationship a=b.
• So, any program that updates one value, must
also update the other to maintain the
relationship (a=b)
Data Coherence cont

• Consider the following two processes:

P1:
a=a+1
b=b+1

P2:
b=2*b
a=2*a
Data Coherence cont

• If each process is taken separately, it will leave
the shared data in a consistent state.
• Consider the following concurrent execution, in
which these two processes respect mutual
exclusion on each individual data item (a and b):
a=a+1
b=2*b
b=b+1
a=2*a
Data Coherence cont

• At the end of the execution sequence, the


condition a = b no longer holds.
• E.g. if we start with a = b = 1, at the end of the
execution sequence we have a = 4 and b = 3.
• How to solve?
• By declaring the entire sequence in each process to
be a critical section.
PROCESS INTERACTION I:
UNAWARE OF EACH OTHER
What?
 These are independent processes that
1. UNAWARE OF are not intended to work together.
EACH OTHER  OS needs to be concerned about
competition for resources
Relationship with other process
 Resource Competition

Each doing its own thing


Example
 Different program just competing for
resource

Each requiring resource


PROCESS INTERACTION II:
INDIRECTLY AWARE OF EACH OTHER
What?
 These processes are not necessarily
aware of each other by their
2. INDIRECTLY AWARE respective process IDs but they
OF EACH OTHER share access to some object, such
as an I/O buffer
Relationship with other process
 Cooperation by sharing resources

Example
 Sharing resources such as i/o buffer

3. PROCESS INTERACTION III:
DIRECTLY AWARE OF EACH OTHER
What?
 These processes that are able to communicate
with each other by process ID and are
designed to work jointly on some activity.
 Such processes exhibit cooperation
Relationship with other process
 Cooperation by communication
Example
Each doing its own thing  Designed from the start to work
together

Each requiring resource


Summary of Process Interaction
OPERATING SYSTEMS:
INTERNALS AND DESIGN PRINCIPLES, 6/E
WILLIAM STALLINGS

CHAPTER 4
CONCURRENCY: DEADLOCK AND
STARVATION
TOPIC

• Principles of Deadlock
• Resource Categories
• Deadlock Strategies
– Deadlock Detection
– Deadlock Prevention
– Deadlock Avoidance
DEADLOCK

• Permanent blocking of a set of processes that either compete for


system resources or communicate with each other
• A set of processes is deadlocked when each process in the set is
blocked (awaiting an event that can only be triggered by another
blocked process in the set)
• Since none of the events is ever triggered, deadlock is:
 Permanent
 Without an efficient solution in the general case (hard reset, kill,
OS needs to intervene)
DEADLOCK
Potential Deadlock

[Figure: four cars approach an intersection; each needs two quadrants
to pass through – one car needs quadrants A and B, the next B and C,
the next C and D, the last D and A]

Actual Deadlock

[Figure: each car has entered its first quadrant and halts until the
quadrant held by the neighbouring car is free – none can proceed]
2 Resource Categories

Reusable
• can be safely used by only one process at a time and is not
depleted by that use
• examples: processors, I/O channels, main and secondary memory,
devices, and data structures such as files, databases, and
semaphores

Consumable
• one that can be created (produced) and destroyed (consumed)
• examples: interrupts, signals, messages, and information inside
I/O buffers
REUSABLE RESOURCES DEADLOCK

 consider two processes that compete for


exclusive access to a disk file D and a drive T.
 The programs engage in the operations depicted
in the figure
Reusable Resources Deadlock

Deadlock occurs if each process holds one


resource and requests the other. For example:
D is still locked by P
T is still locked by Q
REUSABLE RESOURCES

• Space is available for allocation of 200Kbytes, and the following


sequence of events occur

P1 P2
... ...
Request 80 Kbytes; Request 70 Kbytes;
... ...
Request 60 Kbytes; Request 80 Kbytes;

• Deadlock occurs if both processes progress to their second request


CONSUMABLE RESOURCES DEADLOCK
• Consider a pair of processes, in which each process attempts to
receive a message from the other process and then send a message
to the other process:

• Deadlock occurs if the Receive is blocking
– the receiving process is blocked until the message is received
RESOURCE ALLOCATION
GRAPHS
• Directed graph that depicts a state of:
– the system of resources and processes
– with each process and each resource represented by a node
Resource Allocation Graphs

A graph edge directed from a process to a resource indicates a


resource that has been requested by the process but not yet
granted ( Figure a ).

A graph edge directed from a reusable resource node dot to a


process indicates a request that has been granted ( Figure b );
that is, the process has been assigned one unit of that resource
RESOURCE ALLOCATION GRAPHS

• Figure C shows an example deadlock. There is only one


unit each of resources Ra and Rb.
• Process P1 holds Rb and requests Ra, while P2 holds Ra
but requests Rb
RESOURCE ALLOCATION GRAPHS

Figure d has the same topology as Figure c , but there is


no deadlock because multiple units of each resource
are available.
Resource Allocation Graphs

 The resource allocation graph in this figure corresponds to


the deadlock situation in our earlier example.
 Note that in this case, we do not have a simple situation in
which two processes each have one resource the other
needs. Rather, in this case, there is a circular chain of
processes and resources that results in deadlock.
Exercise!!
 There are 3 processes PA, PB, PC and 3 resources:
 2 unit of R1
 1 unit of R2
 1 unit of R3
 Draw the resource allocation graph for:
 PA hold R1, request for R3
 PB hold R2, request for R1
 PC hold R3, request for R2
 Deadlock exist or not? Give your reason for this.
DEADLOCK CONDITIONS
• Mutual exclusion
– Only one process may use a resource at a time
• Hold-and-wait
– A process may hold allocated resources while awaiting assignment of others
• No preemption
– No resource can be forcibly removed from a process holding it
• Circular wait
– A closed chain of processes exists, such that each process holds at least one
resource needed by the next process in the chain
POSSIBILITY OF DEADLOCK

• Mutual Exclusion
• No preemption
• Hold and wait

 The first three conditions are necessary but not sufficient for a
deadlock to exist.
 For deadlock to actually take place, a fourth condition is required
EXISTENCE OF DEADLOCK

• Mutual Exclusion
• No preemption
• Hold and wait
• Circular wait
3 Approaches on Dealing with Deadlock

Prevent Deadlock
• adopt a policy that eliminates one of the conditions

Avoid Deadlock
• make the appropriate dynamic choices based on the
current state of resource allocation

Detect Deadlock
• attempt to detect the presence of deadlock and take
action to recover
DEADLOCK PREVENTION
Deadlock Prevention Strategy
 Design a system in such a way that the possibility of
deadlock is excluded
 Two main methods:
1. Indirect
 prevent the occurrence of one of the three necessary
conditions
2. Direct
 prevent the occurrence of a circular wait (the fourth
condition)
Indirect Prevention

Mutual Exclusion
• cannot be disallowed: if access to a resource requires mutual
exclusion, then it must be supported by the OS

Hold and Wait
• require that a process request all of its required resources at
one time, blocking the process until all requests can be granted
simultaneously
Indirect Prevention

No Preemption

If a process holding certain resources is denied a further


request, that process must release its original resources
and request them again

OS may preempt the second process and require it to


release its resources
Direct Prevention

Circular Wait

define a linear ordering of resource types


Havender's Linear Ordering
• Each resource type is labeled with a value; resources commonly
needed at the beginning of a task have lower values than those
that typically come at the end of a task.
• A process may request and hold resources in an ascending order
only.
• This means that a process may not request any resource of a
lower value (ordering value) so long as any resources of a higher
value are being held.
• For example, while process P1 has possession of R4 it may not
request an R3 or an R2.

[Figure: resources stacked in ascending order – R1 (I/O), R2 (I/O),
R3 (MEM), R4 (MEM), R5 (CPU), R6 (CPU); a process may not request
any resource of a lower value if it still holds a higher one]
DEADLOCK AVOIDANCE
Sometimes it is not feasible to prevent deadlock.
 This can occur when we need the most effective use of
all our resources.
Allows the three necessary conditions but makes
judicious choices to assure that the deadlock point is
never reached.
Avoidance allows more concurrency than prevention
A decision is made dynamically whether the current
resource allocation request will, if granted, potentially
lead to a deadlock
Requires knowledge of future process requests
Two approaches to
Deadlock Avoidance

Deadlock Avoidance

Resource Allocation Denial
• do not grant an incremental resource request to a process if this
allocation might lead to deadlock

Process Initiation Denial
• do not start a process if its demands might lead to deadlock
i. Resource Allocation Denial
• Referred to as the banker’s algorithm
• State of the system reflects the current allocation of
resources to processes
1. Safe state
There is at least one sequence of
resource allocations to processes that
does not result in a deadlock
2. Unsafe state is a state that is not
safe
There is no sequence of resource allocations
guaranteed to let every process run to completion
Determination of a Safe State
• State of a system consisting of four processes and three resources
• Allocations have been made to the four processes

 Can any of the four processes be run to completion with the


resources available?
Determination of a Safe State
INITIAL STATE

    Claim Matrix        Allocation Matrix      Resource Vector
    (what the process   (what is already       (total resources
    wants)              allocated)             in the computer)
     R1 R2 R3            R1 R2 R3               R1 R2 R3
P1    3  2  2       P1    1  0  0                9  3  6
P2    6  1  3       P2    6  1  2
P3    3  1  4       P3    2  1  1              Available Vector
P4    4  2  2       P4    0  0  2              (what is left)
                                                R1 R2 R3
                                                 0  1  1

Available Vector = Resource Vector – Allocation Matrix


Determination of a Safe State
CAN ANY OF THE FOUR PROCESSES BE RUN TO
COMPLETION WITH THE RESOURCES AVAILABLE?
• Steps needed to check either it is a safe state or not:
1. Construct the Need Matrix
 Need = Claim Matrix – Allocation Matrix
    Need Matrix         Claim Matrix        Allocation Matrix
     R1 R2 R3            R1 R2 R3            R1 R2 R3
P1    2  2  2       P1    3  2  2       P1    1  0  0
P2    0  0  1       P2    6  1  3       P2    6  1  2
P3    1  0  3   =   P3    3  1  4   -   P3    2  1  1
P4    4  2  0       P4    4  2  2       P4    0  0  2

(what is still needed)  (what the process wants)  (what is already allocated)

o it shows how many resources needed by each program in order for the process
to complete execution.
Determination of a Safe State

2. To check either the current state is safe or not safe,


o Compare the content of available vector with the need matrix.
o Is there any process which can be allocated with the available resources and the
process can run to completion?

R1 R2 R3
P1 2 2 2
R1 R2 R3
P2 0 0 1
0 1 1
P3 1 0 3
P4 4 2 0 Available Vector
What is left
Need Matrix

o For this example the system has 1 unit of R2 and 1 unit of R3.
Based on the need matrix, P2 can run to completion.
Determination of a Safe State

3. After a process runs to completion, it will release all the resources to system.
Construct new available vector.
o New AV = Current AV + allocation row of the chosen process (P2)
         = (0 1 1) + (6 1 2)
         = (6 2 3)

    Allocation Matrix (what is already allocated)
     R1 R2 R3
P1    1  0  0        Previous balance of A.V:  0 1 1
P2    6  1  2        New A.V (what is left):   6 2 3
P3    2  1  1
P4    0  0  2

 Repeat step 2 and 3 until all processes complete execution.


Determination of a Safe State – P1 Executes

    Claim Matrix        Need Matrix          Available Vector
     R1 R2 R3            R1 R2 R3            (previous balance)
P1    3  2  2       P1    2  2  2             R1 R2 R3
P2    0  0  0       P2    0  0  0              6  2  3
P3    3  1  4       P3    1  0  3
P4    4  2  2       P4    4  2  0

    Allocation Matrix                        New Available Vector
     R1 R2 R3                                (what is left)
P1    1  0  0                                 R1 R2 R3
P2    0  0  0                                  7  2  3
P3    2  1  1
P4    0  0  2
Determination of a Safe State – P3 Executes

    Claim Matrix        Need Matrix          Available Vector
     R1 R2 R3            R1 R2 R3            (previous balance)
P1    0  0  0       P1    0  0  0             R1 R2 R3
P2    0  0  0       P2    0  0  0              7  2  3
P3    3  1  4       P3    1  0  3
P4    4  2  2       P4    4  2  0

    Allocation Matrix                        New Available Vector
     R1 R2 R3                                (what is left)
P1    0  0  0                                 R1 R2 R3
P2    0  0  0                                  9  3  4
P3    2  1  1
P4    0  0  2
Determination of a Safe State – P4 Executes

    Claim Matrix        Need Matrix          Available Vector
     R1 R2 R3            R1 R2 R3            (previous balance)
P1    0  0  0       P1    0  0  0             R1 R2 R3
P2    0  0  0       P2    0  0  0              9  3  4
P3    0  0  0       P3    0  0  0
P4    4  2  2       P4    4  2  0

    Allocation Matrix                        New Available Vector
     R1 R2 R3                                (what is left)
P1    0  0  0                                 R1 R2 R3
P2    0  0  0                                  9  3  6
P3    0  0  0
P4    0  0  2
Determination of a Safe State – P4 Runs to Completion

    Claim Matrix        Allocation Matrix    Available Vector
     R1 R2 R3            R1 R2 R3             R1 R2 R3
P1    0  0  0       P1    0  0  0              9  3  6
P2    0  0  0       P2    0  0  0
P3    0  0  0       P3    0  0  0            Resource Vector
P4    0  0  0       P4    0  0  0             R1 R2 R3
                                               9  3  6

 When all processes run to completion, the value of the available
vector is equal to the resource vector.
Thus, the state defined originally is a safe state.
Determination of an Unsafe State

    Claim Matrix        Allocation Matrix    Resource Vector
     R1 R2 R3            R1 R2 R3             R1 R2 R3
P1    3  2  2       P1    1  0  0              9  3  6
P2    6  1  3       P2    5  1  1
P3    3  1  4       P3    2  1  1            Available Vector
P4    4  2  2       P4    0  0  2             R1 R2 R3
                                               1  1  2

P1 requests 1 additional unit each of R1 and R3.

Try to calculate whether granting this request leaves a
safe state or otherwise !!
Determination of an Unsafe State
P1 requests 1 unit each of R1 and R3. If the request were granted:

    Claim Matrix        Allocation Matrix    Available Vector
     R1 R2 R3            R1 R2 R3             R1 R2 R3
P1    3  2  2       P1    2  0  1              0  1  1
P2    6  1  3       P2    5  1  1
P3    3  1  4       P3    2  1  1
P4    4  2  2       P4    0  0  2

NEED = CLAIM - ALLOCATION
    Need Matrix
     R1 R2 R3
P1    1  2  1
P2    1  0  2
P3    1  0  3
P4    4  2  0

Is this a safe state? The answer is no, because each process will
need at least one additional unit of R1, and there are none
available. Thus, on the basis of deadlock avoidance, the request by
P1 should be denied and P1 should be blocked.

The deadlock avoidance strategy does not predict deadlock with
certainty; it merely anticipates the possibility of deadlock and
assures that there is never such a possibility.
EXERCISE
1. Find the Available Vector


2. Find the Need Matrix
3. Is this system is currently safe or in
unsafe state? If it is in a safe state, give
the order of process to finish it and if it is
unsafe, state your reason.
SOLUTION

• Available Vector = 1 1 2 3 1
• Need Matrix =
      R1 R2 R3 R4 R5
P1     0  4  3  4  3
P2     0  3  3  2  2
P3     0  2  2  1  0
P4     2  0  5  0  3
P5     0  0  0  0  0

Process to complete:
P5, AV = 1 3 4 3 4
P2, AV = 2 4 6 3 4
P3, AV = 2 5 7 3 4
P4, AV = 2 6 8 5 5
P1, AV = 3 7 8 5 5
Banker’s Algorithm
 Concept: ensure that the system of processes and
resources is always in a safe state
 Mechanism: when a process makes a request for a set
of resources:
1. Assume that the request is granted
2. Update the system state accordingly
3. Determine if the result is a safe state
 If so, grant the request;
 If not, block the process until it is safe
to grant the request
Deadlock Avoidance Advantages
It is not necessary to pre-empt and rollback
processes, as in deadlock detection
 Processes may still hold on to resources
It is less restrictive than deadlock prevention
 Processes only have to be blocked if their
continuation might result in a deadlock situation
4 Deadlock Avoidance Restrictions
1. Maximum resource requirement for each process
must be stated in advance
2. Processes under consideration must be
independent and with no synchronization
requirements
3. There must be a fixed number of resources to
allocate
4. No process may exit while holding resources
DEADLOCK DETECTION
Deadlock Strategies

Deadlock prevention strategies are very


conservative
• limit access to resources by imposing restrictions on
processes

Deadlock detection strategies do the opposite


• resource requests are granted whenever possible
Deadlock Detection Algorithms

 Deadlock checks can be made as frequently as each resource request
or, less frequently, depending on how likely it is for a deadlock
to occur

Advantages:
 it leads to early detection
 the algorithm is relatively simple
 Disadvantage
 frequent checks consume considerable processor time
Deadlock Detection Algorithms
1. Mark each process that has a row in the Allocation
matrix of all zeros.
 This process does not hold any resources
2. Initialize a temporary vector W to equal to Available
vector.
3. Find an index i such that process i is currently
unmarked and the ith row of Q is less than or equal to W
(Qi <= W). If no such row is found, terminate the
algorithm.
4. If such a row is found ( ith row exist), mark process i
and add the corresponding row of the Allocation
matrix to W. Then return to step 3.
Deadlock Detection Algorithms

EXERCISE
[Figure: Request matrix Q (what is requested), Allocation matrix
(what is already allocated), and the Available vector]

SOLUTION
Mark P2
W = 1 1 1
Mark P3, New W = 1 1 1 + 0 1 1 = 1 2 2
Mark P1

No deadlock
Deadlock Detection
• Deadlock exist if and only if there are unmarked processes at
the end of algorithm (exist a process request that can’t be fulfilled)
• Strategy in this algorithm is to find:
– a process whose resource requests can be satisfied with the
available resources and then;
– assume that those resources are granted and the process runs
to completion and release its resources.
• Then the algorithm will look for another process.
• This algorithm does not guarantee to prevent deadlock; that will
depend on the order in which future requests are granted.
• It only determines whether deadlock currently exists.
RECOVERY
Recovery Strategies
1. Abort all deadlocked processes (most common strategy)
2. Back up each deadlocked process to some previously
defined checkpoint and restart all processes
(rollback/restart)
3. Successively abort deadlocked processes until deadlock no
longer exists
4. Successively preempt resources/processes until deadlock no
longer exists.
The process in 3 and 4 will be selected
according to certain criteria, e.g.
• least amount of CPU time consumed
• lowest priority
• least total resources allocated so far
Summary
• Deadlock:
 the blocking of a set of processes that either compete for
system resources or communicate with each other
 blockage is permanent unless OS takes action
 may involve reusable or consumable resources
• Dealing with deadlock:
 prevention – guarantees that deadlock will not occur
 avoidance – analyzes each new resource request
 detection – OS checks for deadlock and takes action
OPERATING SYSTEMS:
INTERNALS AND DESIGN PRINCIPLES, 6/E
WILLIAM STALLINGS

CHAPTER 5
UNIPROCESSOR
SCHEDULING
SCHEDULING OBJECTIVES
OVERALL AIM
OF SCHEDULING
System (performance) objectives:
1. Low Response Time (Fast Response)
 RT: Time Elapsed From The Submission Of A Request To The Beginning
Of The Response.
 A Process Needs To Run As Soon As It Enters The System

2. High Throughput
 Throughput: Number Of Processes Completed Per Unit Time
 Try Getting As Much Process/Jobs Done At A Time

3. Processor Efficiency
 High Processor Utilization (Min Processor Idle Time)
 Processor Is Always Doing Task
SCHEDULING
• An OS must allocate resources amongst competing processes.
• The resource provided by a processor is execution time
• The resource is allocated by means of a schedule

• The key to multiprogramming is scheduling.


• Four types of scheduling are typically involved:

long term | medium term | short term | I/O scheduling

• It directly relates to the PROCESS STATES


SCHEDULING OBJECTIVES

• The scheduling function should


• Share time fairly among processes
• Prevent starvation of a process
• Use the processor efficiently
• Have low overhead
• Prioritise processes when necessary (e.g. real time deadlines)
TYPES OF SCHEDULING

long term | medium term | short term | I/O scheduling

Long-term scheduling (job


scheduler)
 The decision to add to the pool of
processes to be executed.
TYPES OF SCHEDULING

long term | medium term | short term | I/O scheduling

Medium-term scheduling
 The decision to add to the number of
processes that are partially or fully in
main memory.
TYPES OF SCHEDULING

long term | medium term | short term | I/O scheduling

Short-term scheduling (CPU


Scheduler)
 The decision as to which available
process will be executed by the
processor.
 Which READY process will be
selected next.
TYPES OF SCHEDULING

long term | medium term | short term | I/O scheduling

I/O scheduling
 The decision as to which process’s
pending I/O request shall be handled
by an available I/O device.
SCHEDULING AND
PROCESS STATE TRANSITIONS

 Long-term scheduling is performed


when a new process is created.
 This is a decision whether to add a new
process to the set of processes that are
currently active.
LONG-TERM SCHEDULER

 Triggered when a new process is created

 Determines which programs are admitted to the system for
processing

 Controls the degree of multiprogramming:
1. the more processes that are created, the smaller the
   percentage of time that each process can be executed
   (average waiting time will be bigger)
2. may limit the degree of multiprogramming to provide
   satisfactory service to the current set of processes

 Creates processes from the queue when it can, but must decide:
• when the operating system can take on one or more additional
  processes
• which jobs to accept and turn into processes – first come,
  first served, or by priority, expected execution time, I/O
  requirements
SCHEDULING AND
PROCESS STATE TRANSITIONS
 Medium-term scheduling is a
part of the swapping function.
 Which process will be swapped in
or out
SCHEDULING AND
PROCESS STATE TRANSITIONS
 Short-term scheduling is the
actual decision of which ready
process to execute next.
 It determines which process will
be executed by the processor
SHORT-TERM SCHEDULER
 Known as the dispatcher
 Executes most frequently
 Makes the fine-grained decision of which
process to execute next
 Invoked when an event occurs that may :
1. lead to the blocking of the current
process or
2. provide an opportunity to preempt a
currently running process in favor of
another Examples:

• Clock interrupts
• I/O interrupts
• Operating system calls
• Signals (e.g., semaphores)
SCHEDULING POLICIES AND
ALGORITHMS
ALTERNATIVE SCHEDULING
POLICIES
SELECTION FUNCTION

• Determines which process is selected for execution


• May be based on:-
• Priority
• Resource requirements
• Execution characteristics of the process
• If based on execution characteristics then important quantities are:
• w = time spent in system so far, waiting
• e = time spent in execution so far
• s = total service time required by the process, including e;
DECISION MODE

• Specifies the instants in time at which the selection function is


exercised.
• Two categories:
• Nonpreemptive
• Preemptive
DECISION MODE

• Non-preemptive
• Once a process is in the running state, it will continue until it
terminates or blocks itself for I/O

• Preemptive
• Currently running process may be interrupted and moved to ready state
by the OS
• Preemption may occur when new process arrives, on an interrupt, or
periodically.
SCHEDULING CRITERIA

• Different scheduling algorithm have different properties and may


favor one class of processes over another.
• The characteristics used for comparison can make substantial
difference in the determination of the best algorithm.
SCHEDULING CRITERIA

• The criteria are:


1. CPU utilization – keep CPU as busy as possible.
2. Throughput – number of processes that complete their execution
per time unit.
3. Turnaround time – amount of time to execute a particular
process.
4. Waiting time – amount of time a process has been waiting in the
ready queue
5. Response time – amount of time it takes from when a request was
submitted until the first response is produced, not output (for time-sharing
environment)
SCHEDULING POLICIES
• First Come First Serve (FCFS)
• Shortest Process Next (SPN)
• Highest Response Ratio Next (HRRN)
• Shortest Remaining Time (SRT)
• Round Robin (RR)
• Feedback

I. Choose selection function + decision mode


II. Draw the scheduling diagram according to scheduling policies
III. Calculate waiting time (WT) and average waiting time (AWT)
PROCESS SCHEDULING
EXAMPLE
• Example set of processes; consider each a batch job
(arrival and service times as used in the calculations that follow):

Process   Arrival Time   Service Time
   A           0              3
   B           2              6
   C           4              4
   D           6              5
   E           8              2

– Service time represents total execution time


NON-PREEMPTIVE POLICIES
FIRST-COME-FIRST-SERVED
(FCFS)
• Simplest scheduling policy.
• Selection Function: Will select the process based on the
arrival time.
• Decision mode: Non - Preemptive
• Also known as first-in-first-out (FIFO) or a strict
queuing scheme
• When the current process ceases to execute, the longest
process in the Ready queue is selected
FIRST-COME-
FIRST-SERVED (FCFS)
FCFS
• Waiting time = start execution time – arrival time
Process A  0 – 0 = 0
Process B  3 – 2 = 1
Process C  9 – 4 = 5
Process D  13 – 6 = 7
Process E  18 – 8 = 10
• Average Waiting Time =
Total waiting time

Number of processes
= 0+1+5+7+10
5
= 4.6ms
SHORTEST PROCESS NEXT (SPN)
• Selection Function: Will Select The Process With The Shortest
Service Time.
• Decision Mode: Non - Preemptive
• A Short Process Will Jump To The Head Of The Queue
• Possibility Of Starvation For Longer Processes, As Long As
There Is A Steady Flow Of Shorter Processes
SHORTEST PROCESS NEXT (SPN)
• Waiting time = total of (start execution time – arrival time)
Process A  0
Process B  3-2 = 1
Process C  11- 4 =7
Process D  15- 6 =9
Process E  9-8= 1
Average Waiting Time =
Total waiting time

Number of processes
= 0+1+7+9+1
5
= 3.6ms
HIGHEST RESPONSE RATIO NEXT
(HRRN)
• Selection Function: Will Select The Process With The Greatest Ratio.

• Decision Mode: Non - Preemptive


• Attractive Because It Accounts For The Age Of The Process
• While Shorter Jobs Are Favored, Aging Without Service Increases The Ratio
So That A Longer Process Will Eventually Get Past Competing Shorter Jobs
HIGHEST RESPONSE RATIO NEXT
(HRRN)

B Arrives At 2
No Need To Calculate The Ratio At This Point Since Only B Is In The Queue
HIGHEST RESPONSE RATIO NEXT
(HRRN)

C arrives at 4, D arrives at 6, E arrives at 8.
At time 9, when B finishes:

• C’s ratio = ((9-4) + 4) / 4 = 2.25
• D’s ratio = ((9-6) + 5) / 5 = 1.6
• E’s ratio = ((9-8) + 2) / 2 = 1.5

C is CHOSEN
HIGHEST RESPONSE RATIO NEXT
(HRRN)

At time 13, when C finishes, recalculate the ratios:

• D’s ratio = ((13-6) + 5) / 5 = 2.4
• E’s ratio = ((13-8) + 2) / 2 = 3.5

E is CHOSEN
HIGHEST RESPONSE RATIO NEXT
(HRRN)
• Waiting time = total of (start execution time – arrival time)
Process A  0
Process B  3-2= 1
Process C  9-4 =5
Process D  15- 6 =9
Process E  13-8= 5
Average Waiting Time =
Total waiting time

Number of processes
= 0+1+5+9+5
5
= 4ms
PREEMPTIVE POLICIES
SHORTEST REMAINING TIME
(SRT)
• Selection Function: Will Select The Process With The Shortest
Expected Service Time.
• Decision Mode: Preemptive
• Risk Of Starvation Of Longer Processes
• Should Give Superior Turnaround Time Performance To SPN
Because A Short Job Is Given Immediate Preference To A
Running Longer Job
SHORTEST REMAINING TIME
(SRT)
• Waiting time = total of (start execution time – arrival time)
Process A  0
Process B  (3-2) + (10 – 4) = 7
Process C  4-4 =0
Process D  15- 6 =9
Process E  8-8= 0
Average Waiting Time =
Total waiting time

Number of processes
= 0+7+0+9+0
5
= 3.2ms
SUMMARY
• The operating system must make three types of scheduling decisions with
respect to the execution of processes:
1. Long-term – determines when new processes are admitted to the
system
2. Medium-term – part of the swapping function and determines when a
program is brought into main memory so that it may be executed
3. Short-term – determines which ready process will be executed next
by the processor
• From a user’s point of view, response time is
generally the most important characteristic of a system;
• From a system’s point of view, throughput or processor utilization is
important
• Algorithms:
• FCFS, SPN, HRRN, SRT, Round Robin and Feedback
Operating Systems:
Internals and Design Principles, 6/E
William Stallings

Chapter 6
Memory Management
Topic
• Memory Management Requirement
• Partitioning
• Simple Paging
• Simple Segmentation
The need for memory
management
• Memory is cheap today, and getting
cheaper
– But applications are demanding more and
more memory, there is never enough!
• Memory Management, involves swapping
blocks of data from secondary storage.
• Memory I/O is slow compared to a CPU
– The OS must cleverly time the swapping to
maximise the CPU’s efficiency
Memory Management
• Memory needs to be allocated to ensure a reasonable supply of ready
processes to consume available processor time
Memory Management
Terms
Table 7.1 Memory Management Terms
• Frame – fixed-length block of main memory
• Page – fixed-length block of data in secondary memory (e.g. on disk)
• Segment – variable-length block of data that resides in secondary
memory
Memory Management
Requirements
• Relocation
• Protection
• Sharing
• Logical organisation
• Physical organisation
Requirements: Relocation
• The programmer does not know where the
program will be placed in memory when it
is executed,
– it may be swapped to disk and return to main
memory at a different location (relocated)
• Memory references must be translated to
the actual physical memory address
Requirements: Protection
• Processes should not be able to reference
memory locations in another process
without permission
• Impossible to check absolute addresses at
compile time
• Must be checked at run time
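This run-time check is normally done in hardware with base and bounds registers; a minimal software sketch of the idea (register names are illustrative):

```python
def to_physical(logical, base, limit):
    """Translate a logical address and enforce protection at run time.
    base: where the process was loaded; limit: size of its address space."""
    if not 0 <= logical < limit:
        raise MemoryError("protection violation: address outside process space")
    return base + logical

# A process loaded at 4000 with a 1000-byte address space:
# to_physical(100, 4000, 1000) -> 4100; to_physical(1200, 4000, 1000) raises
```

If the process is later swapped back in at a different base, only the base register changes; the program's logical addresses are untouched, which is exactly what makes relocation transparent.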
Requirements: Sharing
• Allow several processes to access the
same portion of memory
• Better to allow each process access to the
same copy of the program rather than
have their own separate copy
Requirements: Logical
Organization
• Memory is organized linearly (usually)
• Programs are written in modules
– Modules can be written and compiled
independently
• Different degrees of protection given to
modules (read-only, execute-only)
• Share modules among processes
• Segmentation helps here
Requirements: Physical
Organization
• Cannot leave the programmer with the
responsibility to manage memory
• Memory available for a program plus its
data may be insufficient
– Overlaying allows various modules to be
assigned the same region of memory but is
time consuming to program
• Programmer does not know how much
space will be available
Partitioning
• An early method of managing memory
– Pre-virtual memory
– Not used much now
• But, it will clarify the later discussion of
virtual memory if we look first at
partitioning
– Virtual Memory has evolved from the
partitioning methods
Types of Partitioning
• Fixed Partitioning
• Dynamic Partitioning
Fixed Partitioning
• Equal-size partitions (see fig 7.3a)
– Any process whose size is less than
or equal to the partition size can be
loaded into an available partition
• The operating system can swap a
process out of a partition
– If none are in a ready or running
state
Fixed Partitioning Problems
• A program may not fit in a partition.
– The programmer must design the program
with overlays
• Main memory use is inefficient.
– Any program, no matter how small, occupies
an entire partition.
– This results in internal fragmentation.
Solution – Unequal Size
Partitions
• Lessens both problems
– but doesn’t solve completely
• In Fig 7.3b,
– Programs up to 16M can be
accommodated without overlay
– Smaller programs can be placed in
smaller partitions, reducing internal
fragmentation
Placement Algorithm
• Equal-size
– Placement is trivial (no options)
• Unequal-size
– Can assign each process to the smallest
partition within which it will fit
– Queue for each partition
– Processes are assigned in such a way as to
minimize wasted memory within a partition
Fixed Partitioning
Dynamic Partitioning
• Partitions are of variable length and
number
• Process is allocated exactly as much
memory as required
Dynamic Partitioning Example
• External Fragmentation
– Memory external to all processes is fragmented
• Can resolve using compaction
– OS moves processes so that they are contiguous
– Time consuming and wastes CPU time
• Refer to Figure 7.4
Dynamic Partitioning
• Operating system must decide which free block to allocate to a
process, based on 3 algorithms.
1. Best-fit algorithm
– Chooses the block that is closest in size to the request (the
smallest hole that could fit the request)
– Since the smallest adequate block is found for the process, the
smallest amount of fragmentation is left
– Memory compaction must be done more often
Dynamic Partitioning
2. First-fit algorithm
– Scans memory from the beginning and chooses the first available
block that is large enough
– Fastest
– May have many processes loaded in the front end of memory that
must be searched over when trying to find a free block
Dynamic Partitioning
3. Next-fit
– Scans memory from the location of the last
placement
– Chooses first hole from last placement
– The largest block of memory is broken up into
smaller blocks
– Compaction is required to obtain a large block
at the end of memory
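The three placement algorithms can be sketched over a free-block list; representing `holes` as `(start, length)` pairs is an assumption of this sketch:

```python
def first_fit(holes, size):
    """Scan from the beginning; take the first hole big enough."""
    for start, length in holes:
        if length >= size:
            return start
    return None

def best_fit(holes, size):
    """Take the smallest hole that still fits the request."""
    candidates = [(length, start) for start, length in holes if length >= size]
    return min(candidates)[1] if candidates else None

def next_fit(holes, size, last_index):
    """Scan from the hole after the last placement, wrapping around."""
    n = len(holes)
    for i in range(n):
        start, length = holes[(last_index + i) % n]
        if length >= size:
            return start
    return None

# holes = [(0, 10), (20, 4), (30, 8)], request of 5:
# first-fit picks the hole at 0, best-fit the 8-unit hole at 30
```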
Allocation
Buddy System
• Entire space available is treated as a
single block.
• Otherwise block is split into two equal
buddies
Example of Buddy System
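A minimal sketch of the splitting rule: blocks are halved until splitting again would make them too small for the request (the total space of 1024 units is an assumption):

```python
def buddy_block(request, total=1024):
    """Halve the block until splitting again would make it too small;
    the result is the power-of-two block size actually allocated."""
    size = total
    while size // 2 >= request:
        size //= 2
    return size

def buddy_address(addr, size):
    """A block's buddy differs only in the bit equal to the block size,
    which is what makes finding the partner for coalescing cheap."""
    return addr ^ size

# A request of 100 units from a 1024-unit space gets a 128-unit block;
# the buddy of the block at address 0 of size 128 is the block at 128
```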
Paging
• Partition memory into small equal fixed-
size chunks and divide each process into
the same size chunks
• The chunks of a process are called pages
• The chunks of memory are called frames
Paging
• Operating system maintains a page table
for each process
– Contains the frame location for each page in
the process
– Memory address consist of a page number
and offset within the page
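Translating such an address is a page-table lookup plus the unchanged offset; a sketch with an assumed page size of 1024 bytes:

```python
PAGE_SIZE = 1024  # assumed page size for illustration

def paging_translate(logical, page_table):
    """Split the address into (page number, offset) and substitute the
    frame number from the process's page table."""
    page, offset = divmod(logical, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

# If page 1 is held in frame 2, logical address 1500 (page 1, offset 476)
# maps to 2*1024 + 476 = 2524
```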
Processes and Frames
• Example:
– Process A consists of 4 pages (A.0 – A.3)
– Process B consists of 3 pages (B.0 – B.2)
– Process C consists of 4 pages (C.0 – C.3)
– Process D consists of 5 pages (D.0 – D.4)
Processes and Frames cont
[Figure: main-memory frames holding the pages of processes A–D]
Page Table
Segmentation
• A program can be subdivided into
segments
– Segments may vary in length
– There is a maximum segment length
• Addressing consist of two parts
– a segment number and
– an offset
• Segmentation is similar to dynamic
partitioning
• The difference with dynamic partitioning is that with segmentation a
program may occupy more than one partition, and these partitions do
not need to be contiguous
• Segmentation eliminates internal
fragmentation but suffers from external
fragmentation (as does dynamic
partitioning)
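Segmented addresses translate much like paged ones, except each segment has its own base and length, so the offset must be checked against the segment length; a sketch (the table layout is an assumption):

```python
def segment_translate(seg, offset, seg_table):
    """seg_table maps segment number -> (base, length); lengths vary
    per segment, unlike fixed-size pages."""
    base, length = seg_table[seg]
    if offset >= length:
        raise MemoryError("offset beyond segment length")
    return base + offset

# With segment 1 at base 5000, length 200: (seg 1, offset 50) -> 5050
```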
OPERATING SYSTEMS:
INTERNALS AND DESIGN PRINCIPLES,
6/E
WILLIAM STALLINGS

Chapter 7
Virtual Memory
Topic
Real and Virtual Memory
Locality and Virtual Memory
Terminology
Execution of a Process
Operating system brings into main memory a few pieces of
the program
Resident set - portion of process that is in main memory
An interrupt is generated when an address is needed that is
not in main memory
Operating system places the process in a blocking state
Execution of a Process
Piece of process that contains the logical address is brought
into main memory
◦ Operating system issues a disk I/O Read request
◦ Another process is dispatched to run while the disk I/O
takes place
◦ An interrupt is issued when disk I/O complete which
causes the operating system to place the affected process
in the Ready state
Implications of this new strategy
More processes may be maintained in main memory
◦ Only load in some of the pieces of each process
◦ With so many processes in main memory, it is very likely a
process will be in the Ready state at any particular time
A process may be larger than all of main memory
Real and Virtual Memory
Real memory
◦ Main memory, the actual RAM

Virtual memory
◦ Memory on disk
◦ Allows for effective multiprogramming and relieves the user of tight
constraints of main memory
Locality and Virtual Memory
To accommodate as many processes as possible, only a few pieces of
each process are maintained in main memory
But main memory may be full, so the OS brings one piece in and must
swap one piece out
The OS must not swap out a piece of a process just before that piece is
needed
If it does this too often, this leads to thrashing:
Thrashing
•A state in which the system spends most of its time
swapping pieces rather than executing instructions.
• To avoid this, the operating system tries to guess
which pieces are least likely to be used in the
near future.
• The guess is based on recent history
Principle of Locality
Locality of Reference – the principle that the instruction currently
being fetched/executed is very close in memory to the instruction to be
fetched/executed next
If we keep the most active segments of program and data in the cache,
overall execution speed for the program will be optimized. Our strategy
for cache utilization should maximize the number of cache read/write
operations, in comparison with the number of main memory read/write
operations.
Program and data references within a process tend to cluster
Only a few pieces of a process will be needed over a short period of
time
Therefore it is possible to make intelligent guesses about which pieces
will be needed in the future  Avoids thrashing
Support Needed for Virtual
Memory
Hardware must support paging and segmentation
Operating system must be able to manage the movement of
pages and/or segments between secondary memory and
main memory
Memory Management
Decisions
Whether or not to use virtual memory techniques
The use of paging or segmentation or both
The algorithms employed for various aspects of memory management
Key Design Elements
• Key aim: minimise page faults
◦ No definitive best policy
Fetch Policy
Determines when a page should be
brought into memory
Two main types:
◦ Demand Paging - only brings pages into
main memory when a reference is
made to a location on the page
◦ Prepaging - pages other than the one
demanded are brought in
Placement Policy
Determines where in real memory a process
piece is to reside
Important in a segmentation system
For pure segmentation systems:
◦ first-fit, next fit, best fit
Replacement Policy
Deals with the selection of a page in main memory
to be replaced when a new page is brought in
• Objective is that the page that is removed will be the
page least likely to be referenced in the near future
• The more sophisticated the replacement policy, the
greater the hardware and software overhead to
implement it
• Not all pages in main memory can be selected for
replacement
Replacement Policy: Frame
Locking
Frame Locking
◦ If frame is locked, it may not be replaced
◦ Kernel of the operating system
◦ Key control structures
◦ I/O buffers
◦ Associate a lock bit with each frame
Basic Replacement
Algorithms
There are certain basic algorithms that are used for the selection of a
page to replace, they include
◦ Optimal
◦ Least recently used (LRU)
◦ First-in-first-out (FIFO)
◦ Clock (not covered in the syllabus)
Examples
An example of the implementation of these policies will use a page
address stream formed by executing the program:
◦ 2 3 2 1 5 2 4 5 3 2 5 2
Which means that the first page referenced is 2,
◦ the second page referenced is 3,
◦ and so on (there are 5 different pages)
Resident set = 3 frames
Optimal policy
• Selects the page for which the time to the
next reference is the longest
• But Impossible to have perfect knowledge
of future events
• Produce the fewest number of page faults
Optimal Policy
Example
The optimal policy produces three page faults after the frame
allocation has been filled.
Least Recently
Used (LRU)
Replaces the page that has not been referenced for
the longest time
By the principle of locality, this should be the page
least likely to be referenced in the near future
Difficult to implement
◦ This requires a great deal of overhead.
LRU Example
The LRU policy does nearly as well as the optimal policy.
◦ In this example, there are four page faults
First-in, first-out (FIFO)
Treats page frames allocated to a process as a
circular buffer
Pages are removed in round-robin style
◦ Simplest replacement policy to implement
Page that has been in memory the longest is
replaced
FIFO Example
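All three policies can be run over the reference stream 2 3 2 1 5 2 4 5 3 2 5 2 with a 3-frame resident set; the fault counts below include the three faults that fill the initially empty frames:

```python
STREAM = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]

def fifo_faults(stream, nframes=3):
    frames, faults = [], 0
    for p in stream:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)               # evict the longest-resident page
            frames.append(p)
    return faults

def lru_faults(stream, nframes=3):
    frames, faults = [], 0                  # kept ordered oldest-use first
    for p in stream:
        if p in frames:
            frames.remove(p)                # hit: refresh recency
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)               # evict least recently used
        frames.append(p)
    return faults

def opt_faults(stream, nframes=3):
    frames, faults = [], 0
    for i, p in enumerate(stream):
        if p in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            future = stream[i + 1:]
            # evict the page whose next reference is farthest away (or never)
            victim = max(frames, key=lambda q: future.index(q)
                         if q in future else len(future) + 1)
            frames.remove(victim)
        frames.append(p)
    return faults

# On STREAM: FIFO -> 9 faults, LRU -> 7 faults, OPT -> 6 faults,
# i.e. 6, 4 and 3 faults respectively after the frames are filled
```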
Operating Systems:
Internals and Design Principles, 6/E
William Stallings

Chapter 8
I/O Management & Disk Scheduling
Topic
• Categories of I/O Devices
• I/O Techniques
• I/O Buffering (Single, Double and Circular Buffer)
Categories of I/O Devices
 Three categories:
 Human readable
 Machine readable
 Communications
Human readable
 Devices used to communicate with the user
 Printers and terminals
 Video display
 Keyboard
 Mouse etc
Machine readable
 Used to communicate with electronic equipment
 Disk drives
 USB keys
 Sensors
 Controllers
 Actuators
Communication
 Used to communicate with remote devices
 Digital line drivers
 Modems
Differences in I/O Devices
Devices differ in a number of areas:
 Data Rate – How fast is the transfer?
 Application – Which application uses it?
 Complexity of Control – What is being controlled? How is it done?
 Unit of transfer – How big is the data being transferred?
 Error conditions – What to do when things go wrong
Data Rates
Application
 Disk used to store files requires file management software
 Disk used to store virtual memory pages needs special hardware and
software to support it
 Terminal used by system administrator may have a higher priority
Complexity of control
 A printer requires a relatively simple control interface
 A disk is much more complex
 This complexity is filtered to some extent by the complexity of the
I/O module that controls the device
Techniques for performing I/O
 Programmed I/O
 Interrupt-driven I/O
 Direct memory access (DMA)
1. Programmed I/O
the processor issues an I/O command on
behalf of a process to an I/O module;
that process then busy waits for the operation
to be completed before proceeding
2. Interrupt-driven I/O
the processor issues an I/O command on
behalf of a process
 if non-blocking – processor continues to
execute instructions from the process that issued
the I/O command
 if blocking – the next instruction the processor
executes is from the OS, which will put the
current process in a blocked state and schedule
another process
3. Direct Memory Access (DMA)
 A DMA module controls the exchange of data between main memory
and an I/O module
 Direct Memory Access (DMA) is a capability provided by a computer
architecture that allows data to be sent directly from an attached
peripheral device (such as a disk drive) to the memory on the
computer's motherboard
 The processor is freed from involvement with the data transfer, thus
speeding up overall computer operation
 The processor is involved only at the beginning and end of the
transfer
BUFFERING
• Perform input transfers in advance of requests being made
• Perform output transfers some time after the request is made
• Reasons for buffering:
 Processes must wait for I/O to complete before proceeding
 Certain pages must remain in main memory during I/O
 Without a buffer, the OS directly accesses the device as and when
it needs to
No Buffer
Buffering
• Single Buffer
• Double Buffer
• Circular Buffer
Single Buffer
 Operating system assigns a buffer in main memory for an I/O request
 The simplest type of support that the operating system can provide
 When a user process issues an I/O request, the OS assigns a buffer
in the system portion of main memory to the operation
Double Buffer
 Use two system buffers instead of one
 A process can transfer data to or from one buffer while the operating
system empties or fills the other buffer
 Also known as buffer swapping
Circular Buffer
 More than two buffers are used
 Each individual buffer is one unit in a circular buffer
 Used when I/O operation must keep up with process
Buffer Limitations
 Buffering smoothes out peaks in I/O demand
 But with enough demand eventually all buffers become full and their
advantage is lost
 However no amount of buffering will allow an I/O device to keep
pace with a process
 Even with multiple buffers, all of the buffers will eventually fill up
and the process will have to wait after processing each chunk of
data
 In multiprogramming, buffering can be a tool that can increase the
efficiency of the operating system and the performance of individual
processes
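A commonly quoted back-of-the-envelope model (following Stallings) estimates the time to handle one block, where T is the time to input a block, C the computation between blocks, and M the time to copy a block from the system buffer to user space; a sketch under those assumptions:

```python
def time_per_block(T, C, M, scheme):
    """Rough per-block time estimates for the buffering schemes above."""
    if scheme == "none":
        return T + C            # process waits for the whole transfer
    if scheme == "single":
        return max(C, T) + M    # transfer overlaps compute, plus one copy
    if scheme == "double":
        return max(C, T)        # the copy is overlapped too (M ignored)
    raise ValueError("unknown scheme: " + scheme)

# With T=10, C=5, M=1: none -> 15, single -> 11, double -> 10 per block
```

Note how once T exceeds C, adding buffers stops helping: every scheme is bounded below by T, which is the "no amount of buffering will allow an I/O device to keep pace" point above.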
I/O SCHEDULING –
DISK PERFORMANCE
Disk Performance Parameters
 The actual details of disk I/O operation depend on the:
1. Computer system (the architecture)
2. Operating system
3. Nature of the I/O channel and disk controller hardware (scheduling)
Positioning the Read/Write Heads
• When the disk drive is operating, the disk is rotating at constant
speed
• To read or write, the head must be positioned at the desired track
and at the beginning of the desired sector on that track
• Track selection
 Involves moving the head in a movable-head system or
electronically selecting one head on a fixed-head system
• Seek Time (ST)
 The time it takes to position the head at the track
• Rotational Delay (RD)
 The time it takes for the beginning of the sector to reach the head
• Access Time
 The sum of the seek time and the rotational delay (ST + RD)
• Once the head is in position, the read or write operation is performed
as the sector moves under the head
 data transfer operation
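These terms add up directly; a small calculator, taking the average rotational delay as half a revolution (the rpm and timing figures in the comment are assumptions for illustration):

```python
def disk_operation_ms(seek_ms, rpm, transfer_ms):
    """Access time (seek + average rotational delay) plus the data
    transfer. Average rotational delay = half a revolution."""
    rotational_delay = 0.5 * (60_000 / rpm)     # ms per half revolution
    return seek_ms + rotational_delay + transfer_ms

# A 7500-rpm disk takes 8 ms per revolution, so the average rotational
# delay is 4 ms; with a 4 ms seek and 1 ms transfer the total is 9 ms
```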
Inside a hard disk drive:
https://www.youtube.com/watch?v=kdmLvl1n82U
Disc Performance Parameters
 Transfer time – the time required for the data transfer
Disc Scheduling
 Seek time is the reason for differences in performance
 The time it takes to position the head at the track
 If sector access requests involve selection of tracks at random, then
the performance of the disk I/O will be as poor as possible
 To improve, need to reduce the time spent on seeks
 For a single disk there will be a number of I/O requests (read and
write) from various processes in a queue
 If requests are selected randomly  poor performance

Disc Scheduling Policies
 First In First Out
 Priority
 Last In First Out
 Shortest Service Time First (SSTF)
 SCAN
 C-SCAN
 N-Step-SCAN
 FSCAN
Example:
 A disk track request queue:
55, 58, 39, 18, 90, 160, 150, 38, 184
 Head is at track 100
 Head movement – increasing
First-In, First-Out (FIFO)
 Processes requests in sequential order
 Fair to all processes
 Approximates random scheduling in performance if there are many
processes competing for the disk
 Service order: 55, 58, 39, 18, 90, 160, 150, 38, 184
Shortest Service Time First (SSTF)
 Select the disk I/O request that requires the least movement of the
disk arm from its current position
 Always choose the minimum seek time
 Service order: 90, 58, 55, 39, 38, 18, 150, 160, 184
SCAN
 Also known as the elevator algorithm
 Arm moves in one direction and reverses
 Satisfies all outstanding requests until it reaches the last track in
that direction or no more requests in the direction (LOOK), then the
direction is reversed
 Service order: 150, 160, 184, 90, 58, 55, 39, 38, 18
C-SCAN
 Restricts scanning to one direction only
 When the last track has been visited in one direction, the arm is
returned to the opposite end of the disk and the scan begins again
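SSTF and SCAN can be checked against the worked example (queue 55, 58, 39, 18, 90, 160, 150, 38, 184 with the head at track 100, moving toward higher tracks); a sketch:

```python
REQUESTS = [55, 58, 39, 18, 90, 160, 150, 38, 184]

def sstf(requests, head):
    """Repeatedly service the request closest to the current head."""
    pending, order = list(requests), []
    while pending:
        nxt = min(pending, key=lambda track: abs(track - head))
        pending.remove(nxt)
        order.append(nxt)
        head = nxt
    return order

def scan(requests, head):
    """Elevator: sweep upward from the head, then reverse at the last
    outstanding request (the LOOK variant used in the slides)."""
    up = sorted(t for t in requests if t >= head)
    down = sorted((t for t in requests if t < head), reverse=True)
    return up + down

# sstf(REQUESTS, 100) -> [90, 58, 55, 39, 38, 18, 150, 160, 184]
# scan(REQUESTS, 100) -> [150, 160, 184, 90, 58, 55, 39, 38, 18]
```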
Operating Systems:
Internals and Design Principles, 6/E
William Stallings

Chapter 09
File Management
File System and Structure
What is a file?
 In most applications the input to the application is by means of a
file, and in virtually all applications output is saved in a file for
long-term storage and later access.
 Users have to be able to access files, save them and maintain the
integrity of their contents.
 In managing files, all operating systems provide a file management
system.
 Consists of system utility programs that run as privileged
applications.
 Need special services from the operating system.
Files: Data collections created by users
3 desirable properties of files:
• Long-term existence
 files are stored on disk or other secondary storage and do not
disappear when a user logs off
• Sharable between processes
 files have names and can have associated access permissions that
permit controlled sharing
• Structure
 files can be organized into hierarchical or more complex structures
to reflect the relationships among files
 ease of management and bookkeeping
File Systems
 The File System is one of the most important parts of the OS to a
user
 Provides a means to store:
1. organized data
2. a collection of authorized operations that can be performed on
files
6 Operations
• CREATE – A new file is defined and positioned within the structure
of files
• DELETE – A file is removed from the file structure and destroyed
• OPEN – An existing file is declared to be “opened” by a process,
allowing the process to perform functions on the file
• CLOSE – The file is closed with respect to a process, so that the
process no longer may perform functions on the file, until the
process opens the file again
• READ – A process reads all or a portion of the data in a file
• WRITE – A process updates a file, either by adding new data that
expands the size of the file or by changing the values of existing
data items in the file
File Structure
Four terms are commonly used when discussing files:
• Field • Record • File • Database

Structure Terms
Field
 basic element of data
 contains a single value
 length and data type
 fixed or variable length
Record
 collection of related fields that can be treated as a unit by some
application program
File
 collection of similar records
 treated as a single entity
 may be referenced by name
 access control restrictions usually apply at the file level
Database
 collection of related data
 relationships among elements of data are explicit
 designed for use by a number of different applications
 consists of one or more types of files
DATABASE
[Figure: a database, e.g. a Student database, is a collection of many
files]

Example file of student records (columns are fields, rows are records):

          ID        Name   Address       Contact Number
Record 1  SN 11234  Razin  Kajang        012-9992366
Record 2  SW 45655  Raziq  Alor Setar    013-5698788
Record 3  CS 10235  Raidz  Kota Marudu   014-2586664
...       ...       ...    ...           ...
Record n  IS 56554  Amir   Kota Bahru    011-4569875
File Management System
(FMS)
Minimal User Requirements
 Each user:
1. should be able to create, delete, read, write and modify files
2. may have controlled access to other users’ files
3. may control what type of accesses are allowed to the files
4. should be able to restructure the files in a form appropriate to
the problem
5. should be able to move data between files
6. should be able to back up and recover files in case of damage
7. should be able to access his or her files by name rather than by
numeric identifier
FMS Objectives
1. Meet the data management needs of the
user
2. Guarantee that the data in the file are
valid
3. Optimize performance (how will the file
be accessed/shared?)
4. Provide I/O support for a variety of
storage device types
5. Minimize the potential for lost or
destroyed data
6. Provide a standardized set of I/O
interface routines to user processes
7. Provide I/O support for multiple users in
the case of multiple-user systems
Software Organization for
Files
Typical Software Organization
[Figure: layered file-system software; the lower layers are considered
part of the OS]
Device Drivers
 Lowest level (device
communication / machine level)
 Communicates directly with
peripheral devices
 Which is the source / destination
for files
 Responsible for starting I/O
operations on a device
 Processes the completion of an I/O
request
Basic File System
 Also referred to as the physical
I/O level
 Primary interface with the
environment outside the computer
system
 Deals with blocks of data that are
exchanged with disk or tape
systems
 Concerned with
 the placement of blocks on the
secondary storage device
 buffering blocks in main memory
Basic I/O Supervisor
 Responsible for all file I/O
initiation and termination
 Maintaining Control structures that
deal with:
 device I/O
 Scheduling
 file status
 Selects the device on which I/O is to be performed
 Schedules disk and tape accesses to optimize performance
 I/O buffers are assigned and
secondary memory is allocated at
this level
Logical I/O
 Enables users and applications to access records
 Provides general-purpose record I/O capability
 Maintains basic data about files
Access Method
 Level of the file system closest to the
user
 Provides a standard interface between
applications and the file systems and
devices that hold the data
 Different access methods reflect
different file structures and different
ways of accessing and processing the
data
File Management Functions
• Organization and Access
 How files are stored and organized by the OS
 Methods such as: the pile, the sequential file, the indexed
sequential file
File Organization and Access
 File organization  the logical structuring of the records as
determined by the way in which they are accessed
 Logical = how it is represented by the OS
 In choosing a file organization, several criteria are important (more
than one method can be chosen):
1. Short access time
2. Ease of update
3. Economy of storage
4. Simple maintenance
5. Reliability
 Priority of criteria depends on the application that will use the file
File Organization Types
Five of the common file organizations are:
• The pile
• The sequential file
• The direct, or hashed, file
• The indexed sequential file
• The indexed file
1. The Pile
• Least complicated form of file organization
• Data are collected in the order they arrive
• Each record consists of one burst of data
• Purpose is simply to accumulate the mass of data and save it
• Record access is by exhaustive search since there is no proper index
2. The Sequential File
• Most common form of file structure
• A fixed format is used for records
• Key field uniquely identifies the record (EXAMPLE: StudentID)
• Records are stored in either alphabetical or numerical order
• Typically used in batch applications
 Read files  process the files  generate report
 Less human intervention
• Only organization that is easily stored on tape as well as disk
3. Indexed Sequential File
• Adds:
1. an index to the file to support random access
• Provides lookup capability to quickly reach the desired record
2. an overflow file
• Provides a pointer from the previous record
• Greatly reduces the time required to access a single record
• Multiple levels of indexing can be used to provide greater efficiency
in access
• For example, depending on needs, a file can be indexed based on:
• First Name
• Last Name
• Department
• Major
• etc