OPERATING SYSTEM
OVERVIEW
OBJECTIVES
• Describe the key functions of an operating system (OS)
• Discuss the evolution of the OS
• Explain each of the major achievements in OS design
• Discuss the key design areas in the development of modern OSs
TOPICS
• OS definition, objectives and functions
• Evolution of the OS
• Major achievements
• Characteristics of modern OSs
PART 1 – OS DEFINITION,
OBJECTIVES & FUNCTIONS
WHAT MAKES A COMPUTER?
• Hardware
• Application
WHAT IS AN OPERATING SYSTEM
(OS)?
• A program that controls the execution of application
programs
• An interface between applications and hardware
OS
LAYERS AND VIEWS
APPLICATION PROGRAM?
• Any program written to perform a specific day-to-day function for the user (office apps, calculator, web browsing, etc.)
UTILITIES?
• A set of system programs that assist in program creation, file management, and I/O control
OPERATING SYSTEM
OBJECTIVES
• Convenience
• Makes the computer more convenient to use
• Efficiency
• Allows computer system resources to be used in an efficient
manner
• Ability to evolve
• Permits effective development, testing, and introduction of new
system functions without interfering with service
OPERATING SYSTEM OBJECTIVES #1 –
CONVENIENCE
The OS provides services in the following areas:
• Program development: editors and debuggers assist program development
• Program execution: a number of tasks need to be performed to execute a program; the OS handles these scheduling duties for the user:
1. Memory
2. I/O devices
3. Processor
OPERATING SYSTEM AS RESOURCE MANAGER
• The OS MANAGES these resources
The OS functions the same way as ordinary computer software:
it is a suite of programs that is executed
The difference is that the OS:
directs the processor in the use of the other system resources
schedules the timing of programs / processes
PART 2 – EVOLUTION OF THE OS
1. Serial processing
Setup time
An amount of time was spent just on setting up the hardware
in order for the program to run
Setting up a program to run could take longer than running
the program
2. Simple batch system
Early computers were very expensive, so it was
important to maximize processor utilization
Uses a piece of software called the monitor:
1. the user no longer has direct access to the processor
2. a job is submitted to a computer operator, who batches jobs together and
places them on an input device
3. the program branches back to the monitor when finished
Monitor Point of View
The monitor controls the sequence of events
The resident monitor is the portion of the monitor software that is
always in memory
The monitor reads in a job and gives it control;
the job returns control to the monitor when finished
Processor Point of View
The processor executes instructions from the memory area containing the monitor
It executes the instructions in the user program until it encounters:
• an ending
• an error condition
• While the user program is executing, it must not alter the memory area containing the
monitor
• Hardware features that support this protection: memory protection, a timer,
privileged instructions, and interrupts
Multiprogramming
also known as multitasking
memory is expanded to hold three, four, or
more programs, and the processor switches among all of them
the central theme of MODERN OSs
EXAMPLE: UTILIZATION HISTOGRAMS –
EFFECTS ON RESOURCE UTILIZATION
MULTIPROGRAMMING AFTERTHOUGHTS
To have several jobs ready to run, they must be kept in main memory,
which in turn requires MEMORY MANAGEMENT
Additionally, the processor must decide which job to run, a task which
requires a SCHEDULING algorithm.
TIME SHARING SYSTEMS
Although multiprogrammed batch processing was efficient, it is still desirable
to have a system where the user can interact directly with the computer.
In the 60s the idea of having a dedicated personal computer was
non-existent; back then, the concept of time sharing was developed.
TIME SHARING SYSTEM
Can be used to handle multiple interactive jobs
Processor time is shared among multiple users
Multiple users simultaneously access the system through terminals,
with the OS interleaving the execution of each user program in a
short burst, or quantum, of computation
BATCH MULTIPROGRAMMING
VERSUS TIME SHARING
COMPATIBLE TIME-SHARING SYSTEMS
(CTSS)
Time Slicing
The system clock generates interrupts at a rate of approximately one every
0.2 seconds
At each interrupt the OS regained control and could assign the processor to
another user
At regular time intervals the current user would be preempted and another
user loaded in
The old user's program and data were written out to disk
The old user's program code and data were restored in main memory when that
program was next given a turn
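The quantum-style preemption that CTSS introduced can be sketched as a round-robin simulation. This is an illustrative model, not CTSS itself: the job names and service times are made up, and the quantum is one abstract time unit rather than 0.2 seconds.

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate time slicing: each job runs for at most `quantum` time
    units before being preempted and sent to the back of the queue.
    `jobs` maps a name to its total service time; all jobs are assumed
    ready at time 0. Returns {name: completion_time}."""
    queue = deque(jobs.items())          # (name, remaining service time)
    clock = 0
    finished = {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)    # run until quantum expires or job ends
        clock += run
        remaining -= run
        if remaining == 0:
            finished[name] = clock       # job done; record completion time
        else:
            queue.append((name, remaining))  # preempted: back of the queue
    return finished

# Two jobs sharing the processor in 1-unit slices:
# JOB1, JOB2, JOB1, JOB2, JOB1 -> JOB2 completes at t=4, JOB1 at t=5.
print(round_robin({"JOB1": 3, "JOB2": 2}, quantum=1))
```

Each user program advances a little on every turn, which is exactly the interleaving described above.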
TIME SHARING SYSTEM
AFTER THOUGHTS
The CTSS approach is considered primitive compared to
present day time sharing, but it was effective
Time sharing and multiprogramming raised some new
problems for the OS.
1. If multiple jobs are in memory, they must be protected from
interfering with each other
2. With multiple users, the file system must be protected so that
only authorized users have access to a particular file.
3. The contention for resources must be handled
PART 3 – MAJOR ACHIEVEMENTS
MAJOR ACHIEVEMENTS
Operating Systems are among the most complex pieces of
software ever developed
• Processes
• Memory management
• Information protection and security
• Scheduling and resource management
• System structure
Kernel Code
• Memory management
• Process management
• Disk management
Monolithic Kernel
• Large kernel
• Most OS functionality is provided in this large kernel:
scheduling, file system, networking, device drivers,
memory management, and more
Operating Systems:
Internals and Design Principles, 6/E
William Stallings
CHAPTER 2
PROCESS DESCRIPTION AND CONTROL
• Define the term process
• Explain the relationship between processes and process
control blocks
• Explain the concept of process state and the state transition
• Describe the purpose of the data structures and data structure
elements used by an OS to manage processes
• Describe the requirements for process control by the OS
• Describe the key security issues that relate to OS
TOPICS
• What is a Process?
• Process States
• Process Description
• Process Control
• Security Issues
EARLIER CONCEPTS
A computer platform
consists of a collection of
hardware resources
Computer applications
are developed to perform
some task
• A program in execution
• An instance of a program running on a computer
• The entity that can be assigned to and executed on a
processor
• A unit of activity characterized by the execution of a
sequence of instructions, a current state, and an associated
set of system resources
PROCESS ELEMENTS
While a process is executing, it can be characterized by these elements:
• Identifier: a unique identifier associated with this process, to
distinguish it from all other processes.
• State: is the process running / waiting / blocked?
• Priority: priority level relative to other processes.
• Program counter: the address of the next instruction to be executed.
• Memory pointers: pointers to the program code and data associated
with this process, plus any memory blocks shared with other processes.
• Context data: data present in the registers of the processor while the
process is executing.
• I/O status information: outstanding I/O requests, I/O devices assigned
to this process, a list of files in use by the process, and so on.
• Accounting information: may include the amount of processor time and
clock time used, time limits, account numbers, and so on.
PROCESS CONTROL BLOCK
4 REASONS FOR PROCESS
CREATION
• New batch job
• Interactive logon: a user logs on to the system
• Created by the OS to provide a service
• Spawned by an existing process
PARENT PROCESS: the original, creating process
CHILD PROCESS: the newly created process
PROCESS TERMINATION
Processes are moved by the dispatcher of the OS to the CPU and then back to the
queue until the task is completed
TWO STATE PROCESS MODEL
• A way to tackle this situation is to split the Not Running state into
two different states, which are:
• Ready state: Ready to execute
• Blocked state: waiting for I/O
• Now, instead of two states we have three states Ready,
Running, Blocked
NEW
• A process that has just been created but has not yet been
admitted to the pool of executable processes by the OS.
• Typically, a new process has not yet been loaded into main
memory (RAM), although its process control block has been created.
FIVE-STATE PROCESS MODEL
NEW – READY – RUNNING – BLOCKED – EXIT
As each process is admitted to the system, it is placed in the Ready queue. When it is
time for the OS to choose another process to run, it selects one from the Ready
queue.
USING TWO QUEUES
Finally, when an event occurs, every process in the Blocked queue that has been
waiting on that event is moved to the Ready queue.
MULTIPLE BLOCKED QUEUES
SUSPENDED PROCESSES
• Expanding main memory has a cost, and a larger memory tends to
result in larger processes, not more processes.
SOLUTION: Swapping
• Swap / move some of these processes to
disk to free up more space in main
memory.
SWAPPING
The OS needs to release sufficient main
memory to bring in a process that is ready for
execution
OTHER OS REASON
Suspend a process that is suspected of causing a
problem
TIMING
A process that is executed periodically may be
suspended while waiting for its next scheduled run
A process switch may occur any time that the OS has gained control from the
currently running process. Possible events giving the OS control are
interrupts, traps, and supervisor calls.
INTRUSION DETECTION
"A security service that monitors and analyzes system events for the purpose
of finding, and providing real-time or near real-time warning of, attempts
to access system resources in an unauthorized manner" (RFC 2828)
May be host or network based
• It comprises three logical components: sensors, analyzers, and a user interface
Chapter 3
Concurrency: Mutual Exclusion
Topics
• Atomic Operation
• Critical Section
• Deadlock
• Livelock
• Mutual Exclusion
• Race Conditions
• Starvation
Starvation
Atomic Operation
• A requirement that prevents simultaneous
access to a shared resource, used in
concurrency control and to prevent race
conditions.
(Illustration: a dining philosopher eats first while the others wait,
then puts down the forks so the next can take a turn.)
Race Condition
• Becomes a bug when events do not happen
in the order the programmer intended.
• The term originates with the idea of two
signals racing each other to influence the
output first.
(Illustration: two philosophers each want the forks first.)
Starvation
(Illustration: one philosopher is full while another has not eaten at all.)
void echo()
{
chin = getchar();
chout = chin;
putchar(chout);
}
A Simple Example:
On a Multiprocessor

Process P1            Process P2
chin = getchar();     .
.                     chin = getchar();
chout = chin;         chout = chin;
putchar(chout);       .
.                     putchar(chout);

The character read by P1 is overwritten before it is displayed;
both processes end up echoing the character read by P2.
Enforce Single Access (avoids
concurrency problems)
P1:
a=a+1
b=b+1
P2:
b=2*b
a=2*a
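A minimal sketch of enforcing single access for the P1/P2 pair above, using a lock so that each process's two statements execute as one critical section. The starting values a = b = 1 are an assumption for illustration; with the lock held across both statements, only the two serial outcomes are possible.

```python
import threading

# Shared variables; both threads read and write them.
state = {"a": 1, "b": 1}
lock = threading.Lock()

def p1():
    with lock:                      # both statements form one critical section
        state["a"] = state["a"] + 1
        state["b"] = state["b"] + 1

def p2():
    with lock:
        state["b"] = 2 * state["b"]
        state["a"] = 2 * state["a"]

t1, t2 = threading.Thread(target=p1), threading.Thread(target=p2)
t1.start(); t2.start()
t1.join(); t2.join()

# Whichever thread wins the lock, the result is one of the two serial
# orders: P1 then P2 gives a = b = 4; P2 then P1 gives a = b = 3.
assert (state["a"], state["b"]) in {(4, 4), (3, 3)}
print(state)
```

Without the lock, the four statements could interleave and leave a and b inconsistent with each other, which is exactly the concurrency problem single access avoids.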
Data Coherence (cont.)
Example: processes each doing their own thing while
sharing resources such as an I/O buffer
CHAPTER 4
CONCURRENCY: DEADLOCK AND
STARVATION
TOPIC
• Principles of Deadlock
• Resource Categories
• Deadlock Strategies
– Deadlock Detection
– Deadlock Prevention
– Deadlock Avoidance
DEADLOCK
(Illustration: cars gridlocked at an intersection; one needs
quadrants C and D, another needs quadrants B and C.)
Resource categories: Reusable and Consumable

Deadlock with a reusable resource (memory):
P1: Request 80 Kbytes; then Request 60 Kbytes;
P2: Request 70 Kbytes; then Request 80 Kbytes;
Deadlock occurs if both processes progress to their second request.
• Mutual Exclusion
• No preemption
• Hold and wait
The first three conditions are necessary but not sufficient for a
deadlock to exist.
For deadlock to actually take place, a fourth condition is required
EXISTENCE OF DEADLOCK
• Mutual Exclusion
• No preemption
• Hold and wait
• Circular wait
3 Approaches on Dealing with Deadlock
Prevent Deadlock
• adopt a policy that eliminates one of the conditions
Avoid Deadlock
• make the appropriate dynamic choices based on the
current state of resource allocation
Detect Deadlock
• attempt to detect the presence of deadlock and take
action to recover
DEADLOCK PREVENTION
Deadlock Prevention Strategy
Design a system in such a way that the possibility of
deadlock is excluded
Two main methods:
1. Indirect
prevent the occurrence of one of the three necessary
conditions
2. Direct
prevent the occurrence of a circular wait (the fourth
condition)
Indirect Prevention
Mutual Exclusion
• Cannot be disallowed: if access to a resource requires mutual
exclusion, then mutual exclusion must be supported by the OS
Hold and Wait
• Require that a process request all of its required resources at
one time, blocking the process until all requests can be granted
simultaneously
Indirect Prevention
No Preemption
• If a process holding certain resources is denied a further request,
it must release its original resources and request them again
Direct Prevention
Circular Wait
• Define a linear ordering of resource types; a process may only
request resources that come later in the ordering than those it
already holds
Deadlock Avoidance
• Resource Allocation Denial: do not grant an incremental
resource request to a process if this allocation might lead to
deadlock
• Process Initiation Denial: do not start a process if its demands
might lead to deadlock
i. Resource Allocation Denial
• Referred to as the banker’s algorithm
• State of the system reflects the current allocation of
resources to processes
1. Safe state
There is at least one sequence of
resource allocations to processes that
does not result in a deadlock
2. Unsafe state: a state that is not safe
There is no sequence of resource allocations guaranteed to let
every process run to completion
Determination of a Safe State
• State of a system consisting of four processes and three resources
• Allocations have been made to the four processes

Need = Claim − Allocation:

      Need           Claim          Allocation
     R1 R2 R3       R1 R2 R3       R1 R2 R3
P1    2  2  2        3  2  2        1  0  0
P2    0  0  1        6  1  3        6  1  2
P3    1  0  3        3  1  4        2  1  1
P4    4  2  0        4  2  2        0  0  2

o The need matrix shows how many resources are needed by each process in order
to complete execution.
Determination of a Safe State

Need Matrix          Available Vector (what is left)
     R1 R2 R3             R1 R2 R3
P1    2  2  2              0  1  1
P2    0  0  1
P3    1  0  3
P4    4  2  0

o For this example the system has 1 unit of R2 and 1 unit of R3 available.
Based on the need matrix, only P2 can run to completion.
Determination of a Safe State
3. After a process runs to completion, it releases all its resources to the system.
Construct the new available vector:
o New AV = Current AV + Allocation row of the completed process

P2 completes: New AV = (0 1 1) + (6 1 2) = (6 2 3)
P1 completes (need (2 2 2) ≤ (6 2 3)): New AV = (6 2 3) + (1 0 0) = (7 2 3)
P3 completes (need (1 0 3) ≤ (7 2 3)): New AV = (7 2 3) + (2 1 1) = (9 3 4)
P4 completes (need (4 2 0) ≤ (9 3 4)): New AV = (9 3 4) + (0 0 2) = (9 3 6)

When all processes have run to completion, the claim and allocation matrices are
all zeros and the available vector (9 3 6) equals the resource vector (9 3 6).
Thus, the state defined originally is a safe state.
Determination of an Unsafe State
Suppose P1 requests one additional unit each of R1 and R3. If the request were
granted, the resulting state would be:

      Claim          Allocation
     R1 R2 R3        R1 R2 R3
P1    3  2  2         2  0  1
P2    6  1  3         5  1  1
P3    3  1  4         2  1  1
P4    4  2  2         0  0  2

Resource Vector: 9 3 6     Available Vector: 0 1 1

NEED = CLAIM − ALLOCATION
     R1 R2 R3
P1    1  2  1
P2    1  0  2
P3    1  0  3
P4    4  2  0
Need Matrix

Is this a safe state? The answer is no, because each process will need at least one
additional unit of R1, and there are none available. Thus, on the basis of deadlock
avoidance, the request by P1 should be denied and P1 should be blocked.

The deadlock avoidance strategy does not predict deadlock with certainty; it merely
anticipates the possibility of deadlock and ensures that there is never such a
possibility.
EXERCISE
Process completion order:
P5, AV = 1 3 4 3 4
P2, AV = 2 4 6 3 4
P3, AV = 2 5 7 3 4
P4, AV = 2 6 8 5 5
P1, AV = 3 7 8 5 5
Banker’s Algorithm
Concept: ensure that the system of processes and
resources is always in a safe state
Mechanism: when a process makes a request for a set
of resources:
1. Assume that the request is granted
2. Update the system state accordingly
3. Determine if the result is a safe state
4. If so, grant the request; if not, block the process
until it is safe to grant the request
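The safety test at the heart of the banker's algorithm can be sketched as follows. The function name is ours; the matrices are the ones from the worked examples above (processes P1..P4, resources R1..R3).

```python
def is_safe(claim, alloc, available):
    """Banker's safety test: True if some completion order exists in
    which every process can obtain its maximum remaining need."""
    n = len(claim)
    need = [[c - a for c, a in zip(claim[i], alloc[i])] for i in range(n)]
    work = list(available)
    done = [False] * n
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not done[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Pretend process i runs to completion and releases its allocation.
                work = [w + a for w, a in zip(work, alloc[i])]
                done[i] = True
                progressed = True
    return all(done)

# Matrices from the safe-state example (P1..P4, resources R1..R3).
claim = [[3, 2, 2], [6, 1, 3], [3, 1, 4], [4, 2, 2]]
alloc = [[1, 0, 0], [6, 1, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(claim, alloc, [0, 1, 1]))        # True: the state is safe

# The unsafe-state example: after hypothetically granting P1 one unit
# each of R1 and R3, no process can cover its remaining R1 need.
alloc_after = [[2, 0, 1], [5, 1, 1], [2, 1, 1], [0, 0, 2]]
print(is_safe(claim, alloc_after, [0, 1, 1]))  # False: deny P1's request
```

Granting a request thus reduces to: apply it tentatively, run this test, and roll back (blocking the process) if the result is unsafe.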
Deadlock Avoidance Advantages
It is not necessary to pre-empt and rollback
processes, as in deadlock detection
Processes may still hold on to resources
It is less restrictive than deadlock prevention
Processes only have to be blocked if their
continuation might result in a deadlock situation
4 Deadlock Avoidance Restrictions
1. Maximum resource requirement for each process
must be stated in advance
2. Processes under consideration must be
independent and with no synchronization
requirements
3. There must be a fixed number of resources to
allocate
4. No process may exit while holding resources
DEADLOCK DETECTION
Deadlock Strategies
Deadlock detection can be performed as frequently as each resource request:
Advantages:
it leads to early detection
the algorithm is relatively simple
Disadvantage:
frequent checks consume considerable processor time
Deadlock Detection Algorithms
1. Mark each process that has a row in the Allocation
matrix of all zeros.
Such a process does not hold any resources.
2. Initialize a temporary vector W equal to the Available
vector.
3. Find an index i such that process i is currently
unmarked and the ith row of Q is less than or equal to W
(Qi <= W). If no such row is found, terminate the
algorithm.
4. If such a row is found (the ith row exists), mark process i
and add the corresponding row of the Allocation
matrix to W. Then return to step 3.
Deadlock Detection Algorithms – Example
1. Mark P2 (its allocation row is all zeros)
2. W = (1 1 1)
3. Mark P3; new W = (1 1 1) + (0 1 1) = (1 2 2)
4. Mark P1
All processes are marked → no deadlock
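The marking algorithm can be sketched as below. The Allocation and Request (Q) matrices here are hypothetical, chosen only to be consistent with the numbers in the worked example (W starts at 1 1 1, P2 holds nothing, and marking P3 releases 0 1 1).

```python
def detect_deadlock(alloc, request, available):
    """Marking algorithm: returns the set of unmarked (deadlocked)
    process indices; an empty set means no deadlock."""
    n, m = len(alloc), len(available)
    # Step 1: mark every process whose allocation row is all zeros.
    marked = [all(v == 0 for v in alloc[i]) for i in range(n)]
    # Step 2: W starts as a copy of the Available vector.
    w = list(available)
    # Steps 3-4: repeatedly mark any process whose request fits in W,
    # adding its allocation back to W.
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not marked[i] and all(request[i][j] <= w[j] for j in range(m)):
                w = [w[j] + alloc[i][j] for j in range(m)]
                marked[i] = True
                progressed = True
    return {i for i in range(n) if not marked[i]}

# Hypothetical matrices consistent with the worked example above:
# P2 holds nothing, P3's request fits W = (1 1 1) and releases (0 1 1),
# after which P1's request fits W = (1 2 2).
alloc   = [[1, 1, 0], [0, 0, 0], [0, 1, 1]]   # rows: P1, P2, P3
request = [[1, 2, 0], [0, 0, 1], [1, 0, 1]]
print(detect_deadlock(alloc, request, [1, 1, 1]))  # set(): no deadlock
```

A non-empty result identifies exactly the processes whose requests can never be satisfied, i.e. the deadlocked set.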
Deadlock Detection
• Deadlock exist if and only if there are unmarked processes at
the end of algorithm (exist a process request that can’t be fulfilled)
• Strategy in this algorithm is to find:
– a process whose resource requests can be satisfied with the
available resources and then;
– assume that those resources are granted and the process runs
to completion and release its resources.
• Then the algorithm will look for another process.
• This algorithm does not guarantee to prevent deadlock; the outcome
depends on the order in which requests are granted.
• It only determines whether deadlock currently exists or not
RECOVERY
Recovery Strategies
1. Abort all deadlocked processes (most common strategy)
2. Back up each deadlocked process to some previously
defined checkpoint and restart all processes
(rollback/restart)
3. Successively abort deadlocked processes until deadlock no
longer exists
4. Successively preempt resources/processes until deadlock no
longer exists.
The process in 3 and 4 will be selected
according to certain criteria, e.g.
• least amount of CPU time consumed
• lowest priority
• least total resources allocated so far
Summary
• Deadlock:
the blocking of a set of processes that either compete for
system resources or communicate with each other
blockage is permanent unless OS takes action
may involve reusable or consumable resources
• Dealing with deadlock:
prevention – guarantees that deadlock will not occur
avoidance – analyzes each new resource request
detection – OS checks for deadlock and takes action
OPERATING SYSTEMS:
INTERNALS AND DESIGN PRINCIPLES, 6/E
WILLIAM STALLINGS
CHAPTER 5
UNIPROCESSOR
SCHEDULING
SCHEDULING OBJECTIVES
OVERALL AIM
OF SCHEDULING
System (performance) objectives:
1. Low response time (fast response)
Response time: the time elapsed from the submission of a request to the beginning
of the response.
A process needs to run as soon as it enters the system.
2. High throughput
Throughput: the number of processes completed per unit time.
Try to get as many processes/jobs done at a time.
3. Processor efficiency
High processor utilization (minimal processor idle time).
The processor is always doing work.
SCHEDULING
• An OS must allocate resources amongst competing processes.
• The resource provided by a processor is execution time
• The resource is allocated by means of a schedule
TYPES OF SCHEDULING
• Long-term scheduling: the decision to add to the pool of
processes to be executed.
• Medium-term scheduling: the decision to add to the number of
processes that are partially or fully in main memory.
• Short-term scheduling: the decision as to which available
process will be executed by the processor.
• I/O scheduling: the decision as to which process’s pending I/O
request shall be handled by an available I/O device.
SCHEDULING AND
PROCESS STATE TRANSITIONS
Events that can interrupt processing and invoke the scheduler:
• Clock interrupts
• I/O interrupts
• Operating system calls
• Signals (e.g., semaphores)
SCHEDULING POLICIES AND
ALGORITHMS
ALTERNATIVE SCHEDULING
POLICIES
SELECTION FUNCTION
• Non-preemptive
• Once a process is in the running state, it will continue until it
terminates or blocks itself for I/O
• Preemptive
• Currently running process may be interrupted and moved to ready state
by the OS
• Preemption may occur when new process arrives, on an interrupt, or
periodically.
SCHEDULING CRITERIA

FIRST-COME-FIRST-SERVED (FCFS)
• Waiting time = start execution time − arrival time
Average Waiting Time = Total waiting time / Number of processes
= (0 + 1 + 5 + 7 + 10) / 5
= 4.6 ms
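The FCFS figures can be reproduced with a short simulation. The five-process table of (arrival time, service time) below is assumed from the worked numbers in these slides.

```python
# Assumed workload behind the worked examples in these slides:
# process -> (arrival time, service time)
JOBS = {"A": (0, 3), "B": (2, 6), "C": (4, 4), "D": (6, 5), "E": (8, 2)}

def fcfs_waits(jobs):
    """Run jobs in arrival order; waiting time = start time - arrival time."""
    clock = 0
    waits = {}
    for name, (arrival, service) in sorted(jobs.items(), key=lambda kv: kv[1][0]):
        start = max(clock, arrival)      # wait for the CPU (or for arrival)
        waits[name] = start - arrival
        clock = start + service          # run to completion, non-preemptive
    return waits

waits = fcfs_waits(JOBS)
print(waits)                             # A:0, B:1, C:5, D:7, E:10
print(sum(waits.values()) / len(waits))  # 4.6
```

The same workload is reused for the SPN, HRRN and SRT sketches that follow.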
SHORTEST PROCESS NEXT (SPN)
• Selection function: selects the process with the shortest
service time
• Decision mode: non-preemptive
• A short process will jump to the head of the queue
• Possibility of starvation for longer processes, as long as
there is a steady flow of shorter processes
SHORTEST PROCESS NEXT (SPN)
• Waiting time = total of (start execution time − arrival time)
Process A: 0
Process B: 3 − 2 = 1
Process C: 11 − 4 = 7
Process D: 15 − 6 = 9
Process E: 9 − 8 = 1
Average Waiting Time =
Total waiting time / Number of processes
= (0 + 1 + 7 + 9 + 1) / 5
= 3.6 ms
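A sketch reproducing the SPN waits, using the same assumed five-process workload as in the FCFS example:

```python
# Assumed workload: process -> (arrival time, service time)
JOBS = {"A": (0, 3), "B": (2, 6), "C": (4, 4), "D": (6, 5), "E": (8, 2)}

def spn_waits(jobs):
    """Non-preemptive: whenever the CPU frees up, pick the arrived
    job with the shortest service time."""
    pending = dict(jobs)
    clock = 0
    waits = {}
    while pending:
        ready = {n: (a, s) for n, (a, s) in pending.items() if a <= clock}
        if not ready:                    # CPU idle until the next arrival
            clock = min(a for a, _ in pending.values())
            continue
        # Shortest service time first (ties broken by name).
        name = min(ready, key=lambda n: (ready[n][1], n))
        arrival, service = pending.pop(name)
        waits[name] = clock - arrival
        clock += service                 # runs to completion once started
    return waits

waits = spn_waits(JOBS)
print(waits)                             # A:0, B:1, C:7, D:9, E:1
print(sum(waits.values()) / len(waits))  # 3.6
```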
HIGHEST RESPONSE RATIO NEXT
(HRRN)
• Selection function: selects the process with the greatest response ratio, where
Ratio = (time spent waiting + expected service time) / expected service time
• B arrives at 2: no need to calculate the ratio at this point, since only B is
in the queue
HIGHEST RESPONSE RATIO NEXT
(HRRN)
At time 9 (A and B have completed), the ratios are:
• C = ((9 − 4) + 4) / 4 = 2.25 → C is CHOSEN
• D = ((9 − 6) + 5) / 5 = 1.6
• E = ((9 − 8) + 2) / 2 = 1.5
HIGHEST RESPONSE RATIO NEXT
(HRRN)
At time 13 (C has completed), the ratios are:
• D = ((13 − 6) + 5) / 5 = 2.4
• E = ((13 − 8) + 2) / 2 = 3.5 → E is CHOSEN
HIGHEST RESPONSE RATIO NEXT
(HRRN)
• Waiting time = total of (start execution time − arrival time)
Process A: 0
Process B: 3 − 2 = 1
Process C: 9 − 4 = 5
Process D: 15 − 6 = 9
Process E: 13 − 8 = 5
Average Waiting Time =
Total waiting time / Number of processes
= (0 + 1 + 5 + 9 + 5) / 5
= 4 ms
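A sketch reproducing the HRRN waits on the same assumed workload:

```python
# Assumed workload: process -> (arrival time, service time)
JOBS = {"A": (0, 3), "B": (2, 6), "C": (4, 4), "D": (6, 5), "E": (8, 2)}

def hrrn_waits(jobs):
    """Non-preemptive: whenever the CPU frees up, pick the arrived job
    with the greatest response ratio (waiting + service) / service."""
    pending = dict(jobs)
    clock = 0
    waits = {}
    while pending:
        ready = [n for n, (a, _) in pending.items() if a <= clock]
        if not ready:                    # CPU idle until the next arrival
            clock = min(a for a, _ in pending.values())
            continue
        def ratio(n):
            arrival, service = pending[n]
            return ((clock - arrival) + service) / service
        name = max(ready, key=ratio)
        arrival, service = pending.pop(name)
        waits[name] = clock - arrival
        clock += service
    return waits

waits = hrrn_waits(JOBS)
print(waits)                             # A:0, B:1, C:5, D:9, E:5
print(sum(waits.values()) / len(waits))  # 4.0
```

Note how the ratio rises the longer a job waits, which is what protects long jobs from the starvation that pure SPN allows.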
PREEMPTIVE POLICIES
SHORTEST REMAINING TIME
(SRT)
• Selection function: selects the process with the shortest
expected remaining service time
• Decision mode: preemptive
• Risk of starvation of longer processes
• Should give superior turnaround time performance to SPN,
because a short job is given immediate preference over a
running longer job
SHORTEST REMAINING TIME
(SRT)
• Waiting time = total of (start execution time − arrival time)
Process A: 0
Process B: (3 − 2) + (10 − 4) = 7
Process C: 4 − 4 = 0
Process D: 15 − 6 = 9
Process E: 8 − 8 = 0
Average Waiting Time =
Total waiting time / Number of processes
= (0 + 7 + 0 + 9 + 0) / 5
= 3.2 ms
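A sketch reproducing the SRT waits on the same assumed workload, using a unit-tick simulation with ties broken by earlier arrival (which matches the figures above):

```python
# Assumed workload: process -> (arrival time, service time)
JOBS = {"A": (0, 3), "B": (2, 6), "C": (4, 4), "D": (6, 5), "E": (8, 2)}

def srt_waits(jobs):
    """Preemptive shortest-remaining-time: at every time unit, run the
    arrived job with the least remaining service (earlier arrival wins ties)."""
    remaining = {n: s for n, (a, s) in jobs.items()}
    arrival = {n: a for n, (a, s) in jobs.items()}
    finish = {}
    clock = 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:                    # nothing has arrived yet
            clock += 1
            continue
        name = min(ready, key=lambda n: (remaining[n], arrival[n]))
        remaining[name] -= 1             # run the chosen job for one tick
        clock += 1
        if remaining[name] == 0:
            finish[name] = clock
            del remaining[name]
    # waiting time = turnaround time - service time
    return {n: finish[n] - a - s for n, (a, s) in jobs.items()}

waits = srt_waits(JOBS)
print(waits)                             # A:0, B:7, C:0, D:9, E:0
print(sum(waits.values()) / len(waits))  # 3.2
```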
SUMMARY
• The operating system must make three types of scheduling decisions with
respect to the execution of processes:
1. Long-term – determines when new processes are admitted to the
system
2. Medium-term – part of the swapping function and determines when a
program is brought into main memory so that it may be executed
3. Short-term – determines which ready process will be executed next
by the processor
• From a user’s point of view, response time is
generally the most important characteristic of a system
• From a system’s point of view, throughput or processor utilization is
important
• Algorithms:
• FCFS, SPN, HRRN, SRT, Round Robin and Feedback
Operating Systems:
Internals and Design Principles, 6/E
William Stallings
Chapter 6
Memory Management
Topic
• Memory Management Requirement
• Partitioning
• Simple Paging
• Simple Segmentation
The need for memory
management
• Memory is cheap today, and getting
cheaper
– But applications are demanding more and
more memory, there is never enough!
• Memory management involves swapping
blocks of data from secondary storage.
• Memory I/O is slow compared to a CPU
– The OS must cleverly time the swapping to
maximise the CPU’s efficiency
Memory Management
Term Description
Frame Fixed-length block of main
memory.
Page Fixed-length block of data in
secondary memory (e.g. on disk).
Segment Variable-length block of data that
resides in secondary memory.
Memory Management
Requirements
• Relocation
• Protection
• Sharing
• Logical organisation
• Physical organisation
Requirements: Relocation
• The programmer does not know where the
program will be placed in memory when it
is executed,
– it may be swapped to disk and return to main
memory at a different location (relocated)
• Memory references must be translated to
the actual physical memory address
Requirements: Protection
• Processes should not be able to reference
memory locations in another process
without permission
• Impossible to check absolute addresses at
compile time
• Must be checked at run time
Requirements: Sharing
• Allow several processes to access the
same portion of memory
• Better to allow each process access to the
same copy of the program rather than
have their own separate copy
Requirements: Logical
Organization
• Memory is organized linearly (usually)
• Programs are written in modules
– Modules can be written and compiled
independently
• Different degrees of protection given to
modules (read-only, execute-only)
• Share modules among processes
• Segmentation helps here
Requirements: Physical
Organization
• Cannot leave the programmer with the
responsibility to manage memory
• Memory available for a program plus its
data may be insufficient
– Overlaying allows various modules to be
assigned the same region of memory but is
time consuming to program
• Programmer does not know how much
space will be available
Partitioning
• An early method of managing memory
– Pre-virtual memory
– Not used much now
• But, it will clarify the later discussion of
virtual memory if we look first at
partitioning
– Virtual Memory has evolved from the
partitioning methods
Types of Partitioning
• Fixed Partitioning
• Dynamic Partitioning
Fixed Partitioning
• Equal-size partitions (see fig 7.3a)
– Any process whose size is less than
or equal to the partition size can be
loaded into an available partition
• The operating system can swap a
process out of a partition
– If none are in a ready or running
state
Fixed Partitioning Problems
• A program may not fit in a partition.
– The programmer must design the program
with overlays
• Main memory use is inefficient.
– Any program, no matter how small, occupies
an entire partition.
– This results in internal fragmentation.
Solution – Unequal Size
Partitions
• Lessens both problems
– but doesn’t solve completely
• In Fig 7.3b,
– Programs up to 16M can be
accommodated without overlay
– Smaller programs can be placed in
smaller partitions, reducing internal
fragmentation
Placement Algorithm
• Equal-size
– Placement is trivial (no options)
• Unequal-size
– Can assign each process to the smallest
partition within which it will fit
– Queue for each partition
– Processes are assigned in such a way as to
minimize wasted memory within a partition
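The unequal-size placement rule can be sketched as a best-fit lookup. The partition sizes below are hypothetical, chosen only to illustrate the idea.

```python
def place(process_size, partitions):
    """Assign the process to the smallest fixed partition that can hold it.
    `partitions` is a list of partition sizes; returns the chosen size,
    or None if the process fits nowhere (it would need overlays)."""
    candidates = [p for p in partitions if p >= process_size]
    return min(candidates) if candidates else None

# A hypothetical unequal-size layout (sizes in Mbytes):
PARTITIONS = [2, 4, 6, 8, 8, 12, 16]

print(place(5, PARTITIONS))    # 6: smallest partition that fits, least waste
print(place(1, PARTITIONS))    # 2
print(place(20, PARTITIONS))   # None: too big for any partition
```

Choosing the smallest partition that fits minimizes the internal fragmentation left inside the chosen partition.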
Fixed Partitioning
Dynamic Partitioning
• Partitions are of variable length and
number
• Process is allocated exactly as much
memory as required
Dynamic Partitioning Example
(Figure: a 64M memory with an 8M OS area; successive snapshots show
P1 (20M), P2 (14M), P3 (18M), and P4 (8M) being loaded and swapped,
leaving scattered 6M and 4M holes.)
• External Fragmentation
• Memory external to all processes is fragmented
• Can be resolved using compaction:
– the OS moves processes so that they are contiguous
– time consuming and wastes CPU time
(Figure: main-memory frames holding the pages of four processes –
A.0–A.3, B.0–B.2, C.0–C.3, and D.0–D.4.)
Page Table
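Paged address translation can be sketched as follows. The page size and the page-table contents here are assumptions for illustration, not values from the figure.

```python
PAGE_SIZE = 1024  # bytes per page/frame (assumed for illustration)

# Hypothetical page table for one process: page number -> frame number.
page_table = {0: 5, 1: 6, 2: 1, 3: 2}

def translate(logical_address):
    """Split a logical address into (page, offset), then replace the
    page number with the frame number from the page table."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table[page]             # KeyError here models an invalid page
    return frame * PAGE_SIZE + offset

# Logical address 2100 = page 2, offset 52 -> frame 1, offset 52.
print(translate(2100))   # 1076
```

Because every frame is the same fixed size, any free frame can hold any page, which is why paging avoids external fragmentation.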
Segmentation
• A program can be subdivided into
segments
– Segments may vary in length
– There is a maximum segment length
• Addressing consists of two parts:
– a segment number and
– an offset
• Segmentation is similar to dynamic
partitioning
• The difference from dynamic partitioning is
that with segmentation a program may
occupy more than one partition, and these
partitions do not need to be contiguous.
• Segmentation eliminates internal
fragmentation but suffers from external
fragmentation (as does dynamic
partitioning)
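Segmented address translation (segment number + offset) can be sketched like this; the segment-table values are hypothetical.

```python
# Hypothetical segment table: segment number -> (base address, length).
segment_table = {0: (1000, 500), 1: (4000, 200), 2: (7000, 1000)}

def translate(segment, offset):
    """A segmented address is (segment number, offset). The offset is
    checked against the segment length, then added to the segment base."""
    base, length = segment_table[segment]
    if offset >= length:
        raise ValueError("offset beyond segment length")  # protection fault
    return base + offset

print(translate(1, 150))   # 4150
# translate(1, 300) would raise: offset 300 exceeds segment 1's length of 200.
```

The length check is what gives segmentation its natural per-module protection, at the cost of variable-size allocations and hence external fragmentation.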
OPERATING SYSTEMS:
INTERNALS AND DESIGN PRINCIPLES,
6/E
WILLIAM STALLINGS
Chapter 7
Virtual Memory
Topic
Real and Virtual Memory
Locality and Virtual Memory
Terminology
Execution of a Process
Operating system brings into main memory a few pieces of
the program
Resident set - portion of process that is in main memory
An interrupt is generated when an address is needed that is
not in main memory
Operating system places the process in a blocking state
Execution of a Process
Piece of process that contains the logical address is brought
into main memory
◦ Operating system issues a disk I/O Read request
◦ Another process is dispatched to run while the disk I/O
takes place
◦ An interrupt is issued when disk I/O complete which
causes the operating system to place the affected process
in the Ready state
Implications of this new strategy
More processes may be maintained in main memory
◦ Only load in some of the pieces of each process
◦ With so many processes in main memory, it is very likely a
process will be in the Ready state at any particular time
A process may be larger than all of main memory
Real and Virtual Memory
Real memory
◦ Main memory, the actual RAM
Virtual memory
◦ Memory on disk
◦ Allows for effective multiprogramming and relieves the user of tight
constraints of main memory
Locality and Virtual Memory
To accommodate as many processes as possible, only a few pieces of
each process are maintained in main memory
But main memory may be full, so the OS brings one piece in and must
swap another piece out
The OS must not swap out a piece of a process just before that piece is
needed
If it does this too often this leads to thrashing:
Thrashing
•A state in which the system spends most of its time
swapping pieces rather than executing instructions.
• To avoid this, the operating system tries to guess
which pieces are least likely to be used in the
near future.
• The guess is based on recent history
Principle of Locality
Locality of Reference: the principle that the instruction currently being
fetched/executed is very close in memory to the instruction to be
fetched/executed next.
If we keep the most active segments of program and data in the cache,
overall execution speed for the program will be optimized. Our strategy for
cache utilization should maximize the number of cache read/write
operations, in comparison with the number of main memory read/write
operations.
Examples
An example of the implementation of these policies uses the page
address stream formed by executing a program:
◦ 2 3 2 1 5 2 4 5 3 2 5 2
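The stream above can be fed to a small page-replacement simulation. This sketch uses LRU with three frames; the frame count is an assumption here, since the original figures compare several policies.

```python
def lru_faults(stream, frames):
    """Count page faults under LRU replacement with the given number
    of frames, starting from empty memory."""
    memory = []            # most recently used page kept at the end
    faults = 0
    for page in stream:
        if page in memory:
            memory.remove(page)      # hit: refresh its recency
        else:
            faults += 1              # fault: load the page,
            if len(memory) == frames:
                memory.pop(0)        # evicting the least recently used
        memory.append(page)
    return faults

stream = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]
print(lru_faults(stream, 3))   # 7 faults with three frames
```

Rerunning with other frame counts or policies shows how the OS's "guess based on recent history" trades memory for fault rate.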
Positioning the Read/Write Heads

Disc Performance Parameters
• Seek time: the time it takes to position the head at the track.
Seek time is the reason for differences in performance.
• Transfer time: the time required for the data transfer.
If sector access requests involve selection of tracks at random, then the
performance of the disk I/O will be as poor as possible.
Chapter 09
File Management
File System and Structure
What is a file?
Structure
• files can be organized into hierarchical or more complex
structures to reflect the relationships among files
• Ease of management and bookkeeping
File Systems – typical operations:
• CREATE
• DELETE: a file is removed from the file structure and destroyed
• OPEN
• CLOSE
• READ: a process reads all or a portion of the data in a file
Field
• basic element of data
• contains a single value
• has a length and data type; fixed or variable length

Record
• collection of related fields that can be treated
as a unit by some application program
• fixed or variable length

File
• collection of similar records
• treated as a single entity
• may be referenced by name
• access control restrictions usually apply at the file level

Database
• collection of related data
• relationships among elements of data are explicit
• designed for use by a number of different applications
• consists of one or more types of files
DATABASE
A database contains files; a file contains records; a record contains fields.

File example:
            ID        Name   Address      Contact Number
Record 1    SN 11234  Razin  Kajang       012-9992366
...
Record n    IS 56554  Amir   Kota Bahru   011-4569875
File Management System
(FMS)
Minimal User Requirements
Each user:
• should be able to create, delete, read, write and modify files
• should be able to access his or her files by name rather than by a numeric identifier
FMS Objectives
1. Meet the data management needs of the
user
2. Guarantee that the data in the file are
valid
3. Optimize performance (how will the file
be accessed/shared?)
4. Provide I/O support for a variety of
storage device types
5. Minimize the potential for lost or
destroyed data
6. Provide a standardized set of I/O
interface routines to user processes
7. Provide I/O support for multiple users in
the case of multiple-user systems
Software Organization for
Files
Typical Software Organization
CONSIDERED
PART OF THE
OS
Device Drivers
Lowest level (device
communication / machine level)
Communicates directly with
peripheral devices
Which is the source / destination
for files
Responsible for starting I/O
operations on a device
Processes the completion of an I/O
request
Basic File System
Also referred to as the physical
I/O level
Primary interface with the
environment outside the computer
system
Deals with blocks of data that are
exchanged with disk or tape
systems
Concerned with
the placement of blocks on the
secondary storage device
buffering blocks in main memory
Basic I/O Supervisor
Responsible for all file I/O
initiation and termination
Maintaining Control structures that
deal with:
device I/O
Scheduling
file status
Five of the common file organizations are:
• The pile
• The sequential file
• The indexed sequential file
• The indexed file
• The direct, or hashed, file
1. The Pile
• Least complicated form of file
organization
• Data are collected in the order they
arrive
• Each record consists of one burst of
data
• Purpose is simply to accumulate the
mass of data and save it
• Record access is by exhaustive search,
since there is no index
2. The Sequential File • Most common form of file structure
• A fixed format is used for records
• Key field uniquely identifies the record
(EXAMPLE: StudentID)
• records are stored in either
alphabetical or numerical order
• Typically used in batch applications
• Read files -> process the files ->
Generate report
• Less human intervention
• Only organization that is easily stored
on tape as well as disk
3. Indexed Sequential File
• Adds:
1. an index to the file to support random access
• Provides lookup capability to quickly reach desired
record
2. an overflow file
• Providing pointer from previous record
• Greatly reduces the time required to access a single
record
• Multiple levels of indexing can be used to provide
greater efficiency in access
• For example, depending on needs, a file can be
indexed based on:
• First Name
• Last Name
• Department
• Major
• etc.