Chapter 2
MEMORY HIERARCHY
[Diagram: Cache → Main Memory → Secondary Storage → Tertiary Storage, from fastest to slowest]
MEMORY MANAGEMENT
STRATEGIES
• Fetch strategy
• The method used to determine which block is to be obtained next.
• Determines when to load information and how much to load at a time.
• E.g., demand fetching, anticipatory fetching (pre-fetching).
• Placement strategy
• The method used to determine where in memory a new block is to be placed.
• E.g., Best-Fit, First-Fit, Worst-Fit
• Replacement strategy
• The method used to determine which resident block is to be displaced.
• Determines which memory area is to be freed under contention conditions. E.g., FIFO
• Three algorithms for finding a free block large enough for a specific amount of memory:
1) First fit
- Allocate the first free block that is large enough for the new process.
- This is a fast algorithm.
2) Best fit
- Allocate the smallest block among those that are large enough for the new process.
- The OS searches the entire list, or keeps the list sorted by size and stops at the first entry that is large enough for the process.
- This algorithm produces the smallest leftover block.
3) Worst fit
- Allocate the largest block among those that are large enough for the new process.
- A search or sort of the entire list is needed.
- This algorithm produces the largest leftover block.
QUESTION
• Given five (5) memory partitions of 100KB, 500KB, 200KB, 300KB, and 600KB (in that order), show how the best-fit, first-fit, and worst-fit algorithms place processes of 212KB, 417KB, 112KB, and 426KB (in that order).
ANSWER
• Best fit
- 212KB is put in 300KB
- 417KB is put in 500KB
- 112KB is put in 200KB
- 426KB is put in 600KB
• First fit
- 212KB is put in 500KB
- 417KB is put in 600KB
- 112KB is put in 200KB
- 426KB must wait
• Worst fit
- 212KB is put in 600KB
- 417KB is put in 500KB
- 112KB is put in 300KB
- 426KB must wait
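The three placement algorithms can be expressed as a small simulation. This is an illustrative Python sketch (not from the slides): it treats each partition as fixed, holding at most one process with no splitting, which matches the worked answer above.

```python
def allocate(partitions, processes, strategy):
    """Simulate contiguous allocation into fixed partitions.

    Each partition holds at most one process (no splitting).
    strategy: 'first', 'best', or 'worst'.
    Returns a list of (process_size, chosen_partition_or_None).
    """
    free = list(partitions)                 # still-free partitions, in order
    placements = []
    for p in processes:
        candidates = [s for s in free if s >= p]
        if not candidates:
            placements.append((p, None))    # process must wait
            continue
        if strategy == 'first':
            chosen = candidates[0]          # first large-enough partition
        elif strategy == 'best':
            chosen = min(candidates)        # smallest large-enough partition
        else:                               # 'worst'
            chosen = max(candidates)        # largest large-enough partition
        free.remove(chosen)
        placements.append((p, chosen))
    return placements

parts, procs = [100, 500, 200, 300, 600], [212, 417, 112, 426]
print(allocate(parts, procs, 'best'))   # 212→300, 417→500, 112→200, 426→600
```

Running it with `'first'` and `'worst'` reproduces the other two columns of the answer, including the 426KB process that must wait.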
RESIDENT ROUTINES
• The OS is a collection of software routines.
• Resident routines (permanent routines)
• The part of a program that must remain in memory at all times.
• Instructions and data that remain in memory can be accessed instantly.
• A routine that stays in memory, e.g., routines that control physical I/O and directly support application programs as they run.
• One such program might be an anti-virus program; this has given rise to the term resident protection.
TRANSIENT ROUTINES
• Transient routines (temporary routines)
• A routine that is stored on disk and loaded only as needed
• E.g., routines that format disks
RESIDENT & TRANSIENT ROUTINES
• The operating system is a collection of software routines.
[Diagram: Process A is swapped out from main memory to disk; main memory then has empty space for other processes; Process B is swapped in from disk to main memory.]
• On many systems, multiple programs are loaded into memory and executed concurrently.
FIXED-PARTITION MEMORY
MANAGEMENT
Fixed-partition memory
management structure diagram
VIRTUAL MEMORY
• A virtual address is split into a page number and a displacement. The page's base address is then looked up in the program's page table (like the segment table, maintained by the operating system) and added to the displacement.
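The translation step above can be sketched in a few lines of Python. The page size and the page-table contents here are made-up values for illustration only.

```python
PAGE_SIZE = 4096  # assumed 4 KB pages, for illustration

# Hypothetical page table: page number -> frame base address in physical memory
page_table = {0: 0x8000, 1: 0x2000, 2: 0xC000}

def translate(virtual_address):
    """Split a virtual address into page number and displacement,
    then add the displacement to the page's base address."""
    page = virtual_address // PAGE_SIZE
    displacement = virtual_address % PAGE_SIZE
    return page_table[page] + displacement

# Virtual address 4100 lies in page 1 at displacement 4:
print(hex(translate(4100)))  # 0x2000 + 4
```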
PAGING
• Advantages:
1. Address translation: each task has the same virtual address space
2. Memory protection
3. Demand loading (prevents a big load on the CPU when a task first starts running; conserves memory)
• Ready list
• Blocked list
CONT…
[Diagram: Ready —Dispatch→ Running; Running —Block→ Blocked; Blocked —Wakeup→ Ready]
INTERRUPTS
• An interrupt is an electronic signal.
• Hardware senses the signal, saves key control information
for the currently executing program, and starts the operating
system’s interrupt handler routine. At that instant, the
interrupt ends.
• The operating system then handles the interrupt.
• Subsequently, after the interrupt is processed, the dispatcher
starts an application program.
• Eventually, the program that was executing at the time of
the interrupt resumes processing.
CPU SCHEDULER
• Selects from among the processes in memory that are ready
to execute, and allocates the CPU to one of them
• CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
• Scheduling under 1 and 4 is nonpreemptive
• All other scheduling is preemptive
CPU SCHEDULING
• Types of scheduling:
• Long-term scheduling
• Medium-term scheduling
• Short-term scheduling
Long-term Scheduling
• Determines which programs are admitted to the system for processing - controls the degree of multiprogramming
• Once admitted, a program becomes a process and is either:
– added to the queue for the short-term scheduler, or
– swapped out (to disk) and added to the queue for the medium-term scheduler
Medium-Term Scheduling
• Part of the swapping function between main memory and disk
- based on how many processes the OS wants available at any one time
- must consider memory management if there is no virtual memory (VM), so it looks at the memory requirements of swapped-out processes
Short-Term Scheduling (Dispatcher)
• Executes most frequently, to decide which process to execute next
– Invoked whenever an event occurs that interrupts the current process or provides an opportunity to preempt the current one in favor of another
– Events: clock interrupt, I/O interrupt, OS call, signal
CPU SCHEDULER
• A preemptive scheduling policy interrupts the processing of a job and transfers the CPU to another job.
- A process may be preempted by the operating system when:
1) a new process arrives (perhaps at a higher priority), or
2) an interrupt or signal occurs, or
3) a (frequent) clock interrupt occurs.
• Combined strategies:
• Multi-level queue
• Multi-level feedback queue
First In First Out (FIFO)
• The simplest scheduling discipline
• Processes are dispatched according to their arrival time on the ready queue
• The process that has the CPU runs until it completes
• A non-preemptive scheduling algorithm
First In First Out (FIFO)
• Disadvantages:
• Long jobs make short jobs wait
• Unimportant jobs make important jobs wait
• Not useful for scheduling interactive users because it does not guarantee good response times
First In First Out (FIFO)
• Rarely used as a master scheduling algorithm, but often embedded within other algorithms
• E.g., many scheduling algorithms dispatch processes according to priority, but processes with the same priority are dispatched FIFO
FIFO
[Diagram: ready list C ← B ← A → CPU → completion; transitions: Dispatch (Ready → Running), Timerrunout (Running → Ready), Block (Running → Blocked), Wakeup (Blocked → Ready)]
First Come First Serve (FCFS)
• Non-preemptive.
• Handles jobs according to their arrival time -- the earlier they
arrive, the sooner they’re served.
• Simple algorithm to implement -- uses a FIFO queue.
• Good for batch systems; not so good for interactive ones.
• Turnaround time is unpredictable.
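FCFS waiting and turnaround times can be computed with a minimal sketch, assuming all jobs arrive at time 0. The burst values in the usage line are illustrative, not from the slides.

```python
def fcfs(bursts):
    """FCFS/FIFO: jobs run in arrival order until complete.
    Returns per-job (waiting_time, turnaround_time), assuming
    every job arrives at time 0."""
    clock, results = 0, []
    for burst in bursts:
        # a job waits until everything ahead of it finishes
        results.append((clock, clock + burst))
        clock += burst
    return results

# A long first job makes the short jobs wait (the disadvantage above):
print(fcfs([24, 3, 3]))  # [(0, 24), (24, 27), (27, 30)]
```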
Round Robin (RR)
• Processes are dispatched FIFO but are given a limited amount of CPU time (a time slice or quantum)
• If a process does not complete before its CPU time expires, the CPU is preempted and given to the next waiting process
• The preempted process is placed at the back of the ready list
Round Robin (RR)
• Advantage:
• Effective in timesharing environments in which the system needs to guarantee reasonable response times for interactive users
Round Robin (RR)
[Diagram: ready list A ← C ← B → CPU; from the CPU a process either completes (it finished within its CPU time) or is preempted back to the ready list (it did not complete before its CPU time expired)]
Round Robin (RR)
• FCFS with Preemption.
• Used extensively in interactive systems because it’s easy to
implement.
• Isn’t based on job characteristics but on a predetermined
slice of time that’s given to each job.
• Ensures CPU is equally shared among all active
processes and isn’t monopolized by any one job.
• The time slice is called a time quantum
• Its size is crucial to system performance (100 ms to 1-2 seconds)
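The RR mechanism above can be sketched as a short simulation (not from the slides): each process gets at most one quantum, and a preempted process goes to the back of the ready list.

```python
from collections import deque

def round_robin(bursts, quantum):
    """RR scheduling: each process runs for at most `quantum` units,
    then is preempted to the back of the ready list. Returns each
    process's completion time, assuming all arrive at time 0."""
    ready = deque(enumerate(bursts))        # (pid, remaining burst)
    clock, completion = 0, {}
    while ready:
        pid, remaining = ready.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            ready.append((pid, remaining - run))  # preempted
        else:
            completion[pid] = clock               # finished
    return completion

# Short jobs finish quickly even behind a long one:
print(round_robin([24, 3, 3], quantum=4))  # {0: 30, 1: 7, 2: 10}
```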
Shortest Job First (SJF)
• A non-preemptive scheduling algorithm
• The process with the smallest estimated run-time-to-completion is run next
• Advantage:
• Minimizes the average waiting time of jobs
• Disadvantage:
• Not useful in a timesharing environment
Shortest Job First (SJF)
• Non-preemptive.
• Handles jobs based on the length of their CPU cycle time.
• Uses these lengths to schedule the process with the shortest time first.
• Optimal – gives minimum average waiting time for a given
set of processes.
• optimal only when all of jobs are available at same time
and the CPU estimates are available and accurate.
• Doesn’t work in interactive systems because users don’t
estimate in advance CPU time required to run their jobs.
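The claim that SJF minimizes average waiting time can be checked with a small sketch (illustrative, assuming all jobs are available at time 0 and burst estimates are accurate, as the slide requires):

```python
def sjf(bursts):
    """Non-preemptive SJF: run the shortest estimated burst first.
    Returns the average waiting time, with all jobs available at t=0."""
    clock, total_wait = 0, 0
    for burst in sorted(bursts):   # shortest job first
        total_wait += clock        # this job waited `clock` units
        clock += burst
    return total_wait / len(bursts)

print(sjf([6, 8, 7, 3]))  # 7.0 — FCFS order [6, 8, 7, 3] would give 10.25
```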
Priority Scheduling (P)
• Non-preemptive.
• Gives preferential treatment to important jobs.
• Programs with highest priority are processed first.
• Aren’t interrupted until CPU cycles are completed or a
natural wait occurs.
• If 2+ jobs with equal priority are in READY queue,
processor is allocated to one that arrived first (first come
first served within priority).
• Many different methods of assigning priorities by system
administrator or by Processor Manager.
Priority
• A preemptive priority scheduling algorithm will preempt the CPU
if the priority of the newly arrived process is higher than the
priority of the currently running process.
Priority
• A non-preemptive priority scheduling algorithm will simply put the newly arrived process at the head of the ready queue.
Priority
• Disadvantage:
• Indefinite blocking or process starvation. This can be solved by a technique called aging, wherein we gradually increase the priority of a long-waiting process.
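Aging can be sketched as follows. This is an illustrative Python sketch, not pseudocode from the slides: the aging step size is an assumed parameter, a lower priority number means higher priority, and ties are broken first-come-first-served (the stable sort preserves arrival order).

```python
def priority_schedule(jobs, age_step=1):
    """Non-preemptive priority selection with aging.
    jobs: list of (name, priority); lower number = higher priority.
    Each time a job is passed over, its priority number is reduced,
    so a long-waiting job eventually runs (no starvation)."""
    pending = [[prio, name] for name, prio in jobs]
    order = []
    while pending:
        # stable sort: equal priorities keep arrival order (FCFS tie-break)
        pending.sort(key=lambda j: j[0])
        _, name = pending.pop(0)
        order.append(name)
        for j in pending:
            j[0] -= age_step       # aging: waiting jobs gain priority
    return order

print(priority_schedule([('A', 3), ('B', 1), ('C', 2)]))  # ['B', 'C', 'A']
```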
Multilevel Queue (MLQ)
• Partitions the ready queue into several separate queues.
• Processes are permanently assigned to one queue based on some property of the process, e.g.:
• Interactive processes
• Batch processes
Multilevel Queue (MLQ)
• Each queue has its own scheduling algorithm.
• In addition, there must be scheduling between the queues, which is mostly implemented as fixed-priority preemptive scheduling
Multilevel Queue
[Diagram: Ready Queue 1 - System Processes, scheduled SJF; Ready Queue 2 - Interactive Processes, scheduled RR; Ready Queue 3 - Batch Processes, scheduled FIFO; all feed the processor, with inter-queue scheduling mostly implemented as fixed-priority preemptive scheduling]
Multilevel Feedback Queue
(MLFQ)
• Has a number of queues, each assigned a different priority level.
• A job that is ready to run can only be on a single queue.
MULTILEVEL FEEDBACK QUEUE (MLFQ)
• Uses priorities to decide which job should run at a given time:
• a job with higher priority (i.e., a job on a higher queue) is the one that will run.
• If there is more than one job with the same priority on a given queue at the same time,
• round-robin scheduling is used among those jobs.
Multilevel Feedback Queue (MLFQ)
• MLFQ varies the priority of a job based on its observed behavior. E.g.:
• If a job repeatedly releases the CPU while waiting for input from the keyboard, MLFQ will keep its priority high.
• If a job uses the CPU intensively, MLFQ will reduce its priority.
• In this way, MLFQ tries to learn about processes as they run, and thus uses the history of a job to predict its future behavior.
Multilevel Feedback Queue
(MLFQ)
• Rules of MLFQ:
• Rule 1: If Priority(A) > Priority(B), A will run and B
won’t
• Rule 2: If Priority(A) = Priority(B), both A and B will
be run in round-robin fashion
• Rule 3: When a job enters the system, it is placed at
the highest priority (the topmost queue).
• Rule 4: Once a job uses up its quantum at a given
level (regardless of how many times it has given up
the CPU), its priority is reduced (it moves down one
queue).
• Rule 5: After some time period S, move all the jobs in the system to the topmost queue.
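Rules 1-4 can be sketched as a small simulation. This is an illustrative sketch, not code from the slides: the per-level quanta are assumed parameters, the bottom queue runs FIFO to completion, and Rule 5 (the periodic boost) is omitted for brevity.

```python
from collections import deque

def mlfq(bursts, quanta):
    """MLFQ sketch: new jobs enter the top queue (Rule 3); the highest
    non-empty queue runs (Rules 1-2); a job that uses its full quantum
    moves down one queue (Rule 4). quanta: quantum per level, e.g. [8, 16];
    the level below the last quantum is a FIFO run-to-completion queue.
    Returns the order in which jobs finish."""
    levels = [deque() for _ in range(len(quanta) + 1)]
    for pid, burst in enumerate(bursts):
        levels[0].append((pid, burst))                  # Rule 3
    finished = []
    while any(levels):
        lvl = next(i for i, q in enumerate(levels) if q)  # Rules 1-2
        pid, rem = levels[lvl].popleft()
        quantum = quanta[lvl] if lvl < len(quanta) else rem  # bottom: FIFO
        if rem > quantum:
            levels[lvl + 1].append((pid, rem - quantum))  # Rule 4: demote
        else:
            finished.append(pid)
    return finished

# The CPU-intensive job (20 units) is demoted; short jobs finish first:
print(mlfq([4, 20, 6], quanta=[8, 16]))  # [0, 2, 1]
```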
Multilevel Feedback Queue
[Diagram: Queue 1, quantum = 10 milliseconds - Priority 1; Queue 3, FIFO - Priority 3]
Multi-level Feedback Queue
(MLFQ) : EXAMPLE
Scheduling
A new job enters queue Q0 (RR) and is placed at the end. When it gains the
CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds,
the job is moved to the end of queue Q1.
Example of Multilevel Feedback Queue
DEFINITION OF THREADS
A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack.
It shares with other threads belonging to the same process its code section, data section, and other OS resources, such as open files and signals.
A traditional process has a single thread of control.
If a process has multiple threads of control, it can perform more than one task at a time.
SINGLE AND MULTITHREADED PROCESSES
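The sharing described above can be demonstrated with a short sketch (illustrative, using Python's standard threading module): each thread gets its own stack and registers, but all threads of the process share the same data section, so access to the shared counter must be synchronized.

```python
import threading

counter = 0                     # data shared by all threads of the process
lock = threading.Lock()

def worker():
    """Each thread has its own stack, but shares the process's
    globals with its sibling threads."""
    global counter
    for _ in range(100_000):
        with lock:              # shared data requires synchronization
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000
```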
BENEFITS OF MULTITHREADING
• Responsiveness
• Resource Sharing
• Economy
MULTITHREADING MODELS
• Many-to-One
• One-to-One
• Many-to-Many
Many-to-one
• Many user-level threads mapped to single kernel thread
• Examples:
• Solaris Green Threads
Many-to-one Model
One-to-one
• Each user-level thread maps to a kernel thread
• Examples
• Windows NT/XP/2000
• Linux
• Solaris 9 and later
One-to-one Model
Many-to-many Model
• Allows many user-level threads to be mapped to many kernel threads
• Allows the operating system to create a sufficient number of kernel
threads
• Solaris prior to version 9
• Windows NT/2000 with the ThreadFiber package
Many-to-many Model
Part 3: Deadlock Situation in OS
Topics:
Define deadlock.
Describe the 4 necessary conditions for a deadlock to occur.
Identify the methods for handling deadlock.
DEFINITION
• A deadlock is a situation in which a set of processes is blocked because each process holds a resource while waiting for a resource held by another.
• The cause of deadlocks:
- Each process needs what another process has.
- This results from sharing resources such as memory, devices, and links.
1) Mutual exclusion
At least one resource must be held by a process in a non-sharable (exclusive) mode.
2) Hold and Wait
A process holds a resource while waiting for another resource.
3) No Preemption
There is only voluntary release of a resource - nobody else can make a
process give up a resource.
4) Circular Wait
Process A waits for Process B waits for Process C .... waits for Process A.
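The four conditions can be illustrated with a classic two-lock pattern. This is an illustrative Python sketch, not from the slides; here the tasks run one after the other, so no deadlock occurs, but the comments mark where each condition would arise under concurrency.

```python
import threading

# Locks are held in a non-sharable mode: mutual exclusion.
lock_a, lock_b = threading.Lock(), threading.Lock()

def task1():
    with lock_a:        # holds A while waiting for B: hold and wait
        with lock_b:
            return "task1 done"

def task2():
    with lock_b:        # holds B while waiting for A: circular wait
        with lock_a:
            return "task2 done"

# Run sequentially here, so both complete. If task1 and task2 ran
# concurrently, each could acquire its first lock and block forever
# on the second; since locks are released only voluntarily (no
# preemption), all four conditions would hold and neither proceeds.
print(task1(), task2())
```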
HANDLING DEADLOCK - GENERAL STRATEGY
There are three methods:
1) Ignore deadlocks: most operating systems do this!
2) Prevention/avoidance: ensure that the system never enters a deadlock state.
3) Detection and recovery: allow deadlocks to occur, then detect and recover from them.
Deadlock Prevention
1) Mutual exclusion:
a) Automatically holds for printers and other non-sharables.
b) Shared entities (read-only files) don't need mutual exclusion (and aren't susceptible to deadlock).
c) Prevention is not possible here, since some devices are intrinsically non-sharable.
2) Hold and wait:
a) Collect all resources before execution.
b) A particular resource can only be requested when no others are being
held. A sequence of resources is always collected beginning with the
same one.
c) Utilization is low, starvation possible.
Deadlock Prevention
3) No preemption:
a) Release any resource already being held if the process can't get an additional
resource.
b) Allow preemption - if a needed resource is held by another process, which is also
waiting on some resource, steal it. Otherwise wait.
4) Circular wait:
a) Number resources and only request in ascending order.
b) EACH of these prevention techniques may cause a decrease in utilization and/or
resources. For this reason, prevention isn't necessarily the best technique.
c) Prevention is generally the easiest to implement.
DEADLOCK AVOIDANCE
If we have prior knowledge of how resources will be requested, it's possible to determine if we are entering an "unsafe" state.
Possible states are:
Deadlock: No forward progress can be made.
Unsafe state: A state that may allow deadlock.
Safe state: A state is safe if a sequence of processes exists such that there are enough resources for the first to finish, and as each finishes and releases its resources there are enough for the next to finish.
The rule is simple: if granting a request would cause an unsafe state, do not honor that request.
NOTE: All deadlock states are unsafe, but not all unsafe states are deadlocks.
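The safe-state definition above can be implemented as a Banker's-style check. This is an illustrative sketch (not from the slides); the matrices used in the test are a standard textbook-style example, not data from this chapter.

```python
def is_safe(available, allocation, need):
    """Safe-state test: the state is safe if the processes can be
    ordered so that each can finish using the currently available
    resources plus those released by earlier finishers."""
    work = list(available)                 # resources free right now
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, nd in enumerate(need):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                # process i can run to completion, then it releases
                # everything it currently holds
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)
```

If `is_safe` returns False for the state that a request would produce, the avoidance rule says the request should not be honored.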
METHODS FOR
HANDLING DEADLOCKS
• Ensure that the system will never enter a deadlock state.
• Ignore the problem and pretend that deadlocks never occur in the
system; used by most operating systems, including UNIX.