OS-Slides QP
Prof. Sivashankari.R
Process: Program in execution
if (fork() == 0)
{ a = a + 5; printf("%d,%d\n", a, &a); }
else { a = a - 5; printf("%d,%d\n", a, &a); }
(a) u = x + 10 and v = y
(b) u = x + 10 and v != y
(c) u + 10 = x and v = y
(d) u + 10 = x and v != y
Thread
• A lightweight process
User Level Thread
• Implemented by thread library at the user level.
• Library provides support for thread creation, scheduling, and
management
• No support from the kernel.
• Fast to create and manage.
• Thread performing a blocking system call will cause the entire process
to block.
Kernel Level Thread
• Supported directly by the OS.
• Kernel performs thread creation, scheduling, and management in
kernel space.
• Slower to create and manage than the user threads.
• If a thread performs a blocking system call, the kernel can schedule
another thread of the application for execution.
CPU Scheduling
• All the processes in the ready queue are waiting for a chance to run on
the CPU.
• The problem of determining which ready process the CPU is assigned to
is called CPU scheduling.
• Nonpreemptive Scheduling
• Once a process has been given the CPU, the CPU cannot be taken away from
that process until the process terminates (or switches to the waiting state).
• Preemptive Scheduling
• Once a process has been given the CPU, the CPU can be taken away in the
middle of execution.
USER LEVEL THREAD                            KERNEL LEVEL THREAD
User threads are implemented by user         Kernel threads are implemented by the OS.
processes (a thread library).
The OS does not recognize user level         Kernel threads are recognized by the OS.
threads.
If one user level thread performs a          If one kernel thread performs a blocking
blocking operation, the entire process       operation, another thread can continue
is blocked.                                  execution.
(B) When a user level thread is blocked, all other threads of its process are blocked.
(C) Context switching between user level threads is faster than context switching between kernel level threads.
(D) Kernel level threads cannot share the code segment
2) Consider the following statements about user level threads and kernel level threads. Which one of the
following statements is FALSE?
A. Context switch time is longer for kernel level threads than for user level threads.
B. User level threads do not need any hardware support.
C. Related kernel level threads can be scheduled on different processors in a multi-processor system.
D. Blocking one kernel level thread blocks all related threads.
Scheduling Criteria
• CPU utilization:- keep the CPU as busy as possible
• Throughput:- Number of processes that complete their execution per
time unit
• Turnaround time:- Amount of time to execute a particular process.
• Waiting time:- Amount of time a process has been waiting in the
ready queue
• Response time:- Amount of time from when a request was submitted until
the first response is produced.
First-Come, First-Served (FCFS) Scheduling
• The process that requests CPU first is allocated the CPU first.
• Non preemptive.
• Other processes wait for the one big process to get off the CPU – Convoy
Effect.
FCFS Ex.
Process   Waiting Time   Turnaround Time   Response Time
P1        0              21                0
P2        21             24                21
P3        24             30                24
P4        30             32                30
Shortest-Job-First (SJF) Scheduling
• Optimal: gives the minimum average waiting time.
• Preemptive version:- Shortest-Remaining-Time-First (SRTF).
• Assumes the actual CPU time required by each process is known in advance.
• Difficult to implement.
SJF Ex.
Process   Waiting Time   Turnaround Time   Response Time
P1        11             32                11
P2        2              5                 2
P3        5              11                5
P4        0              2                 0
SRTF - Ex.
Process   Waiting Time   Turnaround Time   Response Time
P1        11             32                0
P2        2              5                 1
P3        4              10                4
P4        0              2                 0
Priority Scheduling
• The CPU is allocated to the process with the highest priority
• Preemptive/non preemptive
• Starvation:- low-priority processes may never execute.
• Aging:- as time progresses, increase the priority of the process.
Priority Scheduling Ex.
Process   Waiting Time   Turnaround Time   Response Time
P1        3              24                3
P2        0              3                 0
P3        26             32                26
P4        24             26                24
Round Robin (RR)
• A fixed time, called the time quantum, is allotted to each process for
execution. (Preemptive)
• Once a process has executed for the given time quantum, it is
preempted and another process executes for its time quantum.
• No process waits more than (n-1)q time units.
q: time quantum
n: number of processes
• If q is large, RR behaves like FIFO.
• If q is small, there are more context switches.
RR - Ex. (Time Quantum = 5)
Process   Waiting Time   Turnaround Time   Response Time
P1        11             32                0
P2        5              8                 5
P3        15             21                8
P4        13             15                13
• Which of the following scheduling algorithms is non-preemptive ?
(A) Round Robin
(B) First-In First-Out
(C) Multilevel Queue Scheduling
(D) Multilevel Queue Scheduling with Feedback
Process Synchronization :- The Critical-Section Problem
Solution to Critical-Section Problem
• A solution to Critical-Section Problem must satisfy following three
requirements:
• Mutual Exclusion
• Only one process at a time can execute in its critical section.
• Progress
• If no process is executing in its critical section and some processes
wish to enter theirs, the selection of the next process to enter cannot be
postponed indefinitely.
• Bounded Waiting
• A bound must exist on the number of times that other processes are allowed
to enter their critical sections after a process has made a request to enter its
critical section and before that request is granted.
Ex.1 Consider the methods used by processes P1 and P2 for accessing
their critical sections whenever needed, as given below.
/* P1 */                        /* P2 */
while (true) {                  while (true) {
    wants1 = true;                  wants2 = true;
    while (wants2 == true);         while (wants1 == true);
    /* Critical Section */          /* Critical Section */
    wants1 = false;                 wants2 = false;
}                               }
/* Remainder section */         /* Remainder section */
Here, wants1 and wants2 are shared Boolean variables, which are initialized to false. Which
one of the following statements is TRUE about the above construct?
Ans d.
Semaphore - Synchronization tool
• Accessed only through two standard atomic operations: wait and
signal.
• Binary, Counting
• Consider Peterson's algorithm for mutual exclusion between two
concurrent processes i and j. The program executed by process i is
shown below.
Process P: Process Q:
while(1) while(1)
{ {
W: Y:
print '0'; print '1';
print '0'; print '1';
X: Z:
} }
[1] Which of the following will always lead to an output starting with '001100110011'?
(A) P(S) at W,V(S) at X,P(T) at Y,V(T) at Z,S and T initially 1
(B) P(S) at W,V(T) at X,P(T) at Y,V(S) at Z,S initially 1 and T initially 0
(C) P(S) at W,V(T) at X,P(T) at Y,V(S) at Z,S and T initially 1
(D) P(S) at W,V(S) at X,P(T) at Y,V(T) at Z,S initially 1 and T initially 0
Ex.3- Three concurrent processes X, Y, and Z execute three different code segments that
access and update certain shared variables. Process X executes the P operation (i.e.,
wait) on semaphores a, b and c; process Y executes the P operation on semaphores b, c
and d; process Z executes the P operation on semaphores c, d, and a before entering the
respective code segments. After completing the execution of its code segment, each
process invokes the V operation (i.e., signal) on its three semaphores. All semaphores
are binary semaphores initialized to one. Which one of the following represents a
deadlock-free order of invoking the P operations by the processes?
Consider a disk system with 100 cylinders. The requests to access the cylinders
occur in following sequence :
4, 34, 10, 7, 19, 73, 2, 15, 6, 20
Assuming that the head is currently at cylinder 50, what is the time taken to satisfy
all requests if it takes 1 ms to move from one cylinder to adjacent one and shortest
seek time first policy is used ?
(A) 95 ms (B) 119 ms
(C) 233 ms (D) 276 ms
Other Disk Scheduling
• FCFS
• Shortest Seek Time First (SSTF) (greedy, analogous to SJF)
• SCAN (121 ms): elevator algorithm
• C-SCAN (121+99+26)
• LOOK(119+71)
• C-LOOK(119+71)
• Let the page fault service time be 10 ms in a computer with average
memory access time being 20 ns. If one page fault is generated for
every 10^6 memory accesses, what is the effective access time for the
memory?
(A) 21 ns
(B) 30 ns
(C) 23 ns
(D) 35 ns
Methods for Handling Deadlocks
• Deadlock Prevention
• Deadlock Avoidance
• Deadlock Recovery
Deadlock Prevention
• For a deadlock each of the four conditions must hold.
• By ensuring that at least one of these conditions cannot hold , we can
prevent the occurrence of a deadlock.
• Mutual Exclusion:- make resources sharable where possible.
• Hold and Wait:- must guarantee that whenever a process requests a
resource, it does not hold any other resources.
• No Preemption:- preempt resources held by a process that requests a
resource which cannot be immediately allocated.
• Circular Wait:- each process requests resources in an increasing order
of enumeration.
Deadlock Avoidance : The Banker's Algorithm