
Unit 2

Processes management

Operating System Concepts – 10th Edition Silberschatz, Galvin and Gagne


Outline
 Process Concept
 Process Scheduling
 Operations on Processes
 Interprocess Communication
 IPC in Shared-Memory Systems
 IPC in Message-Passing Systems
 Examples of IPC Systems
 Communication in Client-Server Systems

Operating System Concepts – 10th Edition 3.2 Silberschatz, Galvin and Gagne
Process Concept
 An operating system executes a variety of programs that
run as processes.
 Process – a program in execution; process execution must
progress in sequential fashion. No parallel execution of the
instructions of a single process.
 Multiple parts
• The program code, also called text section
• Current activity including program counter, processor
registers
• Stack containing temporary data
 Function parameters, return addresses, local
variables
• Data section containing global variables
• Heap containing memory dynamically allocated during
run time

Operating System Concepts – 10th Edition 3.3 Silberschatz, Galvin and Gagne
Process Concept (Cont.)

 Program is passive entity stored on disk (executable file);


process is active

• Program becomes process when an executable file is


loaded into memory

 Execution of program started via GUI mouse clicks,


command line entry of its name, etc.

 One program can be several processes


• Consider multiple users executing the same program

Operating System Concepts – 10th Edition 3.4 Silberschatz, Galvin and Gagne
Process in Memory

Operating System Concepts – 10th Edition 3.5 Silberschatz, Galvin and Gagne
Memory Layout of a C Program

Operating System Concepts – 10th Edition 3.6 Silberschatz, Galvin and Gagne
Process State

 As a process executes, it changes state


• New: The process is being created
• Running: Instructions are being executed
• Waiting: The process is waiting for some event to
occur

• Ready: The process is waiting to be assigned to a


processor

• Terminated: The process has finished execution

Operating System Concepts – 10th Edition 3.7 Silberschatz, Galvin and Gagne
Diagram of Process State

Operating System Concepts – 10th Edition 3.8 Silberschatz, Galvin and Gagne
Process Control Block (PCB)
Information associated with each process (also called task
control block)
 Process state – running, waiting, etc.
 Program counter – location of the next
instruction to execute
 CPU registers – contents of all process-
centric registers
 CPU scheduling information – priorities,
scheduling queue pointers
 Memory-management information –
memory allocated to the process
 Accounting information – CPU used, clock
time elapsed since start, time limits
 I/O status information – I/O devices
allocated to process, list of open files

Operating System Concepts – 10th Edition 3.9 Silberschatz, Galvin and Gagne
Process Representation in Linux

Represented by the C structure task_struct

pid_t pid; /* process identifier */


long state; /* state of the process */
unsigned int time_slice; /* scheduling information */
struct task_struct *parent;/* this process’s parent */
struct list_head children; /* this process’s children */
struct files_struct *files;/* list of open files */
struct mm_struct *mm; /* address space of this process */
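
A hypothetical out-of-tree kernel-module sketch (not from the slides) showing how these
task_struct fields are reached through the current macro; the module name and message
are illustrative.

    /* pcb_demo.c – hypothetical module; prints fields of the calling task's PCB */
    #include <linux/module.h>
    #include <linux/sched.h>

    static int __init pcb_demo_init(void)
    {
        pr_info("pcb_demo: loaded by pid=%d (comm=%s)\n",
                current->pid, current->comm);
        return 0;
    }

    static void __exit pcb_demo_exit(void) { }

    module_init(pcb_demo_init);
    module_exit(pcb_demo_exit);
    MODULE_LICENSE("GPL");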

Operating System Concepts – 10th Edition 3.10 Silberschatz, Galvin and Gagne
Process Scheduling

 Maximize CPU use, quickly switch processes onto CPU for


time sharing

 Process scheduler selects among available processes for


next execution on CPU

 Maintains scheduling queues of processes


• Job queue – set of all processes in the system
• Ready queue – set of all processes residing in main
memory, ready and waiting to execute

• Device queues – set of processes waiting for an I/O


device

• Processes migrate among the various queues

Operating System Concepts – 10th Edition 3.11 Silberschatz, Galvin and Gagne
Ready and Wait Queues

Operating System Concepts – 10th Edition 3.12 Silberschatz, Galvin and Gagne
Representation of Process Scheduling

Operating System Concepts – 10th Edition 3.13 Silberschatz, Galvin and Gagne
CPU Switch From Process to Process
A context switch occurs when the CPU switches from
one process to another.

Operating System Concepts – 10th Edition 3.14 Silberschatz, Galvin and Gagne
Context Switch

 When CPU switches to another process, the system


must save the state of the old process and load the saved
state for the new process via a context switch

 Context of a process represented in the PCB


 Context-switch time is pure overhead; the system does
no useful work while switching

• The more complex the OS and the PCB → the longer


the context switch

 Time dependent on hardware support


• Some hardware provides multiple sets of registers
per CPU → multiple contexts loaded at once

Operating System Concepts – 10th Edition 3.15 Silberschatz, Galvin and Gagne
Basic Concepts

 Maximum CPU utilization


obtained with
multiprogramming
 CPU–I/O Burst Cycle –
Process execution consists
of a cycle of CPU execution
and I/O wait
 CPU burst followed by I/O
burst
 CPU burst distribution is of
main concern

Operating System Concepts – 10th Edition 3.16 Silberschatz, Galvin and Gagne
Histogram of CPU-burst Times

Large number of short bursts

Small number of longer bursts

Operating System Concepts – 10th Edition 3.17 Silberschatz, Galvin and Gagne
CPU Scheduler
 The CPU scheduler selects from among the processes
in ready queue, and allocates a CPU core to one of
them
• Queue may be ordered in various ways
 CPU scheduling decisions may take place when a
process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
 For situations 1 and 4, there is no choice in terms of
scheduling. A new process (if one exists in the ready
queue) must be selected for execution.
 For situations 2 and 3, however, there is a choice.

Operating System Concepts – 10th Edition 3.18 Silberschatz, Galvin and Gagne
Preemptive and Nonpreemptive Scheduling

 When scheduling takes place only under circumstances


1 and 4, the scheduling scheme is nonpreemptive.

 Otherwise, it is preemptive.
 Under Nonpreemptive scheduling, once the CPU has
been allocated to a process, the process keeps the CPU
until it releases it either by terminating or by switching
to the waiting state.

 Virtually all modern operating systems including


Windows, MacOS, Linux, and UNIX use preemptive
scheduling algorithms.

Operating System Concepts – 10th Edition 3.19 Silberschatz, Galvin and Gagne
Preemptive Scheduling and Race Conditions

 Preemptive scheduling can result in race conditions


when data are shared among several processes.

 Consider the case of two processes that share data.


While one process is updating the data, it is preempted
so that the second process can run. The second process
then tries to read the data, which are in an inconsistent
state.

 This issue will be explored in detail in Chapter 6.

Operating System Concepts – 10th Edition 3.20 Silberschatz, Galvin and Gagne
Dispatcher
 Dispatcher module gives control
of the CPU to the process
selected by the CPU scheduler;
this involves:
• Switching context
• Switching to user mode
• Jumping to the proper
location in the user program
to restart that program
 Dispatch latency – time it takes for
the dispatcher to stop one
process and start another
running

Operating System Concepts – 10th Edition 3.21 Silberschatz, Galvin and Gagne
Scheduling Criteria

 CPU utilization – keep the CPU as busy as possible


 Throughput – # of processes that complete their
execution per time unit

 Turnaround time – amount of time to execute a


particular process

 Waiting time – amount of time a process has been


waiting in the ready queue

 Response time – amount of time it takes from when a


request was submitted until the first response is
produced.

Operating System Concepts – 10th Edition 3.22 Silberschatz, Galvin and Gagne
Scheduling Algorithm Optimization Criteria

 Max CPU utilization


 Max throughput
 Min turnaround time
 Min waiting time
 Min response time

Operating System Concepts – 10th Edition 3.23 Silberschatz, Galvin and Gagne
First- Come, First-Served (FCFS) Scheduling

Process Burst Time


P1 24
P2 3
P3 3
 Suppose that the processes arrive in the order: P1 , P2 ,
P3
The Gantt Chart for the schedule is:
P1 P2 P3
0 24 27 30

 Waiting time for P1 = 0; P2 = 24; P3 = 27


 Average waiting time: (0 + 24 + 27)/3 = 17
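
A minimal C sketch of this calculation (assuming all three processes arrive at time 0
in the order P1, P2, P3); the array values come from the table above.

    #include <stdio.h>

    int main(void)
    {
        int burst[] = {24, 3, 3};              /* P1, P2, P3 */
        int n = 3, elapsed = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            printf("P%d waiting time = %d\n", i + 1, elapsed);
            total_wait += elapsed;
            elapsed += burst[i];               /* everything behind waits for this burst too */
        }
        printf("average waiting time = %.1f\n", (double)total_wait / n);
        return 0;
    }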

Operating System Concepts – 10th Edition 3.24 Silberschatz, Galvin and Gagne
FCFS Scheduling (Cont.)

Suppose that the processes arrive in the order:


P2 , P3 , P1
 The Gantt chart for the schedule is:

P2 P3 P1
0 3 6 30

 Waiting time for P1 = 6; P2 = 0; P3 = 3


 Average waiting time: (6 + 0 + 3)/3 = 3
 Much better than previous case
 Convoy effect - short process behind long process
• Consider one CPU-bound and many I/O-bound
processes

Operating System Concepts – 10th Edition 3.25 Silberschatz, Galvin and Gagne
Shortest-Job-First (SJF) Scheduling

 Associate with each process the length of its


next CPU burst
• Use these lengths to schedule the process
with the shortest time
 SJF is optimal – gives minimum average waiting
time for a given set of processes
 Preemptive version called shortest-remaining-time-
first
 How do we determine the length of the next CPU
burst?
• Could ask the user
• Estimate

Operating System Concepts – 10th Edition 3.26 Silberschatz, Galvin and Gagne
Example of SJF

Process Burst Time


P1 6
P2 8
P3 7
P4 3

 SJF scheduling chart

P4 P1 P3 P2
0 3 9 16 24

 Average waiting time = (3 + 16 + 9 + 0) / 4 = 7

Operating System Concepts – 10th Edition 3.27 Silberschatz, Galvin and Gagne
Determining Length of Next CPU Burst

 Can only estimate the length – should be similar to the


previous one
• Then pick process with shortest predicted next CPU
burst
 Can be done by using the length of previous CPU bursts,
using exponential averaging

 Commonly, α set to ½
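
The formula itself appears only as a figure in the original deck; the standard exponential
average is tau_{n+1} = α·t_n + (1 − α)·tau_n, where t_n is the length of the most recent
CPU burst and tau_n the previous prediction. A small sketch (the initial estimate of 10 and
the sample burst lengths are illustrative assumptions):

    #include <stdio.h>

    /* tau_next = alpha * t_n + (1 - alpha) * tau_n */
    static double next_estimate(double tau, double t, double alpha)
    {
        return alpha * t + (1.0 - alpha) * tau;
    }

    int main(void)
    {
        double tau = 10.0, alpha = 0.5;                 /* assumed starting values   */
        double observed[] = {6, 4, 6, 4, 13, 13, 13};   /* sample burst lengths      */

        for (int i = 0; i < 7; i++) {
            tau = next_estimate(tau, observed[i], alpha);
            printf("after burst %d: next-burst estimate = %.2f\n", i + 1, tau);
        }
        return 0;
    }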

Operating System Concepts – 10th Edition 3.28 Silberschatz, Galvin and Gagne
Shortest Remaining Time First Scheduling

 Preemptive version of SJN


 Whenever a new process arrives in the ready queue,
the decision on which process to schedule next is
redone using the SJN algorithm.

 Is SRT more “optimal” than SJN in terms of the


minimum average waiting time for a given set of
processes?

Operating System Concepts – 10th Edition 3.29 Silberschatz, Galvin and Gagne
Example of Shortest-remaining-time-first

 Now we add the concepts of varying arrival times and


preemption to the analysis
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
 Preemptive SJF Gantt Chart

P1 P2 P4 P1 P3
0 1 5 10 17 26

 Average waiting time = [(10-1)+(1-1)+(17-2)+(5-3)]/4 = 26/4


= 6.5

Operating System Concepts – 10th Edition 3.30 Silberschatz, Galvin and Gagne
Round Robin (RR)

 Each process gets a small unit of CPU time (time quantum q),
usually 10-100 milliseconds. After this time has elapsed,
the process is preempted and added to the end of the ready
queue.

 If there are n processes in the ready queue and the time


quantum is q, then each process gets 1/n of the CPU time in
chunks of at most q time units at once. No process waits
more than (n-1)q time units.

 Timer interrupts every quantum to schedule next process


 Performance
• q large → behaves the same as FIFO (FCFS)
• q small → overhead of context switching dominates unless
q is large relative to the context-switch time
Operating System Concepts – 10th Edition 3.31 Silberschatz, Galvin and Gagne
Example of RR with Time Quantum = 4

Process Burst Time

P1 24
P2 3
P3 3
 The Gantt chart is:
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30

 Typically, higher average turnaround than SJF, but


better response
 q should be large compared to context switch time
• q usually 10 milliseconds to 100 milliseconds,
• Context switch < 10 microseconds
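
A minimal simulation of this example (q = 4, all processes assumed to arrive at time 0);
since nothing arrives later, a fixed scan order stands in for the ready queue:

    #include <stdio.h>

    int main(void)
    {
        int remaining[] = {24, 3, 3};   /* P1, P2, P3 */
        int n = 3, q = 4, time = 0, done = 0;

        while (done < n) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0)
                    continue;
                int slice = remaining[i] < q ? remaining[i] : q;
                printf("t=%2d..%2d  P%d runs\n", time, time + slice, i + 1);
                time += slice;
                remaining[i] -= slice;
                if (remaining[i] == 0)
                    done++;
            }
        }
        return 0;                       /* reproduces the Gantt chart above */
    }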

Operating System Concepts – 10th Edition 3.32 Silberschatz, Galvin and Gagne
Time Quantum and Context Switch Time

Operating System Concepts – 10th Edition 3.33 Silberschatz, Galvin and Gagne
Turnaround Time Varies With The Time Quantum

80% of CPU bursts


should be shorter than q

Operating System Concepts – 10th Edition 3.34 Silberschatz, Galvin and Gagne
Priority Scheduling

 A priority number (integer) is associated with each


process

 The CPU is allocated to the process with the highest


priority (smallest integer → highest priority)
• Preemptive
• Nonpreemptive

 SJF is priority scheduling where priority is the inverse of


predicted next CPU burst time

 Problem → Starvation – low-priority processes may never


execute

 Solution → Aging – as time progresses, increase the


priority of the process

Operating System Concepts – 10th Edition 3.35 Silberschatz, Galvin and Gagne
Example of Priority Scheduling

Process Burst Time Priority


P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

 Priority scheduling Gantt Chart (smallest number = highest priority):

P2 P5 P1 P3 P4
0 1 6 16 18 19

 Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2

Operating System Concepts – 10th Edition 3.36 Silberschatz, Galvin and Gagne
Priority Scheduling w/ Round-Robin

 Run the process with the highest priority. Processes


with the same priority run round-robin
 Example:
Process Burst Time Priority
P1 4 3
P2 5 2
P3 8 2
P4 7 1
P5 3 3
 Gantt Chart with time quantum = 2

Operating System Concepts – 10th Edition 3.37 Silberschatz, Galvin and Gagne
Multilevel Queue
 The ready queue consists of multiple queues
 Multilevel queue scheduler defined by the following
parameters:
• Number of queues
• Scheduling algorithms for each queue
• Method used to determine which queue a
process will enter when that process needs
service
• Scheduling among the queues

Operating System Concepts – 10th Edition 3.38 Silberschatz, Galvin and Gagne
Multilevel Queue
 With priority scheduling, have separate queues for each
priority.
 Schedule the process in the highest-priority queue!

Operating System Concepts – 10th Edition 3.39 Silberschatz, Galvin and Gagne
Multilevel Queue

 Prioritization based upon process type

Operating System Concepts – 10th Edition 3.40 Silberschatz, Galvin and Gagne
Multilevel Feedback Queue
 A process can move between the various queues.
 Multilevel-feedback-queue scheduler defined by the
following parameters:
• Number of queues
• Scheduling algorithms for each queue
• Method used to determine when to upgrade a
process
• Method used to determine when to demote a
process
• Method used to determine which queue a process
will enter when that process needs service
 Aging can be implemented using multilevel feedback
queue

Operating System Concepts – 10th Edition 3.41 Silberschatz, Galvin and Gagne
Example of Multilevel Feedback Queue
 Three queues:
• Q0 – RR with time quantum 8
milliseconds
• Q1 – RR time quantum 16
milliseconds
• Q2 – FCFS
 Scheduling
• A new process enters queue Q0
which is served in RR
 When it gains CPU, the process
receives 8 milliseconds
 If it does not finish in 8
milliseconds, the process is moved
to queue Q1
• At Q1 job is again served in RR
and receives 16 additional
milliseconds
 If it still does not complete, it is
preempted and moved to queue Q2

Operating System Concepts – 10th Edition 3.42 Silberschatz, Galvin and Gagne
Threads Motivation

 A thread refers to the smallest unit of execution within a


process.
 Most modern applications are multithreaded
 Threads run within application
 Multiple tasks within the application can be implemented
by separate threads
• Update display
• Fetch data
• Spell checking
• Answer a network request
 Process creation is heavy-weight while thread creation is
light-weight
 Can simplify code, increase efficiency
 Kernels are generally multithreaded

Operating System Concepts – 10th Edition 3.43 Silberschatz, Galvin and Gagne
Single and Multithreaded Processes

Operating System Concepts – 10th Edition 3.44 Silberschatz, Galvin and Gagne
Multithreaded Server Architecture

Operating System Concepts – 10th Edition 3.45 Silberschatz, Galvin and Gagne
Benefits

 Responsiveness – may allow continued execution if part


of process is blocked, especially important for user
interfaces

 Resource Sharing – threads share resources of process,


easier than shared memory or message passing

 Economy – cheaper than process creation, thread


switching lower overhead than context switching

 Scalability – process can take advantage of multicore


architectures

Operating System Concepts – 10th Edition 3.46 Silberschatz, Galvin and Gagne
Thread Scheduling

 Distinction between user-level and kernel-level threads


 When threads supported, threads scheduled, not
processes

 Many-to-one and many-to-many models, thread library


schedules user-level threads to run on LWP

• Known as process-contention scope (PCS) since


scheduling competition is within the process

• Typically done via priority set by programmer


 Kernel thread scheduled onto available CPU is system-
contention scope (SCS) – competition among all threads in
system

Operating System Concepts – 10th Edition 3.47 Silberschatz, Galvin and Gagne
Pthread Scheduling
 API allows specifying either PCS or SCS during thread
creation
• PTHREAD_SCOPE_PROCESS schedules threads using
PCS scheduling

• PTHREAD_SCOPE_SYSTEM schedules threads using


SCS scheduling
 Can be limited by OS – Linux and macOS only allow
PTHREAD_SCOPE_SYSTEM
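
A short sketch of the API named above (error handling is omitted and the thread body is a
placeholder):

    #include <pthread.h>
    #include <stdio.h>

    static void *runner(void *arg) { return NULL; }

    int main(void)
    {
        pthread_t tid;
        pthread_attr_t attr;
        int scope;

        pthread_attr_init(&attr);
        if (pthread_attr_getscope(&attr, &scope) == 0)
            printf("default scope: %s\n",
                   scope == PTHREAD_SCOPE_SYSTEM ? "SCS" : "PCS");

        pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);   /* may fail where unsupported */
        pthread_create(&tid, &attr, runner, NULL);
        pthread_join(tid, NULL);
        return 0;
    }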

Operating System Concepts – 10th Edition 3.48 Silberschatz, Galvin and Gagne
Multithreading Models

 These models determine how user threads are mapped to


kernel threads.

• Many-to-One
• One-to-One
• Many-to-Many

Operating System Concepts – 10th Edition 3.49 Silberschatz, Galvin and Gagne
Many-to-One
 In this model, many user threads are mapped to only
one kernel thread.
 All user-level threads share a single kernel-level thread.
 Advantages:
• Easy to implement since it requires minimal kernel
support.
• Fast context switching between user threads.
 Disadvantages:
• Blocking system calls can block the entire process.
• Not suitable for multicore processors due to single
kernel thread.
 Examples:
• Solaris Green Threads
• GNU Portable Threads

Operating System Concepts – 10th Edition 3.50 Silberschatz, Galvin and Gagne
One-to-One
 Each user thread corresponds to exactly one kernel thread.
 Provides true parallelism on multicore systems.
 Advantages:
• Efficient due to parallel execution.
• Blocking system calls do not affect other threads.
 Disadvantages:
• Higher overhead (more kernel threads).
• Increased context switching time.
 Examples
• Windows
• Linux

Operating System Concepts – 10th Edition 3.51 Silberschatz, Galvin and Gagne
Many-to-Many Model
 Allows multiple user threads to be mapped to multiple
kernel threads.
 Provides a balance between the previous models.
 Advantages:
• Flexibility in managing user and kernel threads.
• Can exploit multicore processors effectively.
 Disadvantages:
• Complex implementation with synchronization
challenges.
 Example: Windows threads
(user threads mapped to kernel
threads).

Operating System Concepts – 10th Edition 3.52 Silberschatz, Galvin and Gagne
Process Synchronization

Types of Processes:
 Independent Processes: These processes do not affect
each other’s execution. Their actions are isolated and
independent.

 Cooperative Processes: These processes can impact or be


affected by other processes executing in the
system. Resources are shared among cooperative
processes, leading to synchronization challenges.

 Process synchronization in operating systems refers to the


mechanisms and techniques used to coordinate the
execution of multiple processes or threads to ensure
correct and predictable behavior, particularly in scenarios
where shared resources are accessed concurrently.

 Process synchronization is essential for preventing issues


such as race conditions, deadlocks, and data inconsistency.

Operating System Concepts – 10th Edition 3.53 Silberschatz, Galvin and Gagne
Race condition

 It is a situation that occurs when multiple processes


execute the same code or access shared variables
simultaneously. The outcome depends on the order of
access, potentially leading to incorrect results or data
inconsistency.

 Process synchronization is vital for maintaining data


consistency, integrity, and preventing deadlocks in multi-
process systems.

Operating System Concepts – 10th Edition 3.54 Silberschatz, Galvin and Gagne
Race Condition
 counter++ could be implemented as (P1)
register1 = counter
register1 = register1 + 1
counter = register1
 counter-- could be implemented as (P2)
register2 = counter
register2 = register2 - 1
counter = register2

 Consider this execution interleaving with “counter = 5” initially:


S0: P1 register1 = counter {register1 = 5}
S1: P1 register1 = register1 + 1 {register1 = 6}
S2: P2 register2 = counter {register2 = 5}
S3: P2 register2 = register2 – 1 {register2 = 4}
S4: P1 counter = register1 {counter = 6 }
S5: P2 counter = register2 {counter = 4}

Operating System Concepts – 10th Edition 3.55 Silberschatz, Galvin and Gagne
Critical Section Problem
 A critical section refers to a part of a program where
shared resources are accessed, and where concurrent
access by multiple processes could lead to undesirable
behavior such as data inconsistency.
 Consider system of n processes {p0, p1, … pn-1}
 Each process has critical section segment of code
• Process may be changing common variables, updating
table, writing file, etc
• When one process in critical section, no other may be
in its critical section
 Critical section problem is to design protocol to solve
this
 Each process must ask permission to enter critical
section in entry section, may follow critical section with
exit section, then remainder section

Operating System Concepts – 10th Edition 3.56 Silberschatz, Galvin and Gagne
Critical Section

 General structure of process Pi

Operating System Concepts – 10th Edition 3.57 Silberschatz, Galvin and Gagne
Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical
section, then no other processes can be executing in their
critical sections

2. Progress - If no process is executing in its critical section


and there exist some processes that wish to enter their
critical section, then the selection of the processes that
will enter the critical section next cannot be postponed
indefinitely

3. Bounded Waiting - A bound (limit) must exist on the


number of times that other processes are allowed to enter
their critical sections after a process has made a request
to enter its critical section and before that request is
granted
 Assume that each process executes at a nonzero speed
 No assumption concerning relative speed of the n
processes
Operating System Concepts – 10th Edition 3.58 Silberschatz, Galvin and Gagne
Peterson’s Solution
 Good algorithmic description of solving the problem
 Two process solution
 Assume that the load and store machine-language
instructions are atomic; that is, cannot be
interrupted

 The two processes share two variables:


• int turn;
• Boolean flag[2]

 The variable turn indicates whose turn it is to enter


the critical section

 The flag array is used to indicate if a process is


ready to enter the critical section. flag[i] = true
implies that process Pi is ready!

Operating System Concepts – 10th Edition 3.59 Silberschatz, Galvin and Gagne
Algorithm for Process Pi

do {
flag[i] = true;
turn = j;
while (flag[j] && turn == j);
critical section
flag[i] = false;
remainder section
} while (true);
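
On modern hardware the plain loads and stores above may be reordered, so a faithful
implementation needs atomic operations. A minimal sketch using C11 atomics (the function
names are illustrative, not part of the slides):

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_bool flag[2];
    static atomic_int turn;

    void enter_critical(int i)          /* i is 0 or 1 */
    {
        int j = 1 - i;
        atomic_store(&flag[i], true);   /* I want to enter            */
        atomic_store(&turn, j);         /* but let the other go first */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                           /* busy wait                  */
    }

    void leave_critical(int i)
    {
        atomic_store(&flag[i], false);
    }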

Operating System Concepts – 10th Edition 3.60 Silberschatz, Galvin and Gagne
Peterson’s Solution (Cont.)
 Provable that the three CS requirement are met:
1. Mutual exclusion is preserved
Pi enters CS only if:
either flag[j] = false or turn = i
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met

Operating System Concepts – 10th Edition 3.61 Silberschatz, Galvin and Gagne
Synchronization Hardware
 Many systems provide hardware support for critical
section code

 Uniprocessors – could disable interrupts


• Currently running code would execute without
preemption
• Generally too inefficient on multiprocessor systems
 Operating systems using this not broadly scalable

 Modern machines provide special atomic hardware


instructions
 Atomic = non-interruptable

• Either test memory word and set value


• Or swap contents of two memory words

Operating System Concepts – 10th Edition 3.62 Silberschatz, Galvin and Gagne
Solution to Critical-section
Problem Using Locks
do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);

Operating System Concepts – 10th Edition 3.63 Silberschatz, Galvin and Gagne
TestAndSet Instruction

 Definition:

boolean TestAndSet (boolean *target)


{
boolean rv = *target;
*target = TRUE;
return rv;
}

Operating System Concepts – 10th Edition 3.64 Silberschatz, Galvin and Gagne
Solution using TestAndSet

 Shared boolean variable lock, initialized to FALSE


 Solution:

do {
while ( TestAndSet (&lock ))
; // do nothing

// critical section

lock = FALSE;

// remainder section

} while (TRUE);
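
C11 exposes the same idea portably through atomic_flag, whose atomic_flag_test_and_set()
behaves like TestAndSet above; a minimal sketch (function names are illustrative):

    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    void acquire(void)
    {
        while (atomic_flag_test_and_set(&lock))
            ;                        /* spin: flag was already TRUE */
    }

    void release(void)
    {
        atomic_flag_clear(&lock);    /* lock = FALSE */
    }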

Operating System Concepts – 10th Edition 3.65 Silberschatz, Galvin and Gagne
Swap Instruction

 Definition:

void Swap (boolean *a, boolean *b)


{
boolean temp = *a;
*a = *b;
*b = temp;
}

Operating System Concepts – 10th Edition 3.66 Silberschatz, Galvin and Gagne
Solution using Swap
 Shared Boolean variable lock initialized to FALSE; Each
process has a local Boolean variable key
 Solution:
do {
key = TRUE;
while ( key == TRUE)
Swap (&lock, &key );

// critical section

lock = FALSE;

// remainder section

} while (TRUE);

Operating System Concepts – 10th Edition 3.67 Silberschatz, Galvin and Gagne
Bounded-waiting Mutual Exclusion
with TestandSet()
do {
waiting[i] = TRUE;
key = TRUE;
while (waiting[i] && key)
key = TestAndSet(&lock);
waiting[i] = FALSE;
// critical section
j = (i + 1) % n;
while ((j != i) && !waiting[j])
j = (j + 1) % n;
if (j == i)
lock = FALSE;
else
waiting[j] = FALSE;
// remainder section
} while (TRUE);

Operating System Concepts – 10th Edition 3.68 Silberschatz, Galvin and Gagne
Semaphore

 Synchronization tool that does not require busy waiting


 Semaphore S – integer variable
 Two standard operations modify S: wait() and signal()
• Originally called P() and V()
 Less complicated
 Can only be accessed via two indivisible (atomic) operations
• wait (S) {
while S <= 0
; // no-op
S--;
}
• signal (S) {
S++;
}
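
POSIX semaphores provide these operations as sem_wait()/sem_post(); a minimal sketch
protecting a shared counter (the thread bodies are illustrative and error handling is
omitted):

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    static sem_t s;
    static int shared = 0;

    static void *worker(void *arg)
    {
        sem_wait(&s);          /* wait(S)   */
        shared++;              /* critical section */
        sem_post(&s);          /* signal(S) */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        sem_init(&s, 0, 1);    /* initial value 1 -> behaves as a mutex */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared = %d\n", shared);
        sem_destroy(&s);
        return 0;
    }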

Operating System Concepts – 10th Edition 3.69 Silberschatz, Galvin and Gagne
Semaphore as
General Synchronization Tool
 Counting semaphore – integer value can range over an
unrestricted domain
 Binary semaphore – integer value can range only between 0
and 1; can be simpler to implement
• Also known as mutex locks
 Can implement a counting semaphore S as a binary semaphore
 Provides mutual exclusion
Semaphore mutex; // initialized to 1
do {
wait (mutex);
// Critical Section
signal (mutex);
// remainder section
} while (TRUE);

Operating System Concepts – 10th Edition 3.70 Silberschatz, Galvin and Gagne
Semaphore Implementation

 Must guarantee that no two processes can execute wait


() and signal () on the same semaphore at the same time

 Thus, implementation becomes the critical section


problem where the wait and signal code are placed in the
critical section
• Could now have busy waiting in critical section
implementation
 But implementation code is short
 Little busy waiting if critical section rarely
occupied

 Note that applications may spend lots of time in critical


sections and therefore this is not a good solution

Operating System Concepts – 10th Edition 3.71 Silberschatz, Galvin and Gagne
Semaphore Implementation
with no Busy waiting

 With each semaphore there is an associated waiting


queue
 Each entry in a waiting queue has two data items:
• value (of type integer)
• pointer to next record in the list

 Two operations:
• block – place the process invoking the operation on
the appropriate waiting queue
• wakeup – remove one of processes in the waiting
queue and place it in the ready queue
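
In C this is typically declared as a value plus a pointer to the list of waiting
processes, matching the S->value and S->list fields used on the next slide; a sketch:

    typedef struct {
        int value;               /* semaphore value                 */
        struct process *list;    /* queue of processes blocked on S */
    } semaphore;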

Operating System Concepts – 10th Edition 3.72 Silberschatz, Galvin and Gagne
Semaphore Implementation with
no Busy waiting (Cont.)
 Implementation of wait:
wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block();
}
}
 Implementation of signal:

signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}

Operating System Concepts – 10th Edition 3.73 Silberschatz, Galvin and Gagne
Deadlock and Starvation
 Deadlock – two or more processes are waiting
indefinitely for an event that can be caused by only one
of the waiting processes
 Let S and Q be two semaphores initialized to 1
P0 P1
wait (S); wait (Q);
wait (Q); wait (S);
. .
. .
. .
signal (S); signal (Q);
signal (Q); signal (S);
 Starvation – indefinite blocking
• A process may never be removed from the
semaphore queue in which it is suspended
 Priority Inversion – Scheduling problem when lower-
priority process holds a lock needed by higher-priority
process
• Solved via priority-inheritance protocol

Operating System Concepts – 10th Edition 3.74 Silberschatz, Galvin and Gagne
Classical Problems of Synchronization
 Classical problems used to test newly-proposed
synchronization schemes

• Bounded-Buffer Problem

• Readers and Writers Problem

• Dining-Philosophers Problem

Operating System Concepts – 10th Edition 3.75 Silberschatz, Galvin and Gagne
Bounded Buffer Problem (Cont.)
 The structure of the producer process

do {

// produce an item in nextp

wait (empty);
wait (mutex);

// add the item to the buffer

signal (mutex);
signal (full);
} while (TRUE);

Operating System Concepts – 10th Edition 3.76 Silberschatz, Galvin and Gagne
Bounded Buffer Problem (Cont.)
 The structure of the consumer process

do {
wait (full);
wait (mutex);

// remove an item from buffer to nextc

signal (mutex);
signal (empty);

// consume the item in nextc

} while (TRUE);
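
The deck uses empty, full, and mutex without showing their initial values (empty = n,
full = 0, mutex = 1). A runnable sketch with POSIX semaphores; the buffer size, item
type, and loop counts are assumptions for illustration:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 8
    static int buffer[N];
    static int in = 0, out = 0;
    static sem_t empty, full, mutex;

    static void *producer(void *arg)
    {
        for (int item = 0; item < 20; item++) {
            sem_wait(&empty);                 /* wait(empty)   */
            sem_wait(&mutex);                 /* wait(mutex)   */
            buffer[in] = item;                /* add the item  */
            in = (in + 1) % N;
            sem_post(&mutex);                 /* signal(mutex) */
            sem_post(&full);                  /* signal(full)  */
        }
        return NULL;
    }

    static void *consumer(void *arg)
    {
        for (int i = 0; i < 20; i++) {
            sem_wait(&full);                  /* wait(full)    */
            sem_wait(&mutex);                 /* wait(mutex)   */
            int item = buffer[out];           /* remove item   */
            out = (out + 1) % N;
            sem_post(&mutex);                 /* signal(mutex) */
            sem_post(&empty);                 /* signal(empty) */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        sem_init(&mutex, 0, 1);
        sem_init(&empty, 0, N);
        sem_init(&full, 0, 0);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }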

Operating System Concepts – 10th Edition 3.77 Silberschatz, Galvin and Gagne
Readers-Writers Problem

 A data set is shared among a number of concurrent


processes
• Readers – only read the data set; they do not perform
any updates
• Writers – can both read and write
Problem – allow multiple readers to read at the same
time
• Only one single writer can access the shared data at
the same time
 Several variations of how readers and writers are
treated – all involve priorities
 Shared Data
• Data set
• Semaphore mutex initialized to 1
• Semaphore wrt initialized to 1
• Integer readcount initialized to 0
Operating System Concepts – 10th Edition 3.78 Silberschatz, Galvin and Gagne
Readers-Writers Problem (Cont.)

 The structure of a writer process

do {
wait (wrt) ;

// writing is performed

signal (wrt) ;
} while (TRUE);

Operating System Concepts – 10th Edition 3.79 Silberschatz, Galvin and Gagne
Readers-Writers Problem (Cont.)
 The structure of a reader process

do {
wait (mutex) ;
readcount ++ ;
if (readcount == 1)
wait (wrt) ;
signal (mutex) ;

// reading is performed

wait (mutex) ;
readcount -- ;
if (readcount == 0)
signal (wrt) ;
signal (mutex) ;
} while (TRUE);

Operating System Concepts – 10th Edition 3.80 Silberschatz, Galvin and Gagne
Readers-Writers Problem Variations

 First variation – no reader kept waiting unless writer has


permission to use shared object

 Second variation – once writer is ready, it performs write


asap

 Both may have starvation leading to even more variations

 Problem is solved on some systems by kernel providing


reader-writer locks
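
The POSIX API exposes such locks as pthread_rwlock_t; a minimal sketch (the shared
counter and thread bodies are illustrative):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
    static int dataset = 0;

    static void *reader(void *arg)
    {
        pthread_rwlock_rdlock(&rw);      /* many readers may hold this at once */
        printf("read %d\n", dataset);
        pthread_rwlock_unlock(&rw);
        return NULL;
    }

    static void *writer(void *arg)
    {
        pthread_rwlock_wrlock(&rw);      /* writers get exclusive access */
        dataset++;
        pthread_rwlock_unlock(&rw);
        return NULL;
    }

    int main(void)
    {
        pthread_t r1, r2, w;
        pthread_create(&w, NULL, writer, NULL);
        pthread_create(&r1, NULL, reader, NULL);
        pthread_create(&r2, NULL, reader, NULL);
        pthread_join(w, NULL);
        pthread_join(r1, NULL);
        pthread_join(r2, NULL);
        return 0;
    }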

Operating System Concepts – 10th Edition 3.81 Silberschatz, Galvin and Gagne
Dining-Philosophers Problem

 Philosophers spend their lives


thinking and eating
 Don’t interact with their
neighbors, occasionally try to pick
up 2 chopsticks (one at a time) to
eat from bowl
• Need both to eat, then release
both when done
 In the case of 5 philosophers
• Shared data
 Bowl of rice (data set)
 Semaphore chopstick [5]
initialized to 1

Operating System Concepts – 10th Edition 3.82 Silberschatz, Galvin and Gagne
Dining-Philosophers Problem
Algorithm
 The structure of Philosopher i:

do {
wait ( chopstick[i] );
wait ( chopStick[ (i + 1) % 5] );

// eat
signal ( chopstick[i] );
signal (chopstick[ (i + 1) % 5] );

// think

} while (TRUE);

 What is the problem with this algorithm?

Operating System Concepts – 10th Edition 3.83 Silberschatz, Galvin and Gagne
Problems with Semaphores

 Incorrect use of semaphore operations:

• signal (mutex) …. wait (mutex)

• wait (mutex) … wait (mutex)

• Omitting wait (mutex) or signal (mutex) (or


both)

 Deadlock and starvation

Operating System Concepts – 10th Edition 3.84 Silberschatz, Galvin and Gagne
Monitors
 A high-level abstraction that provides a convenient and
effective mechanism for process synchronization

 Abstract data type, internal variables only accessible by code


within the procedure

 Only one process may be active within the monitor at a time

 But not powerful enough to model some synchronization


schemes

monitor monitor-name
{
// shared variable declarations
procedure P1 (…) { …. }

procedure Pn (…) {……}

Initialization code (…) { … }


}
Operating System Concepts – 10th Edition 3.85 Silberschatz, Galvin and Gagne
Schematic view of a Monitor

Operating System Concepts – 10th Edition 3.86 Silberschatz, Galvin and Gagne
Condition Variables

 condition x, y;

 Two operations on a condition variable:


• x.wait () – a process that invokes the operation is
suspended until x.signal ()
• x.signal () – resumes one of processes (if any) that
invoked x.wait ()
 If no x.wait () on the variable, then it has no effect
on the variable

Operating System Concepts – 10th Edition 3.87 Silberschatz, Galvin and Gagne
Monitor with Condition Variables

Operating System Concepts – 10th Edition 3.88 Silberschatz, Galvin and Gagne
Condition Variables Choices

 If process P invokes x.signal (), with Q in x.wait () state,


what should happen next?
• If Q is resumed, then P must wait

 Options include
• Signal and wait – P waits until Q leaves monitor or
waits for another condition
• Signal and continue – Q waits until P leaves the
monitor or waits for another condition

• Both have pros and cons – language implementer can


decide
• Monitors implemented in Concurrent Pascal
compromise
 P executing signal immediately leaves the monitor,
Q is resumed
• Implemented in other languages including Mesa, C#, and Java

Operating System Concepts – 10th Edition 3.89 Silberschatz, Galvin and Gagne
Solution to Dining Philosophers
monitor DiningPhilosophers
{
enum {THINKING, HUNGRY, EATING} state[5];
condition self [5];

void pickup (int i) {


state[i] = HUNGRY;
test(i);
if (state[i] != EATING) self[i].wait();
}

void putdown (int i) {


state[i] = THINKING;
// test left and right neighbors
test((i + 4) % 5);
test((i + 1) % 5);
}

Operating System Concepts – 10th Edition 3.90 Silberschatz, Galvin and Gagne
Solution to Dining Philosophers (Cont.)

void test (int i) {


if ( (state[(i + 4) % 5] != EATING) &&
(state[i] == HUNGRY) &&
(state[(i + 1) % 5] != EATING) ) {
state[i] = EATING ;
self[i].signal () ;
}
}

initialization_code() {
for (int i = 0; i < 5; i++)
state[i] = THINKING;
}
}

Operating System Concepts – 10th Edition 3.91 Silberschatz, Galvin and Gagne
Solution to Dining Philosophers (Cont.)

 Each philosopher i invokes the operations pickup() and


putdown() in the following sequence:

DiningPhilosophers.pickup (i);

EAT

DiningPhilosophers.putdown (i);

 No deadlock, but starvation is possible

Operating System Concepts – 10th Edition 3.92 Silberschatz, Galvin and Gagne
Monitor Implementation Using Semaphores

 Variables
semaphore mutex; // (initially = 1)
semaphore next; // (initially = 0)
int next_count = 0;

 Each procedure F will be replaced by

wait(mutex);

body of F;


if (next_count > 0)
signal(next);
else
signal(mutex);

 Mutual exclusion within a monitor is ensured

Operating System Concepts – 10th Edition 3.93 Silberschatz, Galvin and Gagne
Monitor Implementation – Condition
Variables

 For each condition variable x, we have:

semaphore x_sem; // (initially = 0)


int x_count = 0;

 The operation x.wait can be implemented as:

x_count++;
if (next_count > 0)
signal(next);
else
signal(mutex);
wait(x_sem);
x_count--;

Operating System Concepts – 10th Edition 3.94 Silberschatz, Galvin and Gagne
Monitor Implementation (Cont.)
 The operation x.signal can be implemented as:

if (x_count > 0) {
next_count++;
signal(x_sem);
wait(next);
next_count--;
}

Operating System Concepts – 10th Edition 3.95 Silberschatz, Galvin and Gagne
Resuming Processes within a Monitor
 If several processes queued on condition x, and x.signal()
executed, which should be resumed?

 FCFS frequently not adequate

 conditional-wait construct of the form x.wait(c)


• Where c is priority number
• Process with lowest number (highest priority) is
scheduled next

Operating System Concepts – 10th Edition 3.96 Silberschatz, Galvin and Gagne

A Monitor to Allocate Single Resource

monitor ResourceAllocator
{
boolean busy;
condition x;
void acquire(int time) {
if (busy)
x.wait(time);
busy = TRUE;
}
void release() {
busy = FALSE;
x.signal();
}
initialization code() {
busy = FALSE;
}
}

Operating System Concepts – 10th Edition 3.97 Silberschatz, Galvin and Gagne
Synchronization Examples
 Solaris

 Windows XP

 Linux

 Pthreads

Operating System Concepts – 10th Edition 3.98 Silberschatz, Galvin and Gagne
Solaris Synchronization
 Implements a variety of locks to support multitasking,
multithreading (including real-time threads), and
multiprocessing
 Uses adaptive mutexes for efficiency when protecting
data from short code segments
• Starts as a standard semaphore spin-lock
• If lock held, and by a thread running on another CPU,
spins
• If lock held by non-run-state thread, block and sleep
waiting for signal of lock being released
 Uses condition variables
 Uses readers-writers locks when longer sections of code
need access to data
 Uses turnstiles to order the list of threads waiting to
acquire either an adaptive mutex or reader-writer lock
• Turnstiles are per-lock-holding-thread, not per-object
 Priority-inheritance per-turnstile gives the running thread
the highest of the priorities of the threads in its turnstile
Operating System Concepts – 10th Edition 3.99 Silberschatz, Galvin and Gagne
Windows XP Synchronization
 Uses interrupt masks to protect access to global
resources on uniprocessor systems

 Uses spinlocks on multiprocessor systems


• Spinlocking-thread will never be preempted

 Also provides dispatcher objects in user-land which may


act as mutexes, semaphores, events, and timers

• Events
 An event acts much like a condition variable
• Timers notify one or more thread when time expired
• Dispatcher objects either signaled-state (object
available) or non-signaled state (thread will block)

Operating System Concepts – 10th Edition 3.100 Silberschatz, Galvin and Gagne
Linux Synchronization
 Linux:
• Prior to kernel Version 2.6, disables interrupts to
implement short critical sections
• Version 2.6 and later, fully preemptive

 Linux provides:
• semaphores
• spinlocks
• reader-writer versions of both

 On a single-CPU system, spinlocks are replaced by enabling


and disabling kernel preemption

Operating System Concepts – 10th Edition 3.101 Silberschatz, Galvin and Gagne
Pthreads Synchronization

 Pthreads API is OS-independent

 It provides:
• mutex locks
• condition variables

 Non-portable extensions include:


• read-write locks
• spinlocks
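
A minimal sketch of the two portable primitives listed above: a mutex lock protecting a
flag and a condition variable used to wait for it (names are illustrative):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
    static int ready = 0;

    static void *waiter(void *arg)
    {
        pthread_mutex_lock(&m);
        while (!ready)                   /* re-check: wakeups may be spurious */
            pthread_cond_wait(&cv, &m);
        printf("event observed\n");
        pthread_mutex_unlock(&m);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, waiter, NULL);

        pthread_mutex_lock(&m);
        ready = 1;                       /* produce the event */
        pthread_cond_signal(&cv);
        pthread_mutex_unlock(&m);

        pthread_join(t, NULL);
        return 0;
    }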

Operating System Concepts – 10th Edition 3.102 Silberschatz, Galvin and Gagne
Atomic Transactions

 System Model
 Log-based Recovery
 Checkpoints
 Concurrent Atomic Transactions

Operating System Concepts – 10th Edition 3.103 Silberschatz, Galvin and Gagne
System Model

 Assures that operations happen as a single logical unit


of work, in its entirety, or not at all
 Related to field of database systems
 Challenge is assuring atomicity despite computer
system failures
 Transaction - collection of instructions or operations
that performs single logical function
• Here we are concerned with changes to stable
storage – disk
• Transaction is series of read and write operations
• Terminated by commit (transaction successful) or
abort (transaction failed) operation
• Aborted transaction must be rolled back to undo
any changes it performed

Operating System Concepts – 10th Edition 3.104 Silberschatz, Galvin and Gagne
Types of Storage Media

 Volatile storage – information stored here does not survive


system crashes
• Example: main memory, cache
 Nonvolatile storage – Information usually survives crashes
• Example: disk and tape
 Stable storage – Information never lost
• Not actually possible, so approximated via replication
or RAID to devices with independent failure modes

Goal is to assure transaction atomicity where failures


cause loss of information on volatile storage

Operating System Concepts – 10th Edition 3.105 Silberschatz, Galvin and Gagne
Log-Based Recovery

 Record to stable storage information about all


modifications by a transaction
 Most common is write-ahead logging
• Log on stable storage, each log record describes
single transaction write operation, including
 Transaction name
 Data item name
 Old value
 New value
• <Ti starts> written to log when transaction Ti starts
• <Ti commits> written when Ti commits
 Log entry must reach stable storage before
operation on data occurs
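
As an illustration (the values are hypothetical, not from the slides), a transaction T0
that changes A from 950 to 1000 and B from 2000 to 2050 would produce the log records:

    <T0 starts>
    <T0, A, 950, 1000>    ; transaction name, data item, old value, new value
    <T0, B, 2000, 2050>
    <T0 commits>

Each record must be forced to stable storage before the corresponding write to A or B is
applied to the data itself.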

Operating System Concepts – 10th Edition 3.106 Silberschatz, Galvin and Gagne
Log-Based Recovery Algorithm

 Using the log, system can handle any volatile memory


errors
• Undo(Ti) restores value of all data updated by Ti
• Redo(Ti) sets values of all data in transaction Ti to
new values
 Undo(Ti) and redo(Ti) must be idempotent
• Multiple executions must have the same result as one
execution
 If system fails, restore state of all updated data via log
• If log contains <Ti starts> without <Ti commits>,
undo(Ti)
• If log contains <Ti starts> and <Ti commits>, redo(Ti)

Operating System Concepts – 10th Edition 3.107 Silberschatz, Galvin and Gagne
Checkpoints

 Log could become long, and recovery could take long


 Checkpoints shorten log and recovery time.
 Checkpoint scheme:
1. Output all log records currently in volatile storage to
stable storage
2. Output all modified data from volatile to stable
storage
3. Output a log record <checkpoint> to the log on stable
storage
 Now recovery only includes Ti, such that Ti started
executing before the most recent checkpoint, and all
transactions after Ti. All other transactions are already on
stable storage

Operating System Concepts – 10th Edition 3.108 Silberschatz, Galvin and Gagne
Concurrent Transactions

 Must be equivalent to serial execution – serializability


 Could perform all transactions in critical section
• Inefficient, too restrictive
 Concurrency-control algorithms provide serializability

Operating System Concepts – 10th Edition 3.109 Silberschatz, Galvin and Gagne
Serializability

 Consider two data items A and B


 Consider Transactions T0 and T1
 Execute T0, T1 atomically
 Execution sequence called schedule
 Atomically executed transaction order called serial
schedule
 For N transactions, there are N! valid serial schedules

Operating System Concepts – 10th Edition 3.110 Silberschatz, Galvin and Gagne
Schedule 1: T0 then T1

Operating System Concepts – 10th Edition 3.111 Silberschatz, Galvin and Gagne
Nonserial Schedule

 Nonserial schedule allows overlapped execution


• Resulting execution not necessarily incorrect
 Consider schedule S, operations Oi, Oj
• Conflict if access same data item, with at least one
write
 If Oi, Oj consecutive and operations of different
transactions & Oi and Oj don’t conflict
• Then S’ with swapped order Oj Oi equivalent to S
 If S can become S’ via swapping nonconflicting
operations
• S is conflict serializable

Operating System Concepts – 10th Edition 3.112 Silberschatz, Galvin and Gagne
Schedule 2: Concurrent Serializable Schedule

Operating System Concepts – 10th Edition 3.113 Silberschatz, Galvin and Gagne
Locking Protocol

 Ensure serializability by associating lock with each data


item
• Follow locking protocol for access control
 Locks
• Shared – Ti has shared-mode lock (S) on item Q, Ti can
read Q but not write Q
• Exclusive – Ti has exclusive-mode lock (X) on Q, Ti can
read and write Q
 Require every transaction on item Q acquire appropriate
lock
 If lock already held, new request may have to wait
• Similar to readers-writers algorithm

Operating System Concepts – 10th Edition 3.114 Silberschatz, Galvin and Gagne
Two-phase Locking Protocol

 Generally ensures conflict serializability


 Each transaction issues lock and unlock requests in two
phases
• Growing – obtaining locks
• Shrinking – releasing locks
 Does not prevent deadlock

Operating System Concepts – 10th Edition 3.115 Silberschatz, Galvin and Gagne
Timestamp-based Protocols

 Select order among transactions in advance – timestamp-


ordering
 Transaction Ti associated with timestamp TS(Ti) before Ti
starts
• TS(Ti) < TS(Tj) if Ti entered system before Tj
• TS can be generated from system clock or as logical
counter incremented at each entry of transaction
 Timestamps determine serializability order
• If TS(Ti) < TS(Tj), system must ensure produced
schedule equivalent to serial schedule where Ti
appears before Tj

Operating System Concepts – 10th Edition 3.116 Silberschatz, Galvin and Gagne
Timestamp-based Protocol Implementation

 Data item Q gets two timestamps


• W-timestamp(Q) – largest timestamp of any
transaction that executed write(Q) successfully
• R-timestamp(Q) – largest timestamp of successful
read(Q)
• Updated whenever read(Q) or write(Q) executed
 Timestamp-ordering protocol assures any conflicting
read and write executed in timestamp order
 Suppose Ti executes read(Q)
• If TS(Ti) < W-timestamp(Q), Ti needs to read value of Q
that was already overwritten
 read operation rejected and Ti rolled back
• If TS(Ti) ≥ W-timestamp(Q)
 read executed, R-timestamp(Q) set to max(R-
timestamp(Q), TS(Ti))

Operating System Concepts – 10th Edition 3.117 Silberschatz, Galvin and Gagne
Timestamp-ordering Protocol

 Suppose Ti executes write(Q)


• If TS(Ti) < R-timestamp(Q), value Q produced by Ti was
needed previously and Ti assumed it would never be
produced
 Write operation rejected, Ti rolled back
• If TS(Ti) < W-timestamp(Q), Ti attempting to write
obsolete value of Q
 Write operation rejected and Ti rolled back
• Otherwise, write executed
 Any rolled back transaction Ti is assigned new timestamp
and restarted
 Algorithm ensures conflict serializability and freedom
from deadlock
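
A sketch of these read/write rules in C (the struct layout and the boolean return used to
signal rollback are illustrative assumptions):

    #include <stdbool.h>

    struct data_item {
        int w_ts;    /* W-timestamp(Q): largest TS of a successful write */
        int r_ts;    /* R-timestamp(Q): largest TS of a successful read  */
        int value;
    };

    /* Returns false if transaction Ti (timestamp ts) must be rolled back. */
    bool ts_write(struct data_item *q, int ts, int new_value)
    {
        if (ts < q->r_ts || ts < q->w_ts)
            return false;            /* reject write, roll Ti back */
        q->value = new_value;
        if (ts > q->w_ts)
            q->w_ts = ts;
        return true;
    }

    bool ts_read(struct data_item *q, int ts, int *out)
    {
        if (ts < q->w_ts)
            return false;            /* value already overwritten: roll back */
        *out = q->value;
        if (ts > q->r_ts)
            q->r_ts = ts;            /* R-timestamp(Q) = max(R-timestamp(Q), TS(Ti)) */
        return true;
    }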

Operating System Concepts – 10th Edition 3.118 Silberschatz, Galvin and Gagne
Schedule Possible Under Timestamp Protocol

Operating System Concepts – 10th Edition 3.119 Silberschatz, Galvin and Gagne
End of Unit 2

Operating System Concepts – 10th Edition Silberschatz, Galvin and Gagne
