CHAPTER 2
PROCESS MANAGEMENT
OUTLINE
Process
Interprocess Communication
Thread
Scheduling Algorithms
Process Synchronization
Deadlocks
OBJECTIVES
Identify the separate components of a process and illustrate how they are represented and
scheduled in an operating system.
Describe how processes are created and terminated in an operating system, including
developing programs using the appropriate system calls that perform these operations.
Identify the basic components of a thread, and contrast threads and processes
Describe the benefits and challenges of designing multithreaded applications
Describe various CPU scheduling algorithms
Describe the critical-section problem and illustrate a race condition
Illustrate solutions to the critical-section problem using "busy waiting" solutions and "sleep and wakeup" solutions.
3
OUTLINE
Process
Interprocess Communication
Thread
Scheduling Algorithms
Process Synchronization
Deadlocks
4
PROCESS CONCEPT
5
PROCESS CONCEPT (CONT.)
Execution of program started via GUI mouse clicks, command line entry of its name,
etc.
One program can be several processes
Consider multiple users executing the same program
6
PROGRAM VS PROCESS
7
PROGRAM VS PROCESS
8
PROCESS IN MEMORY
9
MEMORY LAYOUT OF A C PROGRAM
10
PROCESS STATE
11
PROCESS STATES EXAMPLES
13
PROCESS CONTROL BLOCK (PCB)
14
PCB STRUCTURE
15
PROCESS SCHEDULING
Process scheduler selects among available processes for next execution on CPU
core
Goal -- Maximize CPU use, quickly switch processes onto CPU core
Main scheduling queues of processes
Job queue – set of all processes in the system.
Ready queue – set of all processes residing in main memory, ready and waiting to execute
Wait queues – set of processes waiting for an event (e.g., I/O)
Device queues – set of processes waiting for an I/O device.
Processes migrate among the various queues
Select a process in the queue, in the ready state, with the highest priority
16
PROCESS SCHEDULING
17
READY AND WAIT QUEUES
18
REPRESENTATION OF PROCESS SCHEDULING
19
PROCESS SCHEDULING QUEUES
A new process is initially put in the ready queue. It waits there until it is selected for
execution. Once the process is allocated the CPU and is executing, one of several
events could occur:
The process could issue an I/O request and then be placed in an I/O queue.
The process could create a new child process and wait for the child’s termination.
The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in
the ready queue.
20
CPU SWITCH FROM PROCESS TO PROCESS
21
CONTEXT SWITCH
When CPU switches to another process, the system must save the state of the old
process and load the saved state for the new process via a context switch
Context of a process represented in the PCB
Context-switch time is pure overhead; the system does no useful work while switching
The more complex the OS and the PCB the longer the context switch
It includes the value of the CPU registers, the process state, and memory-management
information.
22
PROPERTIES OF PROCESS
I/O-bound process
CPU-bound process
Batch processing (via quantum time)
Process priority
CPU time used by the process
Remaining time the process takes to complete the task
23
MULTITASKING IN MOBILE SYSTEMS
Some mobile systems (e.g., early version of iOS) allow only one process to run, others
suspended
Due to screen real estate and user-interface limits, iOS provides for a
Single foreground process – controlled via user interface
Multiple background processes – in memory, running, but not on the display, and with limits
Limits include a single, short task, receiving notification of events, and specific long-running tasks like audio playback
Android runs foreground and background, with fewer limits
Background process uses a service to perform tasks
Service can keep running even if background process is suspended
Service has no user interface, small memory use
24
OPERATIONS ON PROCESSES
25
PROCESS CREATION
26
PROCESS CREATION (CONT.)
Parent process creates children processes, which, in turn create other processes,
forming a tree of processes.
Generally, process identified and managed via a process identifier (pid)
Resource sharing options
Parent and children share all resources.
Children share subset of parent’s resources.
Parent and child share no resources.
Execution options
Parent and children execute concurrently.
Parent waits until children terminate.
27
PROCESS CREATION (CONT.)
Address space
Child duplicate of parent (recursive)
Child has a program loaded into it (sub function)
UNIX examples
fork() system call creates new process
exec() system call used after a fork() to replace the process’ memory space with a new
program
Parent process calls wait(), waiting for the child to terminate
28
A TREE OF PROCESSES ON A TYPICAL UNIX SYSTEM
29
UNIX PROCESS CREATION
#include <stdio.h>
#include <unistd.h>
int main()
{
    int ret;
    printf("before fork\n");
    ret = fork();
    /* Creates a child process. fork() now returns twice: a value > 0
       (the child's PID) in the parent, and 0 in the child. */
    if (ret > 0)
    {
        printf("\nA Parent");
        printf("\nMy PID is %d", getpid());
    }
    if (ret == 0)
    {
        printf("\nA CHILD MAN");
        printf("\nMy PID is %d", getpid());
        printf("\nMy Parent PID is %d", getppid());
    }
    printf("\ncommon Work\n");
}
30
UNIX PROCESS CREATION
#include <sys/types.h>
#include <stdio.h>
#include <unistd.h>
int main() {
pid_t pid;
pid = fork(); /* fork a child process */
if (pid < 0) { /* error occurred */
fprintf(stderr, "Fork Failed");
return 1;
}
else if (pid == 0) { /* child process */
execlp("/bin/ls","ls",NULL);
}
else { /* parent process */
/* parent will wait for the child to complete */
wait(NULL);
printf("Child Complete");
}
return 0;
}
31
EXAMPLE
33
PROCESS CREATION (CONT.)
On UNIX systems, we can obtain a listing of processes by using the ps command. For
example, the command ps -el will list complete information for all processes
currently active in the system
pstree is a UNIX command that shows the running processes as a tree
gnome-system-monitor: shows processes managed via a GUI
34
UNISTD C LIBRARY
getpid()
getppid()
fork()
wait()
35
PROCESS TERMINATION
Process executes last statement and asks the operating system to delete it
(via the exit() system call).
Output data from child to parent (via wait).
Remove PCB of process
Remove the process from all system management lists
Process’ resources are deallocated by operating system (reallocate).
Parent may terminate execution of children processes (abort).
Child has exceeded allocated resources.
Task assigned to child is no longer required.
Parent is exiting. Operating system does not allow child to continue if its parent
terminates.
36
MULTIPROCESS ARCHITECTURE – CHROME BROWSER
37
OUTLINE
Process
Interprocess Communication
Thread
Scheduling Algorithms
Process Synchronization
Deadlocks
38
COOPERATING PROCESSES - INTERPROCESS COMMUNICATION
39
IPC MECHANISMS
Signal
Pipe
Message passing
Shared memory
Sockets
40
SIGNAL
41
SIGNAL (CONT.)
42
ORDINARY PIPES
A unidirectional communication channel between two processes: the output of one process
is passed as input to the other in the form of a stream of bytes.
When a pipe is established between two processes
A process writes data to pipe
The other process will read data from the pipe
The order of data passed through the pipe is preserved according to the FIFO principle
Ordinary pipes – cannot be accessed from outside the process that created it. Typically, a
parent process creates a pipe and uses it to communicate with a child process that it
created.
Named pipes – can be accessed without a parent-child relationship.
43
NAMED PIPES
44
COOPERATING PROCESSES - COMMUNICATIONS
message passing
45
PRODUCER-CONSUMER PROBLEM
Two variations:
unbounded-buffer places no practical limit on the size of the buffer:
Producer never waits
Consumer waits if there is no item to consume
bounded-buffer assumes a fixed buffer size:
Producer waits if all buffers are full
Consumer waits if there is no item to consume
46
IPC – SHARED MEMORY
47
BOUNDED-BUFFER – SHARED-MEMORY SOLUTION
Shared data
#define BUFFER_SIZE 10
typedef struct {
    . . .
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
48
WHAT ABOUT FILLING ALL THE BUFFERS?
49
PRODUCER PROCESS – SHARED MEMORY
51
IMPLEMENTATION QUESTIONS
52
PRODUCER-CONSUMER: MESSAGE PASSING
Producer
message next_produced;
while (true) {
/* produce an item in next_produced */
send(next_produced);
}
Consumer
message next_consumed;
while (true) {
receive(next_consumed)
/* consume the item in next_consumed */
}
53
BUFFERING
54
SOCKETS
All ports below 1024 are well known, used for standard services
55
SOCKET COMMUNICATION
56
OUTLINE
Process
Interprocess Communication
Thread
Scheduling Algorithms
Process Synchronization
Deadlocks
57
THREADS
A thread (or lightweight process) is a basic unit of CPU utilization; it consists of:
program counter
register set
stack space
A thread shares with its peer threads its:
code section
data section
operating-system resources
collectively known as a task.
A traditional or heavyweight process is equal to a task with one thread
61
THREADS (CONT.)
Each thread can interact with a specific part of the system, such as disks, network I/O, or
users.
Threads are scheduled for execution because some threads can wait for some event to
happen or wait for some work to finish from another thread.
A thread includes:
Thread ID (TID)
Program counter (PC)
Register set
Stacks
Threads in a process share code, data, and other system resources such as open files and
signals.
62
THREADS (CONT.)
Benefits:
Responsiveness
Resource Sharing
Economy
Scalability
63
SINGLE AND MULTITHREADED PROCESSES
64
MULTITHREADED SERVER ARCHITECTURE
65
MULTICORE PROGRAMMING
Parallelism implies a system can perform more than one task simultaneously
Concurrency supports more than one task making progress
Single processor / core, scheduler providing concurrency
66
CONCURRENCY VS. PARALLELISM
67
PROCESS VS THREAD
68
THREADS (CONT.)
69
USER THREADS AND KERNEL THREADS
70
MULTITHREADING MODELS
Many-to-one model
Many user-level threads mapped to single kernel thread
One thread blocking causes all to block
Multiple threads may not run in parallel on a multicore system because only one may be
in kernel at a time
Few systems currently use this model
Examples:
Solaris Green Threads
GNU Portable Threads
71
MULTITHREADING MODELS
One-to-one model.
Each user-level thread maps to kernel thread
Creating a user-level thread creates a kernel thread
More concurrency than many-to-one
Number of threads per process sometimes restricted due to overhead
Examples
Windows
Linux
72
MULTITHREADING MODELS
Many-to-many model.
73
THREAD LIBRARIES
A thread library provides the programmer with an API for creating and managing
threads.
Three main thread libraries are in use today:
POSIX Pthreads
Windows
Java.
74
THREAD LIBRARIES
75
PTHREADS
May be provided either as user-level or kernel-level
A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization
Specification, not implementation
API specifies behavior of the thread library; implementation is up to the developers of
the library
Common in UNIX operating systems (Linux & Mac OS X)
76
PTHREADS EXAMPLE
77
#include <pthread.h>
#include <stdio.h>
int sum=0; /* this data is shared by the thread(s) */
void* runner(void *param); /* threads call this function */
int amount1 = 5, amount2 = 8;
int main(int argc, char *argv[]) {
pthread_t tid1; /* the thread identifier */
pthread_t tid2; /* the thread identifier */
pthread_attr_t attr; /* set of thread attributes */
pthread_attr_init(&attr); /* get the default attributes */
pthread_create(&tid1,&attr,runner,&amount1); /* create the thread */
pthread_create(&tid2,&attr,runner,&amount2); /* create the thread */
pthread_join(tid1,NULL); /* wait for the thread to exit */
pthread_join(tid2,NULL); /* wait for the thread to exit */
printf("sum = %d\n",sum);
}
void* runner(void* arg) { /* Thread will begin control in this function */
int *param_ptr=(int *) arg;
int param =*param_ptr;
for (int i = 1; i <= param; i++)
sum += i;
pthread_exit(0);
}
WINDOWS THREADS
The technique for creating threads using the Windows thread library is similar to the Pthreads
technique in several ways.
Threads are created in the Windows API using the CreateThread() function and, just as in Pthreads, a
set of attributes for the thread is passed to this function.
Once the summation thread is created, the parent must wait for it to complete before outputting the
value of sum, as the value is set by the summation thread.
The Windows API uses the WaitForSingleObject() function to wait for the summation thread.
79
#include <Windows.h>
#include <process.h>
#include <stdio.h>

int sum1 = 0, sum2 = 0;

void mythread1(void* data)
{
    printf("mythread1 ID %d \n", GetCurrentThreadId());
    /* each thread uses its own loop index; a shared index would be a race */
    for (int i = 0; i < 10; i++)
        if (i % 2 == 0) {
            sum1 = sum1 + i;
        }
    printf("sum= %d \n", sum1);
}

void mythread2(void* data)
{
    for (int i = 0; i < 10; i++)
        if (i % 2 != 0) {
            sum2 = sum2 + i;
        }
    printf("sum= %d \n", sum2);
    printf("mythread2 ID %d \n", GetCurrentThreadId());
}

int main(int argc, char* argv[])
{
    HANDLE myhandle1, myhandle2;
    myhandle1 = (HANDLE)_beginthread(&mythread1, 0, 0);
    myhandle2 = (HANDLE)_beginthread(&mythread2, 0, 0);
    WaitForSingleObject(myhandle1, INFINITE); // join
    WaitForSingleObject(myhandle2, INFINITE); // join
    return 0;
}
JAVA THREADS
Threads are the fundamental model of program execution in a Java program, and the Java
language and its API provide a rich set of features for the creation and management of threads
All Java programs comprise at least a single thread of control—even a simple Java program
consisting of only a main() method runs as a single thread in the JVM
Java threads may be created by:
Extending Thread class
Implementing the Runnable interface
81
public class Main {
public static void main(String[] args) {
System.out.println("Start");
Thread t1 = new Thread (new Runnable(){
@Override
public void run(){
for (int i=0;i<10;i++){
System.out.println("Thread 1>"+i);
}
}
});
Thread t2 = new Thread (new Runnable(){
@Override
public void run(){
for (int i=0;i<10;i++){
System.out.println("Thread 2>"+i);
}
}
});
t1.start();
t2.start();
System.out.println("Finish");
}}
IMPLICIT THREADING
Implicit Threading: strategy for designing multithreaded programs that can take
advantage of multicore processors
Thread Pools
OpenMP (Open Multi-Processing)
GCD (Grand Central Dispatch)
83
THREAD POOLS
The general idea behind a thread pool is to create a number of threads at process startup and place
them into a pool, where they sit and wait for work
84
THREAD POOLS
85
THREAD POOLS
86
OPENMP
87
#include <omp.h>
#include <stdio.h>
int main(int argc, char *argv[]){
/* sequential code */
#pragma omp parallel
{
printf("I am a parallel region.");
}
/* sequential code */
return 0;
}
#pragma omp parallel for
for (i = 0; i < N; i++) {
c[i] = a[i] + b[i];
}
MULTIPLE THREADS WITHIN A TASK
89
THREADS SUPPORT IN SOLARIS 2
Solaris 2 is a version of UNIX with support for threads at the kernel and user levels, symmetric
multiprocessing, and real-time scheduling.
LWP – intermediate level between user-level threads and kernel-level threads.
Resource needs of thread types:
Kernel thread: small data structure and a stack; thread switching does not require changing memory access information –
relatively fast.
LWP: PCB with register data, accounting and memory information; switching between LWPs is relatively slow.
User-level thread: only needs a stack and program counter; no kernel involvement means fast switching. Kernel only sees the
LWPs that support user-level threads.
90
SOLARIS 2 THREADS
91
OUTLINE
Overview
Interprocess Communication
Thread
Scheduling Algorithms
Process Synchronization
Deadlocks
92
BASIC CONCEPTS
93
HISTOGRAM OF CPU-BURST TIMES
94
SCHEDULERS
Long-term scheduler (or job scheduler) – selects which processes should be brought
into the ready queue; controls the degree of multiprogramming (may be slow)
Medium-term scheduler – selects which process should be swapped in and which should be swapped out
95
LONG-TERM SCHEDULER VS SHORT-TERM SCHEDULER
Ready queue
Waiting queue
96
ADDITION OF MEDIUM TERM SCHEDULING
97
SCHEDULING ALGORITHMS
Whenever the CPU becomes idle, the operating system must select one of the
processes in the ready queue to be executed.
The selection process is carried out by the short-term scheduler, or CPU
scheduler
98
SCHEDULER
Short-term scheduler selects from among the processes in ready queue, and allocates the CPU to one of them
Queue may be ordered in various ways
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
5. New process to ready state
Scheduling under 1 and 4 is nonpreemptive
All other scheduling is preemptive
Consider access to shared data
Consider preemption while in kernel mode
Consider interrupts occurring during crucial OS activities
99
PREEMPTIVE VS NON-PREEMPTIVE SCHEDULING
Non-preemptive
The running process keeps the CPU until it voluntarily gives up the CPU
process exits
switches to blocked state
Preemptive:
The running process can be interrupted and must release the CPU (can be forced to give up CPU)
100
DISPATCHER
101
SCHEDULING CRITERIA
CPU utilization: in a real system, it should range from 40 percent (for a lightly loaded
system) to 90 percent (for a heavily loaded system).
Throughput: the number of processes that are completed per time unit.
Turnaround time: the interval from the time of submission of a process to the time of
completion = waiting to get into memory + waiting in the ready queue + executing on the
CPU + doing I/O.
Waiting time: the sum of the periods spent waiting in the ready queue.
Response time: the time from the submission of a request until the first response is
produced.
It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting
time, and response time.
102
SCHEDULING ALGORITHMS: FIRST-COME, FIRST-SERVED
104
SCHEDULING ALGORITHMS: FIRST-COME, FIRST-SERVED (CONT)
105
SCHEDULING ALGORITHMS: FIRST-COME, FIRST-SERVED(CONT)
106
SCHEDULING ALGORITHMS: SHORTEST-JOB-FIRST (SJF)
Associate with each process the length of its next CPU burst
Use these lengths to schedule the process with the shortest time
SJF is optimal – gives minimum average waiting time for a given set of processes
The difficulty is knowing the length of the next CPU request
Could ask the user
Estimate
107
SCHEDULING ALGORITHMS: SHORTEST-JOB-FIRST(CONT)
108
SHORTEST-JOB-FIRST(NON PREEMPTIVE) WITH ARRIVAL TIME
109
SHORTEST-JOB-FIRST(NON PREEMPTIVE) WITH ARRIVAL TIME
Non-preemptive scheduling
The real difficulty with the SJF algorithm is knowing the length of the next CPU request.
For long-term (job) scheduling in a batch system, we can use the process time limit that a user
specifies when he submits the job
SJF scheduling is used frequently in long-term scheduling.
One approach to this problem is to try to approximate SJF scheduling. We may not know the length
of the next CPU burst, but we may be able to predict its value.
111
SCHEDULING ALGORITHMS: SHORTEST-JOB-FIRST(CONT)
112
EXAMPLE OF SHORTEST-REMAINING-TIME-FIRST (SJF PREEMPTIVE)
Now we add the concepts of varying arrival times and preemption to the analysis
114
SHORTEST-REMAINING-TIME-FIRST (SJF PREEMPTIVE)
Process   Arrival time   Burst time
P1        0              11
P2        3              7
P3        5              4
P4        5              2
P5        2              8
At every scheduling point the dispatcher compares the remaining times of the processes in the ready queue and runs the one with the smallest remaining time, building the Gantt chart step by step from time 0.
116
SCHEDULING ALGORITHMS: ROUND-ROBIN
Preemptive scheduling
The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems.
Each process gets a small unit of CPU time, called a time quantum (q) or time slice. A time
quantum is generally from 10 to 100 milliseconds in length.
After this time has elapsed, the process is preempted and added to the end of the ready queue.
The ready queue is treated as a circular queue.
Timer interrupts every quantum to schedule next process
Performance:
q large ⇒ RR behaves like FIFO
q small ⇒ too many context switches; q must be large with respect to context-switch time, otherwise overhead is too high
117
SCHEDULING ALGORITHMS: ROUND-ROBIN(CONT)
In the RR scheduling algorithm, no process is allocated the CPU for more than 1 time quantum in a
row (unless it is the only runnable process)
The performance of the RR algorithm depends heavily on the size of the time quantum.
At one extreme, if the time quantum is extremely large, the RR policy
if the time quantum is extremely small, the RR approach can result in a large number of context switches
118
SCHEDULING ALGORITHMS: ROUND-ROBIN(CONT)
119
ROUND ROBIN WITH ARRIVAL TIME
Process   Arrival time   Burst time
P1        0              11
P2        3              7
P3        5              4
P4        5              2
P5        2              8
Time quantum q = 3
Gantt chart:
P1 | P5 | P2 | P1 | P3 | P4 | P5 | P2 | P1 | P3 | P5 | P2 | P1
0    3    6    9    12   15   17   20   23   26   27   29   30   32
Waiting times:
Tp1 = 0 + (9−3) + (23−12) + (30−26) = 21
Tp2 = (6−3) + (20−9) + (29−23) = 20
Tp3 = (12−5) + (26−15) = 18
Tp4 = 15 − 5 = 10
Tp5 = (3−2) + (17−6) + (27−20) = 19
The average waiting time: (21+20+18+10+19)/5 = 17.6 milliseconds
The average turnaround time: 17.6 + 32/5 = 24 milliseconds
120
SCHEDULING ALGORITHMS: ROUND-ROBIN(CONT)
121
SCHEDULING ALGORITHMS: PRIORITY
Priorities are generally indicated by some fixed range of numbers, such as 0 to 7 or 0 to 4,095.
Some systems use low numbers to represent low priority; others use low numbers for high priority
In this text, we assume that low numbers represent high priority
Preemptive
Nonpreemptive
122
EXAMPLE OF PRIORITY SCHEDULING
123
SCHEDULING ALGORITHMS: PRIORITY(CONT)
125
PRIORITY WITH ARRIVAL TIME PREEMPTIVE
126
PRIORITY WITH ARRIVAL TIME PREEMPTIVE
Process   Arrival time   Burst time   Priority
P1        0              11           2
P2        3              7            3
P3        5              4            1
P4        5              2            3
P5        2              8            2
Gantt chart (low number = high priority):
P1 | P3 | P5 | P1 | P2 | P4
0    5    9    17   23   30   32
130
SCHEDULING ALGORITHMS: MULTILEVEL QUEUE
131
SCHEDULING ALGORITHMS (CONT)
132
SCHEDULING ALGORITHMS: MULTILEVEL FEEDBACK QUEUE
A process can move between the various queues; aging can be implemented this way
Multilevel-feedback-queue scheduler defined by the following parameters:
number of queues
scheduling algorithms for each queue
method used to determine when to upgrade a process
method used to determine when to demote a process
method used to determine which queue a process will enter when that process needs service
133
EXAMPLE OF MULTILEVEL FEEDBACK QUEUE
Three queues:
Q0 – RR with time quantum 8 milliseconds
Q1 – RR time quantum 16 milliseconds
Q2 – FCFS
Scheduling
A new job enters queue Q0 which is served FCFS
When it gains CPU, job receives 8 milliseconds
If it does not finish in 8 milliseconds, job is moved to queue Q1
At Q1 job is again served FCFS and receives 16 additional
milliseconds
If it still does not complete, it is preempted and moved to queue
Q2
134
SCHEDULING ALGORITHMS: LOTTERY SCHEDULING WORKS
135
OUTLINE
Overview
Interprocess Communication
Thread
Scheduling Algorithms
Process Synchronization
Deadlocks
136
PROCESS SYNCHRONIZATION
137
BOUNDED-BUFFER
138
BOUNDED-BUFFER (CONT.)
Consumer process
repeat
    while counter = 0 do no-op;
    nextc := buffer[out];
    out := (out + 1) mod n;
    counter := counter - 1;
    ...
    consume the item in nextc
    ...
until false;
The statements:
counter := counter + 1;
counter := counter - 1;
must be executed atomically.
139
RACE CONDITION
counter++ may be implemented as:
register1 = counter
register1 = register1 + 1
counter = register1
counter-- may be implemented as:
register2 = counter
register2 = register2 - 1
counter = register2
If the two instruction sequences interleave, the final value of counter depends on the order of execution — a race condition.
140
THE CRITICAL-SECTION PROBLEM
142
SOLUTION TO CRITICAL-SECTION PROBLEM
143
“BUSY – WAITING” SOLUTION
Synchronization Software
Lock
While (unpermission) do nothing();
Peterson’s Solution
Critical section;
Synchronization Hardware
Give up permission with CS non-interruptible
Test&Set Instruction
144
INITIAL ATTEMPTS TO SOLVE PROBLEM
while (true) {
    while (lock == 1) do no-op;  /* wait until lock == 0 */
    // lock == 0
    lock = 1;   // set lock = 1 to keep other processes out (lock)
    critical section
    lock = 0;   // finished with CS, release the resource (unlock)
    remainder section
}
This attempt is not correct: two processes can both see lock == 0 before either sets it to 1, because the test and the set are not atomic.
146
ALGORITHM 2
Process Pi
do {
    while (turn == j) do no-op;  /* wait while it is Pj's turn */
    critical section
    turn = j;
    remainder section
} while (1);
Satisfies mutual exclusion, but not progress and bounded waiting because of strict alternation
147
PETERSON’S SOLUTION
The variable turn indicates whose turn it is to enter the critical section
The flag array is used to indicate if a process is ready to enter the critical
section.
flag[i] = true implies that process Pi is ready!
149
PETERSON SOLUTIONS
150
PETERSON SOLUTIONS FOR PROCESS PI (CONT.)
151
CORRECTNESS OF PETERSON’S SOLUTION
152
SYNCHRONIZATION HARDWARE
Allows the process to disable all interrupts before entering the critical section, and
to re-enable them when leaving the critical section.
While interrupts are disabled, the clock interrupt cannot occur, so the scheduler cannot
preempt the running process to allocate the CPU to another process; the current process
can therefore operate on the critical section without fear of interference from any
other process.
153
SYNCHRONIZATION HARDWARE
154
MUTUAL EXCLUSION WITH TEST-AND-SET
155
“SLEEP & WAKE UP” SOLUTIONS
while(true){
if(busy){
++blocked;
If (not have permission) Sleep(); sleep();}
else busy =1;//set busy =1 to prevent
Critical section; other
critical section
Wakeup (somebody); busy =0;// set busy=0
if(blocked)//check any waited process?
{ wakeup(process);// wakup other
blocked --;}
remainder section
}
156
SEMAPHORE AS GENERAL SYNCHRONIZATION TOOL
157
SEMAPHORE IMPLEMENTATION
Must guarantee that no two processes can execute wait () and signal () on the same
semaphore at the same time
Thus, implementation becomes the critical section problem where the wait and signal
code are placed in the critical section
Could now have busy waiting in critical section implementation
But implementation code is short
Little busy waiting if critical section rarely occupied
Note that applications may spend lots of time in critical sections and therefore this is
not a good solution
158
SEMAPHORE IMPLEMENTATION WITH NO BUSY WAITING
Two operations:
block – place the process invoking the operation on the appropriate waiting queue
wakeup – remove one of processes in the waiting queue and place it in the ready queue
159
SEMAPHORE IMPLEMENTATION WITH NO BUSY WAITING
Semaphore operations now defined as
typedef struct {
    int value;
    struct process *ptr;   /* queue of processes waiting on the semaphore */
} semaphore;
A negative S.value (e.g., −1, −2, −3) gives the number of processes waiting on the semaphore.
161
DEADLOCK AND STARVATION
Deadlock – two or more processes are waiting indefinitely for an event that can be caused by
only one of the waiting processes.
Let S and Q be two semaphores initialized to 1
P0 P1
wait(S); wait(Q);
wait(Q); wait(S);
signal(S); signal(Q);
signal(Q) signal(S);
Starvation – indefinite blocking. A process may never be removed from the semaphore queue
in which it is suspended.
162
CLASSICAL PROBLEMS OF SYNCHRONIZATION
Bounded-Buffer Problem
Readers and Writers Problem
Dining-Philosophers Problem
163
BOUNDED-BUFFER PROBLEM
164
SEMAPHORE WITH PRODUCER-CONSUMER PROBLEM
Producer:
do {
    // produce an item in nextp
    wait (empty);
    wait (mutex);
    // add the item to the buffer
    signal (mutex);
    signal (full);
} while (TRUE);

Consumer:
do {
    wait (full);
    wait (mutex);
    // remove an item from buffer to nextc
    signal (mutex);
    signal (empty);
    // consume the item in nextc
} while (TRUE);
165
READERS-WRITERS PROBLEM
Several variations of how readers and writers are treated – all involve priorities
Shared Data
Data set
Semaphore mutex initialized to 1
Semaphore wrt initialized to 1
Integer readcount initialized to 0
166
SEMAPHORE WITH READERS-WRITERS PROBLEM
Writer process:
do {
    wait (wrt) ;
    // writing is performed
    signal (wrt) ;
} while (TRUE);

Reader process:
do {
    wait (mutex) ;
    readcount ++ ;
    if (readcount == 1)
        wait (wrt) ;
    signal (mutex) ;
    // reading is performed
    wait (mutex) ;
    readcount - - ;
    if (readcount == 0)
        signal (wrt) ;
    signal (mutex) ;
} while (TRUE);
167
OUTLINE
Overview
Interprocess Communication
Thread
Scheduling Algorithms
Process Synchronization
Deadlocks
168
DEADLOCK CHARACTERIZATION
Deadlock can arise if four conditions hold simultaneously.
Example of a Resource-Allocation Graph
Graph With a Deadlock
Graph With a Cycle But No Deadlock
170
METHODS FOR HANDLING DEADLOCKS
171
DEADLOCK PREVENTION
Mutual Exclusion – not required for sharable resources (e.g., read-only files); must
hold for non-sharable resources
Hold and Wait – must guarantee that whenever a process requests a resource, it
does not hold any other resources
Require process to request and be allocated all its resources before it begins execution, or allow
process to request resources only when the process has none allocated to it.
Low resource utilization; starvation possible
172
DEADLOCK PREVENTION
No Preemption –
If a process that is holding some resources requests another resource that cannot be immediately
allocated to it, then all resources currently being held are released
Preempted resources are added to the list of resources for which the process is waiting
Process will be restarted only when it can regain its old resources, as well as the new ones that it is
requesting
Circular Wait – impose a total ordering of all resource types, and require that each
process requests resources in an increasing order of enumeration
173
DEADLOCK AVOIDANCE
174
SAFE STATE
175
BANKER’S ALGORITHM
Multiple instances
When a process gets all its resources it must return them in a finite amount of time
176
DATA STRUCTURES FOR THE BANKER’S ALGORITHM
Max: n x m matrix. If Max [i,j] = k, then process Pi may request at most k instances of resource type Rj
Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task
177
RESOURCE-REQUEST ALGORITHM
Requesti = request vector for process Pi. If Requesti [j] = k then process Pi wants k instances of
resource type Rj
1. If Requesti <= Needi go to step 2. Otherwise, raise error condition, since process has exceeded its
maximum claim
2. If Requesti <= Available, go to step 3. Otherwise Pi must wait, since resources are not available
3. Pretend to allocate requested resources to Pi by modifying the state as follows:
Available = Available – Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;
If safe ⇒ the resources are allocated to Pi
If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored
179
EXAMPLE OF BANKER’S ALGORITHM
180
EXAMPLE (CONT.)
181
EXAMPLE: P1 REQUEST (1,0,2)
Check that Request <= Available (that is, (1,0,2) <= (3,3,2)) → true
      Allocation   Need    Available
      A B C        A B C   A B C
P0    0 1 0        7 4 3   2 3 0
P1    3 0 2        0 2 0
P2    3 0 2        6 0 0
P3    2 1 1        0 1 1
P4    0 0 2        4 3 1
Executing safety algorithm shows that sequence < P1, P3, P4, P0, P2> satisfies safety requirement
Can request for (3,3,0) by P4 be granted?
Can request for (0,2,0) by P0 be granted?
182
RECOVERY FROM DEADLOCK: PROCESS TERMINATION
Rollback – return to some safe state, restart process from that state
184