Unit 2 Notes Dep HOD
Process Concept
• A process is a program in execution.
• A program is a passive entity, whereas a process is an active entity.
• A program becomes a process when an executable file is loaded into memory.
• Each process is represented in the operating system by a process control block (PCB),
also called a task control block.
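As an illustration (not from the notes; the field names here are hypothetical), a PCB can be pictured as a C structure holding the per-process bookkeeping information:

struct pcb {
    int pid;                              /* process identifier */
    enum { NEW, READY, RUNNING,
           WAITING, TERMINATED } state;   /* current process state */
    unsigned long program_counter;        /* next instruction to execute */
    unsigned long registers[16];          /* saved CPU register contents */
    /* ... memory-management, accounting, and I/O-status information ... */
};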
Process States:
• As a process executes, it changes state.
• The state of a process is defined in part by the current activity of that process. The
process state transition diagram is shown in Figure 1.
• Each process may be in one of the following states:
➢ New: The process is being created.
➢ Running: Instructions are being executed.
➢ Waiting: The process is waiting for some event to occur (such as an I/O completion
or reception of a signal).
➢ Ready: The process is waiting to be assigned to a processor.
➢ Terminated: The process has finished execution.
Schedulers
• A process migrates between the various scheduling queues throughout its lifetime.
• The operating system must select, for scheduling purposes, processes from these queues
in some fashion.
• The selection process is carried out by the appropriate scheduler.
• There are three different types of schedulers. They are:
1. Long-term Scheduler or Job Scheduler
2. Short-term Scheduler or CPU Scheduler
3. Medium-term Scheduler
• The long-term scheduler, or job scheduler, selects which processes should be brought
into the ready queue from the pool and loads them into memory for execution. It is invoked
very infrequently. It controls the degree of multiprogramming.
• The short-term scheduler, or CPU scheduler, selects from among the processes that are
ready to execute, and allocates the CPU to one of them. It is invoked very frequently.
• Processes can be described as either I/O bound or CPU bound.
• An I/O-bound process spends more of its time doing I/O than it spends doing
computations.
• A CPU-bound process, on the other hand, generates I/O requests infrequently, using
more of its time doing computation than an I/O-bound process uses.
• The system with the best performance will have a combination of CPU-bound and I/O-
bound processes.
Context switching is the mechanism by which the CPU switches from one process or task to
another. The execution of the process in the running state is suspended by the kernel, and
another process from the ready state is executed by the CPU.
It is one of the essential features of a multitasking operating system. Processes are
switched so quickly that the user gets the illusion that all the processes are executing
at the same time.
A context is the contents of a CPU's registers and program counter at any point in time. Context
switching can happen due to the following reasons:
• When a high-priority process enters the ready state. In this case, the execution of
the running process is stopped and the higher-priority process is given the CPU for
execution.
• When an interrupt occurs. The running process is stopped and the CPU handles the
interrupt before doing anything else.
• When a transition between user mode and kernel mode is required, a context switch is
performed.
The process of context switching involves a number of steps. Figure 2.6 depicts context
switching between two processes P0 and P1.
1. First, the context of process P0, i.e. the process currently in the running state, is
saved in the Process Control Block of P0, i.e. PCB0.
2. PCB0 is then moved to the relevant queue, i.e. the ready queue, I/O queue,
waiting queue, etc.
3. From the ready queue, the new process to be executed, P1, is selected.
4. The Process Control Block of P1, i.e. PCB1, is updated by setting the process
state to running. If process P1 was executed by the CPU earlier, the position of its
last executed instruction is retrieved so that the execution of P1 can resume.
Similarly, to execute process P0 again, the same steps (1 to 4) are followed.
Operations on Processes
1. Process Creation
2. Process Termination
3. Mechanisms involved in process creation on UNIX – illustration of fork(), wait(), exit()
Process Creation
• A process may create several new processes during the course of execution.
• The creating process is called a parent process, whereas the new processes are called the
children of that process.
• When a process creates a new process, two possibilities exist in terms of execution:
o The parent continues to execute concurrently with its children.
o The parent waits until some or all of its children have terminated.
• There are also two possibilities in terms of the address space of the new process:
o The child process is a duplicate of the parent process.
o The child process has a program loaded into it.
• In UNIX, each process is identified by its process identifier, which is a unique integer. A
new process is created by the fork system call.
Process Termination
• A process terminates when it finishes executing its final statement and asks the operating
system to delete it by using the exit system call.
Example program 1:
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main() {
    fork();   /* after fork(), the parent and the child both execute the printf */
    printf("invoked fork() system call\n");
    return 0;
}
Output:
invoked fork() system call
invoked fork() system call
// One printf statement, but the output is printed 2 times: once by the parent and once by
the child. If you invoke one more fork() before the printf, there are four processes (each
of the two processes forks again), so “invoked fork() system call” is printed 4 times.
Example program 2:
#include <stdio.h>
#include <stdlib.h>

void exitfunc() {
    printf("Invoked cleanup function - exitfunc()\n");
}

int main() {
    atexit(exitfunc);   /* register exitfunc() to run at normal process exit */
    printf("Hello, World!\n");
    exit(0);
}
Output:
Hello, World!
Invoked cleanup function - exitfunc()
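Example program 3 below is not in the original notes; it is a minimal sketch illustrating wait(): the parent blocks until its child terminates and then collects the child's exit status.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    pid_t pid = fork();
    if (pid == 0) {                 /* child process */
        printf("Child running\n");
        exit(42);                   /* exit status collected by the parent */
    }
    int status;                     /* parent process */
    wait(&status);                  /* block until the child terminates */
    printf("Child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}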
Interprocess Communication
Inter-process communication (IPC) is a mechanism that allows processes to communicate with
each other and synchronize their actions. It enables resource and data sharing between the
processes without interference. Processes that execute concurrently in the operating system
may be either independent processes or cooperating processes.
Figure 2.7. Communications models. (a) Message passing. (b) Shared memory.
The following variables reside in a region of memory shared by the producer and consumer
processes:
Shared data
#define BUFFER_SIZE 10
typedef struct {
...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
• The shared buffer is implemented as a circular array with two logical pointers: in and
out. The variable in points to the next free position in the buffer; out points to the first
full position in the buffer. The buffer is empty when in == out; the buffer is full when
((in + 1) % BUFFER_SIZE) == out.
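Using these variables, the producer and consumer loops over the circular buffer have the following structure (a sketch following the standard textbook treatment; next_produced and next_consumed are assumed local item variables):

/* producer */
item next_produced;
while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ;   /* buffer is full: do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}

/* consumer */
item next_consumed;
while (true) {
    while (in == out)
        ;   /* buffer is empty: do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}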
Threads
A thread is a basic unit of CPU utilization; it is a flow of execution through the process
code. It comprises a thread ID, a program counter (which keeps track of the next instruction
to execute), a register set (which holds its current working variables), and a stack
(execution history).
A thread shares with its peer threads information such as the code segment, the data segment,
and open files. When one thread alters a shared memory item, all other threads see that change.
A thread is also called a lightweight process. Threads provide a way to improve application
performance through parallelism. Each thread belongs to exactly one process, and no thread can
exist outside a process. Each thread represents a separate flow of control. A traditional (or
heavyweight) process has a single thread of control; nowadays a process may have many threads
to perform more than one task at a time. Figure 2.11 illustrates the difference between a
traditional single-threaded process and a multithreaded process.
Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level thread facility;
Solaris is a good example of this combined approach. In a combined system, multiple threads
within the same application can run in parallel on multiple processors, and a blocking system
call need not block the entire process. There are three multithreading models:
• Many-to-many model
• Many-to-one model
• One-to-one model
Many to Many Model
The many-to-many model multiplexes any number of user threads onto an equal or smaller
number of kernel threads, as shown in Figures 2.14.a and 2.14.b. Figure 2.14.a shows the
many-to-many threading model where six user-level threads are multiplexed onto six kernel-level
threads. In this model, developers can create as many user threads as necessary, and the
corresponding kernel threads can run in parallel on a multiprocessor machine. This model
provides the best level of concurrency, and when a thread performs a blocking system call,
the kernel can schedule another thread for execution.
Figure 2.14.a. Many to many model with 6 thread Figure 2.14.b Many-to-many model.
Example Pthreads program:

#include <stdio.h>
#include <pthread.h>

/* runThread is not shown in the notes; this version is assumed so that
   the program matches the output below */
void *runThread(void *arg) {
    printf("Running Thread\n");
    for (int i = 1; i <= 5; i++) printf("%d\n", i);
    return NULL;
}

int main() {
    pthread_t tid;
    printf("In main function\n");
    pthread_create(&tid, NULL, runThread, NULL);
    pthread_join(tid, NULL);   /* wait for the thread to finish */
    printf("Thread over\n");
    return 0;
}
Output:
In main function
Running Thread
1
2
3
4
5
Thread over
Implicit Threading
One way to address these difficulties and better support the design of multithreaded
applications is to transfer the creation and management of threads from application
developers to compilers and run-time libraries.
Implicit threading is mainly the use of libraries or other language support to hide the
creation and management of threads from application developers. The most common implicit
threading library, in the context of C, is OpenMP.
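As a minimal sketch of implicit threading with OpenMP in C (not from the notes), the #pragma omp parallel directive below asks the compiler and runtime, rather than the programmer, to create and manage a team of threads:

#include <stdio.h>
#include <omp.h>

int main() {
    /* compile with: gcc -fopenmp demo.c
       the runtime decides how many threads to create and destroys them */
    #pragma omp parallel
    printf("Hello from thread %d of %d\n",
           omp_get_thread_num(), omp_get_num_threads());
    return 0;
}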
Thread Pools
Whenever the server receives a request, it creates a separate thread to service the request.
Unlimited threads could exhaust system resources. One solution to this problem is to use a
thread pool.
The general idea behind a thread pool is to create a number of threads at process startup and
place them into a pool, where they sit and wait for work. When the server receives a request,
it awakens a thread from this pool and passes it the request for service. Once the thread
completes its service, it returns to the pool and awaits more work. If the pool contains no
available thread, the server waits until one becomes available.
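A minimal sketch of this idea with POSIX threads follows (not from the notes; POOL_SIZE, the fixed request queue, and the worker/submit helpers are illustrative assumptions):

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

#define POOL_SIZE  4
#define QUEUE_SIZE 16

/* a trivial work item: just a request number */
static int queue[QUEUE_SIZE];
static int head = 0, tail = 0, count = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

static void *worker(void *arg) {
    long id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)                 /* sleep until work arrives */
            pthread_cond_wait(&not_empty, &lock);
        int req = queue[head];             /* take one request */
        head = (head + 1) % QUEUE_SIZE;
        count--;
        pthread_mutex_unlock(&lock);
        printf("worker %ld servicing request %d\n", id, req);
    }
    return NULL;
}

static void submit(int req) {              /* called by the "server" */
    pthread_mutex_lock(&lock);
    queue[tail] = req;                     /* assumes the queue never overflows */
    tail = (tail + 1) % QUEUE_SIZE;
    count++;
    pthread_cond_signal(&not_empty);       /* awaken one pooled thread */
    pthread_mutex_unlock(&lock);
}

int main() {
    pthread_t tid[POOL_SIZE];
    for (long i = 0; i < POOL_SIZE; i++)   /* create the pool at startup */
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int r = 1; r <= 8; r++)
        submit(r);
    sleep(1);   /* crude: give the workers time to finish before exiting */
    return 0;
}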
Thread pool benefits
• Servicing a request with an existing thread is faster than waiting to create a thread.
• A thread pool limits the number of threads that exist at any one point.
• Separating the task to be performed from the mechanics of creating the task allows us
to use different strategies for running the task.
Threading Issues
We discuss some of the issues to consider in designing multithreaded programs:
• The fork() and exec() System Calls
• Thread Cancellation
• Signal Handling
• Thread Pool
• Thread Specific Data
Process Synchronization
Process synchronization is the task of coordinating the execution of processes in such a way
that no two processes can access the same shared data and resources at the same time, ensuring
orderly execution of the processes.
Concurrent access to shared data may result in data inconsistency; therefore we need to ensure
orderly execution of cooperating processes. There are various solutions for this, such as
semaphores, mutex locks, and synchronization hardware.
Race condition
A situation where several processes access and manipulate the same data concurrently, and the
outcome of the execution depends on the particular order in which the accesses take place, is
called a race condition. To prevent race conditions, concurrent processes must be synchronized.
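A minimal sketch (beyond the notes) that usually exhibits a race condition: two threads increment a shared counter without synchronization, and updates are lost because counter++ is not atomic.

#include <stdio.h>
#include <pthread.h>

long counter = 0;            /* shared data */

void *increment(void *arg) {
    for (int i = 0; i < 100000; i++)
        counter++;           /* read-modify-write: not atomic */
    return NULL;
}

int main() {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* often prints less than 200000 because the two
       threads' increments interleave (a race condition) */
    printf("counter = %ld\n", counter);
    return 0;
}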
Peterson's two-process solution: the two processes share the variables int turn and
boolean flag[2], and each process Pi (with j denoting the other process) has the
following structure:

do {
    flag[i] = true;          /* Pi is ready to enter its critical section */
    turn = j;                /* give the other process the turn */
    while (flag[j] && turn == j)
        ;                    /* busy wait */
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (true);
Synchronization Hardware
Understanding the two-process solution and the benefits of synchronization hardware:
The hardware-based solution to the critical-section problem is based on a simple tool, a lock.
The idea is that before entering its critical section a process must acquire the lock, and it
must release the lock when it exits its critical section. Using a lock also prevents race
conditions.
Let’s say process P0 wants to enter its critical section. It executes the code in Figure 2.20,
whose while loop invokes the TestAndSet() instruction. Using the TestAndSet() instruction, P0
sets the lock value to true, acquires the lock, and enters the critical section. Now, while P0
is in its critical section, process P1 also wants to enter its own critical section. It executes
the do-while loop and invokes the TestAndSet() instruction, only to see that the lock is already
set to true, which means some process is in the critical section; this makes P1 repeat the while
loop until P0 turns the lock back to false.
Once process P0 completes executing its critical section, it sets the lock variable to false.
P1 can then set the lock variable to true using TestAndSet() and enter its critical section.
This is how mutual exclusion is achieved with the do-while structure: it lets only one process
execute its critical section at a time.
The definition of the atomic TestAndSet() instruction is:

boolean TestAndSet(boolean *target) {
    boolean rv = *target;    /* save the old value */
    *target = true;          /* set the lock */
    return rv;               /* the whole body executes atomically */
}
Process P1:
do {
    while (TestAndSet(&lock))
        ;                    /* busy wait until the lock is free */
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);

Process P2:
do {
    while (TestAndSet(&lock))
        ;
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);

Figure 2.20. TestAndSet lock - solution for processes P1 and P2
Compare and Swap or Swap Hardware Instruction:
Like the TestAndSet() instruction, the Swap() hardware instruction is also atomic, with the
difference that it operates on the two variables provided as its parameters.
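A standard definition of the atomic Swap() instruction (following the usual textbook form) is:

void Swap(boolean *a, boolean *b) {
    boolean temp = *a;   /* exchange the two values; the body is atomic */
    *a = *b;
    *b = temp;
}

Mutual exclusion using Swap() is then achieved with the following structure: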
do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);   /* spin until lock was FALSE */
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);
The structure above operates on a global shared Boolean variable lock and a local Boolean
variable key, both initially set to false. A process P0 interested in executing its critical
section executes the code above, sets lock to true via Swap(), and enters its critical section,
thus preventing other processes from executing their critical sections and satisfying mutual
exclusion.
SPIN LOCK: While a process is in its critical section, any other process that tries to enter
its critical section must loop continuously in the entry code. This type of mutex lock is
also called a spinlock because the process “spins” while waiting for the lock to become
available.
Mutex Locks
The hardware-based solutions to the critical-section problem presented above (synchronization
hardware) are complicated as well as generally inaccessible to application programmers.
The simplest of these tools is the mutex lock, a software-based solution to the critical-section
problem (mutex is short for mutual exclusion).
We use the mutex lock to protect critical regions and thus prevent race conditions. That is, a
process must acquire the lock before entering a critical section; it releases the lock when it exits
the critical section. The acquire()function acquires the lock, and the release() function releases
the lock, as illustrated in Figure 2.21.
A mutex lock has a boolean variable available whose value indicates if the lock is available or
not. If the lock is available, a call to acquire() succeeds, and the lock is then considered
unavailable. A process that attempts to acquire an unavailable lock is blocked until the lock is
released.
The definition of acquire() is as follows:
acquire() {
    while (!available)
        ;   /* busy wait */
    available = false;
}
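The matching release(), completing the same sketch, simply marks the lock as available again:

release() {
    available = true;
}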
Semaphores
It is a synchronization tool that is used to generalize the solution to the critical section
problem in complex situations.
A semaphore s is an integer variable that can only be accessed via two indivisible
(atomic) operations, namely wait() and signal(). The definition of wait() is as follows:

wait(s) {
    while (s <= 0)
        ;   /* busy wait */
    s--;
}
The definition of signal() is as follows:
signal(s) {
    s++;
}
All modifications to the integer value of the semaphore in the wait() and signal() operations
must be executed indivisibly. That is, when one process modifies the semaphore value, no other
process can simultaneously modify that same semaphore value.
Semaphore Usage:
The two common kinds of semaphores are Counting semaphores and Binary semaphores.
Binary Semaphores: In Binary semaphores, the value of the semaphore variable will be 0 or
1. Initially, the value of semaphore variable is set to 1 and if some process wants to use some
resource then the wait() function is called and the value of the semaphore is changed to 0 from
1. The process then uses the resource and when it releases the resource then the signal()
function is called and the value of the semaphore variable is increased to 1. If at a particular
instant of time, the value of the semaphore variable is 0 and some other process wants to use
the same resource then it has to wait for the release of the resource by the previous process. In
this way, process synchronization can be achieved.
Counting Semaphores: In counting semaphores, the semaphore variable is first initialized
with the number of resources available. After that, whenever a process needs a resource,
the wait() function is called and the value of the semaphore variable is decreased by one.
The process then uses the resource, and after using it, the signal() function is called
and the value of the semaphore variable is increased by one. When the value of the
semaphore variable reaches 0, i.e., all the resources are taken and none is left to be
used, any other process that wants a resource has to wait for its turn. In this way, we
achieve process synchronization.
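As an illustration beyond the notes, POSIX exposes these operations as sem_wait() and sem_post(); a minimal sketch of a binary semaphore guarding a critical section:

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t s;

void *worker(void *arg) {
    sem_wait(&s);   /* wait(): decrement, blocking while the value is 0 */
    printf("thread %ld in its critical section\n", (long)arg);
    sem_post(&s);   /* signal(): increment, waking a blocked thread */
    return NULL;
}

int main() {
    pthread_t t[3];
    sem_init(&s, 0, 1);   /* initial value 1 makes it a binary semaphore */
    for (long i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&s);
    return 0;
}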
Implementation - using semaphores for mutual exclusion mechanisms
Semaphore Implementation
• When a process executes the wait operation and finds that the semaphore value
is not positive, the process can block itself. The block operation places the
process into a waiting queue associated with the semaphore.
• A process that is blocked waiting on a semaphore should be restarted when some
other process executes a signal operation. The blocked process is restarted by a
wakeup operation, which puts the process into the ready queue.
• To implement the semaphore, we define a semaphore as a record:

typedef struct {
    int value;
    struct process *L;   /* queue of processes waiting on the semaphore */
} semaphore;
Assume two simple operations:
− block suspends the process that invokes it.
− wakeup(P) resumes the execution of a blocked process P.
The semaphore operations are now defined as:
wait(S) {
    S.value--;
    if (S.value < 0) {
        add this process to S.L;
        block();
    }
}

signal(S) {
    S.value++;
    if (S.value <= 0) {
        remove a process P from S.L;
        wakeup(P);
    }
}
Dining-Philosophers Problem:
Consider five philosophers who spend their lives thinking and eating. The philosophers share
a circular table surrounded by five chairs, each belonging to one philosopher. In the center of
the table is a bowl of rice, and the table is laid with five single chopsticks. When a philosopher
thinks, she does not interact with her colleagues. From time to time, a philosopher gets hungry
and tries to pick up the two chopsticks that are closest to her (the chopsticks that are between
her and her left and right neighbors). A philosopher may pick up only one chopstick at a time.
Obviously, she cannot pick up a chopstick that is already in the hand of a neighbor. When a
hungry philosopher has both her chopsticks at the same time, she eats without releasing the
chopsticks. When she is finished eating, she puts down both chopsticks and starts thinking
again.
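One simple solution (the classic semaphore-based sketch) represents each chopstick with a semaphore initialized to 1; philosopher i then has the structure below. Note that this sketch satisfies mutual exclusion but can deadlock if all five philosophers pick up their left chopsticks at the same time:

semaphore chopstick[5];   /* each element initialized to 1 */

do {
    wait(chopstick[i]);               /* pick up left chopstick */
    wait(chopstick[(i + 1) % 5]);     /* pick up right chopstick */
    /* eat */
    signal(chopstick[i]);             /* put down left chopstick */
    signal(chopstick[(i + 1) % 5]);   /* put down right chopstick */
    /* think */
} while (true);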
Monitors
• A monitor is a fundamental high-level synchronization construct.
• A monitor is a synchronization construct that supports mutual exclusion and the ability
to wait/block until a certain condition becomes true.
• A monitor is an abstract data type that encapsulates data with a set of functions to operate
on that data.
Characteristics of Monitor
• The local variables of a monitor can be accessed only by the local functions.
A monitor may also declare condition variables, for example:
condition x, y;
The only operations that can be invoked on a condition variable are wait() and signal(),
e.g. x.wait() and x.signal().
The monitor-based dining-philosophers solution ensures that no two neighbors are eating
simultaneously and that no deadlocks will occur. However, with this solution it is possible
for a philosopher to starve to death.