UNIT-02 2015 Regulation Process Management and Threading


OPERATING SYSTEMS (2015R REGULATION)

UNIT – 2 PROCESS MANAGEMENT AND THREADING

Processes: Process concept – Process scheduling – Operation on Processes – Inter-process Communication: Shared Memory Systems – Message Passing Systems.

Process Scheduling: Basic Concepts – Scheduling Criteria – Scheduling Algorithms: First-Come, First-Served – Priority – Round-Robin – Multilevel Queue – Multilevel Feedback Queue.

Threads: Overview – Multithreading models – Threading issues.


Process:
 A program in execution is called a process.
Process in memory:
A process in memory is divided into 4 sections:

 Stack  Reserved for local variables

 Heap  Dynamic memory allocation (dynamic variables)

 Data  Stores global and static variables

 Text  Compiled code

Difference between stack and heap memory:

Both are special regions of computer memory.

Stack | Heap
Memory space is managed efficiently by the CPU | Efficient management of memory space is not guaranteed
Very fast to access | Slower to access
Limit on memory size | No limit on memory size
Variables cannot be resized | Variables can be resized

Note:
 The stack and heap start at opposite ends of the process's free space and grow towards each other. [If the stack and heap meet, a stack overflow error occurs.]



Process life cycle:

 During execution, a process changes state.
 The state of a process is defined in part by the current activity of that process.
1. NEW  The process is being created.

2. READY  The process has all the resources it needs to run, but the CPU is not currently working on this process's instructions.

3. RUNNING  The CPU is working on this process's instructions. [Process being executed]

4. WAITING  The process cannot run at the moment because it is waiting for resources to become available or for some event to occur. [Waiting for keyboard input, disk access, and so on]

5. TERMINATED  The process has completed. [Finished execution]



Process State:
[State transition diagram: new  ready  running  waiting/terminated]

A Process Control Block (PCB) is a data structure maintained by the operating system for every process.
The PCB is identified by an integer process ID (PID).
It contains many pieces of information associated with a process.

1. Pointers  Pointers to other process control blocks.

2. Process state  The current activity of the process: new, ready, running, etc.

3. Program counter  The address of the next instruction to be executed for the process.

4. Process number  A unique identification number for every process.

5. CPU registers  Registers vary in number and type depending on the architecture; they include accumulators, index registers, stack registers, etc.

6. Memory allocation  When a process is created, memory is allocated to it; when the process terminates, the allocated memory is reclaimed.

7. Event information  For a process in the waiting state, this field contains information concerning the event for which the process is waiting.

8. Accounting information  Includes the amount of CPU time used, time limits, process number, etc.

9. I/O status information  Includes the list of I/O devices allocated to the process.

Process Scheduling:
 The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.
 The objective of time sharing is to switch the CPU among processes. To meet these objectives, the process scheduler selects an available process for execution on the CPU.
Process Scheduling Queues:
1. Job queue
 All processes in the system.
2. Ready queue
 The processes that are residing in main memory with initial resources and are ready and waiting for execution. (This queue is generally stored as a linked list.)
3. Device queue
 The list of processes waiting for a particular I/O device, event, or resource.

Schedulers:
 Schedulers are special system software which handle process scheduling in various ways.

Schedulers are of three types:

1. Long term scheduler (or) Job scheduler
 The primary objective of the job scheduler is to provide a balanced mix of processes, such as I/O-bound and CPU-bound.
 It also controls the degree of multiprogramming.

2. Short term scheduler (or) CPU scheduler
 The CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it. (Faster than the long term scheduler.)

3. Medium term scheduler (or) Intermediate level scheduler
 Intermediate level scheduling is part of the swapping function.
 It removes a process from memory [swap out] and later reintroduces it into the ready queue for execution [swap in].

Context switch: [Switching the CPU from one process to another]
 A context switch is the mechanism to store and restore the state or context of the CPU in the Process Control Block, so that a process's execution can be resumed from the same point at a later time.

Context Switch:

 Context switches are computationally intensive, since register and memory state must be saved and restored. To reduce context-switching time, some hardware systems employ two or more sets of processor registers.

When a process is switched, the following information is stored for later use:
 Program counter
 Scheduling information
 Base and limit register values
 Currently used registers
 Changed state
 I/O state information
 Accounting information

Comparison between schedulers:



Operation on Processes:
 The processes in the system can execute concurrently, and they must be created and deleted dynamically.
 The operating system must provide a mechanism for process creation and termination.
Two main operations on processes:
1. Process creation
2. Process termination

The fork() system call is used to create a process; it does not take any arguments.
 The process that invokes fork() is known as the parent process, and the new process is called the child process.
[The parent process is the creating process. The child process is created by the parent. A child process may create further sub-processes, so processes form a tree.]

fork() return values:
 Failure  negative value
 Success  >0 in the parent process (the child's PID)
 Success  0 in the child process

Example One:

#include <stdio.h>
#include <unistd.h>
int main()
{
    // fork() makes two processes that run the same program after this instruction
    fork();
    printf("Hello world!\n");
    return 0;
}

Output:
Hello world!
Hello world!

Example two:
Calculate the number of times "hello" is printed.

Program:
#include <stdio.h>
#include <unistd.h>
int main()
{
    fork(); // Line 1
    fork(); // Line 2
    fork(); // Line 3
    printf("hello\n");
    return 0;
}

Output:
"hello" is printed 8 times: each fork() doubles the number of processes, so 2^3 = 8 processes each execute the printf() once.

Example three:

Program:
#include <stdio.h>
#include <unistd.h>
void forkexample()
{
    // child process, because the return value is zero
    if (fork() == 0)
        printf("Hello from Child!\n");
    // parent process, because the return value is non-zero
    else
        printf("Hello from Parent!\n");
}
int main()
{
    forkexample();
    return 0;
}

Output:
Hello from Child!
Hello from Parent!
(or)
Hello from Parent!
Hello from Child!

A process can be terminated in two ways:

1. Normal termination [Exit]
2. Abnormal termination [Abort]
Normal termination [Exit]:
 The child process returns data to its parent. [The parent waits for the child to complete.]
 The process's resources and memory are reclaimed by the operating system.
Abnormal termination [Abort]: (The parent may terminate the execution of a child process.)
 The child has exceeded its allocated resources.
 The task assigned to the child is no longer required.

In some situations:
 Some operating systems do not allow a child to continue if its parent terminates.
 If the parent decides to terminate, it asks its children to terminate [all child processes are terminated]; this is also called cascading termination.

Inter-Process Communication:

Cooperating Processes:
 Processes can be either independent processes or cooperating processes.
1. An independent process cannot affect or be affected by the execution of another process, and it does not share any data with other processes.

2. A cooperating process can affect or be affected by the execution of another process, and it shares data with other processes.

Advantages of process cooperation:

1. Information sharing
2. Computation speed-up
3. Modularity
4. Convenience
Ex: Producer consumer problem
 Inter-process communication provides a mechanism to allow processes to communicate and to synchronize their actions.
 IPC is particularly useful in a distributed environment.

IPC can be implemented in two ways:

1. Shared memory
2. Message passing

1. Shared memory

 Processes exchange information by reading and writing data in the shared region [address space].
 The form and location of the data are determined by these processes, which are also responsible for ensuring they do not write to the same location simultaneously.

2. Message passing

 Allows processes to communicate and to synchronize their actions without sharing the same address space.
 Message passing is useful for exchanging smaller amounts of data and is easier to implement than shared memory.
 A message-passing system is typically implemented using system calls and is thus more time consuming, because of OS [kernel] intervention.
A message-passing facility provides two operations:
 Send (destination address, message)
 Receive (source address, message)
Messages sent by a process can be either fixed or variable in size.

There are several methods for logically implementing a link:

1. Direct communication and indirect communication

2. Synchronous and asynchronous communication

3. Buffering

Direct communication: the link is associated with exactly two processes.

Indirect communication: messages are not sent directly from sender to receiver but to a shared data structure consisting of queues that can temporarily hold messages.
 When two processes communicate, one process sends a message to the appropriate mailbox (queue) and the other process picks up the message from the mailbox. Each mailbox has a unique identification.

Synchronous and asynchronous communication:

 Synchronous and asynchronous are also known as blocking and non-blocking.

Three combinations are possible using blocking and non-blocking:

1. Blocking send and blocking receive
 Both sender and receiver are blocked until the message is delivered; this is also known as a RENDEZVOUS.

2. Non-blocking send and blocking receive
 The sender continues; the receiver is blocked until the requested message arrives.
Ex: A server process that exists to provide services to other processes.

3. Non-blocking send and non-blocking receive
 The sending process sends the message and resumes operation. The receiver retrieves either a valid message or null.
Ex: Suitable for concurrent programming tasks.

Buffering:
 Buffering is used in both direct and indirect communication.
 Messages exchanged by communicating processes reside in a temporary queue.

Buffering is implemented in three ways:

1. Zero capacity
 The maximum length of the queue is zero.
Ex: Direct communication
2. Bounded capacity
 The queue has a finite length. If the queue is full, the sender is blocked until space is available in the queue.
3. Unbounded capacity
 The queue has infinite length; any number of messages can wait in it. The sender never blocks.

Threads: Overview
 Process  an executing program with a single thread of control.
 Thread  a flow of execution through process code. A thread has its own registers and stack.
 A thread is also known as a lightweight process. The idea is to achieve parallelism by dividing a process into multiple threads.
 Threads run within an application. Modern operating systems provide processes that contain multiple threads of control.
 The goals are to improve application performance through parallelism and to reduce the overhead of process switching.

Ex: Word processor

1. Thread one: checks spelling and grammar
2. Thread two: processes user input
3. Thread three: makes periodic automatic backups of the file being edited

Ex: In a browser, multiple tabs can be different threads.

Difference between Process and Threads:

Process | Threads
A process is heavy weight | A thread is light weight, taking fewer resources than a process
Process switching needs interaction with the operating system | Thread switching does not need to interact with the operating system
If one process is blocked, then no other process can execute until the first process is unblocked | While one thread is blocked and waiting, a second thread in the same task can run

Benefits of multithreading:
1. Responsiveness:
 One thread can provide a rapid response while another thread performs an intensive calculation.
2. Resource sharing:
 By default, threads share code, data, and other resources, which allows multiple tasks to be performed simultaneously in a single address space.
3. Economy:
 Creating and managing threads is much faster than performing the same tasks for processes. [Allocating memory and resources for process creation is costly.]
4. Scalability:
 Utilization of multiprocessor architectures.

Threads are implemented in 2 ways:

1. User level threads  user-managed threads.
 All the work of thread management is done by the application; the kernel of the operating system is not aware of the threads.
Example: Java threads, POSIX threads.

Thread library:
 Code for creating and destroying threads.
 Passing messages and data between threads.
 Scheduling thread execution and saving and restoring thread contexts.
Advantages:

 Thread switching does not require kernel-mode privileges.

 User level threads can run on any operating system.

 User level threads are faster to create and manage.



2. Kernel level threads  threads managed by the operating system, acting on the kernel, the operating system core.
 No thread-management code is needed in the application, because threads are directly supported by the kernel of the operating system.
 Kernel threads are generally slower to create and manage.

Example: Windows, Solaris

Advantages:

 The kernel can simultaneously schedule multiple threads from the same process (or) from multiple processes.

 If one thread in a process is blocked, the kernel can schedule another thread of the same process.

Multithreading models:

 Many operating systems provide support for both user and kernel threads.
Multithreading models are classified into 3 types:

1. Many-to-many model (many user threads to many kernel threads)

 The many-to-many model multiplexes many user-level threads to a smaller or equal number of kernel threads.

 The number of kernel threads may be specific to either a particular application or a particular machine.

2. Many-to-one model (many user threads to one kernel thread)

 The many-to-one model maps many user-level threads to one kernel thread.

 Thread management is done by the thread library in user space, so it is efficient.

 However, the entire process will block if a thread makes a blocking system call. [Also, because only one thread
can access the kernel at a time.]

3. One-to-one model (one user thread to one kernel thread)

 The one-to-one model maps each user thread to a kernel thread.

 It provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call.

 It also allows multiple threads to run in parallel on multiprocessors.

 The only drawback to this model is that creating a user thread requires creating the corresponding kernel thread.

 Because the overhead of creating kernel threads can burden the performance of an application, most implementations of this model restrict the number of threads supported by the system.

Difference between User-level threads and Kernel-level threads:



Threading Issues:
There are a variety of issues to consider with multithreaded programming.
1. Semantics of fork() and exec() system calls
 Does fork() duplicate only the calling thread or all threads?
 The fork() system call creates a new process; the new process is a copy of the current process except for the return value. The exec() system call replaces the current process image with a new program.
2. Thread cancellation
 Asynchronous cancellation terminates the target thread immediately.
 Deferred cancellation allows the target thread to periodically check whether it should be cancelled.
3. Signal handling
 Notifies a process that a particular event has occurred.
4. Thread pooling
 Create a pool of threads, and then assign tasks to them.
5. Thread-specific data
 Allows each thread to have its own copy of data.

CPU Scheduling:
The objective of multiprogramming is to have some process running at all times, to maximize CPU
utilization.
The success of CPU scheduling depends on an observed property of processes:
Process execution consists of a cycle of CPU execution and I/O wait. Processes alternate between
these two states.

CPU Scheduler:
The selection process is carried out by the short-term scheduler, or CPU scheduler.
 The scheduler selects a process from the processes in memory that are ready to execute and allocates the
CPU to that process.
Dispatcher:
 The dispatcher is the module that gives control of the CPU to the process selected by the short-term
scheduler.
The dispatcher should be as fast as possible, since it is invoked during every process switch.
 The time it takes for the dispatcher to stop one process and start another running is known as the
dispatch latency.
CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, as the result of an I/O
request or an invocation of wait() for the termination of a child process)
2. When a process switches from the running state to the ready state (for example, when an interrupt
occurs)
3. When a process switches from the waiting state to the ready state (for example, at completion of I/O)
4. When a process terminates

Scheduling Criteria:
 Different CPU-scheduling algorithms have different properties, and the choice of a particular algorithm
may favor one class of processes over another.
 Many criteria have been suggested for comparing CPU-scheduling algorithms.
 Which characteristics are used for comparison can make a substantial difference in which algorithm is
judged to be best.

The criteria include the following:

1. CPU utilization. We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from
0 to 100 percent.

2. Throughput. If the CPU is busy executing processes, then work is being done. One measure of work is the
number of processes that are completed per time unit, called throughput.

3. Turnaround time. The important criterion is how long it takes to execute that process. The interval from
the time of submission of a process to the time of completion is the turnaround time.

4. Waiting time. Waiting time is the sum of the periods spent waiting in the ready queue.

5. Response time. The time from the submission of a request until the first response is produced. This measure,
called response time, is the time it takes to start responding, not the time it takes to output the response.

Scheduling Algorithms: *****[ Refer Solution Book]*****


Preemptive Scheduling:
 Running process may be replaced by a higher priority process any time.

Non Preemptive Scheduling:


 Once CPU is allocated to a process, the process keeps the CPU until process termination.

1. FCFS [First Come First Serve]


- Non Preemptive Scheduling
2. SJF [Shortest Job First]
- Non Preemptive Scheduling
- Preemptive Scheduling [SRTF – Shortest Remaining Time First]
3. Priority
- Non Preemptive Scheduling
- Preemptive Scheduling
4. Round Robin
- Preemptive Scheduling
5. Multilevel Queue
6. Multilevel Feedback Queue

Multilevel Queue:
 The multilevel queue scheduling algorithm is used in scenarios where processes can be classified into groups based on properties like process type, CPU time, I/O access, memory size, etc.
 A general classification of processes is foreground processes and background processes.
 In a multilevel queue scheduling algorithm, there are 'n' queues, where 'n' is the number of groups the processes are classified into.
 Each queue is assigned a priority and has its own scheduling algorithm, such as round-robin or FCFS. For a process in a queue to execute, all queues of higher priority must be empty, meaning the processes in those higher-priority queues must have completed execution.
 In this scheduling algorithm, once a process is assigned to a queue, it does not move to any other queue.

Multilevel Feedback Queue:

 This scheduling is like multilevel queue (MLQ) scheduling, but here processes can move between the queues.
 Multilevel feedback queue scheduling (MLFQ) keeps analyzing the behavior (execution time) of processes and changes their priority accordingly.
 If a process uses too much CPU time, it is moved to a lower-priority queue.
 This scheme leaves I/O-bound and interactive processes in the higher-priority queues. In addition, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue.
 This form of aging prevents starvation.

[Diagram: shorter processes terminate from the highest-priority queue, intermediate processes from the middle queue, and all remaining processes from the lowest-priority queue.]
