UNIT-02 2015 Regulation: Process Management and Threading
OPERATING SYSTEMS (2015R REGULATION)
Process:
A program in execution is called a process.
Process in memory:
A process in memory is divided into 4 sections:
1. Text – the compiled program code.
2. Data – global and static variables.
3. Heap – dynamic memory allocation (dynamic variables).
4. Stack – local variables, function parameters, and return addresses.
Stack vs Heap:
Both are special regions of computer memory.

Stack                                        Heap
Memory space is managed                      Efficient management is not
efficiently by the CPU.                      guaranteed.
Very fast to access.                         Slower to access.
Limit on memory size.                        No limit on memory size.
Variables cannot be resized.                 Variables can be resized.
Note:
The stack and heap start at opposite ends of the process's free space and grow towards each other. [If they meet, the process has run out of free memory.]
Process State:
A process passes through several states during its lifetime:
1. NEW The process is being created.
2. READY The process has all the resources available that it needs to run, but the CPU is not
currently working on this process's instructions.
3. RUNNING The CPU is executing this process's instructions.
4. WAITING The process cannot run at the moment, because it is waiting for resources to
become available or for some event to occur.
[Waiting for keyboard input, disk access, and so on.]
5. TERMINATED The process has finished execution.
A Process Control Block (PCB) is a data structure maintained by the operating system for every process.
The PCB is identified by an integer process ID (PID).
It contains many pieces of information associated with the process.
3. Program counter Indicates the address of the next instruction to be executed
for the process.
4. Process number A unique identification number for every process.
5. CPU registers Registers vary in number and type depending on the architecture; they include
accumulators, index registers, stack pointers, etc.
6. Memory allocation Memory is allocated to the process at creation; when the process
terminates, the allocated memory is reclaimed.
7. Event information For a process in the waiting state, this field contains information concerning the
event for which the process is waiting.
1. Accounting information:
Includes the amount of CPU time used, time limits, process number, etc.
2. I/O status information:
Includes the list of I/O devices allocated to the process.
Process Scheduling:
The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.
The objective of time sharing is to switch the CPU among processes. To meet these objectives, the process scheduler selects
an available process for program execution on the CPU.
Process Scheduling Queues:
1. Job queue
All processes in the system.
2. Ready queue
The processes that are residing in main memory with their initial resources, ready and waiting for
execution. (This queue is generally stored as a linked list.)
3. Device queue
The list of processes waiting for a particular I/O device, event, or resource is kept in a device queue.
Schedulers:
Schedulers are special system software which handle process scheduling in various ways.
Context Switch:
Context switches are computationally intensive, since register and memory state must be saved and
restored. To reduce context-switching time, some hardware systems employ two or
more sets of processor registers.
When the process is switched, the following information is stored for later use.
Program Counter
Scheduling information
Changed State
Accounting information
Operation on Processes:
The processes in the system can execute concurrently, and they must be created and deleted dynamically.
The operating system must provide a mechanism for process creation and termination.
Two main operations on processes:
1. Process creation
2. Process termination
The fork() system call is used to create a process; it does not take any arguments.
The process that invokes fork() is known as the parent process, and the new process is called the
child process.
[The parent process is the creating process. The child process is created by the parent process. A child process may create
another sub-process, so processes form a tree.]
Example One:
#include<stdio.h>
#include<unistd.h>
int main()
{
// create two processes that both run the code after this instruction
fork();
printf("Hello world!\n");
return 0;
}
Output:
Hello world!
Hello world!
Example two:
Calculate the number of times "hello" is printed.

#include<stdio.h>
#include<unistd.h>
int main()
{
    fork(); // Line 1
    fork(); // Line 2
    fork(); // Line 3
    printf("hello\n");
    return 0;
}

Output:
"hello" is printed 8 times. Each fork() doubles the number of processes, so three calls give 2^3 = 8 processes, and each prints once.
Interprocess Communication:
1. An independent process cannot affect or be affected by the execution of another process and does not share data.
2. A cooperating process can affect or be affected by the execution of another process and shares
data with other processes.
1. Shared memory
Processes exchange information by reading and writing data in the shared region. [Address space]
The form and location of the data are determined by these processes, which are also responsible for ensuring they do not
write data to the same location simultaneously.
2. Message passing
Message passing allows processes to communicate and to synchronize their actions without sharing the same address space.
It is useful for exchanging smaller amounts of data and is easier to implement than shared
memory.
A message-passing system is typically implemented with system calls and is thus more time consuming
because of OS [kernel] intervention.
The message-passing facility provides two operations:
Send (destination address, message)
Receive (source address, message)
Messages sent by a process can be either fixed or variable in size.
Indirect communication: messages are not sent directly from sender to receiver but to a shared data structure
consisting of queues (mailboxes) that can temporarily hold messages.
When two processes communicate, one process sends a message to the appropriate mailbox (queue)
and the other process picks up the message from the mailbox. Each mailbox has a unique identification.
Buffering:
Buffering is used in both direct and indirect communication.
Messages exchanged by communicating processes reside in a temporary queue.
Threads: Overview
Process – an executing program with a single thread of control.
Thread – a flow of execution through process code; it has its own registers and stack.
A thread is also known as a lightweight process. The idea is to achieve parallelism by dividing a process into
multiple threads.
Threads run within an application. Modern operating systems provide processes that contain multiple threads of
control.
Multithreading improves application performance through parallelism and reduces the overhead of process switching.
Process                                      Threads
If one process is blocked, then no           While one thread is blocked and
other process can execute until the          waiting, a second thread in the
first process is unblocked.                  same task can run.
Benefits of multithreading:
1. Responsiveness:
One thread can provide a rapid response while another thread performs an intensive calculation.
2. Resource sharing:
By default, threads share code, data, and other resources, which allows multiple tasks to run simultaneously in
a single address space.
3. Economy:
Creating and managing threads is much faster than performing the same tasks for processes.
[Allocating memory and resources for process creation is costly.]
4. Scalability:
Better utilization of multiprocessor architectures.
Thread library:
A thread library provides:
Code for creating and destroying threads.
Passing messages and data between threads.
Scheduling thread execution and saving and restoring thread contexts.
1. User level thread - threads managed in user space by a thread library, without kernel support; the kernel is not aware of these threads.
Advantages:
Thread switching does not require kernel-mode privileges.
User-level threads are fast to create and manage.
2. Kernel level thread - threads managed by the operating system, acting in the kernel, the operating system's core.
There is no thread-management code in the application, because threads are directly supported by the kernel of the operating system.
Kernel threads are generally slower to create and manage.
Advantages:
The kernel can simultaneously schedule multiple threads from the same process (or from multiple processes).
If one thread in a process is blocked, the kernel can schedule another thread of the same process.
Multithreading model:
Many operating systems provide support for both user and kernel threads.
Multithreading models are classified into 3 types: many-to-many, many-to-one, and one-to-one.
The many-to-many model multiplexes many user-level threads to a smaller or equal number of kernel threads.
The number of kernel threads may be specific to either a particular application or a particular machine.
The many-to-one model maps many user-level threads to one kernel thread.
However, the entire process will block if a thread makes a blocking system call. [Also, because only one thread
can access the kernel at a time, multiple threads cannot run in parallel on multiprocessors.]
The one-to-one model maps each user thread to a kernel thread. It provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call.
The only drawback to this model is that creating a user thread requires creating the corresponding kernel thread.
Because the overhead of creating kernel threads can burden the performance of an application, most implementations of this model restrict the number of threads supported by the system.
Threading Issues:
There are a variety of issues to consider with multithreaded programming
1. Semantics of fork() and exec() system calls
Does fork() duplicate only the calling thread or all threads?
The fork() system call creates a new process. The new process is a copy of the
current process, except for the returned value. The exec() system call replaces the current process with a new
program.
2. Thread cancellation
Asynchronous cancellation terminates the target thread immediately
Deferred cancellation allows the target thread to periodically check if it should be cancelled
3. Signal handling
Notify a process that a particular event has occurred
4. Thread pooling
Create a pool of threads, and then assign tasks to them
5. Thread-specific data
Allows each thread to have its own copy of data
CPU Scheduling:
The objective of multiprogramming is to have some process running at all times, to maximize CPU
utilization.
The success of CPU scheduling depends on an observed property of processes:
Process execution consists of a cycle of CPU execution and I/O wait. Processes alternate between
these two states.
CPU Scheduler:
The selection process is carried out by the short-term scheduler, or CPU scheduler.
The scheduler selects a process from the processes in memory that are ready to execute and allocates the
CPU to that process.
Dispatcher:
The dispatcher is the module that gives control of the CPU to the process selected by the short-term
scheduler.
The dispatcher should be as fast as possible, since it is invoked during every process switch.
The time it takes for the dispatcher to stop one process and start another running is known as the
dispatch latency.
CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, as the result of an I/O
request or an invocation of wait() for the termination of a child process)
2. When a process switches from the running state to the ready state (for example, when an interrupt
occurs)
3. When a process switches from the waiting state to the ready state (for example, at completion of I/O)
4. When a process terminates
Scheduling Criteria:
Different CPU-scheduling algorithms have different properties, and the choice of a particular algorithm
may favor one class of processes over another.
Many criteria have been suggested for comparing CPU-scheduling algorithms.
Which characteristics are used for comparison can make a substantial difference in which algorithm is
judged to be best.
1. CPU utilization. We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from
0 to 100 percent.
2. Throughput. If the CPU is busy executing processes, then work is being done. One measure of work is the
number of processes that are completed per time unit, called throughput.
3. Turnaround time. The important criterion is how long it takes to execute that process. The interval from
the time of submission of a process to the time of completion is the turnaround time.
4. Waiting time. Waiting time is the sum of the periods spent waiting in the ready queue.
5. Response time. The time from the submission of a request until the first response is produced. This measure,
called response time, is the time it takes to start responding, not the time it takes to output the response.
Multilevel Queue:
The multilevel queue scheduling algorithm is used in scenarios where the processes can be classified into groups
based on properties like process type, CPU time, I/O access, memory size, etc.
A general classification of processes is foreground processes and background processes.
In a multilevel queue scheduling algorithm, there will be 'n' queues, where 'n' is the number of
groups the processes are classified into.
Each queue is assigned a priority and has its own scheduling algorithm, such as
round-robin or FCFS. For a process in a queue to execute, all queues of higher priority
must be empty, meaning the processes in those higher-priority queues must have completed their execution.
In this scheduling algorithm, once assigned to a queue, a process will not move to any other queue.