OS Unit 2 Notes
When a program is loaded into memory and becomes a process, its address space can be divided into
four sections:
Stack
The process stack contains temporary data such as method/function parameters, return
addresses and local variables.
Heap
Dynamically allocated memory, assigned to the process during its run time.
Data
This section contains the global and static variables.
Text
This section contains the program code, which may be shared with other processes that are
executing the same program.
When the processor begins to execute the program code, we refer to this executing entity as a
process. The current activity of a process is represented by the value of the Program Counter and
the contents of the processor's registers.
Attributes of a Process
The attributes of a process are used by the Operating System to create the Process Control Block
(PCB). The PCB is a data structure maintained by the Operating System for every process.
The architecture of a PCB is completely dependent on the Operating System and may contain
different information in different operating systems.
The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
Process State
The current state of the process, i.e., whether it is new, ready, running, waiting, or terminated.
Process Privileges
The privileges that decide which system resources the process is allowed to access.
Process ID
Unique identification for each process in the operating system.
Pointer
A pointer to the parent process's PCB.
Program Counter
The Program Counter is a pointer to the address of the next instruction to be executed for this
process.
CPU Registers
The various CPU registers whose contents must be saved for the process to resume execution in
the running state.
CPU Scheduling Information
Process priority and other scheduling information which is required to schedule the process.
Memory Management Information
This includes the page table, memory limits and segment table, depending on the memory
scheme used by the operating system.
Accounting Information
This includes the amount of CPU time used for process execution, time limits, execution ID etc.
IO Status Information
This includes the list of I/O devices allocated to the process and the list of open files.
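The fields above can be collected into a minimal PCB sketch. This is an illustrative Python model only; the field names and types are assumptions, not any real kernel's layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    # Illustrative fields only; a real kernel's PCB differs per OS.
    pid: int                                         # process ID
    state: str = "new"                               # new, ready, running, waiting, terminated
    program_counter: int = 0                         # address of next instruction
    registers: dict = field(default_factory=dict)    # saved CPU register contents
    priority: int = 0                                # CPU scheduling information
    memory_limits: tuple = (0, 0)                    # base and limit of the address space
    cpu_time_used: float = 0.0                       # accounting information
    open_files: list = field(default_factory=list)   # I/O status information

pcb = PCB(pid=42)
pcb.state = "ready"
print(pcb.pid, pcb.state)  # 42 ready
```

The OS keeps one such record per process and updates it on every state change and context switch.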
Process Life Cycle
The sequence of state changes a process goes through during its execution is called the process
life cycle. It can be represented by a state diagram with states for the execution status of the
process at various times, and transitions between them.
The current state of the process tells us about the current activity of the process.
The state of a process may change due to events like I/O requests, interrupt routines, synchronization
of processes, process scheduling algorithms, etc.
Two State Process Model
When a process is first created, the OS initializes the process control block for the process,
and the new process enters the system in the Not-running state.
After some time, the currently running process will be interrupted by some event, and the OS will
move it from the Running state to the Not-running state.
The dispatcher then selects one process from the Not-running processes and moves it to the
Running state for execution.
Dispatcher: The dispatcher is a program that gives control of the CPU to the process selected by the
CPU scheduler.
Three State Process Model
There is one major drawback of the two state process model.
When the dispatcher brings a process from the Not-running state to the Running state, the process
might still be waiting for some event or I/O request.
So, the dispatcher must traverse the queue and find a not-running process that is actually ready
for execution.
To overcome this problem, we split the not-running state into two states: Ready State and Waiting
(Blocked) State.
Ready State: The process in the main memory that is ready for execution.
Waiting or Blocked State: The process in the main memory that is waiting for some event.
The OS maintains a separate queue for both Ready State and Waiting State. A process moves
from Waiting State to Ready State once the event it’s been waiting for completes.
Five-State Process Model
States
New: The process is just being created. The Process Control Block has already been made, but
the program is not yet loaded into main memory.
Ready: The process is in main memory, waiting to be assigned to the CPU.
Running: The process is currently being executed by the CPU.
Waiting/Blocked: The process is waiting for some event, such as the completion of an I/O
operation or the availability of a resource.
Terminated: The process has finished execution.
State Transitions
New -> Ready: The long term scheduler picks up a new process from secondary memory and
loads it into the main memory when there are sufficient resources available. The process is now in
the ready state.
Ready -> Running: The short term scheduler, or the dispatcher, moves a process from the ready
state to the running state for execution.
Running -> Terminated: The OS moves a process from the running state to the terminated state if
the process finishes execution or is aborted.
Running -> Waiting: A process is put in the waiting state if it must wait for some event. For
example, the process may request some resource or memory which might not be available.
Waiting -> Ready: A process moves from the waiting state to the ready state if the event the
process has been waiting for occurs. The process is now ready for execution.
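The five states and their legal transitions can be sketched as a small transition table. This is a hypothetical sketch; the event names are illustrative, and real kernels encode these rules in scheduler code rather than a lookup table:

```python
# Allowed transitions of the five-state model, keyed by (state, event).
TRANSITIONS = {
    ("new", "admit"): "ready",          # long-term scheduler admits the process
    ("ready", "dispatch"): "running",   # short-term scheduler picks it
    ("running", "exit"): "terminated",  # process finishes or is aborted
    ("running", "io_wait"): "waiting",  # process requests I/O or a resource
    ("running", "timeout"): "ready",    # preempted by the timer
    ("waiting", "io_done"): "ready",    # awaited event occurs
}

def step(state, event):
    # Return the next state, or raise if the transition is illegal.
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state} on {event}")

s = "new"
for ev in ("admit", "dispatch", "io_wait", "io_done", "dispatch", "exit"):
    s = step(s, ev)
print(s)  # terminated
```

Walking the table like this makes it easy to check that, for example, a waiting process can never move directly to running.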
Six State Process Model
The six state process model adds a Suspend state to the five state model.
Because the processor executes much faster than I/O devices, a situation may occur where all of
the processes in main memory move to the waiting state and no process is in the ready state.
The CPU then sits idle until at least one process finishes its I/O operation. This leads to low CPU
utilization.
To prevent this, if all the processes in main memory are in the waiting state, the OS suspends a
process in the waiting/blocked state and moves it to secondary memory.
All suspended processes are kept in a queue. The memory of the swapped-out process is freed, so
the OS can now bring some other process into main memory.
There are two options: bring in a brand new process, or bring a
process from the suspend queue back into main memory.
State Transitions
Waiting -> Suspend: The OS moves a process from the waiting state to the suspend state if all the
processes in main memory are waiting and memory must be freed.
Suspend -> Ready: When sufficient memory is available, the OS moves a process from the
suspend state back into main memory in the ready state.
Drawback: the process brought back by the OS from secondary memory to main memory might
still be waiting for its event, because the CPU doesn't know which process in the suspend queue is
ready for execution. The OS may thus swap a process that is still waiting for event completion from
secondary memory back into main memory. There is no point in moving a blocked process back to
main memory, so performance suffers.
To overcome this, the suspend state is split into two states:
Ready/Suspend: The process is in secondary memory but will be ready for execution as soon as it
is loaded into main memory.
Blocked/Suspend: The process is in secondary memory and still waiting for some event, so it is
not yet ready for execution.
Waiting -> Blocked/Suspend: If all the processes in main memory are in the waiting state, the
OS swaps out at least one waiting process to secondary memory to free memory for another
process.
Blocked/Suspend -> Waiting: This transition might look unreasonable, but if the priority of a
process in the Blocked/Suspend state is greater than that of the processes in the Ready/Suspend
state, the OS may bring it back into main memory, where it continues to wait for its event.
Blocked/Suspend -> Ready/Suspend: A process moves from the Blocked/Suspend state to the
Ready/Suspend state if the event the process has been waiting for occurs.
Ready/Suspend -> Ready: The OS moves a process from secondary memory to the main
memory when there is sufficient space available. Also, if there is a high priority process in
Ready/Suspend state, then OS may swap it with a lower priority process in the main memory.
New -> Ready/Suspend: The OS may move a new process to the Ready/Suspend state if the main
memory is full.
Schedulers
Schedulers are special system software which handle process scheduling in various ways. Their
main task is to decide which process to run. Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long-Term Scheduler
The long-term scheduler (or job scheduler) determines which programs are admitted to the system
for processing. It selects processes from the job queue and loads them into memory for execution.
Short-Term Scheduler
The short-term scheduler (or CPU scheduler) selects a process from among the processes that are
ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process to
execute next. Short-term schedulers run far more frequently than long-term schedulers, and are
therefore faster.
Medium-Term Scheduler
The medium-term scheduler is in charge of swapping: it moves processes from main memory to
secondary memory (suspending them) and later brings them back, reducing the degree of
multiprogramming.
Context Switch
A context switch is the mechanism to store and restore the state of a CPU in Process Control block
so that a process execution can be resumed from the same point at a later time.
Using this technique, a context switcher enables multiple processes to share a single CPU.
When the scheduler switches the CPU from executing one process to executing another, the state
of the currently running process is stored into its process control block, and the saved state of the
next process is loaded from its PCB.
When a process is switched out, the following information is stored for later use:
Program Counter
Scheduling information
Changed process state
Accounting information
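The save/restore cycle can be sketched as a toy Python model. The PCB fields and the CPU dictionary are hypothetical; a real context switch is performed by kernel code operating on hardware registers:

```python
def context_switch(old_pcb, new_pcb, cpu):
    # Save the state of the running process into its PCB.
    old_pcb["program_counter"] = cpu["pc"]
    old_pcb["registers"] = dict(cpu["regs"])
    old_pcb["state"] = "ready"
    # Restore the state of the next process from its PCB.
    cpu["pc"] = new_pcb["program_counter"]
    cpu["regs"] = dict(new_pcb["registers"])
    new_pcb["state"] = "running"

cpu = {"pc": 104, "regs": {"r0": 7}}
p1 = {"program_counter": 0, "registers": {}, "state": "running"}
p2 = {"program_counter": 200, "registers": {"r0": 1}, "state": "ready"}
context_switch(p1, p2, cpu)
print(cpu["pc"], p1["state"], p2["state"])  # 200 ready running
```

Because p1's counter and registers were saved, a later switch back to p1 resumes it from exactly where it stopped.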
Operations on processes
The execution of a process is a complex activity involving various operations. The following
operations are performed during the execution of a process:
Creation: This is the initial step of process execution activity. Process creation means the
construction of a new process for execution. This might be performed by the system, by a user, or
by an existing process itself.
Scheduling/Dispatching: The event or activity in which the state of the process is changed from
ready to running.
Blocking: When a process invokes an input-output system call, the process blocks and the
operating system puts it in blocked mode, where the process waits for the input-output to
complete.
Preemption: When a timeout occurs, meaning the process hasn't finished in the allotted
time interval and the next process is ready to execute, the operating system preempts the process.
Termination: Process termination is the activity of ending the process; in other words, the
release of the computer resources taken by the process for its execution.
Interprocess communication
Interprocess communication is the mechanism provided by the operating system that allows
processes to communicate with each other.
Processes executing concurrently in the operating system may be either independent processes or
cooperating processes.
A process is independent if it cannot affect or be affected by the other processes executing in the
system.
Any process that does not share data with any other process is independent.
A process is cooperating if it can affect or be affected by the other processes executing in the
system.
Any process that shares data with other processes is a cooperating process.
The communication between these processes can be seen as a method of cooperation between
them. There are two fundamental models of interprocess communication:
Shared Memory
Message Passing
Shared Memory:
Shared memory is the memory that can be simultaneously accessed by multiple processes.
This is done so that the processes can communicate with each other.
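As a sketch of the idea, Python's multiprocessing module can place a counter in memory shared by several processes. The deposit function, counter name, and the amounts are illustrative choices, not a fixed API pattern:

```python
import multiprocessing as mp

def deposit(counter, n):
    # A child process adds to the shared counter; the lock guards
    # the read-modify-write so concurrent updates are not lost.
    with counter.get_lock():
        counter.value += n

if __name__ == "__main__":
    counter = mp.Value("i", 0)   # a C int placed in shared memory
    workers = [mp.Process(target=deposit, args=(counter, 10)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()                 # wait for all children to finish
    print(counter.value)         # 40
```

All four processes see the same memory word, which is exactly what distinguishes this model from message passing below.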
Message Queue:
Multiple processes can read and write data to the message queue without being connected to
each other.
Messages are stored in the queue until their recipient retrieves them.
Message queues are quite useful for Interprocess communication and are used by most
operating systems.
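The message-passing model can be sketched with the same module's queue type; here a child process sends messages that the parent retrieves. The message contents and the None end-of-stream sentinel are illustrative choices:

```python
import multiprocessing as mp

def producer(q):
    # The sending process places messages on the queue.
    for i in range(3):
        q.put(f"msg-{i}")
    q.put(None)  # sentinel: no more messages

def consume(q):
    # The receiving process retrieves messages until the sentinel arrives.
    received = []
    while True:
        msg = q.get()
        if msg is None:
            break
        received.append(msg)
    return received

if __name__ == "__main__":
    q = mp.Queue()
    p = mp.Process(target=producer, args=(q,))
    p.start()
    print(consume(q))  # ['msg-0', 'msg-1', 'msg-2']
    p.join()
```

Note that the two processes never touch each other's memory; the operating system stores each message in the queue until the recipient retrieves it.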
CPU Scheduling
CPU scheduling is the process of determining which process will own the CPU for execution while
the execution of another process is on hold (in the waiting state) due to the unavailability of some
resource such as I/O.
The aim of CPU scheduling is to make the system efficient, fast and fair.
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready
queue to be executed.
The selection process is carried out by the short-term scheduler (or CPU scheduler).
The scheduler selects one from the set of processes in memory that are ready to execute, and
allocates the CPU to it.
Preemptive Scheduling
At times it is necessary to run a task that has a higher priority before another task, even though
that task is running. The running task is therefore interrupted for some time and resumed later,
when the higher priority task has finished its execution.
Non-Preemptive Scheduling
Once the CPU has been allocated to a process, the process keeps the CPU until it releases it,
either by terminating or by switching to the waiting state.
CPU scheduling decisions may take place when a:
1. Process switches from the running state to the waiting state.
2. Process switches from the running state to the ready state.
3. Process switches from the waiting state to the ready state.
4. Process terminates.
Scheduling under circumstances 1 and 4 is non-preemptive; scheduling under circumstances 2 and
3 is preemptive.
Scheduling Criteria
CPU Utilization
To make the best use of the CPU and not waste any CPU cycle, the CPU should be kept working
most of the time (ideally 100% of the time).
Throughput
It is the total number of processes completed per unit time, or rather the total amount of work done
in a unit of time.
Turnaround Time
It is the amount of time taken to execute a particular process, i.e. The interval from time of submission of
the process to the time of completion of the process(Wall clock time).
Waiting Time
The amount of time a process has been waiting in the ready queue to get control of the
CPU.
Load Average
It is the average number of processes residing in the ready queue waiting for their turn to get
into the CPU.
Response Time
Amount of time it takes from when a request was submitted until the first response is produced.
Scheduling Algorithms
To decide which process to execute first and which process to execute last in order to achieve
maximum CPU utilization, we have several algorithms:
First Come First Serve (FCFS) Scheduling
Shortest-Job-First (SJF) Scheduling
Priority Scheduling
Round Robin Scheduling
Multilevel Queue Scheduling
First Come First Serve (FCFS) Scheduling
The process which arrives first gets executed first; in other words, the process which requests
the CPU first is allocated the CPU first.
First Come First Serve is just like a FIFO (First In First Out) queue data structure, where the data
element which is added to the queue first is the one that leaves the queue first.
For the process set used in the SJF example below, the waiting time of each process under FCFS is
its service (start) time minus its arrival time:

Process  Waiting Time
P0       0 - 0 = 0
P1       5 - 1 = 4
P2       8 - 2 = 6
P3       16 - 3 = 13

Average waiting time = (0 + 4 + 6 + 13) / 4 = 5.75
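The waiting times above can be reproduced with a short simulation. This is a sketch that assumes the processes are listed in arrival order and each runs to completion once started:

```python
def fcfs(processes):
    # processes: list of (name, arrival_time, burst_time), sorted by arrival.
    clock, waits = 0, {}
    for name, arrival, burst in processes:
        clock = max(clock, arrival)    # CPU may sit idle until the process arrives
        waits[name] = clock - arrival  # time spent in the ready queue
        clock += burst                 # run the process to completion
    return waits

procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
print(fcfs(procs))  # {'P0': 0, 'P1': 4, 'P2': 6, 'P3': 13}
```

Each process waits exactly as long as the combined bursts of everything that arrived before it, which is why FCFS can penalize short jobs stuck behind long ones.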
SJF (Shortest Job First) is a scheduling algorithm in which the process with the
shortest execution time is selected for execution next.
This scheduling method can be preemptive or non-preemptive. It significantly reduces the average
waiting time for other processes awaiting execution.
In this method, when the CPU is available, the next process or job with the shortest completion
time is executed first.
Non-Preemptive Shortest Job First

Process  Arrival Time  Execution Time  Service Time
P0       0             5               0
P1       1             3               5
P2       2             8               14
P3       3             6               8
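The Service Time column in the table can be reproduced by simulating non-preemptive SJF. This is a sketch; among ready processes, ties on burst length are broken arbitrarily:

```python
def sjf_nonpreemptive(processes):
    # processes: list of (name, arrival_time, burst_time).
    pending = sorted(processes, key=lambda p: p[1])  # order by arrival
    clock, start = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                         # CPU idle until the next arrival
            clock = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: p[2])  # shortest burst among ready jobs
        start[job[0]] = clock                 # service (start) time
        clock += job[2]                       # run it to completion
        pending.remove(job)
    return start

procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
print(sjf_nonpreemptive(procs))  # {'P0': 0, 'P1': 5, 'P3': 8, 'P2': 14}
```

Note how P3 (burst 6) is served before P2 (burst 8) even though P2 arrived earlier, which is exactly what the table shows.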
Priority Scheduling
In this method, the scheduler selects the tasks to work on as per their priority.
The processes with higher priority are carried out first, whereas jobs with equal priorities are
carried out on a first-come, first-served basis.

Process  Arrival Time  Execution Time  Priority
P1       0             8               3
P2       1             1               1
P3       2             3               2
P4       3             2               3
P5       4             6               4
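Assuming a lower number means higher priority and non-preemptive execution (both assumptions, since the notes do not say), the schedule for this table can be simulated:

```python
def priority_nonpreemptive(processes):
    # processes: list of (name, arrival, burst, priority);
    # lower priority number = higher priority (an assumption).
    pending = list(processes)
    clock, start = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:
            clock = min(p[1] for p in pending)  # idle until next arrival
            continue
        job = min(ready, key=lambda p: p[3])    # highest-priority ready job
        start[job[0]] = clock                   # service (start) time
        clock += job[2]                         # run it to completion
        pending.remove(job)
    return start

procs = [("P1", 0, 8, 3), ("P2", 1, 1, 1), ("P3", 2, 3, 2),
         ("P4", 3, 2, 3), ("P5", 4, 6, 4)]
print(priority_nonpreemptive(procs))
# {'P1': 0, 'P2': 8, 'P3': 9, 'P4': 12, 'P5': 14}
```

P1 runs first simply because it is alone at time 0; once it finishes, the waiting jobs are served strictly by priority.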
Round Robin Scheduling
Round Robin is a preemptive algorithm in which each process is executed for a fixed time period
(the time quantum); it is then preempted, and another process executes for its time period.
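Round robin with a fixed quantum can be sketched as follows. The process names, burst times and quantum are illustrative, and all processes are assumed to arrive at time 0:

```python
from collections import deque

def round_robin(processes, quantum):
    # processes: list of (name, burst_time), all assumed present at time 0.
    queue = deque(processes)
    clock, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)  # run for at most one time slice
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted: back to the tail
        else:
            finish[name] = clock                   # completion time
    return finish

print(round_robin([("A", 5), ("B", 3), ("C", 1)], quantum=2))
# {'C': 5, 'B': 8, 'A': 9}
```

The short job C finishes early even though it was queued last, which illustrates why round robin gives good response time at the cost of extra context switches.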
Multilevel Queue Scheduling
According to the priority of the process, processes are placed in different queues.
Generally, high priority processes are placed in the top level queue, and only after completion of
the processes in the top level queue are lower level queued processes scheduled.
The idea is to separate processes according to the characteristics of their CPU bursts.
Selecting an algorithm can be difficult. The first problem is defining the criteria to be used in
selecting an algorithm.
Criteria are often defined in terms of CPU utilization, response time, or throughput. For example:
Maximizing CPU utilization under the constraint that the maximum response time is 1 second.
Maximizing throughput such that turnaround time is (on average) linearly proportional to total
execution time.
Thread
Threads provide a way to improve application performance through parallelism, by dividing a
process into multiple threads of execution.
A thread shares with its peer threads some information, such as the code segment, data segment
and open files. When one thread alters a code segment memory item, all other threads see the
change.
Advantages
Threads minimize context switching time, provide concurrency within a process, and allow
efficient communication.
Types of Threads
User Level Threads − threads managed by a user-level thread library, without kernel support.
Kernel Level Threads − Operating System managed threads acting on the kernel, the operating
system core.
Multithreading Models
Many to Many Model
The many-to-many model multiplexes any number of user level threads onto an equal or smaller
number of kernel level threads.
Multiple threads within the same application can thus run in parallel on multiple processors, and a
blocking system call by one thread need not block the entire process.
Many to One Model
Many-to-one model maps many user level threads to one Kernel-level thread.
When a thread makes a blocking system call, the entire process is blocked.
Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel
on multiprocessors.
One to One Model
There is a one-to-one relationship between each user-level thread and a kernel-level thread.
This model provides more concurrency than the many-to-one model; it allows another thread to
run when a thread makes a blocking system call.
Multiple Processor Scheduling
In a multiprocessor system, all the processors operate under a single operating system.
In multiple processor scheduling the processors are identical, i.e. HOMOGENEOUS, in terms of
their functionality, so we can use any available processor to run any process in the queue.
Approaches
Asymmetric Multiprocessing: all the scheduling decisions and I/O processing are handled by a
single processor, called the Master Server, while the other processors execute only user code.
Symmetric Multiprocessing (SMP): each processor is self-scheduling. Processes may be placed
in a common ready queue, or each processor may have its own private queue of ready processes.
Multicore Programming
A processor that has more than one core is called a multicore processor.
These cores can individually read and execute program instructions, giving the feel that the
computer system has several processors, when in reality they are cores, not separate processors.
The processor can run instructions on separate cores at the same time. This increases the overall
speed of program execution in the system.
Software that can run in parallel is preferred, because we want to achieve parallel execution with
the help of multiple cores.
Concurrent systems created using multicore programming can therefore have multiple tasks
executing in parallel at the same time.
Thread Libraries
A thread library provides the programmer an API for creating and managing threads. There are
two primary ways of implementing a thread library.
The first approach is to provide a library entirely in user space with no kernel support. All
code and data structures for the library exist in user space. This means that invoking a
function in the library results in a local function call in user space and not a system call.
The second approach is to implement a kernel-level library supported directly by the OS. In
this case, code and data structures for the library exist in kernel space. Invoking a function in
the API for the library typically results in a system call to the kernel.
Three main thread libraries are in use today:
POSIX Pthreads: Pthreads, the threads extension of the POSIX standard, may be provided as
either a user-level or a kernel-level library.
Win32: The Win32 thread library is a kernel-level library available on Windows systems.
Java: The Java thread API allows thread creation and management directly in Java programs.
However, because in most instances the JVM is running on top of a host OS, the Java thread
API is typically implemented using a thread library available on the host system.
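As a minimal illustration of what a thread library's API looks like, Python's threading module can split a computation across two threads. The partial-sum task and the two-way split are illustrative choices:

```python
import threading

def partial_sum(nums, out, idx):
    # Each thread sums its own slice and records the result at its slot.
    out[idx] = sum(nums)

data = list(range(100))
results = [0, 0]
threads = [
    threading.Thread(target=partial_sum, args=(data[:50], results, 0)),
    threading.Thread(target=partial_sum, args=(data[50:], results, 1)),
]
for t in threads:
    t.start()            # create and run the threads
for t in threads:
    t.join()             # wait for both to finish
print(sum(results))      # 4950
```

The create/start/join pattern shown here is the same shape exposed by Pthreads (pthread_create, pthread_join), Win32, and the Java thread API.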