Unit 2: Process, Processor and Memory Management


1 UNIT 2: PROCESS, PROCESSOR AND MEMORY MANAGEMENT

2 INTRODUCTION

• A program does nothing unless its instructions are executed by a CPU. A program in execution is called a process. In order to accomplish its task, a process needs computer resources.
• More than one process may exist in the system, and processes may require the same resource at the same time. Therefore, the operating system has to manage all the processes and resources in a convenient and efficient way.
• Some resources may need to be used by only one process at a time to maintain consistency; otherwise the system can become inconsistent and deadlock may occur.
• An OS uses the terms job and process almost interchangeably, with a preference for the term process.

3 INTRODUCTION (CONT.)

• The operating system is responsible for the following activities in connection with process management:
1. Scheduling processes and threads on the CPUs.
2. Creating and deleting both user and system processes.
3. Suspending and resuming processes.
4. Providing mechanisms for process synchronization.
5. Providing mechanisms for process communication.

4 PROCESS CONCEPT

• A process is a program in execution. A process is more than the program code, which is sometimes known as the text section.
• It also includes the current activity, as represented by the value of the program counter and the contents of the processor’s registers.
• A process generally also includes the process stack, which contains temporary data (such as function parameters, return addresses, and local variables), and a data section, which contains global variables.
• A process may also include a heap, which is memory that is dynamically allocated during process run time.
5 PROCESS STATE

• As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. Each process may be in one of the following states:
 new: The process is being created.
 running: Instructions are being executed.
 waiting: The process is waiting for some event to occur.
 ready: The process is waiting to be assigned to a processor.
 terminated: The process has finished execution.

6 PROCESS CONTROL BLOCK (PCB)

• Each process is represented in the operating system by a process control block (PCB), also called a task control block.
• It contains many pieces of information associated with a specific process, including these:
 Process state (the state may be new, ready, running, waiting, halted, and so on)
 Program counter (indicates the address of the next instruction to be executed for this process)
 CPU registers (registers vary in number and type, depending on the computer architecture)
 CPU-scheduling information (includes a process priority, pointers to scheduling queues, and other scheduling parameters)
 Memory-management information (includes the values of the base and limit registers and the page tables, depending on the memory system used by the OS)
 Accounting information (includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on)
 I/O status information (includes the list of I/O devices allocated to this process, a list of open files, and so on)

7 PROCESS SCHEDULING

• The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.
• The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running.
• To meet these objectives, the process scheduler selects an available process (possibly from a set of several available processes) for program execution on the CPU.

8 SCHEDULERS

• Processes migrate among the various queues. The selection process is carried out by the appropriate scheduler.
1. Long-term scheduler – also known as the job scheduler. It chooses processes from the pool (secondary memory) and keeps them in the ready queue maintained in primary memory.
The long-term scheduler mainly controls the degree of multiprogramming. Its purpose is to choose a balanced mix of I/O-bound and CPU-bound processes from among the jobs present in the pool.


9 SCHEDULERS

2. Short-term scheduler – also known as the CPU scheduler. It selects one of the jobs from the ready queue and dispatches it to the CPU for execution.
A scheduling algorithm is used to select which job is dispatched. The job of the short-term scheduler can be very critical: if it selects a job whose CPU burst time is very high, then all the jobs after it will have to wait in the ready queue for a very long time.

10 SCHEDULERS

3. Medium-term scheduler – takes care of the swapped-out processes. If a running process needs some I/O time to complete, its state must be changed from running to waiting.
The medium-term scheduler is used for this purpose. It removes the process from the running state to make room for other processes. Such processes are the swapped-out processes, and this procedure is called swapping. The medium-term scheduler is responsible for suspending and resuming processes.

11 VARIOUS TIMES RELATED TO THE PROCESS

1. Arrival time: The time at which the process enters the ready queue is called the arrival time.
2. Burst time: The total amount of CPU time required to execute the whole process is called the burst time. This does not include the waiting time. It is difficult to know the execution time of a process before executing it; hence scheduling algorithms based on burst time are hard to implement in practice.
3. Completion time: The time at which the process enters the completion state, i.e. the time at which the process finishes its execution, is called the completion time.

12 VARIOUS TIMES RELATED TO THE PROCESS

4. Turnaround time: The total amount of time spent by the process from its arrival to its completion is called the turnaround time.
5. Waiting time: The total amount of time for which the process waits for the CPU to be assigned is called the waiting time.
6. Response time: The difference between the arrival time and the time at which the process first gets the CPU is called the response time.
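The relationships among these times can be checked with a short calculation: turnaround = completion − arrival, waiting = turnaround − burst, and response = (time of first CPU allocation) − arrival. The numbers below are made up for illustration:

```python
# Derive turnaround, waiting and response times from arrival, burst,
# completion and first-CPU-allocation times.
def metrics(arrival, burst, completion, first_cpu):
    turnaround = completion - arrival   # total time in the system
    waiting = turnaround - burst        # time spent waiting in the ready queue
    response = first_cpu - arrival      # delay until the CPU is first assigned
    return turnaround, waiting, response

# A process arriving at t=0 with burst 5, first scheduled at t=2, finishing at t=9:
print(metrics(arrival=0, burst=5, completion=9, first_cpu=2))  # (9, 4, 2)
```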
13 CPU SCHEDULING

• In uniprogramming systems like MS-DOS, when a process waits for any I/O operation to be done, the CPU remains idle.
• In multiprogramming systems, the operating system schedules the processes on the CPU to achieve maximum utilization of it, and this procedure is called CPU scheduling. The operating system uses various scheduling algorithms to schedule the processes.
• It is the task of the short-term scheduler to schedule the CPU for the processes present in the job pool. Whenever the running process requests some I/O operation, the short-term scheduler saves the current context of the process (in its PCB) and changes its state from running to waiting.

14 SCHEDULING ALGORITHMS

• There are various algorithms used by the operating system to schedule the processes on the processor in an efficient way.
• The purpose of a scheduling algorithm:
• Maximum CPU utilization
• Fair allocation of the CPU
• Maximum throughput
• Minimum turnaround time
• Minimum waiting time
• Minimum response time

15 SCHEDULING ALGORITHMS 16 SCHEDULING ALGORITHMS

• First - Come, First – Serve (FCFS) - It is the simplest algorithm to implement. The process with the • First - Come, First – Serve (FCFS)
minimal arrival time will get the CPU first. The lesser the arrival time, the sooner will the process
gets the CPU. It is the non-preemptive type of scheduling.
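A minimal FCFS sketch, assuming burst times are known and using invented process data (the slide's own example figure is not reproduced):

```python
# Non-preemptive FCFS: run processes in order of arrival and compute
# completion, turnaround and waiting times for each.
def fcfs(procs):
    """procs: list of (name, arrival, burst) tuples."""
    time, out = 0, []
    for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
        time = max(time, arrival)   # CPU may sit idle until the job arrives
        time += burst               # run the whole burst, no preemption
        turnaround = time - arrival
        out.append((name, time, turnaround, turnaround - burst))
    return out  # (name, completion, turnaround, waiting)

for row in fcfs([("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)]):
    print(row)
```

Note the FCFS weakness the earlier slides mention: a long first burst delays everything behind it (the "convoy effect").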
17 SCHEDULING ALGORITHMS

• Shortest-Job-First (SJF) Scheduling
 Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time.
 Two schemes:
 Non-preemptive – once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
 Preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).
• SJF is optimal – it gives the minimum average waiting time for a given set of processes.

18 SCHEDULING ALGORITHMS

• Shortest-Job-First (SJF) Scheduling
 Non-preemptive – worked example (figure not reproduced)
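The non-preemptive scheme can be sketched as follows. Burst times are assumed known in advance (in practice they can only be predicted), and the process data is invented:

```python
# Non-preemptive SJF: at every scheduling decision, pick the arrived
# process with the shortest CPU burst and run it to completion.
def sjf(procs):
    """procs: list of (name, arrival, burst); returns [(name, completion), ...]."""
    pending = sorted(procs, key=lambda p: p[1])   # sorted by arrival time
    time, done = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:               # CPU idle: jump ahead to the next arrival
            time = pending[0][1]
            continue
        job = min(ready, key=lambda p: p[2])   # shortest next CPU burst
        pending.remove(job)
        time += job[2]              # once started, no preemption
        done.append((job[0], time))
    return done

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1)]))
```

Here P1 runs first (it is alone at t = 0); at t = 7 the shorter P3 jumps ahead of P2.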

19 SCHEDULING ALGORITHMS

• Shortest-Job-First (SJF) Scheduling
 Preemptive – worked example (figure not reproduced)

20 SCHEDULING ALGORITHMS

• Priority-based scheduling
 A priority number (integer) is associated with each process.
 Designed for interactive systems.
 Equal-priority processes are scheduled in FCFS order.
 The CPU is allocated to the process with the highest priority (smallest integer = highest priority).
 SJF is a priority scheduling where the priority is the predicted next CPU burst time.
21 SCHEDULING ALGORITHMS

• Priority-based scheduling – worked example (figure not reproduced)

22 SCHEDULING ALGORITHMS

• Round Robin Scheduling
• One of the most popular scheduling algorithms, and one that can actually be implemented in most operating systems.
• This is the preemptive version of first-come, first-served scheduling.
• Designed especially for time-sharing systems.
• In this algorithm, every process is executed in a cyclic way. A certain time slice, called the time quantum, is defined in the system.
• Each process present in the ready queue is assigned the CPU for that time quantum. If the execution of the process completes during that time, the process terminates; otherwise it goes back to the ready queue and waits for its next turn to complete its execution.
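The round-robin loop can be sketched as below, assuming for simplicity that all processes arrive at t = 0 (the process names and burst times are invented, not taken from the slide's table):

```python
from collections import deque

# Round Robin: each process runs for at most `quantum` time units,
# then goes to the back of the ready queue if it is not finished.
def round_robin(procs, quantum):
    """procs: list of (name, burst); returns {name: completion_time}."""
    queue = deque(procs)
    time, completions = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run
        if remaining > run:
            queue.append((name, remaining - run))  # back of the ready queue
        else:
            completions[name] = time               # finished within this slice
    return completions

print(round_robin([("P1", 5), ("P2", 3), ("P3", 8)], quantum=4))
```

A smaller quantum improves response time but adds context-switch overhead; a very large quantum degenerates into FCFS.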

23 SCHEDULING ALGORITHMS

• Round Robin Scheduling – worked example
• There are six processes, named P1, P2, P3, P4, P5 and P6. Their arrival times and burst times are given in the table (not reproduced here). The time quantum of the system is 4 units.
25 SAMPLE PROBLEM (figure not reproduced)

26 MEMORY MANAGEMENT

• The operating system manages the resources of the computer, controls application launches, and performs tasks such as data protection and system administration.
• Memory is a storage area on the computer that contains the instructions and data that the computer uses to run the applications.
• When the applications or the operating system need more memory than is available on the computer, the system must swap the current contents of a memory space with the contents of the memory space that is being requested.
• Ultimately, deciding which memory management technique to use is a matter of optimizing the user experience for the available hardware and software.

27 MEMORY ALLOCATION SCHEMES

1. Contiguous memory management schemes: assigning continuous blocks of memory to the process. The best example of this is an array.
2. Non-contiguous memory management schemes: the program is divided into blocks (fixed size or variable size) and loaded at different portions of memory. That means program blocks are not stored adjacent to each other.

28 MEMORY MANAGEMENT TECHNIQUES

• Swapping
When a process is to be executed, it is taken from secondary memory and stored in RAM. But RAM has limited space, so processes have to be moved out of and back into RAM from time to time. The purpose is to make free space for other processes.
29 MEMORY MANAGEMENT TECHNIQUES

• Paging
Paging is the memory management technique in which secondary memory is divided into fixed-size blocks called pages, and main memory is divided into fixed-size blocks called frames. A frame has the same size as a page. The processes are initially in secondary memory, from where they are shifted to main memory (RAM) when there is a requirement.

30 MEMORY MANAGEMENT TECHNIQUES

• Compaction
Compaction is a memory management technique in which the free space of a running system is compacted, to reduce the fragmentation problem and improve memory allocation efficiency. Compaction is used by many modern operating systems, such as Windows, Linux, and Mac OS X.
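The page-to-frame mapping above can be illustrated with a toy address translation. The 16-byte page size and the page-table contents are invented for the example; real systems use page sizes such as 4 KB:

```python
# Logical-to-physical address translation under paging: a logical address
# splits into (page number, offset); the page table maps pages to frames.
PAGE_SIZE = 16
page_table = {0: 5, 1: 2, 2: 7}   # page number -> frame number

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)
    frame = page_table[page]       # raises KeyError for an unmapped page
    return frame * PAGE_SIZE + offset

print(translate(20))  # page 1, offset 4 -> frame 2 -> 2*16 + 4 = 36
```

Because any free frame can hold any page, paging avoids external fragmentation at the cost of this extra translation step.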

31 MEMORY MANAGEMENT TECHNIQUES

• Segmentation
Segmentation is another memory management technique used by operating systems. The process is divided into segments of different sizes and then put in the main memory. The program/process is divided into modules, unlike paging, in which the process is divided into fixed-size pages or frames. The corresponding segments are loaded into the main memory when the process is executed. Segments contain the program’s main function, utility functions, and so on.

32 END OF UNIT 2
