
MODULE 3

CPU Scheduling
CPU scheduling is a process which allows one process to use the CPU while the
execution of another process is on hold (in the waiting state) due to the
unavailability of a resource such as I/O, thereby making full use of the CPU.
The aim of CPU scheduling is to make the system efficient, fast and fair.

It involves the following sub-tasks:-

1) Scheduling:- Determines the process to be executed next on the CPU.

2) Dispatching:- Sets up the execution of the selected process on the CPU.

3) Context Save:- Saves the status of a process when its execution is to be
suspended.

CPU Scheduler
Schedulers are special system software which handles process scheduling in
various ways. Their main task is to select the jobs to be submitted into the
system and to decide which process to run. Schedulers are of three types:-
1) Long-Term Scheduler
2) Short-Term Scheduler
3) Medium-Term Scheduler
Long-Term Scheduler
o Long-term schedulers are also called Job Schedulers.
o A long-term scheduler is a scheduler that is responsible for bringing
processes from the JOB queue (or secondary memory) into the READY
queue (or main memory).
o In other words, a long-term scheduler determines which programs will
enter into the RAM for processing by the CPU.
o Long-term schedulers have a long-term effect on the CPU performance.
o They are responsible for the degree of multiprogramming, i.e., managing
the total processes present in the READY queue.
Short-Term Scheduler
o It is also called the CPU scheduler. Its main objective is to increase
system performance in accordance with the chosen set of criteria. It carries
out the change of a process from the ready state to the running state.
o CPU scheduler selects a process among the processes that are ready to
execute and allocates CPU to one of them.
o Short-term schedulers, also known as dispatchers, make the decision of
which process to execute next.
o Dispatch latency is the amount of time needed by the CPU scheduler to
stop one process and start another.
o Functions performed by Dispatcher:
i. Context Switching
ii. Switching to user mode
iii. Moving to the correct location in the newly loaded program.
o Short-term schedulers are faster than long-term schedulers.
Medium-Term Scheduler
o Medium-term scheduling is a part of swapping. It removes the processes
from the memory.
o It reduces the degree of multiprogramming. The medium-term scheduler
is in charge of handling the swapped-out processes.
o A running process may become suspended if it makes an I/O request.
o A suspended process cannot make any progress towards completion. In
this condition, to remove the process from memory and make space for
other processes, the suspended process is moved to the secondary storage.
This process is called swapping, and the process is said to be swapped
out or rolled out. Swapping may be necessary to improve the process mix.
Scheduling Queues
The Operating System maintains the following important process scheduling
queues-
1) Job queue:- This queue keeps all the processes in the system.
2) Ready queue:- This queue keeps a set of all processes residing in main
memory, ready and waiting to execute. A new process is always put in
this queue.

3) Device queues:- The processes which are blocked due to unavailability
of an I/O device constitute this queue.

Important CPU Scheduling Terminologies

1) Burst Time/Execution Time: It is a time required by the process to
complete execution. It is also called running time.
2) Arrival Time: The time at which a process enters the ready state.
3) Completion/Finish/Exit Time: The time at which a process completes
execution and exits the system.
4) Multiprogramming: The number of programs that can be present in
memory at the same time.
5) CPU/IO burst cycle: Characterizes process execution, which alternates
between CPU and I/O activity. CPU times are usually shorter than the time
of I/O.
o CPU Burst: Time period used by the process on CPU
o I/O Burst: Time period used by the process on I/O device.
CPU Scheduling Criteria

• CPU Utilization: The operating system needs to keep the CPU as busy as
possible. CPU utilization can range from 0 to 100 percent.
• Throughput: The number of processes that finish their execution per unit
time is known as throughput. While the CPU is busy executing processes,
work is being done, and the work completed per unit time is called
throughput.
• Waiting Time: Waiting time is the total amount of time a process spends
waiting in the ready queue.
• Response Time: It is the amount of time from when a request is submitted
until the first response is produced.
• Turnaround Time: Turnaround time is the amount of time taken to execute a
specific process. It is the total of the time spent waiting to get into
memory, waiting in the ready queue and executing on the CPU. The period
from the time of process submission to the completion time is the
turnaround time.
Types of CPU Scheduling
• Preemptive Scheduling
o In Preemptive Scheduling, the tasks are mostly assigned with their
priorities.
o Sometimes it is important to run a task with a higher priority before
another lower priority task, even if the lower priority task is still running.
o The lower priority task holds for some time and resumes when the higher
priority task finishes its execution.
• Non-Preemptive Scheduling
o In this type of scheduling method, the CPU has been allocated to a specific
process.
o The process that keeps the CPU busy will release the CPU either by
switching context or terminating.
o It is the only method that can be used for various hardware platforms.
That’s because it doesn’t need special hardware (for example, a timer) like
preemptive scheduling.

Types of CPU Scheduling Algorithms:-

1) First Come First Serve
2) Shortest Job Next
3) Round Robin
4) Priority Based
5) Multi-Level Queue
6) Multi-Level Feedback Queue
What is a Gantt chart?
A Gantt chart (named after Henry Gantt) is a horizontal bar chart that shows the
amount of work done or production completed in a given period of time in relation
to the amount planned for those projects. It is simply a graphical representation of
a schedule that helps to plan, coordinate, and track particular tasks in a project in
an efficient way.
1) First Come First Serve (FCFS) Scheduling
o Jobs are executed on first come, first serve basis.
o In case of a tie, process with smaller process id is executed first.
o It is always non-preemptive in nature.
o Non-Preemptive FCFS:- Consider the set of 5 processes whose arrival time
and burst time are given below-

• Gantt Chart-

• Turn Around time & Waiting time

Turn Around time = Exit time – Arrival time

Waiting time = Turn Around time – Burst time

• Average Turn Around time = (4 + 8 + 2 + 9 + 6) / 5 = 29 / 5 = 5.8 unit
• Average waiting time = (0 + 5 + 0 + 8 + 3) / 5 = 16 / 5 = 3.2 unit
o Advantages of FCFS:-
i. It is simple and easy to understand.
ii. It can be easily implemented using queue data structure.
iii. It does not lead to starvation.
o Disadvantages of FCFS:-
i. It does not consider the priority or burst time of the processes.
ii. It suffers from the convoy effect (a long process at the head of the
queue delays all the shorter processes behind it).
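
The FCFS logic above can be expressed in a few lines of code. Below is a minimal Python sketch (using a small hypothetical process set, not the table from the worked example) that serves processes in arrival order and computes completion, turnaround and waiting times with the formulas given above.

```python
# Minimal sketch of non-preemptive FCFS scheduling (hypothetical process data).
def fcfs(processes):
    """processes: list of (pid, arrival_time, burst_time)."""
    # Serve processes in order of arrival; ties broken by smaller process id.
    processes = sorted(processes, key=lambda p: (p[1], p[0]))
    time, results = 0, []
    for pid, arrival, burst in processes:
        time = max(time, arrival)          # CPU may sit idle until the process arrives
        completion = time + burst
        turnaround = completion - arrival  # Turn Around time = Exit time - Arrival time
        waiting = turnaround - burst       # Waiting time = Turn Around time - Burst time
        results.append((pid, completion, turnaround, waiting))
        time = completion
    return results

if __name__ == "__main__":
    demo = [("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1), ("P4", 3, 2)]
    for pid, ct, tat, wt in fcfs(demo):
        print(pid, "completion:", ct, "turnaround:", tat, "waiting:", wt)
```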

2) Shortest Job Next (SJN) Scheduling

o This is also known as Shortest Job First, or SJF.
o Out of all the available processes, the CPU is assigned to the process having
the smallest burst time.
o In case of a tie, FCFS scheduling is applied.
o SJN (SJF) scheduling can be used in both preemptive and non-preemptive
mode.
o The preemptive mode of Shortest Job First is called Shortest Remaining Time
First (SRTF).
o It is the best approach to minimize waiting time.

o Non-preemptive SJN: Consider the set of 5 processes whose arrival time
and burst time are given below-

• Gantt Chart-

• Turn Around time & Waiting time

• Average Turn Around time = (4 + 15 + 5 + 6 + 10) / 5 = 40 / 5 = 8 unit
• Average waiting time = (3 + 11 + 3 + 0 + 7) / 5 = 24 / 5 = 4.8 unit
o Preemptive SJN (SRTF – Shortest Remaining Time First): Consider
the set of 5 processes whose arrival time and burst time are given below-

• Gantt Chart-

• Turn Around time & Waiting time

• Average Turn Around time = (1 + 5 + 4 + 16 + 9) / 5 = 35 / 5 = 7 unit
• Average waiting time = (0 + 1 + 2 + 10 + 6) / 5 = 19 / 5 = 3.8 unit
o Advantages of SJN:-
i. It is optimal and guarantees the minimum average waiting time.
o Disadvantages of SJN:-
i. It cannot be implemented practically since the burst time of a
process cannot be known in advance.
ii. It leads to starvation for processes with larger burst time.
iii. Priorities cannot be set for the processes.
iv. Processes with larger burst time have poor response time.
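
A minimal Python sketch of non-preemptive SJN/SJF is shown below; the process set is hypothetical and ties are broken by FCFS, as described above.

```python
# Minimal sketch of non-preemptive SJF/SJN scheduling (hypothetical data).
def sjf(processes):
    """processes: list of (pid, arrival_time, burst_time)."""
    remaining = sorted(processes, key=lambda p: (p[1], p[0]))
    time, results = 0, []
    while remaining:
        # Processes that have already arrived and are waiting in the ready queue.
        ready = [p for p in remaining if p[1] <= time]
        if not ready:
            time = min(p[1] for p in remaining)   # CPU idles until the next arrival
            continue
        # Pick the smallest burst time; ties broken by arrival time, then pid (FCFS).
        pid, arrival, burst = min(ready, key=lambda p: (p[2], p[1], p[0]))
        remaining.remove((pid, arrival, burst))
        time += burst
        results.append((pid, time, time - arrival, time - arrival - burst))
    return results  # (pid, completion, turnaround, waiting)

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
```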

3) Round Robin (RR) Scheduling

o CPU is assigned to the process on the basis of FCFS for a fixed amount of
time.
o This fixed amount of time is called as time quantum or time slice.
o After the time quantum expires, the running process is preempted and sent
to the ready queue.
o Then, the processor is assigned to the next arrived process.
o It is always preemptive in nature.
o Round Robin Scheduling is FCFS Scheduling with preemptive mode.
o Important Notes:

With decreasing value of time quantum,
• Number of context switches increases
• Response time decreases
• Chances of starvation decrease
Thus, a smaller value of time quantum is better in terms of response time.

With increasing value of time quantum,
• Number of context switches decreases
• Response time increases
• Chances of starvation increase
Thus, a higher value of time quantum is better in terms of the number of context
switches.

• With increasing value of time quantum, Round Robin Scheduling tends to
become FCFS Scheduling.
• When the time quantum tends to infinity, Round Robin Scheduling becomes
FCFS Scheduling.
• The performance of Round Robin Scheduling heavily depends on the value
of the time quantum.
• The value of the time quantum should be neither too big nor too small.

o Preemptive RR Scheduling: Consider the set of 6 processes whose arrival
time and burst time are given below- Time Quantum = 4 unit

• Gantt Chart-

• Turn Around time & Waiting time

• Average Turn Around time = (13 + 22 + 9 + 9 + 19 + 15) / 6 = 87 / 6 = 14.5 unit
• Average waiting time = (8 + 16 + 6 + 8 + 14 + 11) / 6 = 63 / 6 = 10.5 unit

o Advantages of RR:-
i. It gives the best performance in terms of average response time.
ii. It is best suited for time sharing system, client server architecture and
interactive system.
o Disadvantages of RR:-
i. It leads to starvation for processes with larger burst time as they have
to repeat the cycle many times.
ii. Its performance heavily depends on time quantum.
iii. Priorities cannot be set for the processes.
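
The following is a rough Python sketch of Round Robin with a fixed time quantum, using a ready queue (a deque) and a hypothetical process set; it only illustrates the preemption and re-queuing behaviour described above.

```python
from collections import deque

# Minimal sketch of Round Robin scheduling with a fixed time quantum
# (hypothetical process data, not the 6-process example worked above).
def round_robin(processes, quantum):
    """processes: list of (pid, arrival_time, burst_time)."""
    processes = sorted(processes, key=lambda p: (p[1], p[0]))
    arrival = {pid: at for pid, at, _ in processes}
    burst = {pid: bt for pid, _, bt in processes}
    remaining = dict(burst)
    ready, results, time, i = deque(), {}, 0, 0

    while i < len(processes) or ready:
        if not ready:                        # CPU idle: jump ahead to the next arrival
            time = max(time, processes[i][1])
        while i < len(processes) and processes[i][1] <= time:
            ready.append(processes[i][0])    # arrivals join the tail of the ready queue
            i += 1
        pid = ready.popleft()
        run = min(quantum, remaining[pid])   # run one quantum or until the process finishes
        time += run
        remaining[pid] -= run
        while i < len(processes) and processes[i][1] <= time:
            ready.append(processes[i][0])    # arrivals during this slice queue ahead of the preempted process
            i += 1
        if remaining[pid]:
            ready.append(pid)                # preempted: back to the tail of the ready queue
        else:
            tat = time - arrival[pid]
            results[pid] = (time, tat, tat - burst[pid])  # (completion, turnaround, waiting)
    return results

print(round_robin([("P1", 0, 5), ("P2", 1, 4), ("P3", 2, 2)], quantum=2))
```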

4) Priority Scheduling

o Out of all the available processes, CPU is assigned to the process having the
highest priority.
o In case of a tie, it is broken by FCFS Scheduling.
o Priority Scheduling can be used in both preemptive and non-preemptive
mode.

o Important Notes:
• The waiting time for the process having the highest priority will
always be zero in preemptive mode.
• The waiting time for the process having the highest priority may not
be zero in non-preemptive mode.

o Non-Preemptive Priority Scheduling: Consider 4 processes P1, P2, P3
and P4 with arrival time, burst time, and priority as given below –

• Gantt Chart-

• Turn Around time & Waiting time

• Average Turn Around time = (24 + 3 + 29 + 21) / 4 = 77 / 4 = 19.25 unit
• Average waiting time = (3 + 0 + 25 + 18) / 4 = 46 / 4 = 11.5 unit

o Preemptive Priority Scheduling: Consider 5 processes P1, P2, P3, P4, P5
with arrival time, burst time, and priority as given below –

• Gantt Chart-

• Turn Around time & Waiting time

• Average Turn Around time = (7 + 13 + 2 + 12 + 7) / 5 = 41 / 5 = 8.2 unit
• Average waiting time = (2 + 7 + 0 + 9 + 6) / 5 = 24 / 5 = 4.8 unit
o Advantages of Priority Scheduling:-
i. It considers the priority of the processes and allows the important
processes to run first.
ii. Priority scheduling in preemptive mode is best suited for real time
operating system.
o Disadvantages of Priority Scheduling:-
i. Processes with lesser priority may starve for CPU.
ii. The waiting time and response time of lower-priority processes cannot
be predicted or guaranteed.
Solution For Starvation:- Aging Technique
Aging is a technique of gradually increasing the priority (at a
particular interval) of processes that wait in the system for a long
time and thereby each waiting process gets a chance to use the
processor.
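
A small Python sketch of non-preemptive priority scheduling is given below. It also folds in the aging idea: the assumed convention is that a lower number means a higher priority, and the priority of a waiting process improves by one level every AGE_INTERVAL time units (the data and the interval are illustrative choices, not values from the examples above).

```python
# Minimal sketch of non-preemptive priority scheduling with aging
# (hypothetical data; lower number = higher priority).
AGE_INTERVAL = 5   # waiting processes gain one priority level per 5 time units

def priority_with_aging(processes):
    """processes: list of (pid, arrival_time, burst_time, priority)."""
    remaining = list(processes)
    time, results = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:
            time = min(p[1] for p in remaining)   # idle until the next arrival
            continue
        # Effective priority improves the longer a process has waited (aging).
        def effective(p):
            pid, arrival, burst, prio = p
            return prio - (time - arrival) // AGE_INTERVAL
        # Choose the highest priority (smallest value); ties broken by FCFS.
        pid, arrival, burst, prio = min(ready, key=lambda p: (effective(p), p[1], p[0]))
        remaining.remove((pid, arrival, burst, prio))
        time += burst
        results.append((pid, time, time - arrival, time - arrival - burst))
    return results  # (pid, completion, turnaround, waiting)

print(priority_with_aging([("P1", 0, 5, 3), ("P2", 1, 3, 1), ("P3", 2, 8, 4), ("P4", 3, 2, 2)]))
```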

5) Multi-level Queue (MLQ) Scheduling

o A multi-level queue scheduling algorithm partitions the ready queue into
several separate queues. The processes are permanently assigned to one
queue, generally based on some property of the process, such as memory
size, process priority, or process type. Each queue has its own scheduling
algorithm.

o There are two sorts of processes that require different scheduling algorithms
because they have varying response times and resource requirements: the
foreground (interactive) processes and the background (batch) processes.
Foreground processes typically take priority over background processes.

o Some queues are utilized for the foreground process, while others are used
for the background process. The foreground queue may be scheduled
using a round-robin method, and the background queue can be scheduled
using an FCFS strategy.
o Example Problem: Consider the below table of four processes under
multi-level queue scheduling. The queue number denotes the queue of the
process. Priority of queue 1 is greater than queue 2. Queue 1 uses Round
Robin (Time Quantum = 2) and queue 2 uses FCFS.

• Gantt Chart-

• Turn Around time & Waiting time

• Average Turn Around time = (6 + 7 + 20 + 5 + 7) / 4 = 45 / 4 = 11.25 unit
• Average waiting time = (2 + 4 + 12 + 0) / 4 = 18 / 4 = 4.5 unit
o Advantages of Multi-level Queue Scheduling:-
i. It allows us to apply different scheduling algorithms for different
processes.
ii. It will have low overhead in terms of scheduling.
o Disadvantages of Multi-level Queue Scheduling:-
i. There is a risk of starvation for lower priority processes.
ii. It is inflexible.
6) Multi-level Feedback Queue (MLFQ) Scheduling

o It allows a process to move between queues.
o The idea is to separate processes with different CPU-burst characteristics.
o If a process uses too much CPU time, it will be moved to a lower-priority
queue.
o Similarly, a process that waits too long in a lower-priority queue may be
moved to a higher-priority queue.
o This form of aging prevents starvation.
o A multilevel feedback queue scheduler is defined by the following
parameters:
i. The number of queues
ii. The scheduling algorithm for each queue
iii. The method used to determine when to upgrade a process to a higher
priority queue
iv. The method used to determine when to demote a process to a lower-
priority queue
v. The method used to determine which queue a process will enter
when that process needs service
o Now, let us consider multilevel feedback queue with three queues.
I. A Round Robin queue with time quantum of 8 milliseconds, say Q1.
II. A Round Robin queue with time quantum of 16 milliseconds, say Q2.
III. A First Come First Serve queue, say Q3.
• Now, when a process enters Q1 it is allowed to execute, and if it
does not complete in 8 milliseconds it is shifted to Q2, where it receives 16
milliseconds. If it still does not complete within 16 milliseconds,
it is moved to Q3.
• Problem with the above implementation: A process in the lower
priority queue can suffer from starvation because short processes
keep taking all the CPU time.
• Solution: A simple solution is to boost the priority of all the
processes after regular intervals and place them all in the highest
priority queue. This is the Aging Technique.
o Advantages of Multi-level Feedback Queue Scheduling:-
i. It is more flexible.
ii. It allows different processes to move between different queues.
iii. It prevents starvation by moving a process that waits too long in
a lower-priority queue to a higher-priority queue.
o Disadvantages of Multi-level Feedback Queue Scheduling:-
i. It produces more CPU overheads.
ii. It is the most complex algorithm.
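
A compact Python sketch of the three-queue example above (Q1: Round Robin with an 8 ms quantum, Q2: Round Robin with a 16 ms quantum, Q3: FCFS) is shown below. For simplicity it assumes all processes arrive at time 0 and omits the periodic priority boost, so it only illustrates the demotion behaviour.

```python
from collections import deque

# Minimal sketch of a three-queue MLFQ: Q1 = RR (8 ms), Q2 = RR (16 ms), Q3 = FCFS.
# Process data is hypothetical and every process is assumed to arrive at time 0.
def mlfq(processes):
    """processes: list of (pid, burst_time)."""
    quanta = [8, 16, None]                 # None = run to completion (FCFS)
    queues = [deque(), deque(), deque()]
    for pid, burst in processes:
        queues[0].append([pid, burst])     # every process starts in the top queue
    time, finish = 0, {}
    for level, q in enumerate(queues):
        while q:
            pid, rem = q.popleft()
            quantum = quanta[level]
            run = rem if quantum is None else min(quantum, rem)
            time += run
            rem -= run
            if rem == 0:
                finish[pid] = time                        # completion time
            else:
                queues[level + 1].append([pid, rem])      # used its full quantum: demote
    return finish

print(mlfq([("P1", 30), ("P2", 6), ("P3", 20)]))
```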
MODULE 3 – FILE MANAGEMENT
What is file?
• A file is a collection of related information that is recorded in
some format (such as text, pdf, docs, etc.) and is stored on
various storage media such as flash drives, hard disk drives
(HDDs), optical disks, magnetic tapes, etc.
• Files can be read-only or read-write.
• Files are used by the operating system to provide a uniform view
of data storage.
• All files are mapped onto physical devices that are usually
non-volatile, so data is safe in the case of system failure.

File Attributes

1) Name: This denotes the symbolic name of the file. The file
name is the only attribute that is readable by humans easily.
2) Identifier: This denotes the file name for the system. It is
usually a number and uniquely identifies a file in the file system.
3) Type: If there are different types of files in the system, then the
type attribute denotes the type of file (such as regular, directory,
or special).
4) Location: This points to the device on which a particular file is stored
and also the location of the file on that device.
5) Size: This attribute defines the size of the file in bytes, words or
blocks. It may also specify the maximum allowed file size.
6) Protection: The protection attribute contains protection
information for the file such as who can read or write on the file.

Operations on Files

1) Creating a file: To create a file, there should be space in the file
system. Then the entry for the new file must be made in the
directory. This entry should contain information about the file
such as its name, its location etc.
2) Reading a file: To read from a file, the system call should
specify the name and location of the file. There should be a read
pointer at the location where the read should take place. After
the read process is done, the read pointer should be updated.
3) Writing a file: To write into a file, the system call should
specify the name of the file and the contents that need to be
written. There should be a write pointer at the location where the
write should take place. After the write process is done, the
write pointer should be updated.
4) Deleting a file: The file should be found in the directory to
delete it. After that, all the file space is released so it can be
reused by other files.
5) Repositioning in a file: This is also known as file seek. To
reposition within a file, the current file-position pointer is set to the
appropriate entry. This does not require any actual I/O operations.
6) Truncating a file: This deletes the data from the file without
destroying all its attributes. Only the file length is reset to zero
and the file contents are erased. The rest of the attributes remain
the same.
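
The file operations listed above map directly onto a high-level file API. The short Python example below (the file name and contents are arbitrary) walks through creating, writing, reading, repositioning (seek), truncating and deleting a file.

```python
# A short illustration of the file operations listed above, using Python's
# built-in file API (the file name and contents are arbitrary examples).
import os

path = "example.txt"

# Creating / writing a file: the write pointer advances as data is written.
with open(path, "w") as f:
    f.write("first line\n")
    f.write("second line\n")

# Reading a file: the read pointer advances; seek() repositions it (file seek).
with open(path, "r") as f:
    print(f.read(5))        # read the first 5 characters
    f.seek(0)               # reposition the read pointer to the beginning
    print(f.read())         # read the whole file again

# Truncating a file: contents are erased and the length reset to zero,
# while the other attributes of the file are kept.
with open(path, "r+") as f:
    f.truncate(0)

# Deleting a file: the directory entry is removed and its space can be reused.
os.remove(path)
```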
File System Structure

• A file system provides efficient access to the disk by allowing data
to be stored, located and retrieved in a convenient way. A file
system must be able to store a file, locate a file and retrieve a file.
• Most operating systems use a layered approach for every task,
including file systems. Every layer of the file system is
responsible for some activities.

• Logical file system – It manages metadata information about a
file, i.e., it includes all details about a file except the actual
contents of the file. It maintains this information via file control
blocks. A file control block (FCB) has information about a file –
owner, size, permissions, and location of file contents.
• File organization module – It has information about files, the
location of files and their logical and physical blocks. Physical
block addresses do not generally match the logical block numbers
(numbered from 0 to N), so this layer translates between them. It
also manages free space and tracks unallocated blocks.
• Basic file system – It issues general commands to the device
driver to read and write physical blocks on the disk. It manages
the memory buffers and caches. A block in the buffer can hold the
contents of a disk block, and the cache stores frequently used file
system metadata.
• I/O control level – Device drivers act as an interface between
devices and the OS; they help to transfer data between the disk
and main memory. This level takes a block number as input and
outputs low-level, hardware-specific instructions.

File Organization

• It refers to the logical structuring of the records as determined
by the way in which they are accessed.
• In choosing a file organization, several criteria are important:
1) Short access time
2) Ease of update
3) Economy of storage
4) Simple maintenance
5) Reliability
• Important file organization techniques include:-
1) Sequential
2) Direct or Hashed
3) Indexed
4) Indexed Sequential
• Sequential File Organization
• The easiest method of file organization is the sequential method.
In this method the files are stored one after another in a
sequential manner.
• There are two ways to implement this method:
1) Pile File Method – This method is quite simple: we
store the records in a sequence, i.e., one after
another in the order in which they are inserted into the
table.

Insertion of a new record: Let R1, R3, R5 and R4 be four records
in the sequence. Here, records are nothing but rows in a table.
Suppose a new record R2 has to be inserted in the sequence; then
it is simply placed at the end of the file.

2) Sorted File Method – In this method, as the name
suggests, whenever a new record has to be inserted, it is
always inserted in a sorted (ascending or descending)
manner. Sorting of records may be based on a primary
key or any other key.
Insertion of a new record – Let us assume that there is a
pre-existing sorted sequence of four records R1, R3, R7
and R8. Suppose a new record R2 has to be inserted in the
sequence; then it will be inserted at the end of the file and
the sequence will then be sorted.
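
As a small illustration of the sorted file method, the Python sketch below appends a new record at the end and then re-sorts the sequence on the record key (an in-memory list stands in for the file; the records are illustrative).

```python
# A tiny sketch of the sorted file method: a new record is appended at the end
# of the file and the sequence is then re-sorted on its key.
def insert_sorted(records, new_record, key=lambda r: r[0]):
    records.append(new_record)       # place the new record at the end of the file
    records.sort(key=key)            # then sort the sequence on the record key
    return records

# Records keyed by the first field, e.g. a primary key such as an employee number.
file_records = [(1, "R1"), (3, "R3"), (7, "R7"), (8, "R8")]
print(insert_sorted(file_records, (2, "R2")))
```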

• Direct or Hashed File Organization

• Direct file organization is also known as hash file organization.
• A hash function is calculated in this approach for storing the
records – it provides us with the address of the block that
stores the record.
• Any mathematical function can be used as the hash function. It
can be straightforward.
• Hash file organization uses the computation of the hash
function on some fields of a record.
• The output of the hash function gives the position of the disk
block where the record will be stored.
• When a record is requested using the hash key columns, an
address is generated, and the entire record is fetched using that
address.
• When a new record needs to be inserted, the hash key is used to
generate the address, and the record is then directly placed there.
• In the case of removing and updating, the same procedure is
followed.
• There is no effort involved in searching and categorising the full
file using this method.
• Each record is placed at an effectively random location (the block
given by the hash function) using this procedure. Therefore it is
also known as Random File Organization.
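
A minimal Python sketch of direct (hashed) organization is given below: a simple modulo hash of the record key picks the block in which a record is stored and later looked up. The block count, keys and records are illustrative assumptions.

```python
# Minimal sketch of direct (hashed) file organization: a simple hash of the
# record key chooses the disk block in which the record is stored.
NUM_BLOCKS = 8
blocks = [[] for _ in range(NUM_BLOCKS)]   # each block holds the records hashed to it

def hash_key(key):
    return key % NUM_BLOCKS                # a straightforward mathematical hash function

def insert(key, record):
    blocks[hash_key(key)].append((key, record))   # record placed directly at the hashed block

def lookup(key):
    # The same hash computation gives the block address, so only one block is searched.
    for k, record in blocks[hash_key(key)]:
        if k == key:
            return record
    return None

insert(1001, "employee A")
insert(1009, "employee B")          # 1009 % 8 == 1: lands in the same block as 1001
print(lookup(1009))                 # -> "employee B"
```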

• Indexed File Organization

• An indexed file contains records ordered by a record key. A
record key uniquely identifies a record and determines the
sequence in which it is accessed with respect to other records.
• Each record contains a field that contains the record key. A
record key for a record might be, for example, an employee
number or an invoice number.
• An indexed file can also use alternate indexes, that is, record
keys that let you access the file using a different logical
arrangement of the records. For example, you could access a
file through employee department rather than through
employee number.
• The possible record transmission (access) modes for indexed
files are sequential or random.

• Indexed Sequential File Organization

• This method is an advanced sequential file organization.
• In this method, records are stored in the file using the primary
key. An index value is generated for each primary key and
mapped with the record. This index contains the address of
the record in the file.
• That is, it consists of two parts –
• Data File: contains records in a sequential scheme.
• Index File: contains the primary key and its address in
the data file.
• This method allows both sequential and indexed access.
• Records can be read in sequential order just like in sequential
file organization.
• Records can be accessed randomly if the primary key is
known. The index file is used to get the address of a record and
then the record is fetched from the data file.
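
The two-part structure (data file plus index file) can be sketched as follows in Python; the records are illustrative, and the "address" is simply a position in an in-memory list standing in for the data file.

```python
# Minimal sketch of indexed sequential organization: a data file holds records
# in primary-key order, and a separate index maps each key to its address.
data_file = [(10, "record A"), (20, "record B"), (30, "record C")]   # sequential by key
index_file = {key: pos for pos, (key, _) in enumerate(data_file)}    # key -> address

def sequential_read():
    # Sequential access: read the data file in order, as in sequential organization.
    for key, value in data_file:
        print(key, value)

def random_read(key):
    # Indexed (random) access: the index gives the address, then the record is fetched.
    pos = index_file.get(key)
    return None if pos is None else data_file[pos]

sequential_read()
print(random_read(20))   # -> (20, "record B")
```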

File Allocation

• Whenever a hard disk is formatted, it is divided into many small
areas called blocks or sectors that are used to store any kind of
file.
• The allocation methods define how the files are stored in the
disk blocks.
• The main idea behind these methods is to provide:
• Efficient disk space utilization.
• Fast access to the file blocks.
• There are three main disk space or file allocation methods:-
1) Contiguous Allocation
2) Chained or Linked Non-Contiguous Allocation
3) Indexed Non-Contiguous Allocation
• Contiguous Allocation
• In this scheme, each file occupies a contiguous set of blocks on
the disk.
• For example, if a file requires n blocks and is given a block b as
the starting location, then the blocks assigned to the file will
be: b, b+1, b+2, ……, b+n-1. This means that given the starting
block address and the length of the file (in terms of blocks
required), we can determine the blocks occupied by the file.
• The directory entry for a file with contiguous allocation
contains:-
• Address of the starting block
• Length of the allocated portion
• Consider the following example:-

The file ‘mail’ in this figure starts from block 19 with
length = 6 blocks. Therefore, it occupies blocks 19, 20, 21, 22, 23
and 24.
• Advantages:
1) Both Sequential and Direct Access are supported by
this method. For direct access, the address of the kth block of a
file which starts at block b can easily be obtained as (b+k).
2) This is extremely fast since the number of seeks is
minimal because of the contiguous allocation of file blocks.
• Disadvantages:
1) This method suffers from both internal and external
fragmentation. This makes it inefficient in terms of
memory utilization.
2) Increasing the file size is difficult because it depends on the
availability of contiguous memory at a particular instant.
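
A small Python sketch of contiguous allocation is shown below: the directory entry stores only the starting block and the length, and the kth block of a file is found by simple arithmetic (the disk size and the sample file are illustrative).

```python
# Minimal sketch of contiguous allocation: the directory entry stores only the
# starting block and the length, and the k-th block is found by arithmetic.
disk = [None] * 32                      # each slot represents one disk block

def allocate_contiguous(directory, name, start, length):
    if any(disk[b] is not None for b in range(start, start + length)):
        raise ValueError("blocks not free")            # contiguous run is required
    for b in range(start, start + length):
        disk[b] = name                                 # blocks b, b+1, ..., b+length-1
    directory[name] = (start, length)                  # directory entry: (start, length)

def kth_block(directory, name, k):
    start, length = directory[name]
    if k >= length:
        raise IndexError("beyond end of file")
    return start + k                                   # direct access: address = b + k

directory = {}
allocate_contiguous(directory, "mail", 19, 6)          # occupies blocks 19..24
print(kth_block(directory, "mail", 3))                 # -> 22
```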
• Chained or Linked Non-Contiguous Allocation
• In this scheme, each file is a linked list of disk blocks which
need not be contiguous. The disk blocks can be scattered
anywhere on the disk.
• The directory entry contains a pointer to the starting and the
ending file block. Each block contains a pointer to the next
block occupied by the file.
• Consider the following example:

• Advantages:
1) There is no external fragmentation.
2) The directory entry just needs the address of the starting block.
3) The memory is not needed in contiguous form, so it is more
flexible than contiguous file allocation.
• Disadvantages:
1) It does not support random access or direct access.
2) If a pointer is lost or damaged, the blocks it links to become
inaccessible.
3) Extra space is required for the pointer in each block.
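
The linked scheme can be sketched as follows in Python: each block stores its data plus a pointer to the next block, and reading a file means following the chain from the starting block (the block numbers and data are illustrative).

```python
# Minimal sketch of linked (chained) allocation: each block stores its data plus a
# pointer to the next block of the file; the directory keeps the start and end block.
disk = {}        # block number -> (data, next_block); None marks the end of the file

def allocate_linked(directory, name, block_numbers, data_chunks):
    for i, b in enumerate(block_numbers):
        nxt = block_numbers[i + 1] if i + 1 < len(block_numbers) else None
        disk[b] = (data_chunks[i], nxt)                # this block points to the next block
    directory[name] = (block_numbers[0], block_numbers[-1])   # (start block, end block)

def read_file(directory, name):
    block = directory[name][0]
    chunks = []
    while block is not None:                           # follow the chain of pointers
        data, block = disk[block]
        chunks.append(data)
    return "".join(chunks)

directory = {}
allocate_linked(directory, "file_a", [9, 16, 1, 25], ["da", "ta", " in", " file_a"])
print(read_file(directory, "file_a"))                  # sequential access only
```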
• Indexed Non-Contiguous Allocation
• Indexed file allocation is somewhat similar to linked file
allocation: indexed allocation also uses pointers, but the
difference is that here all the pointers are put together into one
location, which is called the index block.
• That means we get all the locations of a file's blocks from one
index block.
• In the linked allocation method, the blocks and pointers were
spread over the disk, and retrieval was accomplished by visiting
each block sequentially. Here in indexed allocation, retrieval
becomes easier with the index block.
• Consider the following example:

• Advantages:
1) It reduces the possibility of external fragmentation.
2) Rather than accessing blocks sequentially, it supports direct
access to any block.
• Disadvantages:
1) There is more pointer overhead.
2) If we lose the index block, we cannot access the complete
file.
3) It is wasteful for small files, since a whole index block is
needed even for a tiny file.
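
A minimal Python sketch of indexed allocation is given below: the directory points to an index block, the index block lists all the data blocks of the file, and any block can be reached directly through the index (block numbers and data are illustrative).

```python
# Minimal sketch of indexed allocation: the directory points to an index block,
# and the index block lists every data block of the file.
disk = {}            # block number -> contents (data string, or an index: list of block numbers)

def allocate_indexed(directory, name, index_block, data_blocks, chunks):
    disk[index_block] = list(data_blocks)              # index block holds all the pointers
    for b, chunk in zip(data_blocks, chunks):
        disk[b] = chunk
    directory[name] = index_block                      # directory entry: index block only

def read_kth_block(directory, name, k):
    index = disk[directory[name]]
    return disk[index[k]]                              # direct access via the index block

directory = {}
allocate_indexed(directory, "report", 19, [9, 16, 1, 10], ["p1", "p2", "p3", "p4"])
print(read_kth_block(directory, "report", 2))          # -> "p3"
```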

Free-Space Management

• The operating system contains system software that manipulates
and keeps track of free space in order to allocate and de-allocate
disk blocks to files; this is part of the file management system of
the operating system. The operating system maintains a free-space
list that keeps a record of the free blocks.

• Bit Vector (Bitmap)

• A bit vector is the most frequently used method to implement the
free-space list. A bit vector is also known as a bitmap.
• It is a series or collection of bits in which each bit represents a
disk block. Each bit takes the value 1 or 0.
• If a block's bit is 1, the block is free, and if the bit is 0, the block
is not free; it is allocated to some file.
• Since all the blocks are empty initially, each bit in the bit
vector is initially set to 1.
• Example: Given below is a diagrammatic representation of a
disk with 16 blocks, some free and some occupied. The upper
part shows the block numbers. Free blocks are represented by 1
and occupied blocks are represented by 0.

• Advantages:-
1) Simple and easy to understand.
2) Consumes less memory.
• Disadvantages:-
1) The operating system may have to go through all the blocks until
it finds a free block (a block whose bit is '1').
2) It is not efficient when the disk size is large.
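
A tiny Python sketch of a bit vector is shown below: bit i is 1 when block i is free and 0 when it is allocated, and allocation scans for the first set bit (the 16-block disk is an illustrative assumption).

```python
# Minimal sketch of a bit-vector (bitmap) free-space list: bit i is 1 if block i
# is free and 0 if it is allocated.
bitmap = [1] * 16                     # initially every block is free

def allocate_block():
    # Scan the bit vector until the first free block (bit == 1) is found.
    for i, bit in enumerate(bitmap):
        if bit == 1:
            bitmap[i] = 0             # mark the block as allocated
            return i
    return None                       # no free block available

def free_block(i):
    bitmap[i] = 1                     # mark the block as free again

print(allocate_block())               # -> 0
print(allocate_block())               # -> 1
free_block(0)
print(allocate_block())               # -> 0 (the freed block is reused)
```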
• Linked List
• A linked list is another approach to free-space management in
an operating system.
• In it, all the free blocks on a disk are linked together in
a linked list.
• These free blocks are linked together by pointers: each free block
contains the address of the next free block, and the pointer in the
last block is null, which indicates the end of the linked list.
• This technique is not efficient to traverse, because we have
to read each disk block one by one, which requires I/O time.
• Example:

• Advantages:-
1) In this method, available space is used efficiently.
2) As there is no size limit on a linked list, new free space
can be added easily.
• Disadvantages:-
1) In this method, there is the overhead of maintaining the pointers.
2) The linked list is not efficient when we need to reach
every block of memory.
• Grouping
• The grouping technique is also called the "modification of the
linked list technique".
• In this method, the first free block stores the addresses of n free
blocks. The last of these n free blocks contains the addresses of
the next n free blocks of memory, and this keeps going on.
• This technique separates the empty and occupied blocks of
space in memory.

• Advantages:-
1) Using this method, we can find the addresses of a
large number of free blocks easily and quickly.
• Disadvantages:-
1) We need to change the entire list if one block gets occupied.

• Counting
• In memory space, several files are created and deleted at the
same time, for which disk blocks are allocated and de-allocated.
Creation of files occupies free blocks and deletion of files frees
blocks.
• Each entry in the free-space list consists of two parameters: the
"address of the first free disk block (a pointer)" and "a number
'n'" of contiguous free blocks that follow it.
• Example:

When the counting technique is applied, block number 3
represents that block 3 is the first free block, and the entry also
stores the number of free blocks, i.e. there are 4 free blocks
together. In the same way, block number 9 represents that block 9
is the first free block of the next run and keeps the count of the
remaining free blocks, i.e. there are 6 free blocks together.
• Advantages:-
1) In this method, a group of contiguous free blocks can be found
quickly.
2) The list is smaller in size.
• Disadvantages:-
1) In the counting method, each entry has to store a count in
addition to the pointer, so it requires more space per entry.
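
A small Python sketch of the counting technique is given below: the free-space list holds (first free block, count) pairs, and the two runs used mirror the example above (4 free blocks starting at block 3 and 6 starting at block 9).

```python
# Minimal sketch of the counting technique: the free-space list stores
# (address of first free block, number of contiguous free blocks) pairs.
free_list = [(3, 4), (9, 6)]           # blocks 3..6 and 9..14 are free

def allocate(n):
    # Take n blocks from the first run that is large enough.
    for i, (start, count) in enumerate(free_list):
        if count >= n:
            if count == n:
                free_list.pop(i)                       # run fully consumed
            else:
                free_list[i] = (start + n, count - n)  # shrink the run
            return list(range(start, start + n))
    return None                                        # no run is big enough

print(allocate(3))      # -> [3, 4, 5]
print(free_list)        # -> [(6, 1), (9, 6)]
```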
