IV-SEMESTER
OPERATING SYSTEMS
UNIT-4
Memory management techniques
Syllabus
• Memory management techniques: contiguous and non-contiguous allocation, paging and segmentation, translation lookaside buffer (TLB) and overheads
DTEL 2
INTRODUCTION
Information stored in main memory can be classified in a variety of ways:
• Program (code) and data (variables, constants)
• Read-only (code, constants) and read-write (variables)
• Address (e.g., pointers) or data (other variables); binding (when memory is allocated for the object): static or dynamic
The compiler, linker, loader and runtime libraries all cooperate to manage this information.
Creating executable code
From source to executable code
[Diagram: source files are translated (compiled) and linked into executable code (a load module) held in secondary storage, then loaded into a workspace in main memory.]
Address binding (relocation)
The process of associating program instructions and data (addresses) with physical memory addresses is called address binding, or relocation.
It may take place at:
Compile time: If the memory location is known in advance, the compiler or assembler translates symbolic addresses (e.g., variables) directly to absolute addresses.
Load time: The compiler translates symbolic addresses to relative (relocatable) addresses, and the loader binds them to absolute addresses when the program is loaded. Relocatable code must be generated if the memory location is not known at compile time.
Execution time: Binding is delayed until run time, so the process can be moved during execution; this requires hardware support (e.g., an MMU).
Multistep Processing of a User Program
Logical vs. Physical Address Space
Memory-Management Unit (MMU)
Memory Management schemes
Memory management is the process of regulating and organizing computer memory in order to allocate and deallocate memory space efficiently for the programs and applications that require it.
An important task of a memory management system is to bring (load) programs into main memory for execution.
Memory allocation schemes:
Contiguous memory management schemes
Non-contiguous memory management schemes
Contiguous memory allocation techniques were commonly employed by earlier operating systems*:
• Direct placement
• Overlays
• Partitioning
*Note: Techniques similar to those listed above are still used by some modern, dedicated special-purpose and real-time operating systems.
Direct placement
Overlays
Overlaying is a technique to run a program that is bigger than the physical memory, i.e., to allow large programs to execute (fit) in a smaller memory.
A program is organized (by the user) into a tree-like structure of object modules, called overlays. The root overlay is always loaded into memory; the program is divided into modules in such a way that not all modules need to be in memory at the same time.
[Diagram: an overlay tree and memory snapshots showing different overlays resident above the operating system at different times.]
Question –
The overlay tree for a program is as shown below (the overlay sizes are listed in the solution):
What will be the size of the partition (in physical memory) required to load (and run) this program?
(a) 12 KB (b) 14 KB (c) 10 KB (d) 8 KB
Solution –
Using the overlay concept, we need not keep the entire program in main memory; we only need the parts required at that instant, i.e., either Root-A-D, Root-A-E, Root-B-F, or Root-C-G.
Root+A+D = 2 KB + 4 KB + 6 KB = 12 KB
Root+A+E = 2 KB + 4 KB + 8 KB = 14 KB
Root+B+F = 2 KB + 6 KB + 2 KB = 10 KB
Root+C+G = 2 KB + 8 KB + 4 KB = 14 KB
The partition must be large enough for the largest of these combinations, so the required partition size is 14 KB, option (b).
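The reasoning above amounts to finding the largest root-to-leaf path of the overlay tree, since only one path is resident at a time. A minimal sketch in Python, with the tree layout and sizes taken from the solution above:

```python
# Overlay sizes in KB, as given in the solution above.
sizes = {"Root": 2, "A": 4, "B": 6, "C": 8, "D": 6, "E": 8, "F": 2, "G": 4}
# Overlay tree: children that may replace each other under a parent.
children = {"Root": ["A", "B", "C"], "A": ["D", "E"], "B": ["F"], "C": ["G"]}

def partition_size(node="Root"):
    """Smallest partition that can run the program: the largest
    root-to-leaf size sum, since only one path is resident at a time."""
    kids = children.get(node, [])
    extra = max(partition_size(k) for k in kids) if kids else 0
    return sizes[node] + extra
```

Calling `partition_size()` returns 14, matching option (b).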
Memory management techniques
1. Static partitioning (fixed-size partitioning)
Static partitioning is a memory allocation technique in which the operating system divides physical memory into fixed-size partitions or regions, each assigned to a specific process or user. In this method of contiguous memory allocation, each process is given a fixed-size contiguous block in main memory.
For example, in the diagram below, the memory is divided into five blocks, each of size 4 MB.
Fragmentation
Fragmentation refers to the unused memory that the
memory management system cannot allocate.
• Internal fragmentation
Waste of memory within a partition, caused by the
difference between the size of a partition and the process
loaded. Severe in static partitioning schemes.
• External fragmentation
Waste of memory between partitions, caused by scattered
noncontiguous free space. Severe in dynamic partitioning
schemes.
Compaction is a technique that is used to overcome
external fragmentation.
Internal Fragmentation
When a process is assigned to a memory block and that process is smaller than the block, free space is left inside the assigned block. The difference between the assigned and the requested memory is called internal fragmentation. It arises because memory is usually divided into fixed-size blocks.
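A minimal sketch of this calculation, assuming a fixed-partition scheme with one process per block (the 4 MB block size echoes the static-partitioning example above and is illustrative):

```python
BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB fixed partitions (illustrative value)

def internal_fragmentation(process_sizes):
    """Total memory wasted inside fixed-size blocks, one process per block:
    the gap between each block and the process placed in it."""
    return sum(BLOCK_SIZE - size for size in process_sizes if size <= BLOCK_SIZE)
```

For processes of 1 MB, 3 MB and 4 MB, the waste is 3 MB + 1 MB + 0 = 4 MB in total.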
External fragmentation:
External fragmentation occurs when the portions of free memory are too small to hold any process.
Example
The RAM has a total of 10 KB of free space, but it is not contiguous, i.e., it is fragmented. If a process of size 10 KB wants to load into RAM, it cannot, because the free space is not contiguous.
Parameter: Internal Fragmentation vs. External Fragmentation
Definition – Internal: the difference between the memory space needed and the assigned memory. External: empty spaces among non-contiguous memory blocks that cannot be assigned to any process.
Memory block size – Internal: the memory blocks are of fixed size. External: the memory blocks are of varying sizes.
Occurrence – Internal: occurs when physical memory is divided into contiguous fixed-sized blocks and the memory allocated to a process is larger than the amount requested; the unused allocated space remains and cannot be used by other processes. External: occurs when processes are removed from main memory and the free spaces created are too small to fit a new process.
Solution – Internal: a dynamic partitioning scheme and best-fit block search can reduce internal fragmentation. External: compaction, paging, and segmentation are the solutions.
Multiple-partition allocation
• Hole – a block of available memory; holes of various sizes are scattered throughout memory
• When a process arrives, it is allocated memory from a hole large enough to accommodate it
• The operating system maintains information about: a) allocated partitions b) free partitions (holes)
[Diagram: memory snapshots with the OS at the bottom and processes and holes above.]
Difference between contiguous and non-contiguous allocation
Contiguous allocation: each process is stored in a single contiguous block of memory; address translation is simple (e.g., a base/limit register pair); it suffers from external fragmentation.
Non-contiguous allocation: a process is stored in blocks scattered across memory; it needs a mapping table (page table or segment table) for address translation; it reduces memory wastage.
Strategies Used for Contiguous Memory Allocation (Input Queues)
When a process arrives, the operating system must pick a hole from the set of free holes:
First fit: allocate the first hole that is big enough.
Best fit: allocate the smallest hole that is big enough.
Worst fit: allocate the largest hole.
Solve:
1. Consider five memory partitions of size 100 KB, 500 KB, 200 KB, 450 KB and 600 KB, in that order. If requests for blocks of size 212 KB, 417 KB, 112 KB and 426 KB arrive in that order, which of the following algorithms makes the most efficient use of memory?
A. Best fit algorithm
B. First fit algorithm
C. Worst fit algorithm
Solve:
2. Given five memory partitions of 100 KB, 500 KB, 200 KB, 300 KB and 600 KB (in order), how would the first-fit, best-fit, and worst-fit algorithms place processes of 212 KB, 417 KB, 112 KB, and 426 KB (in order)?
Which algorithm makes the most efficient use of memory?
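The three placement strategies can be sketched in a few lines, assuming partitions behave as holes that shrink as they are allocated (the usual textbook interpretation of this exercise); the function name and structure are my own:

```python
def allocate(holes, requests, strategy):
    """Place each request into a hole using first/best/worst fit.
    Returns the hole index chosen per request, or None if it cannot fit."""
    free = list(holes)                # remaining free space in each hole
    placements = []
    for req in requests:
        fits = [i for i, size in enumerate(free) if size >= req]
        if not fits:
            placements.append(None)   # request must wait
            continue
        if strategy == "first":
            i = fits[0]                              # first hole big enough
        elif strategy == "best":
            i = min(fits, key=lambda j: free[j])     # tightest fit
        else:                                        # "worst"
            i = max(fits, key=lambda j: free[j])     # largest hole
        free[i] -= req
        placements.append(i)
    return placements
```

For the partitions and requests of question 2, best fit places all four processes (212 in 300, 417 in 500, 112 in 200, 426 in 600), while first fit and worst fit leave the 426 KB request waiting; best fit is the most efficient here.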
Dynamic partitioning (variable-size partitioning)
Any number of programs can be loaded into memory as long as there is room for each. When a program is loaded (relocatable loading), it is allocated exactly as much memory as it needs, and the addresses in the program are fixed after loading. The operating system keeps track of each partition (its size and location in memory).
[Diagram: partition allocation at different times, with the operating system resident and partitions A, B, ... created and released.]
Variable-size partitioning:
The memory is divided into blocks of varying sizes; a process is allotted a block of main memory of exactly the size it requires.
For example, a process P1 of size 4 MB comes into memory. After that, another process P2 of size 12 MB comes into memory, and then another process P3 of size 5 MB. In memory, these processes will look like this: [diagram not shown].
Difference between fixed-size partitioning and variable-size partitioning
Partition creation – Fixed: partitions are created before execution and their sizes do not change. Variable: partitions are created at load time, sized exactly to each process.
Fragmentation – Fixed: suffers from internal fragmentation. Variable: suffers from external fragmentation.
Process size limit – Fixed: a process larger than the largest partition cannot be loaded. Variable: a process can be loaded as long as enough contiguous free memory exists.
Contiguous Allocation
A pair of base and limit registers defines the logical address space.
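A minimal sketch of the check and relocation the hardware performs with a base/limit register pair (the register values in the usage example are illustrative):

```python
def relocate(logical_addr, base, limit):
    """Hardware-style relocation: every logical address is checked
    against the limit register, then offset by the base register."""
    if not (0 <= logical_addr < limit):
        raise MemoryError(f"protection fault at logical address {logical_addr}")
    return base + logical_addr
```

With base = 0x3000 and limit = 0x500, logical address 0x100 maps to physical address 0x3100, while 0x600 raises a protection fault.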
Non-Contiguous Allocation
This method allocates memory in different locations to a process based on its needs. Since the available memory is distributed, the free space is also scattered here and there. This memory allocation technique reduces memory wastage, reducing internal and external fragmentation.
1. Paging (storage mechanism)
Paging is a non-contiguous memory management technique that allows the operating system to fetch processes from secondary memory and store them in main memory in the form of pages.
The mapping between logical pages and physical page frames is maintained by the page table, which is used by the memory management unit (MMU) to translate logical addresses into physical addresses.
[Diagram: pages in secondary memory mapped into frames of main memory via the page table.]
Paging
Physical memory is divided into a number of fixed-size blocks, called frames. Logical memory is also divided into chunks of the same size, called pages. The frame/page size is determined by the hardware and can be any value between 512 bytes (VAX) and 16 megabytes (MIPS R10000).
A page table maps each page to the base address of the frame that holds it in main memory.
The major goals of paging are to make memory allocation and swapping easier and to reduce fragmentation. Paging also allows allocation of non-contiguous memory (i.e., pages need not be adjacent).
1. Paging: example
Assuming that main memory is 16 KB and the frame size is 1 KB, main memory is partitioned into a collection of sixteen 1 KB frames. P1, P2, P3, and P4 are the four processes in the system, each 4 KB in size. Each process is divided into 1 KB pages, so that each page can be stored in one frame.
Translating a Logical Address into a Physical Address
Step-01:
The CPU generates a logical address consisting of two parts: a page number and a page offset.
Step-02:
For the page number generated by the CPU,
•Page Table provides the corresponding frame number
(base address of the frame) where that page is stored in the
main memory.
Step-03:
The frame number combined with the page offset forms
the required physical address.
•Frame number specifies the specific frame where the
required page is stored.
•Page Offset specifies the specific word that has to be read
from that page.
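The three steps above can be sketched in a few lines; the 1 KB page size matches the earlier example, and the page-table contents are made up for illustration:

```python
PAGE_SIZE = 1024                       # 1 KB pages, as in the example above
page_table = {0: 5, 1: 2, 2: 7, 3: 0}  # hypothetical mapping: page -> frame

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE    # Step 1: split off the page number
    offset = logical_addr % PAGE_SIZE   #         ...and the page offset
    frame = page_table[page]            # Step 2: page table gives the frame
    return frame * PAGE_SIZE + offset   # Step 3: frame number + offset
```

For instance, logical address 2148 (page 2, offset 100) falls in frame 7, giving physical address 7268.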
2. Segmentation
Segmentation is a method in which the process is divided into parts of variable sizes, called segments, which are placed into main memory.
Segmentation—an example
[Diagram: five segments (Seg 0 to Seg 4) of different sizes in a logical address space, mapped to non-adjacent regions of physical memory.]
Segmentation
A table stores the information about all such segments and is called the segment table.
Segment Table – It maps a two-dimensional logical address (segment number, offset) into a one-dimensional physical address. Each table entry has:
• Base: the starting physical address of the segment in main memory
• Limit: the length of the segment
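Using a segment table of (base, limit) entries, the two-dimensional to one-dimensional mapping can be sketched as follows (the table contents are made up for illustration):

```python
# Hypothetical segment table: segment number -> (base, limit)
segment_table = {0: (0x1000, 0x340), 1: (0x3000, 0xF00), 2: (0x0000, 0x120)}

def seg_translate(segment, offset):
    """Map a (segment, offset) pair to a physical address, trapping if the
    offset exceeds the segment's limit."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError(f"offset {offset:#x} beyond limit of segment {segment}")
    return base + offset
```

For example, (1, 0x200) maps to 0x3200, while an offset of 0xF00 or more in segment 1 traps.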
Pure segmentation means segmentation without paging.
The difference between paging and segmentation is:
1. Memory size – In paging, a process address space is broken into fixed-sized blocks called pages. In segmentation, a process address space is broken into varying-sized blocks called sections (segments).
2. Accountability – The operating system divides the memory into pages. The compiler is responsible for calculating the segment size, the virtual address and the actual address.
3. Size – Page size is determined by the hardware. Section size is determined by the user.
4. Speed – Paging is faster in terms of memory access. Segmentation is slower than paging.
5. Fragmentation – Paging can cause internal fragmentation, as some pages may go underutilized. Segmentation can cause external fragmentation, as some memory blocks may not be used at all.
6. Logical address – In paging, a logical address is divided into a page number and a page offset. In segmentation, a logical address is divided into a section number and a section offset.
7. Data storage – The page table stores the page data. The segmentation table stores the segmentation data.
Swapping: Swapping is a medium-term scheduling method; memory becomes preemptable.
Swapping is a memory management scheme in which any process can be temporarily swapped from main memory to secondary memory so that main memory can be made available for other processes. It is used to improve main memory utilization. In secondary memory, the place where a swapped-out process is stored is called swap space.
A running process may become suspended if it makes an I/O request.
[Diagram: processes in memory are dispatched to run; suspended processes are swapped out to disk and swapped in when resumed.]
Swapping is subdivided into two operations: swap-in and swap-out.
• Swap-in moves a program from the hard disk into main memory (RAM).
• Swap-out moves a process from RAM to the hard disk.
Memory protection
Translation Lookaside Buffer (TLB) in Paging
The Translation Lookaside Buffer (TLB) is a special cache used to keep track of recently used address translations. The TLB contains the page table entries that have been used most recently.
Given a virtual address, the processor examines the TLB; if the entry is present (a TLB hit), the frame number is retrieved directly and combined with the offset to form the physical address.
Steps in a TLB hit:
1. The CPU generates a virtual (logical) address.
2. The page number is looked up in the TLB.
3. The corresponding frame number is found (a hit), and combined with the page offset it gives the physical address.
Steps in a TLB miss:
1. The CPU generates a virtual (logical) address.
2. The page number is looked up in the TLB but is not found (a miss).
3. The page table in main memory is consulted to obtain the frame number.
4. The frame number combined with the page offset gives the physical address.
5. The TLB is updated with the new PTE (if there is no space, a replacement policy such as FIFO, LRU or MFU is applied).
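The miss path, including step 5's replacement, can be sketched with a small FIFO-replacement TLB model (a simplification for illustration; real TLBs are hardware associative caches, and the class and reference string below are my own):

```python
from collections import OrderedDict

class TLB:
    """Toy TLB with FIFO replacement (one of the policies named in step 5)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # page -> frame, in insertion order
        self.hits = self.misses = 0

    def lookup(self, page, page_table):
        if page in self.entries:       # TLB hit: frame found in the cache
            self.hits += 1
            return self.entries[page]
        self.misses += 1               # TLB miss: walk the page table
        frame = page_table[page]
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict oldest entry (FIFO)
        self.entries[page] = frame
        return frame
```

Feeding the page reference string 0, 1, 0, 2, 3, 0 through a 2-entry TLB yields 1 hit and 5 misses.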
Effective memory access time (EMAT): the TLB is used to reduce the effective memory access time, since it is a high-speed associative cache.
EMAT = h*(c + m) + (1 - h)*(c + 2m)
where h = hit ratio of the TLB, m = memory access time, c = TLB access time
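The formula translates directly into code; the example numbers (h = 0.8, m = 100 ns, c = 20 ns) are illustrative, not from the source:

```python
def emat(h, m, c):
    """Effective memory access time: a hit costs one TLB lookup plus one
    memory access; a miss adds a second memory access for the page table."""
    return h * (c + m) + (1 - h) * (c + 2 * m)
```

With h = 0.8, m = 100 and c = 20 (all in nanoseconds), EMAT = 0.8*120 + 0.2*220 = 140 ns.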
THANK YOU