
UNIT-4

MEMORY MANAGEMENT
BY:

Dr. R. C. Tripathi
Professor, FOECS, TMU &
Chairman CSI-NOIDA CHAPTER
Mobile: +91-9868257510,
Email: tripathi.computers@tmu.ac.in
visit us at: http://tmu.ac.in/college-of-computing-sciences-and-it/about/our-faculty/
http://www.csi-india.org/web/noida-chapter/management-committee



Lecture Highlights
 Introduction to Memory Management
 What is memory management
 Related Problems of Redundancy, Fragmentation
and Synchronization
 Memory Placement Algorithms
 Continuous Memory Allocation Scheme
 Parameters Involved
 Parameter-Performance Relationships
 Some Sample Results
Goals of Memory Management

 Allocate available memory efficiently to
multiple processes.
 Main functions
 Allocate memory to processes when
needed.
 Keep track of what memory is used and what is
free
 Protect one process’s memory from another
 https://www.youtube.com/watch?v=wteUW2sL7bc
Introduction
What is memory management
 Memory management primarily deals with space
multiplexing.
 All the processes need to be scheduled in such a way that all
the users get the illusion that their processes reside in
RAM.
 The job of the memory manager:
 keep track of which parts of memory are in use and which parts are not
in use
 to allocate memory to processes when they need it and deallocate it
when they are done
 to manage swapping between main memory and disk when main
memory is not big enough to hold all the processes.
Fragmentation
 External Fragmentation: total memory space exists to
satisfy a request, but it is not contiguous. In the present
context, we are concerned with external fragmentation and
shall explore it in greater detail in the slides below.
 Internal Fragmentation: allocated memory may be larger
than requested memory; this size difference is memory
internal to a partition, but not being used
 Reduce external fragmentation by compaction
 Shuffle memory contents to place all free memory
together in one large block
https://www.youtube.com/watch?v=AtRIOUZuI2c
Managing Virtual Memory
 Usually, only part of the program needs to be in memory
for execution.
 Allow logical address space to be larger than physical
memory size.
 Bring only what is needed in memory when it is needed.
 Virtual memory implementation
 Demand paging
 Demand segmentation
 https://www.youtube.com/watch?v=qdkxXygc3rE
What is memory management
Visual Representation
[Figure: main memory (operating system plus user space)
alongside the hard disk. Process P1 is swapped out of user
space to the disk; process P2 is swapped in from the disk to
user space.]
https://www.youtube.com/watch?v=AJwONVSOAeE
Thrashing
 If a process does not have “enough” pages, the
page-fault rate is very high. This leads to:
 low CPU utilization.
 operating system thinks that it needs to increase the
degree of multiprogramming.
 another process added to the system.
 Adds to the problem, as even fewer page frames are
available for each process.
 Thrashing ⇒ a process is busy swapping pages in
and out
 https://www.youtube.com/watch?v=IyWaK8pbN6A
Thrashing
 Thrashing in computing is an issue caused when virtual
memory is in use. It occurs when the virtual memory of a
computer is rapidly exchanging pages with the hard disk,
to the exclusion of most application-level processing.
 As main memory fills up, additional pages need to
be swapped in and out of virtual memory. The swapping
causes a very high rate of hard disk access. Thrashing can
continue for a long duration until the underlying issue is
addressed, and can potentially result in total
collapse of the hard drive of the computer.
Memory Allocation
 Contiguous Allocation
 Each process allocated a single contiguous
chunk of memory
 Non-contiguous Allocation
 Parts of a process can be allocated
non-contiguous chunks of memory

In this part, we assume that the entire process
needs to be in memory for it to run
Contiguous Allocation
 Fixed Partition Scheme
 Memory broken up into fixed size partitions
 But the size of two partitions may be different
 Each partition can have exactly one process
 When a process arrives, allocate it a free partition
 Can apply different policy to choose a
partition
 Easy to manage
 Problems:
 Maximum size of process bound by max. partition
size
 Large internal fragmentation possible
Contiguous Allocation (Cont.)
 Variable Partition Scheme (Example-1)
 Hole – block of available memory; holes of various
size are scattered throughout memory
 When a process arrives, it is allocated memory from a
hole large enough to accommodate it
 Operating system maintains information about:
a) allocated partitions b) free partitions
(hole)
[Figure: four successive memory snapshots. Each has the OS
at the bottom, then process 5, with process 2 at the top.
(1) Process 8 sits between processes 5 and 2. (2) Process 8
terminates, leaving a hole. (3) Process 9 is allocated part
of the hole. (4) Process 10 is allocated from the remaining
hole.]

Memory Management
Example-2
 This example illustrates the basic concept of
memory management. We consider a "mickey
mouse" (toy) system where:
 Memory Size: 16MB
 Transfer Rate: 2MB/ms
 RR Time Quantum: 2ms
 We'll use the process mix on the next slide
and follow the RAM configuration before and
after each time slot, as well as the action taking
place during the time slot, for five time slots.
Memory Management
An Example – The Process Mix
Process ID   Execution Time (ms)   Size (MB)   Transfer time needed (ms)
P1                4                    2                  1
P2                2                    6                  3
P3                6                    4                  2
P4                8                    4                  2
P5                2                    2                  1
P6               10                    4                  2
P7                2                    2                  1
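As a cross-check on the last column, the transfer times follow directly from the 2MB/ms transfer rate. A minimal sketch (the dictionary and function names are illustrative, not part of the original example):

```python
# Transfer time = process size (MB) / transfer rate (2 MB/ms),
# using the parameters given on the previous slide.
TRANSFER_RATE_MB_PER_MS = 2

# process id -> size in MB, taken from the process-mix table
sizes = {"P1": 2, "P2": 6, "P3": 4, "P4": 4, "P5": 2, "P6": 4, "P7": 2}

def transfer_time_ms(size_mb, rate=TRANSFER_RATE_MB_PER_MS):
    """Time needed to spool a process in or out of main memory."""
    return size_mb / rate

for pid, size in sizes.items():
    print(pid, transfer_time_ms(size), "ms")
```

Each printed value matches the "Transfer time needed" column above.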
Memory Management
An Example – Time Slot 1
Time Slot 1
 Before: P1 (4ms), P2 (2ms), P3 (6ms), P4 (8ms)
 During the slot: P1 executes
 After: P1 (2ms), P2 (2ms), P3 (6ms), P4 (8ms)


Memory Management
An Example – Time Slot 2
Time Slot 2
 Before: P1 (2ms), P2 (2ms), P3 (6ms), P4 (8ms)
 During the slot: P1 spooled out (1ms); P5 spooled in
(1ms); P2 executes; P2 done
 After: P5 (2ms), P2 (0ms), P3 (6ms), P4 (8ms)


Memory Management
An Example – Time Slot 3
Time Slot 3
 Before: P5 (2ms), P2 (0ms), P3 (6ms), P4 (8ms)
 During the slot: P2 spool-out begins (2ms of the 3ms
needed); P3 executes
 After: P5 (2ms), P2 (0ms), P3 (4ms), P4 (8ms)
RAM Configuration RAM Configuration


Memory Management
An Example – Time Slot 4
Time Slot 4
 Before: P5 (2ms), P2 (0ms), P3 (4ms), P4 (8ms)
 During the slot: P2 spool-out completes (1ms); P6
spool-in begins (1ms of the 2ms needed); P4 executes
 After: P5 (2ms), P6 (10ms), 2MB hole, P3 (4ms), P4 (6ms)
RAM Configuration RAM Configuration


Memory Management
An Example – Time Slot 5
Time Slot 5
 Before: P5 (2ms), P6 (10ms), 2MB hole, P3 (4ms), P4 (6ms)
 During the slot: P6 spool-in completes (1ms); P7 spooled
in (1ms, into the 2MB hole); P5 executes; P5 done
 After: P5 (0ms), P6 (10ms), P7 (2ms), P3 (4ms), P4 (6ms)
RAM Configuration RAM Configuration


Memory Management
An Example
 The previous slides presented a stepwise
walk-through of the first five time slots of the
mickey mouse system.
 Try to complete the walk-through from
this point on.
Memory Placement Algorithms
 Simulations have shown that both first-fit and
best-fit are better than worst-fit in terms of
both speed and storage utilization.
 Neither first-fit nor best-fit is clearly better in
terms of storage utilization, but first-fit is usually
faster.
Continuous Memory Allocation Scheme
 The continuous memory allocation scheme entails
loading of processes into memory in a sequential
order.
 When a process is removed from main memory, a
new process is loaded if there is a hole big
enough to hold it.
 This algorithm is easy to implement, however, it
suffers from the drawback of external
fragmentation. Compaction, consequently,
becomes an inevitable part of the scheme.
Continuous Memory Allocation Scheme
Parameters Involved
 Memory size
 RAM access time
 Disc access time
 Compaction thresholds
 Memory hole-size threshold
 Total hole percentage
 Memory placement algorithms
 Round robin time slot
Continuous Memory Allocation Scheme
Effect of Memory Size
 As anticipated, the greater the amount of
memory available, the higher the
system performance.
Continuous Memory Allocation Scheme
Effect of RAM and disc access times
 RAM access time and disc access time
together define the transfer rate in a system.
 A higher transfer rate means it takes less time
to move processes between main memory and
secondary memory, thus
increasing the efficiency of the operating
system.
 Since compaction involves accessing the
entire RAM twice, a lower RAM access time
will translate to lower compaction times.
Continuous Memory Allocation Scheme
Effect of Compaction Thresholds
 Optimal values of hole size threshold largely depend
on the size of the processes since it is these
processes that have to be fit in the holes.
 Thresholds that lead to frequent compaction can
bring down performance at an accelerating rate since
compaction is quite expensive in terms of time.
 Threshold values also play a key role in determining
state of fragmentation present.
 Its effect on system performance is not very
straightforward and has seldom been the focus of
studies in this field.
Generation of Holes In A System
An Example
Figure: P5 of size 500K cannot be allocated in part (c)
(a) The OS occupies 0-400K, P1 400-1000K, P2 1000-2000K,
and P3 2000-2300K; the 260K from 2300K to 2560K is free.
(b) P2 terminates, leaving a 1000K hole from 1000K to 2000K.
(c) P4 (700K) is allocated at 1000-1700K, leaving holes of
300K (1700-2000K) and 260K (2300-2560K).

Generation of Holes In A System
An Example
 In the previous figure, we see that initially
P1, P2, and P3 are in the RAM, and the remaining 260K is not
enough for P4 (700K). (part a)
 When P2 terminates, it is spooled out, leaving behind a
hole of size 1000K. So now we have two holes, of sizes
1000K and 260K respectively. (part b)
 At this point, we have a hole big enough to spool in P4,
which leaves us with two holes, of sizes 300K and 260K.
(part c)
Thus, we see that holes are generated because the size
of the spooled-out process is not the same as the
size of the process waiting to be spooled in.
Related Problems
Fragmentation – External Fragmentation
 External fragmentation exists when
enough total memory space exists to satisfy
a request, but it is not contiguous; storage is
fragmented into a large number of small
holes.
 Referring to the figure of the scheduling
example on the next slide, two such cases
can be observed.
Related Problems
Fragmentation – External Fragmentation
Figure: P5 of size 500K cannot be allocated due to external
fragmentation
(a) The OS occupies 0-400K, P1 400-1000K, P2 1000-2000K,
and P3 2000-2300K; the 260K from 2300K to 2560K is free.
(b) P2 terminates, leaving a 1000K hole.
(c) P4 (700K) is allocated at 1000-1700K, leaving holes of
300K and 260K; neither is contiguous space enough for P5.


Related Problems
Fragmentation – External Fragmentation
From the figure on the last slide, we see
 In part (a), there is a total external fragmentation of
260K, a space that is too small to satisfy the requests of
either of the two remaining processes, P4 and P5.
 In part (c), however, there is a total external
fragmentation of 560K. This space would be large
enough to run process P5, except that this free memory
is not contiguous. It is fragmented into two pieces,
neither one of which is large enough, by itself, to
satisfy the memory request of process P5.
Related Problems
Fragmentation – External Fragmentation
This fragmentation problem can be severe.
In the worst case, there could be a block of
free (wasted) memory between every two
processes. If all this memory were in one
big free block, a few more processes could
be run. Depending on the total amount of
memory storage and the average process
size, external fragmentation may be either a
minor or major problem.
Related Problems
Fragmentation – External Fragmentation
 One solution to the problem of external fragmentation is
compaction.
 The goal is to shuffle the memory contents to place all free
memory together in one large block.
 The simplest compaction algorithm is to move all processes
toward one end of memory and all holes toward the other,
producing one large hole of available memory.
 This scheme can be quite expensive.
 The figure on the following slide shows different ways to
compact memory.
 Selecting an optimal compaction strategy is quite difficult.
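The simplest "move everything toward one end" algorithm described above can be sketched as follows. The block layout, sizes (in KB), and names are illustrative assumptions, not taken from the slides:

```python
# A minimal sketch of the simplest compaction algorithm: slide every
# allocated block toward address 0, leaving one large hole at the top.

def compact(blocks, memory_size):
    """blocks: list of (name, start, size) for allocated blocks.
    Returns (new_blocks, hole) after packing everything at the low end."""
    new_blocks, next_free = [], 0
    for name, _, size in sorted(blocks, key=lambda b: b[1]):
        new_blocks.append((name, next_free, size))  # copy block downward
        next_free += size
    hole = (next_free, memory_size - next_free)     # one big hole on top
    return new_blocks, hole

# Illustrative layout: OS plus four processes with holes between them.
allocated = [("OS", 0, 300), ("P1", 300, 200), ("P2", 500, 100),
             ("P3", 1000, 200), ("P4", 1500, 400)]
moved, hole = compact(allocated, 2100)
print(moved)  # all blocks packed from address 0 upward
print(hole)   # (1200, 900): one 900K hole
```

Note that this sketch only recomputes addresses; real compaction must also copy every byte of each process, which is exactly what makes the scheme expensive.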
Related Problems
Fragmentation – External Fragmentation
Different Ways To Compact Memory
[Figure: four memory maps of a 2100K memory, each with the OS
at 0-300K, P1 at 300-500K, and P2 at 500-600K.
Original allocation: hole 600-1000K (400K), P3 at 1000-1200K,
hole 1200-1500K (300K), P4 at 1500-1900K, hole 1900-2100K (200K).
Moved 600K: P3 and P4 shifted down to 600-800K and 800-1200K,
producing one 900K hole at 1200-2100K.
Moved 400K: P4 moved into the 400K hole at 600-1000K, P3
unchanged, producing one 900K hole at 1200-2100K.
Moved 200K: P3 moved up to 1900-2100K, P4 unchanged,
producing one 900K hole at 600-1500K.]
Related Problems
Fragmentation – External Fragmentation
 As mentioned earlier, compaction is an expensive
scheme. The following example gives a more concrete
idea of the same.
 Given the following:
 RAM size = 128 MB
 Access speed of 1byte of RAM = 10ns
 Each byte will need to be accessed twice during
compaction. Thus,
 Compaction time = 2 × 10 × 10⁻⁹ × 128 × 10⁶ s
= 2.56 s = 2560 ms ≈ 3 s
 Supposing we are using RR scheduling with a time quantum of
2ms, the compaction time is equivalent to 1280 time slots.
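The same arithmetic, spelled out step by step (taking 128MB as 128 × 10⁶ bytes, as the slide does):

```python
# Compaction cost from the slide's numbers: every byte of the 128 MB
# RAM is accessed twice (read + write), at 10 ns per access.
ram_bytes = 128 * 10**6
accesses = 2 * ram_bytes          # two accesses per byte
time_ns = accesses * 10           # 10 ns per access
time_ms = time_ns / 10**6         # 2560.0 ms, i.e. about 3 s
slots = time_ms / 2               # 1280.0 RR time slots of 2 ms each
print(time_ms, "ms =", slots, "time slots")
```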
Related Problems
Fragmentation – External Fragmentation
Compaction is usually defined by the following two
thresholds:
 Memory hole-size threshold: if the sizes of all the holes
are at most a predefined hole size, then the main memory
undergoes compaction. This predefined hole size is termed
the hole-size threshold.
e.g. If we have two holes of size 'x' and size 'y' respectively, and the
hole-size threshold is 4KB, then compaction is done provided x <= 4KB and
y <= 4KB.
 Total hole percentage: the total hole percentage refers to
the percentage of total hole size over memory size. Only if it
exceeds the designated threshold is compaction undertaken.
e.g. Taking the two holes of size 'x' and size 'y' respectively, and a
total hole percentage threshold equal to 6%, then for a RAM size of
32MB, compaction is done only if (x+y) >= 6% of 32MB.
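The two thresholds above can each be sketched as a small predicate. The hole lists, threshold values, and function names here are illustrative assumptions:

```python
# Two compaction triggers, mirroring the two thresholds on this slide.

def small_holes_trigger(hole_sizes, hole_size_threshold):
    """Memory hole-size threshold: compact when every hole is at most
    the predefined size (e.g. x <= 4 KB and y <= 4 KB)."""
    return bool(hole_sizes) and all(h <= hole_size_threshold
                                    for h in hole_sizes)

def total_hole_trigger(hole_sizes, memory_size, pct_threshold):
    """Total hole percentage: compact only when the total hole space
    exceeds the given percentage of memory (e.g. x + y >= 6% of 32 MB)."""
    return sum(hole_sizes) >= pct_threshold / 100 * memory_size

KB, MB = 2**10, 2**20
# Two holes of 3 KB and 4 KB, 4 KB hole-size threshold, 6% of 32 MB:
print(small_holes_trigger([3 * KB, 4 * KB], 4 * KB))     # True
print(total_hole_trigger([3 * KB, 4 * KB], 32 * MB, 6))  # False
```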
Related Problems
Fragmentation – External Fragmentation
 Another possible solution to the external fragmentation
problem is to permit the physical address space of a process to
be noncontiguous, thus allowing a process to be allocated
physical memory wherever the latter is available. One way of
implementing this solution is through the use of a paging
scheme.
 Paging entails division of physical memory into many small
equal-sized frames. Logical memory is also broken into blocks
of the same size called pages. When a process is to be
executed, its pages are loaded into any available memory
frames. On using a paging scheme, external fragmentation can
be eliminated totally.
 Paging is discussed in detail in the next lecture.
Related Problems
Fragmentation – Internal Fragmentation
 Consider a hole of 18,464 bytes, as shown in the figure.
Suppose that the next process requests 18,462 bytes. If we
allocate exactly the requested block, we are left with a hole
of 2 bytes. The overhead to keep track of this hole will be
substantially larger than the hole itself. The general
approach is to allocate very small holes as part of the
larger request.
[Figure: internal fragmentation. Memory containing the
operating system, process P7, a hole of 18,464 bytes, and
process P43; the next request is for 18,462 bytes.]
Related Problems
Fragmentation – Internal Fragmentation
 As illustrated on the previous slide, the allocated
memory may be slightly larger than the requested
memory. The difference between these two
numbers is internal fragmentation – memory
that is internal to a partition but is not being
used.
 In other words, unused memory within allocated
memory is called internal fragmentation.
Keeping Track of Free Partitions
 Bitmap method
 Define some basic fixed allocation unit size
 1 bit maintained for each allocation unit
 0 – unit is free, 1 – unit is allocated
 Bitmap – bitstring of the bits of all allocation units
 To allocate space of size n allocation units, find a
run of n consecutive 0’s in bitmap
 Linked-list method
 Maintain a linked list of free partitions
 Each node contains start address, size,
and pointer to next free block
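The bitmap search described above can be sketched as follows; the allocation-unit granularity, bitmap contents, and function name are illustrative:

```python
# Bitmap method: one bit per allocation unit, 0 = free, 1 = allocated.
# To satisfy a request of n units, find a run of n consecutive zeros.

def find_free_run(bitmap, n):
    """Return the index of the first run of n consecutive free (0)
    units, or -1 if no such run exists."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == n:
                return run_start
        else:
            run_len = 0
    return -1

bitmap = [1, 1, 0, 0, 1, 0, 0, 0, 1]
start = find_free_run(bitmap, 3)
print(start)  # 5: units 5-7 form the first free run of length 3
for i in range(start, start + 3):  # mark the run as allocated
    bitmap[i] = 1
```

The scan is linear in the size of the bitmap, which is one reason the linked-list representation is often preferred for large memories.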
Non-contiguous Allocation
 Paging
 Segmentation &
 Virtual Memory
 Page Table
 Architecture of Segmentation
 Fragmentation
 http://embed.wistia.com/deliveries/e9a0d08771b30ab3c365571af7db8bf6.bin
Paging
 Divide physical memory into fixed-sized (power of 2)
blocks called frames.
 Divide logical memory into blocks of same size called
pages.
 Keep track of all free frames.
 To run a program of size n pages, need to find n free
frames and load program.
 Page table: used to translate logical to physical
addresses.
 One page table per process.

 https://www.youtube.com/watch?v=pJ6qrCB8pDw
Paging: A Better Solution
 Allows processes to use logical/physical
memory as and when needed.
 Allows Swapping in & swapping out of
processes as and when needed.
 Allows multiple processes to reside in
memory at the same time.
Segmentation
 Memory-management scheme that supports
user view of memory.
 A program is a collection of segments. A
segment can be any logical unit.
 code, global variables, heap, stack,…
 Segment sizes may be different.
 https://www.youtube.com/watch?v=p9yZNLeOj4s
Virtual Memory That is Larger
Than Physical Memory
Page Table
 One entry for each page in the logical address
space.
 Contains the base address of the page frame
where the page is stored.
 Also contains a valid bit
 If set, logical address is valid and has physical
memory allocated to it
 If not set, logical address is invalid
 https://www.youtube.com/watch?v=7VshrpZM-DA
Demand Paging
 Bring a page into memory only when it is
needed (on demand)
 Less I/O needed to start a process
 Less memory needed
 Faster response
 More users
 Page is needed ⇒ reference to it
 invalid reference ⇒ abort
 not-in-memory ⇒ bring to memory
 https://www.youtube.com/watch?v=dOVrEbZVeoU
Page Replacement Algorithm
A page replacement algorithm decides which
memory pages to page out (sometimes called swapping
out, or writing to disk) when a page of memory
needs to be allocated.
Page replacement happens when a requested page
is not in memory.
http://embed.wistia.com/deliveries/766783aab52a5d2dc0a46339c11f132c08743ab8.bin
Types of Memory Placement
Algorithms
There are four types of memory placement algorithms.
 First-fit: Allocate the first hole that is
big enough.
 Next-fit: Similar to first-fit, but start from last
hole allocated.
 Best-fit: Allocate the smallest hole that is big
enough; must search entire list, unless ordered by
size. Produces the smallest leftover hole.
 Worst-fit: Allocate the largest hole; must also
search entire list. Produces the largest
leftover hole.
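The first-fit, best-fit, and worst-fit strategies listed above can be sketched as below. The hole list and names are illustrative; next-fit is omitted because it additionally needs to remember where the previous scan stopped:

```python
# Each strategy picks a hole, given free holes as (start, size) pairs.

def first_fit(holes, request):
    """First hole that is big enough, scanning in address order."""
    return next((h for h in sorted(holes) if h[1] >= request), None)

def best_fit(holes, request):
    """Smallest hole that is big enough; leaves the smallest leftover."""
    fits = [h for h in holes if h[1] >= request]
    return min(fits, key=lambda h: h[1], default=None)

def worst_fit(holes, request):
    """Largest hole that is big enough; leaves the largest leftover."""
    fits = [h for h in holes if h[1] >= request]
    return max(fits, key=lambda h: h[1], default=None)

holes = [(100, 50), (300, 200), (600, 120)]  # illustrative (start, size)
print(first_fit(holes, 100))  # (300, 200): first big-enough hole
print(best_fit(holes, 100))   # (600, 120): smallest that fits
print(worst_fit(holes, 100))  # (300, 200): largest hole
```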
https://www.youtube.com/watch?v=EXDWHjnrIYk
Page Fault
 A page fault occurs when a program attempts to
access a block of memory that is not stored in
physical memory, or RAM.
 The fault notifies the operating system that it
must locate the data in virtual memory, then
transfer it from the storage device, such as an
HDD.
Implementation of Page Table
 Page table is kept in main memory.
 Page-table base register (PTBR) points to the page table
 Page-table length register (PTLR) indicates size of the
page table
 In this scheme, every data/instruction access requires
two memory accesses: one for the page table and
one for the data/instruction
 The two-memory-access problem can be solved by the
use of a special fast-lookup hardware cache called a
translation look-aside buffer (TLB)
Address Translation Scheme
 Address generated by CPU is divided into:
 Page number (p) – used as an index into the page table
which contains base address of the corresponding
page frame in physical memory
 Page offset (d) – combined with base address to define the
physical memory address that is sent to the memory unit
 Use page number to index the page table
 Get the page frame start address
 Add offset with that to get the actual physical
memory address
 Access the memory
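The steps above can be sketched end to end. The page size, the page-table contents, and the use of an exception to model the trap to the OS are illustrative assumptions:

```python
# Split a CPU-generated logical address into page number p and offset d,
# index the page table, and combine the frame base with the offset.
PAGE_SIZE = 4096  # bytes; a power of two, so p and d are bit fields

# page_table[p] = (valid_bit, frame_number); illustrative contents.
page_table = {0: (1, 5), 1: (1, 2), 2: (0, None)}

def translate(logical_addr):
    p, d = divmod(logical_addr, PAGE_SIZE)  # page number, page offset
    valid, frame = page_table.get(p, (0, None))
    if not valid:
        raise MemoryError(f"page fault on page {p}")  # trap to the OS
    return frame * PAGE_SIZE + d            # physical memory address

print(translate(4096 + 100))  # page 1 -> frame 2: 2*4096 + 100 = 8292
```

Accessing page 2 (whose valid bit is clear) would raise the illustrative page-fault exception, corresponding to the invalid-bit case on the Page Table slide.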
