
Memory Management

Memory Management

Memory management is the functionality of an operating
system which handles or manages primary memory.
Memory management keeps track of each and every
memory location, whether it is allocated to some process
or free, and it decides how much memory is to be
allocated to each process.

Background
A program must be brought into memory and
placed within a process for it to be run.
Input queue – the collection of processes on the
disk that are waiting to be brought into
memory to run the program.
User programs go through several steps before
being run.

Binding of Instructions and Data to Memory

Address binding of instructions and data to memory addresses can
happen at three different stages.

 Compile time: If the memory location is known a priori, absolute code
can be generated; the code must be recompiled if the starting location
changes.

 Load time: Relocatable code must be generated if the memory location
is not known at compile time.

 Execution time: Binding is delayed until run time if the process can be
moved during its execution from one memory segment to another.
Hardware support is needed for address maps (e.g., base and limit
registers).

Logical vs. Physical Address Space

 The concept of a logical address space that is
bound to a separate physical address space is
central to proper memory management.
 Logical address – generated by the CPU; also
referred to as a virtual address.
 Physical address – the address seen by the memory unit.

 Logical and physical addresses are the same in the
compile-time and load-time address-binding schemes;
logical (virtual) and physical addresses differ in the
execution-time address-binding scheme.
Memory-Management Unit (MMU)

A memory management unit (MMU) is a computer
hardware component that handles all memory and
caching operations associated with the processor. In
other words, the MMU is responsible for all aspects
of memory management. It is usually integrated into
the processor, although in some systems it occupies a
separate IC (integrated circuit) chip.
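
The address translation done by a simple MMU can be illustrated with a
relocation (base) register and a limit register. The following C sketch is
not taken from the slides; the register values and the translate() function
are illustrative assumptions.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Illustrative relocation-register scheme: every logical address generated
   by the CPU is checked against the limit register and then added to the
   relocation (base) register to form the physical address. */
typedef struct {
    uint32_t base;   /* start of the process's partition in physical memory */
    uint32_t limit;  /* size of the partition */
} mmu_regs;

static uint32_t translate(mmu_regs r, uint32_t logical) {
    if (logical >= r.limit) {                 /* protection check */
        fprintf(stderr, "trap: addressing error (%u >= limit %u)\n",
                logical, r.limit);
        exit(EXIT_FAILURE);
    }
    return r.base + logical;                  /* relocation */
}

int main(void) {
    mmu_regs r = { .base = 14000, .limit = 3000 };   /* assumed example values */
    printf("logical 346 -> physical %u\n", translate(r, 346));   /* prints 14346 */
    return 0;
}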

The work of the MMU can be divided into
three major categories
 Hardware memory management, which oversees and
regulates the processor's use of RAM (random access
memory) and cache memory.
 OS (operating system) memory management, which
ensures the availability of adequate memory
resources for the objects and data structures of each
running program at all times.
 Application memory management, which allocates
each individual program's required memory, and then
recycles freed-up memory space when the operation
concludes.
Dynamic Loading
Routine is not loaded until it is called
Better memory-space utilization; unused
routine is never loaded.
Useful when large amounts of code are
needed to handle infrequently occurring
cases.
No special support from the operating system
is required; dynamic loading is implemented
through program design.
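
On POSIX systems, this kind of run-time loading can be done with the
dlopen()/dlsym() interface. The sketch below assumes a shared library named
libmathx.so that exports a function heavy_routine; both names are hypothetical
and used only for illustration.

#include <stdio.h>
#include <dlfcn.h>    /* POSIX dynamic loading; link with -ldl if needed */

int main(void) {
    /* The library is loaded only when this code path is actually reached. */
    void *handle = dlopen("libmathx.so", RTLD_LAZY);    /* hypothetical library */
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the routine by name and call it through a function pointer. */
    void (*heavy_routine)(void) = (void (*)(void)) dlsym(handle, "heavy_routine");
    if (heavy_routine)
        heavy_routine();

    dlclose(handle);    /* the library can be unloaded when no longer needed */
    return 0;
}
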
Dynamic Linking
 Linking postponed until execution time.
 Small piece of code, stub, used to locate the
appropriate memory-resident library routine.
 Stub replaces itself with the address of the
routine, and executes the routine.
 The operating system is needed to check whether the
routine is in the process's memory address space.
 Dynamic linking is particularly useful for libraries.

Overlays
Keep in memory only those instructions and
data that are needed at any given time.
Needed when a process is larger than the amount
of memory allocated to it.
Implemented by the user; no special support is
needed from the operating system, but the
programming design of the overlay structure is
complex.
Overlaying is a programming method that
allows programs to be larger than the
computer's main memory. An embedded
system would normally use overlays because
of its limited physical memory (the internal
memory of a system-on-chip) and the lack of
virtual memory facilities.

Swapping
 A process can be swapped temporarily out of memory to a
backing store, and then brought back into memory for
continued execution.
 Backing store – fast disk large enough to accommodate
copies of all memory images for all users; must provide
direct access to these memory images.
 Roll out, roll in – swapping variant used for priority-based
scheduling algorithms; lower-priority process is swapped
out so higher-priority process can be loaded and executed.
 Major part of swap time is transfer time; total transfer time
is directly proportional to the amount of memory swapped
(see the worked example after this list).
 Modified versions of swapping are found on many systems,
e.g., UNIX, Linux, and Windows.
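As an illustrative calculation (the numbers are assumed, not from the slides):
for a 100 MB memory image and a backing store with a sustained transfer rate of
50 MB per second, swapping the process out takes about 2 seconds, and swapping
the replacement process in takes about the same, so each full swap costs roughly
4 seconds of transfer time plus any device latency.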

Memory Allocation
• Contiguous Allocation:
Each process allocated a single contiguous
chunk of memory

• Non-contiguous Allocation:
Parts of a process can be allocated
noncontiguous chunks of memory

Contiguous Allocation
 Main memory is usually divided into two partitions:

 The resident operating system, usually held in
low memory with the interrupt vector.

 User processes, then held in high memory.

Single / Fixed -partition allocation
• Memory is broken up into fixed-size partitions
(the sizes of two partitions may be different)
• Each partition can hold exactly one process
• When a process arrives, allocate it a free partition
(different policies can be applied to choose a partition)
• Easy to manage

• Problems:
 The maximum size of a process is bounded by the maximum partition size
 Large internal fragmentation is possible
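For instance (the sizes are assumed for illustration): if a 3 MB process is
placed in an 8 MB partition, the unused 5 MB inside that partition is internal
fragmentation and cannot be given to any other process.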

Multiple / Variable Partition Scheme
Hole: a block of available memory; holes of
various sizes are scattered throughout memory.
 When a process arrives, it is allocated
memory from a hole large enough to
accommodate it.
The operating system maintains information
about:
a) allocated partitions b) free partitions (holes)

Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of free holes?

 First-fit: Allocate the first hole that is big enough.

 Best-fit: Allocate the smallest hole that is big enough;
must search the entire list, unless it is ordered by size.
Produces the smallest leftover hole.

 Worst-fit: Allocate the largest hole; must also search the
entire list. Produces the largest leftover hole.
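
A minimal C sketch of the three placement strategies, not taken from the
slides: it only chooses a hole from an array of free-hole sizes and omits real
free-list management such as splitting and coalescing.

#include <stdio.h>

/* Return the index of the chosen hole, or -1 if no hole is big enough.
   strategy: 0 = first-fit, 1 = best-fit, 2 = worst-fit. */
static int place(const int holes[], int nholes, int request, int strategy) {
    int chosen = -1;
    for (int i = 0; i < nholes; i++) {
        if (holes[i] < request)
            continue;                                        /* hole too small */
        if (strategy == 0)
            return i;                                        /* first-fit stops here */
        if (chosen == -1 ||
            (strategy == 1 && holes[i] < holes[chosen]) ||   /* best-fit: smallest */
            (strategy == 2 && holes[i] > holes[chosen]))     /* worst-fit: largest */
            chosen = i;
    }
    return chosen;
}

int main(void) {
    int holes[] = { 100, 500, 200, 300, 600 };   /* free-hole sizes (assumed) */
    int request = 212;
    printf("first-fit -> hole %d\n", place(holes, 5, request, 0));   /* index 1 (500) */
    printf("best-fit  -> hole %d\n", place(holes, 5, request, 1));   /* index 3 (300) */
    printf("worst-fit -> hole %d\n", place(holes, 5, request, 2));   /* index 4 (600) */
    return 0;
}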

Fragmentation
As processes are loaded and removed from
memory, the free memory space is broken
into little pieces. After some time, processes
cannot be allocated to these memory blocks
because the blocks are too small, and the
blocks remain unused. This problem is known
as fragmentation.

External Fragmentation
External fragmentation happens when a dynamic
memory allocation algorithm allocates some
memory and a small piece is left over that
cannot be effectively used. If too much
external fragmentation occurs, the amount of
usable memory is drastically reduced: total
memory space exists to satisfy a request, but
it is not contiguous.

Internal Fragmentation
Internal fragmentation is the space wasted
inside allocated memory blocks because of
restrictions on the allowed sizes of allocated
blocks. Allocated memory may be slightly
larger than the requested memory; this size
difference is memory internal to a partition,
but it is not being used.

Compaction
Memory compaction is the process of moving
allocated objects together, and leaving empty
space together.

Paging
• In a memory management system that takes
advantage of paging, the OS reads data from
secondary storage in blocks called pages, all of
which have identical size. The physical region
of memory containing a single page is called a
frame.

Use of Paging
Paging is used for faster access to data. When
a program needs a page, it is already available in
main memory because the OS copies a certain
number of pages from the storage device into
main memory. Paging allows the physical
address space of a process to be
noncontiguous.

External fragmentation is avoided by using the
paging technique. Paging is a technique in
which physical memory is broken into fixed-size
blocks called frames and logical memory is
broken into blocks of the same size called pages
(the size is a power of 2, between 512 bytes and
8,192 bytes). When a process is to be executed,
its pages are loaded into any available memory
frames.

• The logical address space of a process can be non-
contiguous, and a process is allocated physical
memory wherever free memory frames are
available. The operating system keeps track of all
free frames; it needs n free frames to run a
program of size n pages.
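For example (the page size is assumed for illustration): with 4 KB pages, a
10 KB program occupies 3 pages, so the operating system needs 3 free frames;
the last page uses only 2 KB of its frame, and the remaining 2 KB is internal
fragmentation.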

Page Table
A page table is the data structure used by a
virtual memory system in a computer
operating system to store the mapping
between virtual addresses and physical
addresses. Virtual addresses are used by the
accessing process, while physical addresses
are used by the hardware or by the RAM
subsystem

• The address generated by the CPU is divided into:

• Page number (p) -- the page number is used as an
index into a page table, which contains the base
address of each page in physical memory.

• Page offset (d) -- the page offset is combined with the
base address to define the physical memory
address.
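
A short C sketch of this address split, not from the slides; it assumes a 4 KB
page size and a tiny made-up page table mapping page numbers to frame numbers.

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE   4096u   /* assumed page size: 4 KB (a power of 2) */
#define OFFSET_BITS 12      /* log2(PAGE_SIZE) */

int main(void) {
    /* Hypothetical page table: page_table[p] is the frame holding page p. */
    uint32_t page_table[] = { 5, 9, 7, 2 };

    uint32_t logical = 0x2ABC;                    /* address generated by the CPU */
    uint32_t p = logical >> OFFSET_BITS;          /* page number (high-order bits) */
    uint32_t d = logical & (PAGE_SIZE - 1);       /* page offset (low-order bits)  */
    uint32_t frame    = page_table[p];            /* page-table lookup */
    uint32_t physical = (frame << OFFSET_BITS) | d;

    printf("logical 0x%X -> page %u, offset 0x%X -> physical 0x%X\n",
           logical, p, d, physical);              /* page 2 is in frame 7 -> 0x7ABC */
    return 0;
}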

Paging Table Architecture

Segmentation

• Segmentation is a memory management
technique in which memory is divided into
variable-sized chunks which can be allocated
to processes. Each chunk is called a segment.
A table stores the information about all such
segments and is called the Global Descriptor
Table (GDT). A GDT entry is called a Global
Descriptor; it comprises the segment's base
address, its limit, and access/protection
information.

• The address generated by the CPU is divided into:
• Segment number (s) -- the segment number is
used as an index into a segment table, which
contains the base address of each segment in
physical memory and the limit of the segment.
• Segment offset (o) -- the segment offset is first
checked against the limit and then combined
with the base address to define the physical
memory address.
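
A short C sketch of this check-and-add translation, not taken from the slides;
the segment-table values are assumed for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Illustrative segment-table entry: base address and limit, as described above. */
typedef struct {
    uint32_t base;
    uint32_t limit;
} segment;

static uint32_t translate(const segment table[], uint32_t s, uint32_t o) {
    if (o >= table[s].limit) {           /* offset is first checked against the limit */
        fprintf(stderr, "trap: offset %u out of range for segment %u\n", o, s);
        exit(EXIT_FAILURE);
    }
    return table[s].base + o;            /* then combined with the base address */
}

int main(void) {
    segment table[] = {                  /* assumed example segment table */
        { .base = 1400, .limit = 1000 }, /* segment 0 */
        { .base = 6300, .limit =  400 }, /* segment 1 */
    };
    printf("(s=1, o=53)  -> physical %u\n", translate(table, 1, 53));   /* 6353 */
    printf("(s=0, o=852) -> physical %u\n", translate(table, 0, 852));  /* 2252 */
    return 0;
}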
