
UNIT – IV

MEMORY MANAGEMENT

• When a number of users share the same memory, the Memory Management Module performs
the following functions :
o To keep track of all memory locations- free or allocated.
▪ If allocated, to which process and how much.
o To decide the memory allocation policy i.e., which process should get how much memory,
when and where.
o To use various techniques and algorithms to allocate and de-allocate memory locations.
• There are a variety of memory management systems. These systems are categorized into two
major parts :
o Contiguous Memory Management Scheme – Programs are loaded in contiguous memory
locations.
o Non - Contiguous Memory Management Scheme – Programs are divided into different
chunks and loaded at different portions of the memory.
▪ In Paging, these chunks are of the same size.
▪ In Segmentation, the chunks can be of different sizes.
• Memory Management can be of,
o Real Memory Management Systems : the full process image is expected to be loaded in
the memory before execution.
o Virtual Memory Management Systems : a part of the process image is expected to be
loaded in the memory before execution.
• Issues involved in the schemes :
o Relocation and Address Translation:
▪ This problem arises because, at the time of compilation, the exact physical
memory locations that a program is going to occupy at run time are not known.
▪ The compiler generates the executable machine code assuming that each
program is going to be loaded from memory word 0.
▪ At the time of execution, the program needs to be relocated to different
locations.
▪ All the addresses will need to be changed before execution.
o Protection and Sharing :
▪ Protection refers to the preventing of one program from interfering with other
programs.
▪ Sharing :
• It is the opposite of protection.
• Multiple processes have to refer to the same memory locations.
• The process might be using the same piece of data or all processes want
to run the same program.
o Evaluation / Efficiency :
▪ Wasted Memory : Wasted memory is the amount of physical memory which
remains unused and wasted.
▪ Access Times : Access time is the time to access the physical memory by the
Operating System.
▪ Time complexity : Time complexity is related to the overheads of the allocation /
de-allocation algorithm and the time taken by the specific method.

SINGLE CONTIGUOUS MEMORY MANAGEMENT

• In the Single Contiguous Memory Management scheme, the physical memory is divided into two
contiguous areas.
o One area is allocated to the resident portion of the OS.
▪ The OS may be loaded at the lower addresses or it can be loaded at the higher
addresses.
o The other area contains the user process.
▪ At any time only one user process is in the memory.
When the running process gets completed, the OS brings the next process into the
memory.

• Working Process :
o All the ‘ready’ processes are held on the disk as executable images – whereas the OS holds
their PCBs in the memory in the order of priority.
o At any time, one of them runs in the main memory.
o When this process is blocked, it is ‘swapped out’ from the main memory to the disk.
o The next highest priority process is ‘swapped in’ the main memory from the disk and it
starts running.
o Thus, there is only one process in the main memory even if it is a multi-programming
system.
• Relocation and Address Translation :
o The starting physical address of the program is known at the time of compilation.
o The problem of Relocation or Address Translation does not exist.
o The executable machine program contains absolute addresses only.
o They do not need to be changed or translated at the time of execution.
• Protection and Sharing :
o Protection can be achieved by two methods :
▪ Protection Bits :
• A bit is associated with each memory block because a memory block can
belong either to the OS or to a user process.
• So there can be only two possibilities.
• Only one bit is sufficient for each block.
• The size of the memory block must be known.
• A memory block can be as small as a word or as large as a unit consisting of a
number of words.
• The bit could be 0 if the word belongs to the OS.
• It could be 1 if the word belongs to the user process.
• At any moment,
o the machine is in the supervisor mode executing an instruction
within the OS.
o Or in the user mode executing a user process.
▪ Fence Register :
• The use of a Fence register is another method of protection.
• This is like any other register in the CPU.
• It contains the address of the fence between the OS and the user process.

o Sharing :
▪ Sharing of code and data in memory does not make sense in this scheme.
▪ Sharing is not supported.
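The protection checks above reduce to comparing an address against the OS/user boundary before every user-mode access. A minimal sketch (the fence address and the function are hypothetical, not from the text):

```python
FENCE = 0x4000  # hypothetical boundary: OS below, user process above

def check_access(address, user_mode):
    """Permit the access, or raise an error if a user-mode access falls in the OS area."""
    if user_mode and address < FENCE:
        raise MemoryError("protection violation: user access below the fence")
    return address
```

A user-mode reference to `0x5000` passes, while one to `0x1000` would be trapped by the hardware; OS-mode (supervisor) accesses are never restricted.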
• Evaluation
o This method does not have a large wasted memory.
o This scheme has very fast access times.
o This has very little time – complexity.
FIXED PARTITION MEMORY MANAGEMENT

• The Main memory is divided into various sections called ‘partitions’.


• These partitions could be of different sizes.
• But once decided at the time of system generation, they cannot be changed.
• The partitions are fixed at the time of system generation.
• To change the partitions, the operations have to be stopped and the OS has to be generated with
different partition specifications.
• These partitions are called ‘Static Partitions’.
• On declaring static partitions, the OS creates a Partition Description Table (PDT).
• Initially all the entries are marked as “FREE”.
• As and when a process is loaded into one of the partitions, the status entry for that partition is
changed to “ALLOCATED”.
• When the process terminates, the system call “kill the process” will remove the PCB and then set
the status of the partition to “FREE”.

Allocating Algorithms

• The OS maintains and uses the PDT.


• The strategies of partition allocation are the same as the disk space allocation viz., first fit, best fit
and worst fit.
• The processes waiting to be loaded into the memory are held in a queue by the OS.
• There are two methods of maintaining this queue,
o Multiple Queue
▪ In multiple queues, there is one separate queue for each partition.
▪ When a process wants to occupy memory, it is added to a proper queue
depending on the size of the process.
▪ The scheduling methods can be Round Robin, Priority driven, etc.,
▪ Advantages :
• A very small process is not placed in a very large partition.
• Avoids memory wastage.
▪ Disadvantages :
• A long queue for smaller partition.
• Bigger partition queue can be empty.
• Not an optimal and efficient use of resources.

o Single Queue
▪ Only one unified queue is maintained of all ready processes.
▪ The order in which the PCBs of ready processes are chained, depends on the
scheduling algorithm.
▪ A free partition is found based on either first, best or worst fit algorithms.
▪ Disadvantage : External fragmentation
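The three fit policies applied to the Partition Description Table can be sketched as follows (the PDT layout and partition sizes are hypothetical, for illustration only):

```python
def allocate(pdt, request, policy="first"):
    """Pick a FREE partition of at least `request` KB under the given fit policy."""
    free = [p for p in pdt if p["status"] == "FREE" and p["size"] >= request]
    if not free:
        return None                                   # no free partition is large enough
    if policy == "first":
        chosen = free[0]                              # first fit: first partition that fits
    elif policy == "best":
        chosen = min(free, key=lambda p: p["size"])   # best fit: smallest that fits
    else:
        chosen = max(free, key=lambda p: p["size"])   # worst fit: largest that fits
    chosen["status"] = "ALLOCATED"
    return chosen["id"]
```

For a 10 KB request against free partitions of 8, 32 and 16 KB, first fit picks the 32 KB partition (first one big enough), best fit picks the 16 KB partition, and worst fit picks the 32 KB partition.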

Swapping

• Lifting the program from the memory and placing it on the disk is called “swapping out”.
• To bring the program again from the disk into the main memory is called “swapping in”.
• A blocked process is swapped out to give space for a ready process to improve CPU utilization.
• If more than one process is blocked, the swapper chooses a process with the lowest priority or a
process waiting for a slow IO event for swapping out.
• Swapping algorithm has to coordinate amongst Information, Process and Memory Management
Systems.
• The OS has to find a place on the disk for the swapped out process image.
• There are two alternatives:
o Creating a separate swap file for each process.
▪ This method is flexible.
▪ Very inefficient due to the increased number of files and directory entries.
o Keeping a common swap file on the disk
▪ The location of each swapped-out process image within that file has to be maintained.
▪ An estimate of the swap file size has to be made initially.

Relocation and Address Translation

• The addresses that a program refers to are called “virtual addresses” or “logical addresses”.
• The actual memory locations at which the program is loaded are called “physical addresses”.
• Address Translation(AT) must be done for all the instructions.
• There are two ways to achieve relocation and AT : Static and Dynamic
• Static Relocation and AT
o This is performed before or during the loading of the program into the memory by the
relocating linker or relocating loader.
o The compiler compiles the program assuming that the program is to be loaded in the main
memory, at the starting address 0.
o The relocating loader / linker uses this compiled object program as a source program and
the starting address of the partition as parameter.
o The relocating loader / linker goes through each instruction and changes the addresses in
each instruction of the program before it is loaded and executed.
o The relocating loader / linker has to know which portion of an instruction is an
address, which depends upon the type of instruction and the addressing mode.
o It has to decide where one instruction ends and the next starts.
o Problems :
▪ Slow process
▪ As it is slow, it is used only once before the initial loading of the program.

• Dynamic Relocation
o It is used at the run time.
o It is normally done by a special piece of hardware.
o It is faster
o More expensive.
o It uses a special register called “base register”.
o The base register can be considered as another special purpose CPU register.
o When a partition allocation algorithm allocates a partition for a process and the PCB is
created, the value for the base register is stored in the PCB Register Save Area.
o When a process is “running”, this value is loaded in the base register.
o Whenever the process gets blocked, the base register value does not need to be stored
again as the PCB already has it.
o When the process is to be dispatched, the value from the PCB can be used to load the
base register.
o This is the most commonly used scheme, due to its enhanced speed and flexibility.
o Advantage :
▪ Supports swapping easily. (only the base register value needs to be changed
before dispatching).
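Dynamic relocation amounts to one addition per memory reference. A minimal sketch (the base value and the PCB layout are hypothetical):

```python
def translate(virtual_addr, base_register):
    """Dynamic relocation: physical address = base register + virtual address."""
    return base_register + virtual_addr

# On dispatch, the base register is loaded from the PCB register save area.
pcb = {"base": 3000}                       # hypothetical PCB entry for this process
physical = translate(100, pcb["base"])     # virtual address 100 maps to physical 3100
```

Swapping a process back in to a different partition only requires storing the new partition's starting address in the PCB; no instruction in the program image changes.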
Protection and Sharing

There are two approaches for preventing interference and achieving protection and sharing:

1. Protections bits
2. Limit Register
• Protection bits
o Protection bits are used by the IBM 360 / 370 systems.
o This scheme is expensive.
o In the IBM 360 series, memory is divided into 2 KB blocks, and 4 protection bits,
called the key, are reserved for each block.
o This causes internal fragmentation.
o All the blocks associated with a partition allocated to a process are given the same key
value in this scheme.
o When a process is assigned to a partition, the key value for that partition is stored in
“Program Status Word (PSW)”.
• Limit Register
o Another method of providing protection is by using a Limit Register.
o The virtual address present in the original instruction is moved into the IR before
any relocation / Address Translation.
o Every logical or virtual address is checked to ensure that it is less than or equal to the
address range, and only then is it added to the base register.
o If it is not within the bounds, the hardware itself will generate an error message and the
process will be aborted.
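The limit check combined with the base-register addition can be sketched as follows (the base and limit values are hypothetical):

```python
def translate_checked(virtual_addr, base, limit):
    """Check the virtual address against the limit register, then relocate it."""
    if virtual_addr > limit:             # out of bounds: hardware error, process aborted
        raise MemoryError("address out of bounds; process aborted")
    return base + virtual_addr           # in bounds: add the base register
```

For a process with base 3000 and limit 4095, virtual address 100 maps to physical address 3100, while virtual address 5000 triggers the hardware error.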

Sharing

• Sharing poses a serious problem in fixed partitions because it might compromise on protection.
• Approach 1 :
o Sharing any code or data is to go through the OS for any such request.
o Because the OS has access to the entire memory space.
o Disadvantages :
▪ Very tedious.
▪ Not followed in practice.
• Approach 2 :
o Keeping all the shareable code and data in one partition.
o Disadvantages :
▪ Fairly complex.
▪ Requires specialized hardware registers.

Evaluation

1. Wasted Memory
a. There is a wastage of memory space causing both Internal and External Fragmentation.
2. Access Times
a. Access times are not very high due to the assistance of special hardware.
b. The translation from virtual to physical address is done by the hardware itself.
3. Time Complexity
a. Time complexity is very low because allocation / deallocation routines are simple.
VARIABLE PARTITIONS

• The starting address of any partition is not fixed, it keeps varying.


• It starts with two partitions.
• The partitions are created by the OS at the run time, and differ in sizes.
Swapping, Relocation and Address Translation

• Swapping, static relocation, and dynamic relocation with a base register work exactly as
described under Fixed Partition Memory Management above (see Diagram 9.8).

Protection and Sharing

• Limit Register
o Protection is again provided by a Limit Register, exactly as described under Fixed
Partition Memory Management above (see Diagram 9.11).

• Sharing
o Sharing is possible by using ‘overlapping partitions’.
o Limitations :
▪ It allows sharing between only two processes.
▪ The shared code must be either reentrant or must be executed in a mutually
exclusive way with no preemptions.

Evaluation

1. Wasted Memory
a. This scheme wastes less memory.
b. No Internal Fragmentation.
c. Small External Fragmentation exists.
2. Access Times
a. Access times are not very high due to the assistance of special hardware.
b. The translation from virtual to physical address is done by the hardware itself.
3. Time Complexity
a. Time complexity is higher with the variable partition due to various data structures and
algorithms used.

NON – CONTIGUOUS ALLOCATION

• Non Contiguous memory provides a better method to reduce the problem of fragmentation.
PAGING

• Paging permits the physical address space of a process to be non-contiguous.


• It is a fixed-size partitioning scheme.
• In the Paging technique, the secondary memory and main memory are divided into equal fixed-
size partitions.
• Paging solves the problem of fitting memory chunks of varying sizes onto the backing store,
a problem suffered by many earlier memory management schemes.
• Paging helps to avoid external fragmentation and the need for compaction.
• The paging technique divides the physical memory(main memory) into fixed-size blocks that are
known as Frames.
• It also divides the logical memory into blocks of the same size, known
as Pages.
• This technique keeps the track of all the free frames.
• The Frame has the same size as that of a Page.
• A frame is basically a place where a (logical) page can be (physically) placed.
• Each process is mainly divided into parts where the size of each part is the same as the page size.
• Pages of a process are brought into the main memory only when there is a requirement otherwise
they reside in the secondary storage.
• One page of a process is mainly stored in one of the frames of the memory.
• The pages of a process can be stored at non-contiguous locations in the memory; there is no
need to find contiguous frames.
• The CPU always generates a logical address.
• In order to access the main memory always a physical address is needed.

The logical address generated by CPU always consists of two parts:

1. Page Number(p)
2. Page Offset (d)

where,

• Page Number is used to specify the particular page of the process from which the CPU wants to read
the data; it is also used as an index into the page table.
• Page offset is mainly used to specify the specific word on the page that the CPU wants to read.

Page Table

• The Page table mainly contains the base address of each page in the Physical memory.
• The base address is then combined with the page offset in order to define the physical memory
address which is then sent to the memory unit.
• Thus page table mainly provides the corresponding frame number (base address of the frame)
where that page is stored in the main memory.

The physical address consists of two parts:

1. Frame Number(f)
2. Page Offset(d)

Where,

• The Frame number is used to indicate the specific frame where the required page is stored.
• Page Offset indicates the specific word that has to be read from that page.
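Putting the two halves together, address translation through the page table can be sketched as follows (the page size and table contents are hypothetical):

```python
PAGE_SIZE = 1024                       # hypothetical page size in bytes
page_table = {0: 5, 1: 2, 2: 7}        # page number -> frame number (hypothetical)

def translate(logical_addr):
    p, d = divmod(logical_addr, PAGE_SIZE)   # split into page number and page offset
    f = page_table[p]                        # frame number from the page table
    return f * PAGE_SIZE + d                 # physical address = frame base + offset

translate(1030)   # page 1, offset 6 -> frame 2, i.e. 2 * 1024 + 6 = 2054
```

Note that the split needs no arithmetic in hardware: with a power-of-two page size, the page number and offset are simply the high and low bits of the logical address.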
Translation Look-aside Buffer (TLB)

• TLB is associative and high-speed memory.


• Each entry in the TLB mainly consists of two parts: a key(that is the tag) and a value.
• When the associative memory is presented with an item, the item is compared with all keys
simultaneously. If the item is found, the corresponding value is returned.
• The search with TLB is fast though the hardware is expensive.
• The number of entries in the TLB is small and generally lies in between 64 and 1024.

TLB is used with Page Tables in the following ways:

• If the page number is found, then its frame number is immediately available and is used in order
to access the memory
• If the page number is not in the TLB (which is known as a TLB miss), then a memory reference
to the Page Table must be made.
• When the frame number is obtained, it can be used to access the memory, and the page number
and frame number are added to the TLB.
• If the TLB is already full of entries, then the Operating System must select one entry for
replacement.
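The lookup sequence above can be sketched with a small TLB that evicts its oldest entry when full (the TLB size and page-table contents are hypothetical; real TLBs use hardware replacement policies):

```python
from collections import OrderedDict

TLB_SIZE = 4
tlb = OrderedDict()                          # page number -> frame number, oldest first
page_table = {p: p + 10 for p in range(16)}  # hypothetical page table

def lookup(page):
    """Return (frame, hit) — try the TLB first, fall back to the page table."""
    if page in tlb:
        return tlb[page], True               # TLB hit: frame immediately available
    frame = page_table[page]                 # TLB miss: reference the page table
    if len(tlb) >= TLB_SIZE:
        tlb.popitem(last=False)              # TLB full: evict the oldest entry
    tlb[page] = frame                        # add the (page, frame) pair to the TLB
    return frame, False
```

The first reference to a page costs two memory accesses (page table, then memory); subsequent references hit in the TLB and cost only one, which is why even a small TLB removes most of paging's access-time overhead.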
Advantages of Paging

• Paging allows parts of a single process to be stored in a non-contiguous fashion.


• The problem of external fragmentation is solved.
• Paging is one of the simplest algorithms for memory management.

Disadvantages of Paging

• In Paging, sometimes the page table consumes more memory.


• Internal fragmentation is caused by this technique.
• There is an increase in time taken to fetch the instruction since now two memory accesses are
required.

Paging Hardware

Every address generated by CPU mainly consists of two parts:

1. Page Number(p)
2. Page Offset (d)

where,

• Page Number is used as an index into the page table that generally contains the base address of
each page in the physical memory.
• Page offset is combined with base address in order to define the physical memory address which
is then sent to the memory unit.

SEGMENTATION

• Segmentation is another way of dividing the addressable memory.


• It is another scheme of memory management.
• It generally supports the user view of memory.
• The Logical address space is basically the collection of segments.
• Each segment has a name and a length.
• A process is divided into segments.
• Like paging, segmentation divides or segments the memory.
• Segmentation divides the memory into variable-size segments, which are then loaded into the
logical address space.
• A Program is basically a collection of segments. And a segment is a logical unit such as:

▪ main program
▪ procedure
▪ function
▪ method
▪ object
▪ local variable and global variables.
▪ symbol table
▪ common block
▪ stack
▪ arrays

Types of Segmentation

• Virtual Memory Segmentation - Each process is segmented into n divisions, and they need
not all be loaded into the memory at once.
• Simple Segmentation - Each process is segmented into n divisions that are all loaded into
the memory together at run time, though not necessarily into contiguous locations.

Characteristics of Segmentation
• The Segmentation partitioning scheme is variable-size.
• Partitions of the secondary memory are commonly known as segments.
• Partition size mainly depends upon the length of modules.
• Secondary memory and main memory are divided into unequal-sized partitions.

The logical address consists of two values:

<segment-number,offset>

where,

• Segment Number (s): the number of bits used to represent the segment number.
• Offset (d): the number of bits used to represent the offset within the segment, which
determines the maximum segment size.

Segmentation Architecture

Segment Table

• A Table that is used to store the information of all segments of the process is commonly known
as Segment Table.
• The mapping of a two-dimensional Logical address into a one-dimensional Physical address is
done using the segment table.
• This table is mainly stored as a separate segment in the main memory.
• The table that stores the base address of the segment table is commonly known as the Segment
table base register (STBR)

In the segment table each entry has :

1. Segment Base/base address: The segment base mainly contains the starting physical address
where the segments reside in the memory.
2. Segment Limit: The segment limit is mainly used to specify the length of the segment.

Segment Table Base Register(STBR)


The STBR register is used to point to the segment table's location in the memory.

Segment Table Length Register(STLR)


This register indicates the number of segments used by a program. A segment number s is legal
only if s < STLR.
The logical address generated by CPU consist of two parts:

• Segment Number(s): It is used as an index into the segment table.


• Offset(d): It must lie between 0 and the segment limit. If the offset exceeds the segment
limit, the hardware generates a trap to the OS; otherwise,

offset + segment base = address in Physical memory
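The segment-table lookup with the limit check can be sketched as follows (the table contents are hypothetical):

```python
# Hypothetical segment table: segment number -> (base, limit)
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400)}

def translate(segment, offset):
    """Map a two-dimensional (segment, offset) address to a physical address."""
    base, limit = segment_table[segment]
    if offset >= limit:                  # offset beyond the segment length: trap
        raise MemoryError("segment offset out of range; trap to OS")
    return base + offset                 # physical address = segment base + offset

translate(2, 53)   # 4300 + 53 = 4353
```

Unlike paging, the offset must be compared against a per-segment limit, because segments are variable-sized; an offset of 400 into segment 1 (limit 400) would be trapped.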

Advantages of Segmentation

• The segment table is mainly used to keep the record of segments.


• The segment table occupies less space.
• There is no Internal Fragmentation.
• Segmentation generally allows us to divide the program into modules that provide better
visualization.
• Segments are of variable size.

Disadvantages of Segmentation

• Maintaining a segment table for each process leads to overhead.


• This technique is expensive.
• The time taken to fetch an instruction increases.
• Segments are of unequal size in segmentation and thus are not suitable for swapping.
• This technique leads to external fragmentation.

VIRTUAL MEMORY MANAGEMENT SYSTEM

• Virtual Memory is a storage mechanism which offers user an illusion of having a very big main
memory.
• It is done by treating a part of secondary memory as the main memory.
• In Virtual memory, the user can store processes with a bigger size than the available main
memory.
• Instead of loading one long process in the main memory, the OS loads the various parts of more
than one process in the main memory.
• Virtual memory is mostly implemented with demand paging and demand segmentation.
• A demand paging mechanism is very much similar to a paging system with swapping where
processes stored in the secondary memory and pages are loaded only on demand, not in
advance.
• When a context switch occurs, the OS does not copy any of the old program’s pages out to the
disk or any of the new program’s pages into the main memory.
• Instead, it starts executing the new program after loading its first page, and fetches the
program’s other pages as they are referenced.
• During program execution, if the program references a page that is not available in the
main memory because it was swapped out, the processor treats it as an invalid memory
reference.
• This is called a page fault; it transfers control back from the program to the OS,
which brings the required page back into the memory.
Types of Page Replacement Methods

• FIFO
• Optimal Algorithm
• LRU Page Replacement

FIFO Page Replacement


FIFO (First-in-first-out) is a simple method to implement. It selects for replacement the page that
has been in the memory for the longest time.

Features:

• Whenever a new page must be loaded, the page that came into the memory earliest is removed.
It is easy to decide which page to remove, as its identification number is always at the front
of the FIFO queue.
• The oldest page in the main memory is one that should be selected for replacement first.
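FIFO can be sketched with a queue of resident pages; the reference string below is a common textbook example, used here only for illustration:

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement."""
    frames = deque()                     # page loaded earliest sits at the left
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1                  # page fault: the page must be loaded
            if len(frames) == num_frames:
                frames.popleft()         # evict the page loaded earliest
            frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
fifo_faults(refs, 3)   # 15 page faults with 3 frames
```

Note that FIFO evicts on age of loading, not on usage, so a heavily used page can still be thrown out simply because it was loaded long ago.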

Optimal Algorithm
The optimal page replacement method selects that page for a replacement for which the time to the
next reference is the longest.

Features:

• The optimal algorithm results in the fewest page faults, but it is difficult to
implement.
• An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms.
It is also called MIN or OPT.
• It replaces the page that will not be used for the longest period of time, which requires
knowing the time at which each page will next be used.
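OPT cannot be implemented in a real OS, but a simulation can look ahead in the reference string; a sketch:

```python
def optimal_faults(reference_string, num_frames):
    """Count page faults under the optimal (MIN / OPT) replacement policy."""
    frames = []
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue                     # page already resident: no fault
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)          # a frame is still free
            continue
        future = reference_string[i + 1:]
        # Evict the resident page whose next use is farthest away (or never occurs).
        victim = max(frames,
                     key=lambda p: future.index(p) if p in future else len(future))
        frames[frames.index(victim)] = page
    return faults

optimal_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1], 3)   # 9 faults
```

Because no practical algorithm can fault less, OPT serves mainly as a yardstick against which FIFO and LRU are measured.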

LRU Page Replacement


The full form of LRU is Least Recently Used. This method helps the OS to find page usage over a
short period of time. The algorithm can be implemented by associating a counter with every
page.

• Page, which has not been used for the longest time in the main memory, is the one that will be
selected for replacement.
• Easy to implement, keep a list, replace pages by looking back into time.
Features:

• In the LRU method, the page with the highest counter value is replaced. These counters are
also called aging registers; they record the age of the associated pages, i.e., how recently
each page was referenced.
• The page which hasn’t been used for the longest time in the main memory is the one that
should be selected for replacement.
• It also keeps a list and replaces pages by looking back into time.
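LRU can be sketched with a list ordered by recency of use, a simplification of the counter / aging-register scheme described above:

```python
def lru_faults(reference_string, num_frames):
    """Count page faults under LRU replacement."""
    frames = []                          # least recently used page at the front
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.remove(page)          # hit: refresh the page's recency
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)            # evict the least recently used page
        frames.append(page)              # this page is now the most recently used
    return faults

lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1], 3)   # 12 faults
```

On this reference string LRU lies between FIFO and OPT, which is the typical pattern: tracking actual usage beats loading order, but cannot match knowledge of the future.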

Fault rate
Fault rate is the frequency with which a designed system or component fails. It is expressed in
failures per unit of time and denoted by the Greek letter λ (lambda).

Advantages of Virtual Memory

• Virtual memory helps to gain speed when only a particular segment of the program is required
for the execution of the program.
• It is very helpful in implementing a multiprogramming environment.
• It allows you to run more applications at once.
• It helps you to fit many large programs into a smaller physical memory.
• Common data or code may be shared between processes in memory.
• A process may become even larger than all of the physical memory.
• Data / code is read from the disk only when required.
• The code can be placed anywhere in physical memory without requiring relocation.
• More processes can be maintained in the main memory, which increases the effective use of
the CPU.
• Each page is stored on the disk until it is required; after that, it can be removed.
• There is no specific limit on the degree of multiprogramming.
• Larger programs can be written, as the virtual address space available is bigger than the
physical memory.

Disadvantages of Virtual Memory

• Applications may run slower if the system is using virtual memory.


• Likely takes more time to switch between applications.
• Offers lesser hard drive space for your use.
• It reduces system stability.
• It allows larger applications to run in systems that don’t offer enough physical RAM alone to run
them.
• It doesn’t offer the same performance as RAM.
• It negatively affects the overall performance of a system.
• It occupies storage space which might otherwise be used for long-term data storage.
I / O MANAGEMENT AND DISK SCHEDULING

Refer Book Page Number : 172 to 178

FILE SYSTEMS

Direct Memory Access

Refer Book Page Number : 85
