
Basic Main Memory Allocation

What is main memory?


Main memory refers to the physical memory that is internal to the computer. The word "main" distinguishes it from external mass storage devices such as disk drives; main memory is also known as RAM. The computer can change only data that is in main memory, so every program we execute and every file we access must be copied from a storage device into main memory.
All programs are loaded into main memory for execution. Sometimes the complete program is loaded into memory (static loading), and sometimes a certain part or routine of the program is loaded into main memory only when it is called by the program (dynamic loading).
Static loading is possible because the linker combines all the modules needed by a program into a single executable, avoiding any runtime dependency; this strategy is called static linking. Dynamic loading is possible because the program is not linked against the actual module or library; only a reference to the dynamic module is recorded at compile and link time. This strategy is called dynamic linking. Dynamic Link Libraries (DLLs) in Windows and Shared Objects in Unix are good examples of dynamic libraries.
The choice between static and dynamic loading is made while the program is being developed. If the program is to be loaded statically, the complete program is compiled and linked at build time, leaving no external program or module dependency. If the program is to be loaded dynamically, the compiler compiles the program but only records references for the modules that are to be included dynamically; the rest of the work is done at execution time.
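
To make the idea of dynamic loading concrete, here is a minimal Python sketch (assuming a Unix-like system) that loads the shared C math library at run time with ctypes instead of linking it into the program at build time; the library and the cos function are just familiar examples.

import ctypes
import ctypes.util

# Resolve the shared math library at run time (e.g. libm.so.6 on Linux).
libm_path = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_path)            # the dynamic loading step

# Declare the C signature of cos() so ctypes marshals values correctly.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))                     # 1.0, computed inside the shared library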
What is Fragmentation?

As processes are loaded into and removed from memory, the free memory space is broken into little pieces. After some time, processes can no longer be allocated to these memory blocks because the blocks are too small, and the blocks remain unused. This problem is known as fragmentation.

2 Types of Fragmentation

a.) External fragmentation - the available memory is broken up into many little pieces, none of which is
big enough to satisfy the next memory request, even though their sum total could.

b.) Internal fragmentation - the memory block assigned to a process is bigger than the amount requested.
The leftover portion inside the block is wasted, as it cannot be used by another process.
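
A minimal sketch of internal fragmentation, assuming fixed 4 KB allocation blocks (the block size is only an example):

BLOCK_SIZE = 4096                          # assumed fixed block/partition size in bytes

def internal_fragmentation(request_bytes):
    # Round the request up to whole blocks; the unused tail is internal fragmentation.
    blocks = -(-request_bytes // BLOCK_SIZE)   # ceiling division
    return blocks * BLOCK_SIZE - request_bytes

print(internal_fragmentation(2500))        # 1596 bytes wasted inside the last block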

What are the basic memory allocation strategies?

As you have learned in the previous lesson, memory management is one of the most important features of
the operating system because it directly affects the execution time of a process. The execution time of a process
depends directly on the availability of data in main memory. Therefore, an operating system must manage its
memory in such a way that the essential data is always present in main memory.

The following are the different memory allocation strategies used by the operating system:

1.) Swapping

Swapping is a mechanism in which a process can be temporarily moved (swapped) out of main memory to secondary storage (disk), making that memory available to other processes. At some later time, the system swaps the process back from secondary storage into main memory.
Figure 2. Swapping
Source:tutorialspoint.com

Though performance is usually affected by the swapping process, it helps in running multiple large
processes in parallel, and that is why swapping is also known as a technique for memory compaction.

Most modern operating systems no longer use pure swapping, because it is too slow and faster alternatives
are available. However, some UNIX systems will still invoke swapping if the system gets extremely overloaded,
and then discontinue swapping when the load reduces again.

2.) Contiguous Memory Allocation

One approach to memory management is to load each process into a contiguous space. The operating
system is allocated space first, usually at either low or high memory locations, and then the remaining available
memory is allocated to processes as needed.

In contiguous memory allocation, each process is contained in a single contiguous block of memory.
Memory is divided into several fixed-size partitions, and each partition contains exactly one process. When a
partition is free, a process is selected from the input queue and loaded into it. The set of holes (available
memory blocks) is searched to determine which hole is best to allocate.

The following are the three (3) most commonly discussed contiguous allocation strategies; a short sketch of each follows below:

1. First Fit: The first hole that is big enough is allocated to the program.

2. Best Fit: The smallest hole that is big enough is allocated to the program.

3. Worst Fit: The largest hole that is big enough is allocated to the program.

First and best fits experience fragmentation problems more so than worst fit.
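
A minimal sketch of the three strategies, assuming the free holes are given simply as a list of sizes (the hole sizes and the 150 KB request are made up for illustration):

def pick_hole(holes, request, strategy):
    # holes: list of free-hole sizes; returns the index of the chosen hole, or None.
    candidates = [i for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None                        # no single hole is big enough (external fragmentation)
    if strategy == "first":
        return candidates[0]               # first hole that fits
    if strategy == "best":
        return min(candidates, key=lambda i: holes[i])   # smallest hole that fits
    if strategy == "worst":
        return max(candidates, key=lambda i: holes[i])   # largest hole that fits
    raise ValueError("unknown strategy")

holes = [600, 100, 300, 200]               # example free-hole sizes in KB
for s in ("first", "best", "worst"):
    print(s, "fit chooses a hole of size", holes[pick_hole(holes, 150, s)])
# first -> 600, best -> 200, worst -> 600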

3.) Paging

Paging is a memory management scheme where physical memory is divided into a number of
equal sized blocks called frames, and where programs’ logical memory space is divided into blocks of the
same size called pages. It is the predominant memory management technique used today.

There is no external fragmentation with paging, because any free frame can hold any page: there are no
gaps between blocks and no problem of finding a hole of the right size for a particular chunk of memory.
However, internal fragmentation can still occur in the last page of a process.
Figure 3. Paging
Source:tutorialspoint.com
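
A minimal sketch of the address translation that paging implies, assuming a 4 KB page size and a small made-up page table:

PAGE_SIZE = 4096                           # assumed page/frame size in bytes

# Example page table: page number -> frame number (illustrative values only).
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    page = logical_address // PAGE_SIZE    # which page the address falls in
    offset = logical_address % PAGE_SIZE   # position inside the page
    frame = page_table[page]               # look up the frame holding that page
    return frame * PAGE_SIZE + offset      # physical address

print(hex(translate(0x1234)))              # page 1, offset 0x234 -> frame 2 -> 0x2234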

4.) Segmentation

Segmentation is a memory management technique in which each job is divided into several segments
of different sizes. Each segment is actually a different logical address space of the program. Segmentation
works very similarly to paging, but here the segments are of variable length, whereas in paging the pages are of
fixed size.

When a process is to be executed, its segments are loaded into non-contiguous memory, though every
individual segment is loaded into a contiguous block of available memory.

Figure 4. Segmentation
Source:tutorialspoint.com
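
A minimal sketch of segment-based address translation, assuming each segment table entry holds a base address and a limit (the table contents are illustrative only):

# Example segment table: segment number -> (base, limit), illustrative values only.
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:                    # offset beyond the segment -> addressing error
        raise MemoryError("segment offset out of range")
    return base + offset                   # physical address inside the segment

print(translate(2, 53))                    # 4300 + 53 = 4353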

What is Virtual Memory?

A computer can address more memory than the amount physically installed on the system. This extra
memory is called virtual memory, and it is a section of a hard disk that is set up to emulate the computer's
RAM.

Virtual memory stores large programs in the form of pages during their execution, and only the required
pages or portions of processes are loaded into main memory. This technique is useful because a large virtual
memory can be provided for user programs even when only a very small physical memory is available.

The following are the benefits of having virtual memory:


 Large programs can be written, as the virtual address space available is huge compared to physical memory.
 Less I/O is required, which leads to faster and easier swapping of processes.
 More physical memory remains available, as programs are stored in virtual memory and occupy very little
space in actual physical memory.

In modern microprocessors intended for general-purpose use, a memory management unit, or MMU, is built into
the hardware. The MMU's job is to translate virtual addresses into physical addresses. A basic example is given
below.

Figure 5. Illustration of MMU’s Job


Source:tutorialspoint.com

What is Demand Paging?


Virtual memory is commonly implemented by demand paging. The basic idea behind demand paging is
that when a process is swapped into main memory, its pages are not all swapped in at once. Rather, they are
swapped in only when the process needs them (on demand).
While executing a program, if the program references a page that is not available in main memory
because it was swapped out a while ago, the processor treats this invalid memory reference as a page fault and
transfers control from the program to the operating system, which brings the demanded page back into memory.

There are two major requirements for implementing a successful demand paging system: a frame-allocation
algorithm and a page-replacement algorithm.

Page-replacement algorithm
This algorithm deals with how to select a loaded page for replacement when there are no free frames
available for a requested page that is not yet in memory.
Basic Page Replacement Algorithm
1. Find the location of the page requested by the running process on the disk.
2. Find a free frame. If there is a free frame, use it. If there is no free frame, use a page-replacement algorithm to select an existing frame to be replaced; such a frame is known as the victim frame.
3. Write the victim frame to disk and change all related page tables to indicate that its page is no longer in memory.
4. Read the required page into the freed frame. Adjust all related page and frame tables to indicate the change.
5. Restart the process that was waiting for this page. (A short sketch of these steps follows Figure 6.)

Figure 6. Basic Page Replacement


Source:tutorialspoint.com
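
A minimal sketch of this procedure, assuming a fixed number of frames and leaving the disk reads/writes and the victim-selection policy as placeholders (any of the algorithms below could fill in choose_victim):

NUM_FRAMES = 3                             # assumed number of physical frames

frames = {}                                # frame number -> page currently loaded
page_table = {}                            # page -> frame number (pages in memory)

def choose_victim():
    # Placeholder victim selection; FIFO, Optimal or LRU (below) would go here.
    return next(iter(frames))

def handle_page_fault(page):
    if len(frames) < NUM_FRAMES:           # step 2a: a free frame exists, use it
        frame = len(frames)
    else:                                  # step 2b: no free frame, pick a victim
        frame = choose_victim()
        victim_page = frames[frame]
        # step 3: write the victim to disk and mark it as not present (placeholder)
        del page_table[victim_page]
    # step 4: read the requested page from disk into the frame (placeholder)
    frames[frame] = page
    page_table[page] = frame
    # step 5: the waiting process can now be restarted

for p in [1, 2, 6, 12]:
    handle_page_fault(p)
print(page_table)                          # {2: 1, 6: 2, 12: 0} after page 1 was evicted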

Reference String
The string of memory references is called a reference string. Reference strings are generated artificially or
by tracing a given system and recording the address of each memory reference.

-For a given page size, we need to consider only the page number, not the entire address.

-If we have a reference to a page p, then any immediately following references to page p will never cause a
page fault. Page p will be in memory after the first reference; the immediately following references will not
fault.

For example, consider the following sequence of addresses: 123, 215, 600, 1234, 76, 96.
If the page size is 100, then the reference string is 1, 2, 6, 12, 0, 0.
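
That conversion can be sketched in a couple of lines (the page size of 100 comes from the example above):

addresses = [123, 215, 600, 1234, 76, 96]
PAGE_SIZE = 100

# Keep only the page number of each reference; drop the offset within the page.
reference_string = [addr // PAGE_SIZE for addr in addresses]
print(reference_string)                    # [1, 2, 6, 12, 0, 0]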

1. First In-First Out (FIFO) Page Replacement Algorithm


As the name implies, when new pages are requested and swapped in, they are added to the tail (end)
of a queue, and the page at the head of the queue, the one that came in first, becomes the victim.

The page-fault rate is the frequency with which page faults occur while running a reference string; it is used to compare replacement algorithms.

Figure 7. FIFO Algorithm


Source:tutorialspoint.com
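
A minimal sketch of FIFO replacement that counts page faults, run here on the example reference string with an assumed three frames:

from collections import deque

def fifo_faults(reference_string, num_frames):
    queue = deque()                        # arrival order of resident pages
    resident = set()
    faults = 0
    for page in reference_string:
        if page in resident:
            continue                       # hit: nothing to do
        faults += 1
        if len(resident) == num_frames:    # no free frame: evict the oldest page
            victim = queue.popleft()
            resident.remove(victim)
        queue.append(page)
        resident.add(page)
    return faults

print(fifo_faults([1, 2, 6, 12, 0, 0], 3))   # 5 faults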

2. Optimal Page Replacement Algorithm


In this algorithm, the page that will not be used for the longest time in the future is determined
and replaced by the newly requested page.

Figure 8. Optimal Page Replacement


Source:tutorialspoint.com
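
A minimal sketch of optimal replacement; note that it needs the entire future reference string, which is why it serves as a benchmark rather than a practical policy (again three frames assumed):

def optimal_faults(reference_string, num_frames):
    resident = set()
    faults = 0
    for i, page in enumerate(reference_string):
        if page in resident:
            continue                       # hit
        faults += 1
        if len(resident) == num_frames:
            future = reference_string[i + 1:]
            # Evict the resident page whose next use is farthest away (or never).
            victim = max(resident,
                         key=lambda p: future.index(p) if p in future else len(future) + 1)
            resident.remove(victim)
        resident.add(page)
    return faults

print(optimal_faults([1, 2, 6, 12, 0, 0], 3))   # 5 faults on this short string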

3. Least Recently Used (LRU) Algorithm


The page that has not been used for the longest time is selected and replaced, since the algorithm
assumes that this page will not be used again in the near future.

Figure 9. LRU Algorithm


Source:tutorialspoint.com
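
A minimal sketch of LRU replacement that tracks recency with an ordered mapping (three frames assumed):

from collections import OrderedDict

def lru_faults(reference_string, num_frames):
    recency = OrderedDict()                # resident pages, least recently used first
    faults = 0
    for page in reference_string:
        if page in recency:
            recency.move_to_end(page)      # hit: mark the page as most recently used
            continue
        faults += 1
        if len(recency) == num_frames:
            recency.popitem(last=False)    # evict the least recently used page
        recency[page] = True
    return faults

print(lru_faults([1, 2, 6, 12, 0, 0], 3))    # 5 faults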

Other Page-Replacement Algorithms

Page Buffering algorithm


This algorithm maintains a certain minimum number of free frames at all times. When a page fault
(a request for a page not in memory) occurs, one of the free frames from the free list is allocated so that the
requesting process is up and running again as quickly as possible; a victim page is then selected, written to
disk, and its frame freed as a second step.

Least Frequently Used (LFU) algorithm


Replace the page with the smallest reference count. A problem can occur if a page is used heavily at first
and then never again, because its reference count remains high even though the page is no longer needed.

Most Frequently Used (MFU) algorithm


This algorithm replaces the page with the largest count; it is based on the argument that the page with the
smallest count was probably just brought in and has yet to be used.

Frame-allocation Algorithm
These algorithms center on how many frames (memory blocks) are allocated to each process (and to
other needs).
Equal Allocation
When there are frames available and processes to share them, the frames are divided equally
and each process gets the same number of frames.

Proportional Allocation
Allocates frames to each process in proportion to its size, relative to the total size of all
processes.

Priority allocation
Higher priority processes get more frames.
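
A minimal sketch contrasting equal and proportional allocation, assuming 64 available frames and two example process sizes (all numbers are made up):

TOTAL_FRAMES = 64                          # assumed number of available frames
process_sizes = {"P1": 10, "P2": 127}      # example virtual sizes, in pages

# Equal allocation: every process gets the same share.
equal = {p: TOTAL_FRAMES // len(process_sizes) for p in process_sizes}

# Proportional allocation: each process gets frames in proportion to its size.
total_size = sum(process_sizes.values())
proportional = {p: size * TOTAL_FRAMES // total_size
                for p, size in process_sizes.items()}

print(equal)                               # {'P1': 32, 'P2': 32}
print(proportional)                        # {'P1': 4, 'P2': 59}; leftover frames need a tie-break rule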
