
Operating System (22516)

Memory Management
Memory is central to the operation of a modern computer system. Memory is a large array of
words or bytes, each with its own address.
A program resides on a disk as a binary executable file. The program must be brought into
memory and placed within a process for it to be executed. Depending on the memory
management in use the process may be moved between disk and memory during its execution.
The collection of processes on the disk that are waiting to be brought into memory for execution
forms the input queue: one of the processes in the input queue is selected and loaded into
memory. We can provide protection by using two registers, usually a base and a limit, as
shown in fig. 5.1. The base register holds the smallest legal physical memory address; the limit
register specifies the size of the range. For example, if the base register holds 300040 and the
limit register is 120900, then the program can legally access all addresses from 300040 through
420939 (inclusive).
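This check can be sketched in a few lines, using the base and limit values from the example (the function name is illustrative, not from the notes):

```python
# Hypothetical sketch of the base/limit protection check described above.
BASE = 300040   # smallest legal physical address (base register)
LIMIT = 120900  # size of the legal range (limit register)

def is_legal(address):
    """An address is legal if BASE <= address < BASE + LIMIT."""
    return BASE <= address < BASE + LIMIT

print(is_legal(300040))  # True  (first legal address)
print(is_legal(420939))  # True  (last legal address, 300040 + 120900 - 1)
print(is_legal(420940))  # False (one past the range: trap in real hardware)
```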

Fig. 5.1

Logical versus Physical Address Space


• An address generated by the CPU is commonly referred to as a logical address, whereas
an address seen by the memory unit is commonly referred to as a physical address.
• The compile-time and load-time address-binding schemes result in an environment where
the logical and physical addresses are the same.

Operating System chapter -5 notes Meena talele 1


• The execution-time address-binding scheme results in an environment where the logical
and physical addresses differ.
• In this case, we usually refer to the logical address as a virtual address.
• The set of all logical addresses generated by a program is referred to as a logical address
space.
• The set of all physical addresses corresponding to these logical addresses is referred to as
a physical address space.
• The run-time mapping from virtual to physical addresses is done by the memory
management unit (MMU), which is a hardware device.
• The base register is called a relocation register. The value in the relocation register is
added to every address generated by a user process at the time it is sent to memory.
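The relocation-register mapping above can be sketched as follows; the register values are assumed for illustration:

```python
RELOCATION = 14000  # assumed value in the relocation (base) register
LIMIT = 5000        # assumed value in the limit register

def mmu_translate(logical):
    """Map a logical address to a physical one, as the MMU would."""
    if logical >= LIMIT:
        raise MemoryError("addressing error: trap to the OS")
    return logical + RELOCATION  # relocation value added to every address

print(mmu_translate(346))  # 14346
```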

Partitioning
• The memory is usually divided into two partitions: one for the resident operating system,
and one for the user processes.
• If the operating system resides in low memory, then user processes are executed in high
memory.
• The code and data of the operating system must be protected from changes by user processes.
The user processes must also be protected from one another.

Static Memory Partitioning (Fixed)


• Memory is divided into a number of fixed-size partitions; this is called static memory
partitioning.
• Each partition contains exactly one process.
• The number of programs that can be executed depends on the number of partitions.
• When a partition is free, a selected process from the input queue is loaded into the free
partition.
• When the process terminates, the partition becomes available for another process.
• The operating system keeps a table indicating which parts of memory are available and
which are occupied.
• Initially, all memory is available for user processes and is considered as one large block
of available memory, a hole.
• When a process arrives, a hole of memory large enough for it is allocated to the process.
For example, assume that we have 2560K of memory available and a resident
operating system of 400K. This situation leaves 2160K for user processes. If the input queue
is as given in following fig. 5.2 then by using FCFS job scheduling, we can immediately
allocate memory to processes P1, P2, P3. The remaining Hole of size 260K cannot be used by
any of the remaining processes in the input queue.

Job Queue:

Process   Memory   Time
P1        600K     10
P2        1000K    5
P3        300K     20
P4        700K     8
P5        500K     15

Memory allocation after FCFS (total memory 2560K):

0K    - 400K    Operating system
400K  - 1000K   P1
1000K - 2000K   P2
2000K - 2300K   P3
2300K - 2560K   free hole (260K)

Memory Allocation for processes
Fig 5.2
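The allocation in fig. 5.2 can be reproduced with a short sketch; the sizes (in KB) come from the example, while the code itself is only an illustration:

```python
# FCFS allocation of the job queue from fig. 5.2 (sizes in KB).
jobs = [("P1", 600), ("P2", 1000), ("P3", 300), ("P4", 700), ("P5", 500)]
free_start, free_end = 400, 2560  # 2160K left after the 400K resident OS

allocated, waiting = [], []
for name, size in jobs:
    if free_end - free_start >= size:
        allocated.append((name, free_start, free_start + size))
        free_start += size  # carve the process off the front of the hole
    else:
        waiting.append(name)  # remaining hole is too small

print(allocated)  # P1 at 400-1000, P2 at 1000-2000, P3 at 2000-2300
print(waiting)    # ['P4', 'P5'] -- the remaining 260K hole fits neither
```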

Disadvantages
1. The main drawback of fixed partitioning is memory wastage.
2. In a fixed partition, if the memory required by a process is less than the size of the
partition, the remaining part of the partition cannot be allocated to any other process,
causing internal fragmentation.

Dynamic memory partitioning(Variable)


• In this scheme, a set of holes of various sizes is scattered throughout memory.
• When a process arrives, a hole that is large enough for the process is searched for in the
set of free holes.
• If the hole is too large, it is divided into two parts: one part is allocated to the arriving
process; the other is returned to the set of holes.
• When a process terminates, it releases its block of memory to the set of holes.
• If the new hole is adjacent to other holes, they are merged to form one large hole.
• Then it is checked whether there are processes waiting for memory and whether this
newly freed and recombined memory could satisfy the demands of any of these waiting
processes.
There are three common strategies to select a free hole from the set of free holes:

1. First fit:-
• Allocates the first hole that is big enough.
• This algorithm scans memory from the beginning and selects the first available
block that is large enough to hold the process.



2. Best fit:-
• It chooses the hole that is closest in size to the request, i.e., it allocates the smallest
hole that is big enough to hold the process.
• If the list is not ordered by size, the entire list must be searched.
3. Worst fit:-
• It allocates the largest hole to the process request. It searches for the largest hole in
the entire list.
• If the list is not ordered by size, the entire list must be searched.
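The three strategies can be sketched over an unordered list of hole sizes (the hole sizes below are assumed for illustration):

```python
def first_fit(holes, size):
    """Index of the first hole big enough, or None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest hole big enough (whole list searched)."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Index of the largest hole (whole list searched)."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]     # assumed free-hole sizes
print(first_fit(holes, 212))  # 1 (first hole >= 212 is 500)
print(best_fit(holes, 212))   # 3 (300 is the closest fit)
print(worst_fit(holes, 212))  # 4 (600 is the largest hole)
```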

Free space management


When memory is allocated dynamically, it is the responsibility of the OS to manage it properly.
There are two methods:
1. Memory management with Bitmap:
• In this type of memory management, memory is divided into allocation units.
• These units may be small, to store a few words, or large, to store several KB.
• A bit is associated with each unit in the bitmap.
• The bit is 0 if the unit is free and 1 if it is occupied.

Fig (a): part of memory with five processes and three holes; the shaded regions (0 in the bitmap)
are free. (b): the corresponding bitmap.
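A bitmap lookup for a run of free units can be sketched as follows; the bitmap contents and unit size below are assumptions for illustration, not the ones in the figure:

```python
ALLOC_UNIT = 4  # assumed size of one allocation unit, in KB
bitmap = [1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1]  # 1 = occupied, 0 = free

def find_free_run(bitmap, n):
    """Return the index of the first run of n consecutive free units, or None."""
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit == 0 else 0
        if run == n:
            return i - n + 1
    return None

start = find_free_run(bitmap, 3)
print(start)                    # 9 -> three free units starting at unit 9
print(start * ALLOC_UNIT)       # 36 -> i.e. the block starts at address 36 KB
```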

2. Memory management with linked list: (refer fig. a and c)


• In this type, a linked list is used for allocated and free memory segments. Each node of
the linked list is divided into four parts.
• The first part specifies whether the memory segment stores a process (P) or is a hole (H).
• The second part stores the starting address of the process or hole.
• The third part specifies the length of the process or hole.
• The fourth part is a pointer to the next entry.
• The linked list is sorted by the address field. This makes updating the list easy when a
process terminates or is swapped out.



• The terminating process has two neighbors, each of which is either a process or a hole.
These may occur in the following four combinations, as shown in fig. 5.3 below:
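A minimal sketch of hole coalescing when a process terminates, using Python tuples in place of real list nodes (the segment layout below is assumed for illustration):

```python
# Each node: (kind, start, length), kind 'P' (process) or 'H' (hole);
# a sorted Python list stands in for the linked list's next pointers.
segments = [('P', 0, 5), ('H', 5, 3), ('P', 8, 6), ('H', 14, 4), ('P', 18, 2)]

def terminate(segments, start):
    """Turn the process at `start` into a hole, merging adjacent holes."""
    segs = [('H', s, l) if s == start else (k, s, l) for k, s, l in segments]
    merged = [segs[0]]
    for kind, s, l in segs[1:]:
        pk, ps, pl = merged[-1]
        if kind == 'H' and pk == 'H' and ps + pl == s:
            merged[-1] = ('H', ps, pl + l)  # coalesce neighbouring holes
        else:
            merged.append((kind, s, l))
    return merged

# Process at 8 terminates: holes on both sides merge into one hole of length 13.
print(terminate(segments, 8))  # [('P', 0, 5), ('H', 5, 13), ('P', 18, 2)]
```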

Fig 5.3

Swapping
• A process can be swapped temporarily out of memory to a backing store, and then
brought back into memory for continued execution.
• Assume a multiprogramming environment with a round-robin CPU-scheduling algorithm.
When a quantum expires, the memory manager starts to swap out the process that just
finished, and to swap another process into the memory space that has been freed (fig. 5.4).
When each process finishes its quantum, it is swapped with another process.

Fig 5.4

• A variant of this swapping policy is used for priority-based scheduling algorithms.
• If a higher-priority process arrives and wants service, the memory manager can swap out
a lower-priority process so that it can load and execute the higher-priority process.
• When the higher-priority process finishes, the lower-priority process can be swapped back
in and continued. This variant of swapping is sometimes called roll out, roll in.
• Normally, a process that is swapped out is swapped back into the same memory space that
it occupied previously. If binding is done at assembly or load time, then the process cannot
be moved to a different location. If execution-time binding is used, then it is possible to
swap a process into a different memory space.



• Swapping requires a backing store. The backing store is commonly a fast disk. It must be
large enough to accommodate copies of all memory images for all users. The system maintains
a ready queue consisting of all processes whose memory images are on the backing store
or in memory and are ready to run. The context-switch time in such a swapping system is
fairly high.
• Let us assume that the user process is of size 100K and the backing store is a standard
hard disk with a transfer rate of 1 megabyte per second. The actual transfer of the 100K
process to or from memory takes:
100K / 1000K per second = 1/10 second = 100 milliseconds

Fragmentation:
Memory fragmentation can be of two types:
1. Internal Fragmentation.
2. External Fragmentation

In internal fragmentation there is wasted space internal to a partition, because the block
of data loaded is smaller than the partition.
E.g.: if there is a block of 50KB and the process requests 40KB, and the block is allocated
to the process, then 10KB of the block is left unused.

External fragmentation exists when enough total memory space exists to satisfy a request,
but it is not contiguous, i.e., storage is fragmented into a large number of small holes.
External fragmentation may be either a minor or a major problem.
• One solution for overcoming external fragmentation is compaction. The goal is to move
all the free memory together to form one large block.
• Compaction is not always possible. If relocation is static and done at load time, then
compaction is not possible. Compaction is possible only if relocation is dynamic and done
at execution time.
• Another possible solution to the external fragmentation problem is to permit the logical
address space of a process to be non-contiguous, thus allowing the process to be allocated
physical memory wherever such memory is available.

Segmentation
• A user program can be subdivided using segmentation, in which the program and its
associated data are divided into a number of segments.
• It is not required that all segments of all programs be of the same length, although there is
a maximum segment length.
• A logical address using segmentation consists of two parts: a segment number and an
offset.
• Because of the use of unequal-size segments, segmentation is similar to dynamic
partitioning.
• In segmentation, a program may occupy more than one partition, and these partitions need
not be contiguous.
• Segmentation eliminates internal fragmentation but, like dynamic partitioning, it suffers
from external fragmentation.



• However, because a process is broken up into a number of smaller pieces, the external
fragmentation should be less.
• Paging is invisible to the programmer, but segmentation is usually visible and is provided
as a convenience for organizing programs and data.
• Another problem with unequal-size segments is that there is no simple relationship
between logical addresses and physical addresses.
• A segmentation scheme makes use of a segment table for each process and a list of
free blocks of main memory.
• Each segment table entry gives the starting address in main memory of the
corresponding segment. The entry should also provide the length of the segment, to ensure
that invalid addresses are not used.
• When a process enters the Running state, the address of its segment table is loaded into a
special register used by the memory management hardware.
• Consider an address of n + m bits, where the leftmost n bits are the segment number and
the rightmost m bits are the offset. The following steps are needed for address translation:
  - Extract the segment number as the leftmost n bits of the logical address.
  - Use the segment number as an index into the process segment table to find the
    starting physical address of the segment.
  - Compare the offset, expressed in the rightmost m bits, to the length of the segment.
    If the offset is greater than or equal to the length, the address is invalid.
  - The desired physical address is the sum of the starting physical address of the
    segment plus the offset.
• Segmentation and paging can be combined to good effect.
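The translation steps above can be sketched as a small example; the segment table values below are assumptions for illustration only:

```python
# Assumed segment table: segment number -> (base, length)
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(segment, offset):
    """Segmented address translation: base lookup, bounds check, then add."""
    base, length = segment_table[segment]
    if offset >= length:
        raise MemoryError("trap: offset beyond segment length")
    return base + offset

print(translate(2, 53))   # 4353 (= 4300 + 53)
print(translate(0, 999))  # 2399 (last valid byte of segment 0)
```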

PAGING AND STRUCTURE OF PAGE TABLE


• Paging is a memory management scheme that permits the physical address space of a
process to be non-contiguous.
• Support for paging is handled by hardware.
• It is used to avoid external fragmentation.
• Paging avoids the considerable problem of fitting varying-sized memory chunks onto
the backing store.
• Paging is a technique in which physical memory is broken into fixed-size blocks called
frames and logical memory is broken into blocks of the same size called pages (the size is
a power of 2, between 512 bytes and 8192 bytes).
• When a process is to be executed, its pages are loaded into any available
memory frames.
• The logical address space of a process can be non-contiguous, and a process is allocated
physical memory wherever free memory frames are available.
• The operating system keeps track of all free frames. The operating system needs n free
frames to run a program of size n pages.
The address generated by the CPU is divided into:
• Page number (p) -- the page number is used as an index into a page table, which
contains the base address of each page in physical memory.
• Page offset (d) -- the page offset is combined with the base address to define the physical
memory address.
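The split of a logical address into p and d can be sketched as follows; the 4 KB page size is an assumption for illustration:

```python
PAGE_SIZE = 4096  # assumed 4 KB pages (a power of 2)

def split(logical):
    """Split a logical address into (page number, page offset)."""
    page = logical // PAGE_SIZE   # high-order bits: index into the page table
    offset = logical % PAGE_SIZE  # low-order bits: offset within the page
    return page, offset

print(split(10000))  # (2, 1808): page 2, offset 1808
```

Because the page size is a power of 2, real hardware performs this split by simply taking the high and low bits of the address rather than dividing.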



Following fig. 5.5 shows the paging table architecture:

Fig. 5.5
For example, using a page size of 4 bytes and a physical memory of 32 bytes (8 frames), the
logical memory can be mapped to physical memory as shown in fig. 5.6 below.

The mapping can be done by using the following formula:



Physical memory address = (frame number * page size) + page offset
So, logical address 0 can be mapped to physical address 20 (= (5 * 4) + 0).
Logical address 3 (i.e., page 0 and offset 3) can be mapped to physical address 23 (= (5 * 4) +
3).

(Note: to calculate the page number and offset from a given logical address, divide the logical
address by the page size. The quotient gives the page number and the remainder gives the offset.)

Logical Address | Info | Page no. | Frame | Page size | Offset | Physical Address = | Physical Address
1               | B    | 0        | 5     | 4         | 1      | 5*4+1              | 21
3               | D    | 0        | 5     | 4         | 3      | 5*4+3              | 23
8               | I    | 2        | 1     | 4         | 0      | 1*4+0              | 4
14              | O    | 3        | 2     | 4         | 2      | 2*4+2              | 10
10              | K    | 2        | 1     | 4         | 2      | 1*4+2              | 6
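The mappings in the table above can be verified with a short sketch; the page-to-frame assignments are the ones used in the example:

```python
PAGE_SIZE = 4
page_table = {0: 5, 1: 6, 2: 1, 3: 2}  # page -> frame, as in the example

def to_physical(logical):
    """Physical address = (frame number * page size) + page offset."""
    page, offset = logical // PAGE_SIZE, logical % PAGE_SIZE
    return page_table[page] * PAGE_SIZE + offset

for logical in (1, 3, 8, 14, 10):
    print(logical, "->", to_physical(logical))
# 1 -> 21, 3 -> 23, 8 -> 4, 14 -> 10, 10 -> 6
```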

Demand Paging and page fault


Demand paging is similar to a paging system with swapping (Fig 8.2). When we want to
execute a process, we swap it into memory.
When a process is to be swapped in, the pager guesses which pages will be used before the
process is swapped out again. Instead of swapping in a whole process, the pager brings only
the necessary pages into memory. Thus, it avoids reading into memory pages that will not be
used anyway, decreasing the swap time and the amount of physical memory needed.
Hardware support is required to distinguish between the pages that are in memory and the
pages that are on the disk.
The valid-invalid bit scheme can be used to distinguish between the pages that are on the disk
and those that are in memory.
• If the bit is valid, then the page is both legal and in memory.
• If the bit is invalid, then the page is either not valid or is valid but currently on the disk.
Marking a page as invalid has no effect if the process never accesses that page. But if the
process accesses a page which is marked invalid, a page fault trap occurs. This trap is the
result of the operating system's failure to bring the desired page into memory.



The steps for handling a page fault are straightforward and are given below:
1. We check the internal table of the process to determine whether the reference made is
valid or invalid.
2. If the reference is invalid, we terminate the process. If it is valid but the page is not yet
loaded, we now page it in.
3. We find a free frame.
4. We schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, we modify the internal table kept with the process to
indicate that the page is now in memory.
6. We restart the instruction which was interrupted by the illegal address trap. The process
can now access the page.
In the extreme case, we can start a process with no pages in memory. When the OS points to
the first instruction of the process, it generates a page fault. After the page is brought into
memory, the process continues to execute, faulting as necessary until every page that it needs
is in memory. This is pure demand paging: a page is never brought into memory until it is
required.
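The six steps above can be condensed into a toy simulator; the disk contents and data layout are assumptions for illustration:

```python
# Toy sketch of the page-fault steps above (pure demand paging, assumed data).
disk = {0: "page0-data", 1: "page1-data", 2: "page2-data"}
memory, page_table = {}, {}  # frame -> data, page -> frame; empty = all invalid
next_frame = 0

def access(page):
    global next_frame
    if page not in disk:                   # step 1-2: invalid reference
        raise MemoryError("invalid reference: terminate process")
    if page not in page_table:             # invalid bit -> page fault trap
        frame = next_frame                 # step 3: find a free frame
        next_frame += 1
        memory[frame] = disk[page]         # step 4: read the page from disk
        page_table[page] = frame           # step 5: update the internal table
    return memory[page_table[page]]        # step 6: restart the access

print(access(1))  # page fault, then 'page1-data'
print(access(1))  # already in memory: no fault this time
```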

Advantages of Demand Paging:


1. Large virtual memory.
2. More efficient use of memory.
3. Unconstrained multiprogramming. There is no limit on degree of multiprogramming.

Disadvantages of Demand Paging:


1. The number of tables and the amount of processor overhead for handling page interrupts
are greater than in the case of simple paged management techniques.
2. A job may claim more memory than it needs, due to the lack of explicit constraints on a
job's address space size.



Page Replacement Algorithm
1. Demand paging also improves the degree of multiprogramming by allowing more processes
to run at the same time.
2. The page replacement policy deals with the selection of a page in memory to be replaced by
a new page that must be brought in.
3. When a user process is executing, a page fault may occur.
4. The hardware traps to the operating system, which checks its internal table to see that this
is a page fault and not an illegal memory access.
5. The operating system determines where the desired page is residing on the disk, and then
looks for a free frame.
6. If there are no free frames on the free-frame list, then it is necessary to replace a page
which is currently in memory.
7. The page selected for removal from memory should be the page which is least likely
to be referenced in the future.

Reference String
The string of memory references is called a reference string. Reference strings are generated
artificially or by tracing a given system and recording the address of each memory reference. The
latter choice produces a large amount of data, about which we note two things:
• For a given page size, we need to consider only the page number, not the entire address.
• If we have a reference to a page p, then any immediately following references to page p
will never cause a page fault: page p will be in memory after the first reference, so the
immediately following references do not fault.
• For example, consider the following sequence of addresses: 123, 215, 600, 1234, 76, 96.
• If the page size is 100, then the reference string is 1, 2, 6, 12, 0, 0.
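Deriving the reference string from the addresses above is a single integer division per address:

```python
def reference_string(addresses, page_size):
    """Reduce raw memory addresses to a string of page numbers."""
    return [addr // page_size for addr in addresses]

refs = reference_string([123, 215, 600, 1234, 76, 96], 100)
print(refs)  # [1, 2, 6, 12, 0, 0] -- the repeated 0 cannot cause a second fault
```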

FIFO Algorithm:
1. This is the simplest page replacement algorithm. A FIFO replacement algorithm
associates with each page the time when that page was brought into memory.
2. When a page is to be replaced, the oldest one is selected.
3. We replace the page at the head of the queue. When a page is brought into memory, we
insert it at the tail of the queue.

Example: Consider the following reference string, with the frames initially empty.



• The first three references (7, 0, 1) cause page faults and the pages are brought into the
empty frames.
• The next reference, 2, replaces page 7 because page 7 was brought in first.
• Since 0 is the next reference and 0 is already in memory, we have no page fault.
• The next reference, 3, results in page 0 being replaced, so the next reference to 0
causes a page fault. This continues until the end of the string.
• There are 15 faults altogether.
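The fault count can be checked with a short sketch; the reference string below is the standard 3-frame example consistent with the pages and the 15-fault total described above:

```python
from collections import deque

def fifo_faults(refs, frame_count):
    """Count page faults under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                         # already resident: no fault
        faults += 1
        if len(frames) == frame_count:
            frames.discard(queue.popleft())  # evict the oldest page
        frames.add(page)
        queue.append(page)                   # newest page goes to the tail
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))  # 15
```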

Example 2

Optimal Page Replacement Algorithm:


• The optimal page replacement algorithm has the lowest page fault rate of all algorithms.
• An optimal page replacement algorithm exists and has been called OPT.
• The working is simple: “Replace the page that will not be used for the longest period of
time.”
Example: Consider the following reference string.
• The first three references cause faults that fill the three empty frames.
• The reference to page 2 replaces page 7, because 7 will not be used until reference 18.
• Page 0 will be used at 5 and page 1 at 14.



• With only 9 page faults, optimal replacement is much better than FIFO, which had 15
faults.
This algorithm is difficult to implement because it requires future knowledge of the reference
string.
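A sketch of OPT, which "cheats" by looking ahead in the reference string (the same 3-frame string as in the FIFO example, giving the 9 faults mentioned above):

```python
def opt_faults(refs, frame_count):
    """Count page faults under optimal (OPT) replacement."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == frame_count:
            # Evict the page whose next use is farthest in the future (or never).
            def next_use(p):
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else float('inf')
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))  # 9
```

The `next_use` lookahead is exactly the future knowledge a real OS does not have, which is why OPT serves only as a yardstick for other algorithms.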

Example 2

Least Recently Used (LRU) Algorithm


If the optimal algorithm is not feasible, an approximation to the optimal algorithm is possible.
The main difference between OPT and FIFO is that:
• FIFO uses the time when a page was brought in, whereas OPT uses the time when a
page is to be used.
• The LRU algorithm replaces the pages that have not been used for the longest period of time.
• LRU associates with each page the time of that page's last use.
• This strategy is the optimal page replacement algorithm looking backward in time rather
than forward.

Ex: Consider the following reference string



• When the reference to page 4 occurs, LRU sees that of the three pages in frames, page 2
was used least recently.
• The most recently used page is page 0, and just before that, page 3 was used.
• The LRU policy is often used as a page replacement algorithm and is considered to be good.
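LRU can be sketched by tracking the time of each page's last use (same 3-frame reference string as before; for this string it gives 12 faults, between OPT's 9 and FIFO's 15):

```python
def lru_faults(refs, frame_count):
    """Count page faults under least-recently-used replacement."""
    frames, last_used, faults = set(), {}, 0
    for i, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == frame_count:
                victim = min(frames, key=lambda p: last_used[p])
                frames.discard(victim)  # evict the least recently used page
            frames.add(page)
        last_used[page] = i             # record this page's time of last use
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # 12
```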

