
MEMORY ORGANIZATION

• Memory Hierarchy
• Main Memory
• Virtual Memory
• Cache Memory
MEMORY HIERARCHY
The goal of the memory hierarchy is to obtain the highest possible access speed while minimizing the total cost of the memory system.
[Figure: the CPU connects to cache memory and main memory; an I/O processor connects auxiliary memory (magnetic tapes and magnetic disks) to main memory. Hierarchy from fastest/most expensive to slowest/cheapest: Register, Cache, Main Memory, Magnetic Disk, Magnetic Tape.]
Virtual Memory
• It gives the programmer the illusion that the system has a very large memory, even though the computer actually has a relatively small main memory
• It provides a mechanism for translating a virtual address to a physical address
• Mapping is handled by the H/W automatically by means of a mapping table
Address Mapping
• The mapping table may be stored in:
• A separate memory unit: an additional memory unit is required, plus one extra memory access time
• Main memory: takes space from main memory, and 2 accesses to memory are required, with the program running at half speed
• An associative memory
Address Mapping contd.
• Group of words in the address space: page
• Group of words in the memory space: block
• Ex: page = 1024 words, so 10 bits for the line number; 8 pages => 3 bits for the page number
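The address split in the example above can be sketched in Python. This is a minimal illustration of the slide's numbers (8 pages of 1024 words, i.e. a 13-bit virtual address); the function name is ours:

```python
# Split a 13-bit virtual address into page number and line number,
# assuming 8 pages of 1024 words each (the slide's example sizes).
PAGE_SIZE = 1024      # 1024 words -> 10-bit line number
LINE_BITS = 10

def split_address(vaddr):
    page = vaddr >> LINE_BITS         # top 3 bits: page number (0..7)
    line = vaddr & (PAGE_SIZE - 1)    # low 10 bits: line number
    return page, line

page, line = split_address(0b101_0000000011)   # = 5123
print(page, line)   # 5 3
```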
Address Mapping using Pages
• The line address in the address space and the memory space is the same.
• The only mapping required is from a page to a block
• The memory page table gives the block of the page
• A presence bit indicates whether the corresponding page is in main memory or not
• If the number of blocks in memory = m and the number of pages in the virtual address space = n, an n-entry page table in memory is required
• Disadvantage:
• Inefficient storage space utilization: (n - m) entries of the table are empty
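A minimal sketch of this page-table translation in Python, assuming hypothetical values (n = 8 pages, m = 4 blocks, 1024-word pages); the table contents and names are illustrative only:

```python
# Page-table address mapping with a presence bit (sketch).
# n = 8 entries; only m = 4 are present, so n - m entries are unused.
LINE_BITS = 10

# page_table[page] = (presence_bit, block_number)
page_table = [(0, None), (1, 3), (0, None), (1, 0),
              (0, None), (1, 1), (0, None), (1, 2)]

def translate(vaddr):
    page = vaddr >> LINE_BITS
    line = vaddr & ((1 << LINE_BITS) - 1)
    present, block = page_table[page]
    if not present:
        raise LookupError("page fault: page %d not in main memory" % page)
    return (block << LINE_BITS) | line    # physical address

print(translate((1 << LINE_BITS) | 5))   # page 1 maps to block 3
```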
Associative Memory Page Table
• Here the number of words in the page table = the number of blocks in main memory
• Implemented using an associative memory, with each word in memory containing a page number together with its corresponding block number
• Page fault: the page number cannot be found in the page table
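The associative page table can be sketched as a dictionary keyed by page number, holding only as many entries as there are blocks in main memory (the mapping shown is hypothetical):

```python
# Associative-memory page table sketch: m entries, one per block
# in main memory; an absent page number models a page fault.
page_table = {0: 3, 1: 0, 5: 1, 7: 2}   # page number -> block number

def lookup(page):
    if page not in page_table:   # page fault
        return None
    return page_table[page]

print(lookup(5), lookup(4))   # 1 None
```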
• A virtual memory system is a combination of hardware and software techniques
• The H/W mapping mechanism and the memory management software together form the architecture of virtual memory
• Memory management software decides:
• Which page is to be removed to create room for a new page
• When a new page is to be transferred from secondary memory to main memory
• Where the page is to be placed in main memory
• Loading a page from secondary to main memory is an I/O operation, so when a page fault occurs, the CPU suspends the present program and control is transferred to the next program waiting for the CPU
Virtual Memory

PAGE REPLACEMENT ALGORITHMS

FIFO
Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
Page frames (contents after each page fault):
7 7 7 2 2 2 4 4 4 0 0 0 7 7 7
  0 0 0 3 3 3 2 2 2 1 1 1 0 0
    1 1 1 0 0 0 3 3 3 2 2 2 1

The FIFO algorithm selects the page that has been in memory the longest time
Using a queue: every time a page is loaded, its identification is inserted in the queue
Easy to implement
May result in frequent page faults
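FIFO replacement as described above can be sketched with a queue in Python; running it on the slide's reference string with 3 frames reproduces the 15 faults shown in the table:

```python
from collections import deque

# FIFO page replacement sketch: a queue records loading order,
# and the oldest resident page is evicted on a fault.
def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:          # frames full:
                frames.discard(queue.popleft()) # evict the oldest page
            frames.add(p)
            queue.append(p)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(refs, 3))   # 15
```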

Optimal Replacement (OPT): lowest page fault rate of all algorithms

Replace the page which will not be used for the longest period of time
Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
Page frames (contents after each page fault):
7 7 7 2 2 2 2 2 7
  0 0 0 0 4 0 0 0
    1 1 3 3 3 1 1
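OPT can be simulated when the whole reference string is known in advance, as it is here. A sketch, which yields the 9 faults shown in the table for 3 frames:

```python
# Optimal (OPT) replacement sketch: on a fault with full frames,
# evict the page whose next use lies farthest in the future.
def opt_faults(refs, nframes):
    frames, faults = set(), 0
    for i, p in enumerate(refs):
        if p in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            rest = refs[i + 1:]
            # index of next use; infinity if the page is never used again
            def next_use(q):
                return rest.index(q) if q in rest else float("inf")
            frames.discard(max(frames, key=next_use))
        frames.add(p)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(opt_faults(refs, 3))   # 9
```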
LRU
- OPT is difficult to implement since it requires future knowledge
- LRU uses the recent past as an approximation of the near future

Replace the page which has not been used for the longest period of time
Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
Page frames (contents after each page fault):
7 7 7 2 2 4 4 4 0 1 1 1
  0 0 0 0 0 0 3 3 3 0 0
    1 1 3 3 2 2 2 2 2 7

- LRU may require substantial hardware assistance
- The problem is to determine an order for the frames defined by the time of last use
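The last-use ordering the slide mentions can be sketched in software with an `OrderedDict` that keeps frames ordered from least to most recently used; on the slide's reference string with 3 frames this gives the 12 faults shown:

```python
from collections import OrderedDict

# LRU replacement sketch: evict the page unused for the longest time.
def lru_faults(refs, nframes):
    frames, faults = OrderedDict(), 0
    for p in refs:
        if p in frames:
            frames.move_to_end(p)           # p is now most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict least recently used
            frames[p] = True
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(refs, 3))   # 12
```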
Cache Memory

CACHE MEMORY

Locality of Reference
Principle of locality: programs tend to reuse the data and instructions they have used recently.

Cache
- A cache is a fast, small-capacity memory that should hold the information most likely to be accessed

[Figure: the CPU connects to the cache memory and to main memory]
Parameters of Cache Memory
• Cache hit
• A referenced item is found in the cache by the processor
• Cache miss
• A referenced item is not present in the cache
• Hit ratio
• Ratio of the number of hits to the total number of references => number of hits / (number of hits + number of misses)
• Miss penalty
• Additional cycles required to service the miss
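These parameters combine into an average access time. A small worked example with assumed counts and cycle costs (the 950/50 split and the 1- and 20-cycle figures are illustrative, not from the slides):

```python
# Hit ratio and average memory access time (hypothetical values).
hits, misses = 950, 50
hit_ratio = hits / (hits + misses)     # 0.95

cache_time, miss_penalty = 1, 20       # cycles (assumed)
avg_access = cache_time + (1 - hit_ratio) * miss_penalty
print(hit_ratio, round(avg_access, 3))   # roughly 0.95 and 2 cycles
```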
PERFORMANCE OF CACHE
Memory Access
• All memory accesses are directed first to the cache
• If the word is in the cache, access the cache to provide it to the CPU
• If the word is not in the cache, bring a block (or a line) including that word to replace a block now in the cache

- How can we know if the word that is required is there?
- If a new block is to replace one of the old blocks, which one should we choose?
ASSOCIATIVE MAPPING
Mapping Function
Specification of the correspondence between main memory blocks and cache blocks:
- Associative mapping
- Direct mapping
- Set-associative mapping
Associative Mapping
- Any block location in the cache can store any block of memory
-> Most flexible
- The mapping table is implemented in an associative memory
-> Fast, very expensive
- The mapping table stores both the address and the content of the memory word

[Figure: the argument register holds a 15-bit address; CAM entries (address, data): 01000 -> 3450, 02777 -> 6710, 22235 -> 1234]
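The content-addressable lookup can be sketched with a dictionary from full address to data, using the (octal) values from the slide's table; a real CAM compares all entries in parallel, which the dictionary only approximates:

```python
# Associative (CAM) mapping sketch: the cache stores full
# (address, data) pairs and is searched by the whole address.
cam = {0o01000: 3450, 0o02777: 6710, 0o22235: 1234}

def read(addr):
    return cam.get(addr)   # None models a cache miss

print(read(0o02777), read(0o12345))   # 6710 None
```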
DIRECT MAPPING
- Each memory block has only one place to load in the cache
- An n-bit memory address consists of 2 parts: k bits of index field and n-k bits of tag field
- The n-bit address is used to access main memory, and the k-bit index is used to access the cache

[Figure: addressing relationships - Tag (6 bits) | Index (9 bits); main memory 32K x 12 (address = 15 bits, data = 12 bits, addresses 00000-77777 octal); cache memory 512 x 12 (address = 9 bits, data = 12 bits, addresses 000-777 octal)]
Direct Mapping Cache Organization

[Figure: main memory addresses 00000-02777 (octal) holding data such as 1220, 2340, 3450, 4560, 5670, 6710; cache entries are addressed by the 9-bit index and store a tag plus data, e.g. index 000: tag 00, data 1220; index 777: tag 02, data 6710]
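A direct-mapped lookup for the slide's sizes (15-bit address, 9-bit index, 6-bit tag) can be sketched as follows; the dictionary stands in for the 512-entry cache, and the memory contents are the slide's octal examples:

```python
# Direct-mapping sketch: index = low 9 bits, tag = high 6 bits;
# each cache entry holds (tag, data).
INDEX_BITS = 9

cache = {}   # index -> (tag, data)

def access(addr, memory):
    index = addr & ((1 << INDEX_BITS) - 1)
    tag = addr >> INDEX_BITS
    if index in cache and cache[index][0] == tag:
        return cache[index][1], True    # hit
    data = memory[addr]                 # miss: fetch from main memory
    cache[index] = (tag, data)          # the block's only possible slot
    return data, False

memory = {0o00777: 2340, 0o02777: 6710}
print(access(0o02777, memory))   # (6710, False) - cold miss
print(access(0o02777, memory))   # (6710, True)  - hit
```

Note that 00777 and 02777 share index 777, so they conflict: loading one evicts the other, which is the main weakness of direct mapping.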
DIRECT MAPPING contd.
Direct mapping with a block size of 8 words

[Figure: address split into Tag (6 bits) | Block (6 bits) | Word (3 bits); the cache index is block + word. Index 000-007: Block 0 (tag 01, data 3450 ... 6578); index 010-017: Block 1; index 770-777: Block 63 (tag 02, data ... 6710)]
SET-ASSOCIATIVE MAPPING

- Each memory block has a set of locations in the cache to load

Set-associative mapping cache with a set size of two:

Index   Tag  Data   Tag  Data
000     01   3450   02   5670
777     02   6710   00   2340
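A two-way set-associative lookup can be sketched with each index holding a small list of (tag, data) pairs, using the slide's octal values:

```python
# Two-way set-associative sketch: each index holds up to two
# (tag, data) pairs; a read compares the tag against both ways.
cache = {
    0o000: [(0o01, 3450), (0o02, 5670)],
    0o777: [(0o02, 6710), (0o00, 2340)],
}

def read(index, tag):
    for t, data in cache.get(index, []):
        if t == tag:
            return data    # hit in one of the set's two ways
    return None            # miss

print(read(0o777, 0o00))   # 2340
```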


CACHE WRITE
Write-Through
- Update main memory with every write operation, with the cache memory being updated in parallel
- Adv: MM contains the same data as the cache
- Disadv: slow, due to the memory access time
Write-Back (Copy-Back)
- Only the cache location is updated
- When removed from the cache, it is copied to MM
- Memory is not up-to-date, i.e., the same item in cache and memory may have different values
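The two policies can be contrasted with a toy single-block cache (class and values are illustrative, not from the slides): write-through updates main memory on every write, while write-back only marks the entry dirty and copies it on eviction.

```python
# Sketch contrasting write-through and write-back on cached words.
class Cache:
    def __init__(self, policy):
        self.policy, self.data, self.dirty = policy, {}, set()

    def write(self, mm, addr, value):
        self.data[addr] = value
        if self.policy == "write-through":
            mm[addr] = value        # main memory updated in parallel
        else:                       # write-back: just mark dirty
            self.dirty.add(addr)

    def evict(self, mm, addr):
        if addr in self.dirty:      # copy back on removal
            mm[addr] = self.data[addr]
            self.dirty.discard(addr)
        self.data.pop(addr, None)

mm = {0: 111}
wb = Cache("write-back")
wb.write(mm, 0, 222)
print(mm[0])   # 111: memory is not yet up to date
wb.evict(mm, 0)
print(mm[0])   # 222 after the copy-back
```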
