Memory Organization


Memory Organization

12.1 Memory Hierarchy
12.2 Main Memory
12.3 Auxiliary Memory
12.4 Associative Memory
12.5 Cache Memory
12.6 Virtual Memory
12.7 Memory Management Hardware
Memory Hierarchy

The overall goal of using a memory hierarchy is to obtain the
highest possible average access speed while minimizing the
total cost of the entire memory system.

Multiprogramming refers to the existence of many programs in
different parts of main memory at the same time.
Main memory

[Figure: RAM chip and ROM chip block diagrams connected to an 8-bit
data bus.]
Memory Address Map

The designer of a computer system must calculate the
amount of memory required for the particular application
and assign it to either RAM or ROM.

The interconnection between memory and processor is then
established from knowledge of the size of memory needed
and the types of RAM and ROM chips available.

The addressing of memory can be established by means
of a table that specifies the memory address assigned to
each chip.

The table, called a memory address map, is a pictorial
representation of the assigned address space for each chip in
the system.

Memory Configuration (case study):

Required: 512 bytes of RAM + 512 bytes of ROM
Available: one 512 × 8 ROM chip and 128 × 8 RAM chips
(so four RAM chips are needed)
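The resulting memory address map can be sketched as a short Python script. This is a minimal sketch: the contiguous layout starting at address 0, with the four 128-byte RAM chips followed by the 512-byte ROM, is an assumption drawn from the case study figure.

```python
# Assumed chip layout for the case study: four 128-byte RAM chips,
# then one 512-byte ROM chip, mapped contiguously from address 0.
chips = [("RAM1", 128), ("RAM2", 128), ("RAM3", 128), ("RAM4", 128),
         ("ROM", 512)]

def address_map(chips):
    """Assign each chip a contiguous address range and return
    (name, first address, last address) triples."""
    table, base = [], 0
    for name, size in chips:
        table.append((name, base, base + size - 1))
        base += size
    return table

for name, lo, hi in address_map(chips):
    print(f"{name:4s} {lo:04X}-{hi:04X}")
```

Printing the table reproduces the address map: RAM1 occupies 0000-007F, RAM4 ends at 01FF, and the ROM covers 0200-03FF.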
Memory Address Map
Memory Connection to the CPU

[Figure: The CPU's 16-bit address bus, RD/WR control lines, and data bus
connect to four 128 × 8 RAM chips and one 512 × 8 ROM chip. Address
lines 8 and 9 drive a decoder whose outputs 0-3 select one RAM chip
through CS1; line 10 distinguishes RAM from ROM; lines 1-7 address a
word within a RAM chip (AD7) and lines 1-9 address a word within the
ROM (AD9).]
Associative Memory

The time required to find an item stored in memory can be
reduced considerably if stored data can be identified for access
by the content of the data itself rather than by an address.

A memory unit accessed by content is called an associative
memory or Content Addressable Memory (CAM). This type of
memory is accessed simultaneously and in parallel on the basis
of data content rather than a specific address or location.

When a word is written in an associative memory, no address is
given. The memory is capable of finding an empty unused
location to store the word. When a word is to be read from an
associative memory, the content of the word or part of the
word is specified.

The associative memory is uniquely suited to parallel
searches by data association. Moreover, searches can be done
on an entire word or on a specific field within a word.
Associative memories are used in applications where the
search time is very critical and must be very short.
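A content search can be sketched in a few lines of Python. This is only a sketch: words are modeled as integers and the "parallel" comparison of all words is simulated by a list comprehension.

```python
def cam_search(words, argument):
    """Return the indices of every stored word whose content equals the
    argument. Access is by content, not by address: every word in the
    memory is compared against the argument."""
    return [i for i, w in enumerate(words) if w == argument]

memory = [0o5327, 0o7777, 0o5327, 0o0000]
matches = cam_search(memory, 0o5327)   # words 0 and 2 match
```

The list of matching indices plays the role of the match register described in the hardware organization below.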
Hardware Organization

[Figure: Block diagram of associative memory. The argument register (A)
holds the word to be compared and the key register (K) masks which bits
take part in the comparison. The array holds m words of n bits each,
with input, read, and write lines, an output, and a match register M
whose bits flag the matching words.]
Associative memory array of m words, n cells per word

[Figure: The array of cells. Cell C_ij holds bit j of word i; argument
bit A_j and key bit K_j are applied to column j, and each word i
produces a match bit M_i.]


One Cell of Associative Memory
[Figure: One cell C_ij. A flip-flop F_ij stores the bit, written from
the Input line when Write is asserted and gated to the Output line by
Read; match logic compares F_ij with A_j under key bit K_j and
contributes to match bit M_i.]
Match logic

Neglect the K bits and compare the argument in A with the bits
stored in the cells of the words.

Word i is equal to the argument in A if A_j = F_ij for j = 1, 2, ..., n.

Two bits are equal if they are both 1 or both 0; let

x_j = A_j F_ij + A_j' F_ij'

so that x_j = 1 if the pair of bits in position j are equal.

For word i to be equal to the argument in A we must have all x_j
variables equal to 1:

M_i = x_1 x_2 x_3 ... x_n

This is the condition for setting the corresponding match bit M_i to 1.


Now include the key bit Kj in the comparison logic Cont.
The requirement is that if Kj=0, the corresponding bits of Aj and
need no comparison. Only when Kj=1 must be compared. This
requirement is achieved by OR ing each term with K j

The match logic for word i in an associative memory can now be


expressed by the following Boolean function.

If we substitute the original definition of x j, the above Boolean


function can be expressed as follows:

Where is a product symbol designating the AND operation of all n terms.


Match Logic Circuit

[Figure: Match logic for one word. For each bit j, the key bit K_j, the
argument bit A_j, and the complemented and uncomplemented flip-flop
outputs F_ij', F_ij feed the comparison gates; the n per-bit terms are
ANDed to produce M_i.]
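The match function for one word can be checked with a short Python sketch. Bit values are modeled as 0/1 integers; `word` holds the stored bits F_ij, and the function name is an assumption for illustration.

```python
def match(word, argument, key):
    """Compute match bit M_i for one stored word. word, argument, and
    key are equal-length lists of 0/1 bits (F_ij, A_j, K_j)."""
    m = 1
    for f, a, k in zip(word, argument, key):
        x = (a & f) | ((1 - a) & (1 - f))   # x_j: the two bits agree
        m &= x | (1 - k)                    # ORing the term with K_j'
    return m

# Key bit 0 in position 1 masks that bit out of the comparison.
m1 = match([1, 0, 1], [1, 1, 1], [1, 0, 1])   # masked bits all agree
m2 = match([1, 0, 1], [1, 1, 1], [1, 1, 1])   # bit 1 differs, unmasked
```

With the middle bit masked (K_1 = 0), the mismatch there is ignored and M_i is 1; with all key bits set, the same words fail to match.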
Read Operation

If more than one word in memory matches the unmasked argument
field, all the matched words will have 1's in the corresponding bit
positions of the match register.

It is then necessary to scan the bits of the match register one at
a time. The matched words are read in sequence by applying a
read signal to each word line whose corresponding M_i bit is a 1.

If only one word may match the unmasked argument field, then
output M_i can be connected directly to the read line of the same
word position.

The content of the matched word will then be presented automatically
at the output lines and no special read command signal is
needed.

If we exclude words having zero content, then an all-zero output will
indicate that no match occurred and that the searched item is not
available in memory.
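The scan of the match register can be sketched as follows (a minimal sketch; the match register is modeled as a list of 0/1 bits):

```python
def read_matches(memory, match_register):
    """Scan the match register one bit at a time and read, in sequence,
    every word whose M_i bit is a 1."""
    return [memory[i] for i, m in enumerate(match_register) if m == 1]

words = [0o1220, 0o5670, 0o1220]
out = read_matches(words, [1, 0, 1])   # reads words 0 and 2 in sequence
```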
Write Operation

If the entire memory is loaded with new information at once,
then the writing can be done by addressing each location in
sequence.

The information is loaded prior to a search operation.

If unwanted words have to be deleted and new words inserted
one at a time, there is a need for a special register to
distinguish between active and inactive words.

This register is called the "Tag Register".

A word is deleted from memory by clearing its tag bit to 0.

Cache Memory

Locality of reference:
The references to memory at any given interval of time tend to be
contained within a few localized areas in memory.

If the active portions of the program and data are placed in a fast
small memory, the average memory access time can be reduced,
thus reducing the total execution time of the program. Such a
fast small memory is referred to as a "Cache Memory".

The performance of the cache memory is measured in terms of a
quantity called the "Hit Ratio".

When the CPU refers to memory and finds the word in cache, it
produces a hit. If the word is not found in cache, it counts as
a miss.

The ratio of the number of hits divided by the total number of CPU
references to memory (hits + misses) is the hit ratio. Hit ratios of
0.9 and higher have been reported.
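As a quick sketch of the definition:

```python
def hit_ratio(hits, misses):
    """Hit ratio = hits / total CPU references (hits + misses)."""
    return hits / (hits + misses)

ratio = hit_ratio(hits=900, misses=100)   # 900 of 1000 references hit
```

With 900 hits out of 1000 references the ratio is 0.9, the figure quoted above.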
Cache Memory
The average memory access time of a computer system can be
improved considerably by use of a cache.

The cache is placed between the CPU and main memory. It is the
fastest component in the hierarchy and approaches the speed of
the CPU components.

When the CPU needs to access memory, the cache is examined. If
the word is found in the cache, it is read very quickly.

If it is not found in the cache, the main memory is accessed.

A block of words containing the one just accessed is then
transferred from main memory to cache memory.
Cache Memory

The basic characteristic of cache memory is its fast access time.
Therefore, very little or no time must be wasted when searching
for words in the cache.

The transformation of data from main memory to cache memory
is referred to as a "Mapping Process".

Three types of mapping procedures are available:

· Associative Mapping
· Direct Mapping
· Set-Associative Mapping
Cache Memory

Consider the following memory organization to show the mapping
procedures of the cache memory:

· The main memory can store 32K words of 12 bits each.
· The cache is capable of storing 512 of these words at any given
time.
· For every word stored in cache, there is a duplicate copy in main
memory.
· The CPU communicates with both memories.
· It first sends a 15-bit address to cache.
· If there is a hit, the CPU accepts the 12-bit data from cache.
· If there is a miss, the CPU reads the word from main memory and
the word is then transferred to cache.
Address Mapping
The process of assigning a memory address to
any variable.
It can be done by the:
Programmer
Compiler
Loader
Run-Time Memory Management

Example:
int a, b, c;
Word size is 8 bits; addresses are formed from a base and a
displacement, giving an effective address (EA = B + D)
according to the addressing mode.
Associative Mapping
The associative mapping stores both the address and the content
(data) of the memory word. (In the figure, addresses are shown in
octal and the CPU address is held in the argument register.)

A CPU address of 15 bits is placed in the argument register and the
associative memory is searched for a matching address.

If the address is found, the corresponding 12-bit data is read and
sent to the CPU.

If no match occurs, the main memory is accessed for the word.
The address-data pair is then transferred to the associative cache
memory.

If the cache is full, a word must be displaced to make room for the
new pair, using a replacement algorithm. FIFO may be used.
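The associative mapping with FIFO replacement can be sketched as a small class (a sketch; the class name is an assumption, and the 512-pair capacity follows the case study):

```python
from collections import OrderedDict

class AssociativeCache:
    """Associative mapping: the cache stores address-data pairs and is
    searched by address. FIFO replacement when full."""
    def __init__(self, main_memory, capacity=512):
        self.main = main_memory
        self.capacity = capacity
        self.pairs = OrderedDict()          # insertion order = FIFO order

    def read(self, address):
        if address in self.pairs:           # hit: matching address found
            return self.pairs[address]
        data = self.main[address]           # miss: access main memory
        if len(self.pairs) >= self.capacity:
            self.pairs.popitem(last=False)  # displace the oldest pair
        self.pairs[address] = data          # transfer address-data pair
        return data

main = {a: a * 2 for a in range(8)}
cache = AssociativeCache(main, capacity=2)
cache.read(0); cache.read(1); cache.read(2)   # pair for 0 is displaced
```

`OrderedDict.popitem(last=False)` removes the earliest-inserted pair, which is exactly the FIFO policy.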
Direct Mapping

Associative mapping is expensive.

In direct mapping, the 15-bit CPU address is divided into two fields:
the 9 least significant bits constitute the index field and the
remaining 6 bits form the tag field.

The main memory address includes both the tag and the index bits.

The cache memory requires the index bits only, i.e., 9 bits.

There are 2^k words in the cache memory and 2^n words in the main
memory.

e.g.: k = 9, n = 15
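The field split can be sketched in a few lines (a sketch; the function name is an assumption):

```python
TAG_BITS, INDEX_BITS = 6, 9          # 15-bit CPU address, as above

def split_address(address):
    """Split a 15-bit CPU address into its (tag, index) fields."""
    index = address & ((1 << INDEX_BITS) - 1)   # 9 least significant bits
    tag = address >> INDEX_BITS                 # remaining 6 bits
    return tag, index

tag, index = split_address(0o02777)   # octal address 02777
```

Because 9 index bits are exactly three octal digits, the octal address 02777 splits cleanly into tag 02 and index 777.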
Direct Mapping

[Figure: direct-mapping cache organization, with octal addresses from
00000 upward and sample data values.]
Direct Mapping

Each word in cache consists of the data word and its associated tag.

When a new word is brought into the cache, the tag bits are stored
along with the data.

When the CPU generates a memory request, the index field of the
address is used to access the cache.

If the tag field of the CPU address is equal to the tag in the word
read from cache, there is a hit; otherwise, a miss.

Restriction: two words whose addresses share the same index but have
different tags cannot be held in the cache at the same time.

Example: words with tag 10, index 000, data 2012 and tag 12, index
000, data 1567 map to the same cache location, so only one of them
can be cached at a time.
Set-Associative Mapping

In set-associative mapping, each word of cache can store two or
more words of memory under the same index address.
Each data word is stored together with its tag, and the number of
tag-data items in one word of cache is said to form a set.
In a two-way organization, each index address refers to two data
words and their associated tags.
Set-Associative Mapping

Each tag requires 6 bits and each data word has 12 bits, so the word
length is 2 × (6 + 12) = 36 bits.

An index address of 9 bits can accommodate 512 cache words, so the
cache can hold 1024 words of main memory.

When the CPU generates a memory request, the index value of the
address is used to access the cache.

The tag field of the CPU address is then compared with both tags in
the cache word.

The most common replacement algorithms are:

· Random replacement
· FIFO
· Least Recently Used (LRU)
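The two-way organization with LRU replacement can be sketched as follows (a sketch; the class name, the `fetch` callback, and the list-per-set representation are assumptions):

```python
class TwoWaySetCache:
    """2-way set-associative cache with LRU replacement. Each set holds
    up to two (tag, data) pairs, least recently used first."""
    def __init__(self, num_sets=512):
        self.sets = [[] for _ in range(num_sets)]

    def access(self, tag, index, fetch):
        """fetch() loads the word from main memory on a miss."""
        ways = self.sets[index]
        for i, (t, d) in enumerate(ways):
            if t == tag:                    # compare with both tags
                ways.append(ways.pop(i))    # mark as most recently used
                return "hit", d
        if len(ways) == 2:
            ways.pop(0)                     # evict the least recently used
        data = fetch()
        ways.append((tag, data))
        return "miss", data

cache = TwoWaySetCache()
cache.access(0o01, 0o000, lambda: 3450)    # miss: fills the first way
cache.access(0o02, 0o000, lambda: 6710)    # miss: same set, second way
```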
Writing into cache

There are two writing methods that the system can use.

Write-through method (the simplest and most commonly used way):
Update main memory with every memory write operation, with cache
memory being updated in parallel if it contains the word at the
specified address.

This method has the advantage that main memory always contains the
same data as the cache.

Write-back method:
In this method only the cache location is updated during a write
operation.
The location is then marked by a flag so that later, when the word
is removed from the cache, it is copied into main memory.

The reason for the write-back method is that during the time a word
resides in the cache, it may be updated several times.
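The write-back policy can be sketched as a small class. This is a sketch under simplifying assumptions: one-word lines, a dict-based cache, and an explicit `remove` step standing in for eviction.

```python
class WriteBackCache:
    """Write-back sketch: only the cache is updated on a write; a dirty
    flag marks the word for copy-back when it is removed."""
    def __init__(self, main_memory):
        self.main = main_memory
        self.lines = {}                       # address -> [data, dirty]

    def write(self, address, data):
        self.lines[address] = [data, True]    # main memory NOT updated yet

    def remove(self, address):
        data, dirty = self.lines.pop(address)
        if dirty:                             # copy back only if modified
            self.main[address] = data

main = {5: 0}
cache = WriteBackCache(main)
cache.write(5, 111)    # repeated updates cost no memory traffic...
cache.write(5, 222)
cache.remove(5)        # ...and one copy-back happens at removal
```

Under write-through, by contrast, each of the two writes would have gone to main memory immediately.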
Virtual Memory

Execution of a Program
The operating system brings into main
memory a few pieces of the program.
Resident set: the portion of the process that
is in main memory.
An interrupt is generated when an
address is needed that is not in main
memory.
The operating system then places the process
in a blocked state.
Execution of a Program
The piece of the process that contains the
logical address is brought into main
memory:
The operating system issues a disk I/O read
request.
Another process is dispatched to run while
the disk I/O takes place.
An interrupt is issued when the disk I/O
completes, which causes the operating
system to place the affected process in
the Ready state.
Support Needed for
Virtual Memory
Hardware must support paging and
segmentation.
The operating system must be able to
manage the movement of pages
and/or segments between secondary
memory and main memory.
Paging
Each process has its own page table.
Each page table entry contains the
frame number of the corresponding
page in main memory.
A bit is needed to indicate whether the
page is in main memory or not.
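The page-table lookup can be sketched in a few lines (a sketch; the 1024-word page size and the `(present_bit, frame_number)` entry layout are assumptions):

```python
PAGE_SIZE = 1024   # assumed page size for this sketch

def translate(page_table, logical_address):
    """Translate a logical address to a physical one using page-table
    entries of the assumed form (present_bit, frame_number)."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    present, frame = page_table[page]
    if not present:
        # the present bit says the page is not in main memory
        raise LookupError("page fault: OS must bring the page in")
    return frame * PAGE_SIZE + offset

table = [(1, 5), (0, 0)]          # page 0 in frame 5; page 1 not present
physical = translate(table, 10)   # offset 10 within frame 5
```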
Paging

[Figure: address translation in a paging system.]
Page Tables
The entire page table may take up too
much main memory.
Page tables are therefore also stored in
virtual memory.
When a process is running, only part of its
page table is in main memory.
Page Table
Each entry contains:
Page number
Process identifier
Control bits
Chain pointer
Segmentation
Segments may be of unequal, dynamic size.
Segmentation simplifies the handling of
growing data structures.
It allows programs to be altered and
recompiled independently.
It lends itself to sharing data among
processes.
It lends itself to protection.
Segment Tables
Each entry contains the starting address of the
corresponding segment in main memory
and the length of the segment.
A bit is needed to determine whether the
segment is already in main memory.
Another bit is needed to determine whether
the segment has been modified since
it was loaded into main memory.
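A segment-table lookup can be sketched as follows (a sketch; the `(present, modified, base, length)` entry layout is an assumption matching the bullets above):

```python
def seg_translate(segment_table, seg, offset):
    """Translate (segment, offset) using entries of the assumed form
    (present_bit, modified_bit, base_address, length)."""
    present, modified, base, length = segment_table[seg]
    if not present:
        raise LookupError("segment fault: segment not in main memory")
    if offset >= length:
        # the stored length bounds every access into the segment
        raise IndexError("offset beyond segment length")
    return base + offset

table = [(1, 0, 4000, 100)]          # one 100-byte segment at base 4000
addr = seg_translate(table, 0, 25)   # base 4000 + offset 25
```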
Segment Table Entries

[Figure: segment table entry format.]
Combined Paging and
Segmentation
Paging is transparent to the
programmer.
Segmentation is visible to the
programmer.
Each segment is broken into
fixed-size pages.
Combined Segmentation and
Paging

[Figure: address translation in a combined segmentation and paging
system.]