Ch-4-Memory System Org and Arch


Chapter 4

MEMORY SYSTEM ORGANIZATION AND ARCHITECTURE

STORAGE SYSTEMS
and their Technology

Earliest Technology
• In early computers the most common form of
main memory was an array of doughnut-shaped
rings of ferromagnetic material referred to as
"cores". Hence the main memory itself was
referred to as core.
• The advent and advantages of microelectronics
have displaced magnetic core memory.
• Today the use of semiconductor chips for
main memory is almost universal.
– The major types of semiconductor memory are
RAM, ROM, etc.
RAM(Random Access Memory)
• Characteristics of RAM:
• Possible to read data from memory and write data
into memory easily and rapidly.
• Reading /writing are accomplished through the use
of electrical signals
• It is volatile, i.e. a RAM must be provided with a
constant power supply. If the power is interrupted
then the data are lost.
• It can be used as temporary storage.
• The two traditional forms of RAM are DRAM and
SRAM.
SRAM
• In SRAM the binary values are stored using
flip-flop (logic gate) configurations.
• A SRAM holds its data as long as power is
supplied to it.
• Unlike DRAM, no refresh is needed to retain
data.
SRAM versus DRAM
• Both SRAM and DRAM are volatile.
• DRAM cells are simpler and smaller than
SRAM cells
• DRAM are less expensive than SRAM
• An extra circuit is required in the case of
DRAM for refreshing.
• SRAMs are generally faster than DRAMs.
• Commonly SRAM is used for cache memory
and DRAM is used for main memory.
DRAM (Dynamic RAM)
• Dynamic RAM is made with cells that store data as
charge on capacitors.
• The presence or absence of charge on capacitors is
interpreted as a binary 1 and 0.
• Because the capacitors have a natural tendency to
discharge (or leak away) even with continuous
power supply, DRAM requires periodic charge
refreshing to maintain its stored data.
• To maintain the charge (data), a special refresh
circuit is used, which periodically renews the charge
on all cells about once every 16 milliseconds.
• DRAMs are low cost devices.
Nonvolatile Memories
• DRAM and SRAM are volatile memories
– Lose information if powered off.
• Nonvolatile memories retain value even if powered off.
– Generic name is read-only memory (ROM).
– Misleading because some ROMs can be read and modified.
• Types of ROMs
– Programmable ROM (PROM)
– Erasable programmable ROM (EPROM)
– Electrically erasable PROM (EEPROM)
– Flash memory
• Firmware
– Program stored in a ROM
• Boot time code, BIOS (basic input/output system)
• graphics cards, disk controllers.
Some Variations of ROM
PROM (Programmable ROM)
• Like ROM, PROM is also non-volatile.
• It may be written into only once. After writing,
it behaves like a normal ROM.
• The writing or programming process is
performed with special equipment, a process
called PROM programming.
• Another variation on ROM is the RMM (Read-
Mostly Memory), which is useful when read
operations are far more frequent than write
operations.
Three Common Forms of RMM
EPROM, EEPROM & Flash Memory
EPROM
• In this type of memory erase and write operations can be
performed.
• Before a write operation, all the storage cells must be erased
using ultraviolet radiation.
• EPROM can be altered multiple times and these are more
expensive than PROM.
EEPROM (Electrically EPROM)
• This type of RMM can be written into at any time without
erasing previous contents but the write operation will take
longer time than a read operation.
• EEPROMS are more expensive than EPROMS.
Flash memory
– Another form of semiconductor memory
– Named so because of the speed with which it can be
re-programmed.
– Flash memory is intermediate between EPROM and
EEPROM in both cost and functionality.
– Like EEPROM, flash memory uses an electrical
erasing technology.
– An entire flash memory can be erased in one or a
few seconds. Moreover it is possible to erase just
blocks of memory rather than an entire chip.
Coding, Data compression,
and Data integrity

Data compression implies sending or storing a smaller
number of bits. Although many methods are used for this
purpose, in general these methods can be divided into two
broad categories: lossless and lossy methods.
LOSSLESS COMPRESSION
•In lossless data compression, the integrity of the data is
preserved.
•The original data and the data after compression and
decompression are exactly the same because, in these
methods, the compression and decompression algorithms
are exact inverses of each other: no part of the data is lost
in the process.
•Redundant data is removed in compression and added
during decompression.
•Lossless compression methods are normally used when
we cannot afford to lose any data.
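As a concrete illustration, run-length encoding is one simple lossless method; the sketch below (the algorithm choice is mine, not the text's) shows that the compression and decompression algorithms are exact inverses of each other:

```python
# Run-length encoding (RLE): one simple lossless method, shown here to
# illustrate that compression and decompression are exact inverses. The
# choice of RLE is illustrative; the text does not prescribe an algorithm.

def rle_compress(data: str) -> list:
    """Replace each run of repeated characters with a [char, count] pair."""
    runs = []
    for ch in data:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1         # extend the current run
        else:
            runs.append([ch, 1])     # start a new run
    return runs

def rle_decompress(runs: list) -> str:
    """Exact inverse of rle_compress: expand each [char, count] pair."""
    return "".join(ch * count for ch, count in runs)

original = "aaaabbbcca"
assert rle_decompress(rle_compress(original)) == original  # nothing is lost
```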
LOSSY COMPRESSION METHODS
•These methods are cheaper—they take less time and
space when it comes to sending millions of bits per
second for images and video.
•Several methods have been developed using lossy
compression techniques.
•JPEG (Joint Photographic Experts Group) encoding is
used to compress pictures and graphics,
•MPEG (Moving Picture Experts Group) encoding is
used to compress video, and
•MP3 (MPEG audio layer 3) for audio compression.
Memory Hierarchy

Memory Hierarchy
• At the highest level (closest to the CPU) are the
processor registers.
• Next comes one or more levels of cache.
• Then comes main memory which is usually made out
of dynamic RAM (DRAM). (All of these are considered
internal to the computer system.)
• The hierarchy continues with external memory, with
the next level typically being a fixed hard disk and
• One or more levels below that consisting of
removable media such as optical disks and flash disks
etc.
Characteristics
• Location
• Capacity
• Unit of transfer
• Access method
• Performance
• Physical type
• Physical characteristics
• Organisation
1. Location
– Location refers to whether memory is internal or
external to the computer.
– Internal memory is often equated with main
memory.
– But there are other forms of internal memory; like
registers in the CPU.
– External memory consists of peripheral storage
devices such as disk, optical devices, flash disk etc…
that are accessible to the CPU via I/O controllers.
2. Capacity
• For internal memory, capacity is typically
expressed in terms of words (common word
lengths are 8, 16, and 32 bits).
• For external memory, this is expressed in
terms of bytes.
3. Unit of Transfer
• Internal
– Usually governed by data bus width
• External
– Usually a block which is much larger than a word
4. Method of accessing

• Sequential Accessing
• Direct Accessing
• Random Accessing
• Associative Accessing
Access Methods (1)
• Sequential
– Start at the beginning and read through in order
– Access time depends on location of data and
previous location
– e.g. tape
• Direct
– Individual blocks have unique address
– Access is by jumping to vicinity plus sequential
search
– Access time depends on location and previous
location
– e.g. disk
Access Methods (2)
• Random
– Individual addresses identify locations exactly
– Access time is independent of location or
previous access
– e.g. RAM
• Associative
– Data is located by a comparison with contents of
a portion of the store
– Access time is independent of location or
previous access
– e.g. cache
5. Performance
• Access time
– Time between presenting the address and getting
the valid data
• Memory Cycle time
– Time may be required for the memory to
“recover” before next access
– Cycle time is access + recovery
• Transfer Rate
– Rate at which data can be moved
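The three measures above can be tied together in a small sketch (the timing values and word size are illustrative assumptions, not figures from the text):

```python
# Illustrative sketch of access time, cycle time, and transfer rate.
# All numeric values here are assumptions for illustration.
access_time_ns = 60    # address presented -> valid data (assumed)
recovery_time_ns = 40  # time the memory needs to "recover" (assumed)

cycle_time_ns = access_time_ns + recovery_time_ns  # cycle = access + recovery
print(cycle_time_ns)   # 100

# Transfer rate: data moved per unit time, here one 4-byte word per cycle.
word_bytes = 4
transfer_rate_bytes_per_s = word_bytes / (cycle_time_ns * 1e-9)
# about 4e7 bytes per second, i.e. roughly 40 MB/s
```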
6. Physical Types
• Semiconductor
– RAM
• Magnetic
– Disk & Tape
• Optical
– CD & DVD
7.Physical Characteristics
• Volatile/non-volatile.
–In a volatile memory, information stored is lost
when electric power is switched off.
–While in non-volatile memory, information once
recorded remains even when the power is
switched off. (until deliberately changed).
• Erasable/non-erasable
–Non-erasable memory cannot be altered.
–E.g. ROM; clearly these will be nonvolatile.
MAIN MEMORY
Organization and Operations

Main memory organization
• Simple (one-word-wide bus and cache)
– CPU, cache, bus and
– memory are the same width (32-bit)
• Wide (more than one word, e.g. 4 words)
– CPU/Mux: 1 word;
– Mux/Cache, bus and
– memory: N words (Alpha: 64 bits and 256 bits)
• Interleaved (multiple banks, each one word wide)
– CPU, cache, bus: 1 word
– Memory: N modules (e.g. 4 modules)
1. Wider Main Memory
• Alpha AXP 21064 : 256-bit wide L2, Memory Bus,
Memory
• Drawbacks
– expandability
• doubling the width needs doubling the capacity
– bus width
• need a multiplexer to get the desired word from a
block
– error correction - separate error correction every
32 bits
2. Independent Memory Banks
• Interleaved Memory-Faster Sequential Accesses;
• Independent Memory Banks - Faster Independent Accesses
• Motivation: Higher BW for sequential accesses by interleaving
sequential bank addresses - each bank shares the address line
• Memory banks for independent accesses - each bank has a bank
controller, separate address lines
– 1 bank for I/O, 1 bank for cache read, 1 bank for cache write,
etc.
– If 1 controller controls all the banks, it can only provide fast
access time for one operation
– Benefit of memory banks for Miss under Miss in Non-faulting
caches
Superbank: all memory banks active on one block transfer
Bank: portion within a superbank that is word interleaved
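The word interleaving described above can be sketched with a minimal model (the 4-bank count matches the 4-module example; the model itself is an illustration, not a hardware description):

```python
# A minimal model of word interleaving across 4 banks: consecutive word
# addresses fall in consecutive banks, so sequential accesses hit different
# banks and can overlap. The bank count is an assumption for illustration.
NUM_BANKS = 4

def bank_of(word_address: int) -> int:
    """Low-order interleaving: bank index = address mod number of banks."""
    return word_address % NUM_BANKS

def offset_in_bank(word_address: int) -> int:
    """Position of the word within its bank."""
    return word_address // NUM_BANKS

# Sequential addresses cycle through the banks in order.
print([bank_of(a) for a in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
```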
CACHE MEMORIES
(Address mapping, Block size,
Replacement and Store policy)

Cache memory
• If the active portions of the program and data are
placed in a fast small memory, the average memory
access time can be reduced,
• Thus reducing the total execution time of the
program
• Such a fast small memory is referred to as cache
memory
• The cache is the fastest component in the memory
hierarchy and approaches the speed of CPU
component
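The effect described above is usually quantified as the average memory access time (AMAT); the sketch below uses assumed, illustrative timings and hit rate:

```python
# Sketch of the average memory access time (AMAT) with a small fast cache
# in front of main memory. All timing values and the hit rate are assumed
# for illustration.
cache_time_ns = 2.0    # access time of the fast small memory (assumed)
main_time_ns = 60.0    # main memory access time (assumed)
hit_rate = 0.95        # fraction of accesses satisfied by the cache (assumed)

# AMAT = hit time + miss rate * miss penalty
amat_ns = cache_time_ns + (1.0 - hit_rate) * main_time_ns
# amat_ns is about 5 ns: far closer to the cache's speed than to main memory's
```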
Cache operation
• When CPU needs to access memory, the cache
is examined.
• If the word is found in the cache, it is read
from the fast memory.
• If the word addressed by the CPU is not found
in the cache, the main memory is accessed to
read the word.
Typical Cache Organization
• In the cache organization the cache connects to CPU
via data, control and address lines.
• The data and address lines also attach to data and
address buffers, which attach to a system bus from
which main memory is accessed.
• When a cache “hit” occurs the data and address
buffers are disabled and communication is only
between CPU and cache with no system traffic.
• When a cache ‘miss’ occurs the desired address is
loaded onto the system bus and the data returned
through the data buffer to both cache and the CPU.
Cache memory
• The basic characteristic of cache memory is its fast
access time,
• Therefore, very little or no time must be wasted
when searching the words in the cache.
• The transformation of data from main memory to
cache memory is referred to as a mapping process,
there are three types of mapping:
– Associative mapping
– Direct mapping
– Set-associative mapping
Cache memory
• To help understand the mapping procedure,
we have the following example:
Associative mapping
• The fastest and most flexible cache
organization uses an associative memory
• The associative memory stores both the
address and data of the memory word
• This permits any location in cache to store any
word from main memory
• The address value of 15 bits is shown as a five-
digit octal number and its corresponding 12-
bit word is shown as a four-digit octal number
Associative mapping
• A CPU address of 15 bits is placed in the argument
register and the associative memory is searched for
a matching address
• If the address is found, the corresponding 12-bit
data word is read and sent to the CPU
• If not, the main memory is accessed for the word
• If the cache is full, an address-data pair must be
displaced to make room for a pair that is needed and
not presently in the cache
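The lookup-and-displace behaviour described above can be sketched as follows (a Python dict stands in for the hardware's parallel associative search; the capacity and displacement policy are assumptions):

```python
# Sketch of associative mapping: the cache stores complete (address, data)
# pairs, so any word from main memory can occupy any cache location. The
# dict lookup stands in for the hardware comparing the argument register
# against all stored addresses in parallel. Capacity and the displacement
# policy below are assumptions for illustration.
CACHE_LINES = 512
cache = {}            # address -> data word

def read(address, main_memory):
    if address in cache:              # matching address found: cache hit
        return cache[address]
    data = main_memory[address]       # miss: access main memory for the word
    if len(cache) >= CACHE_LINES:     # cache full: displace an address-data
        cache.pop(next(iter(cache)))  # pair (oldest-first, an assumed policy)
    cache[address] = data
    return data
```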
Direct Mapping
• Associative memory is expensive compared to
RAM
• In general case, there are 2^k words in cache
memory and 2^n words in main memory (in
our case, k=9, n=15)
• The n bit memory address is divided into two
fields: k-bits for the index and n-k bits for the
tag field
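The k-bit index and (n-k)-bit tag split can be sketched directly (k = 9 and n = 15 as in the example; the sample address is arbitrary):

```python
# Splitting an n-bit address into a k-bit index and an (n-k)-bit tag,
# with k = 9 and n = 15 as in the example. The sample address is arbitrary.
K, N = 9, 15

def split_address(addr: int):
    index = addr & ((1 << K) - 1)  # low-order k bits select the cache word
    tag = addr >> K                # remaining n-k bits are stored as the tag
    return tag, index

tag, index = split_address(0o02777)   # a 15-bit address (five octal digits)
print(tag, index)  # 2 511
# Any two addresses sharing the same low 9 bits map to the same cache word
# and are distinguished only by their tags.
```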
Set-Associative Mapping
• The disadvantage of direct mapping is that two
words with the same index in their address but with
different tag values cannot reside in cache memory
at the same time
• Set-Associative Mapping is an improvement over
direct mapping in that each word of cache can store
two or more words of memory under the same index
address
Set-Associative Mapping
• In this organization, each index address refers to
two data words and their associated tags
• Each tag requires six bits and each data word
has 12 bits, so the word length is 2*(6+12) =
36 bits
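A two-way set-associative lookup along these lines can be sketched as follows (the eviction policy is an assumption for illustration):

```python
# Sketch of a two-way set-associative cache: each index selects a set that
# holds up to two (tag, word) entries, so two words with the same index but
# different tags can reside in the cache at the same time. Sizes match the
# example: 9 index bits; the eviction policy is an assumption.
K = 9
sets = [[] for _ in range(1 << K)]   # each set: up to 2 (tag, word) pairs

def lookup(addr: int):
    index, tag = addr & ((1 << K) - 1), addr >> K
    for stored_tag, word in sets[index]:
        if stored_tag == tag:
            return word              # hit within the set
    return None                      # miss: the word must come from memory

def insert(addr: int, word: int):
    index, tag = addr & ((1 << K) - 1), addr >> K
    entries = sets[index]
    if len(entries) == 2:            # set full: evict the oldest entry
        entries.pop(0)               # FIFO eviction, an assumed policy
    entries.append((tag, word))
```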
VIRTUAL MEMORY
(Page table, TLB)

Virtual memory
• Consider the case where the total memory required for a process
is larger than the amount of main memory available on the
computer, but only a fraction of this memory is actively being
used at any point in time.
• With demand paging, that job is left to the operating system
and the hardware.
• As far as the programmer is concerned he/she is dealing with
a huge memory, the size associated with disk storage.
• So main memory can act as a “cache” for secondary storage,
usually implemented with magnetic disk.
• This technique is called virtual memory.
Overview
• Physical main memory is not as large as the address
space spanned by an address issued by the processor.
2^32 = 4 GB, 2^64 = …
• When a program does not completely fit into the
main memory, the parts of it not currently being
executed are stored on secondary storage devices.
• Techniques that automatically move program and
data blocks into the physical main memory when
they are required for execution are called virtual-
memory techniques.
• Virtual addresses will be translated into physical
addresses.
Memory Management Unit (MMU)
Address Translation
• All programs and data are composed of fixed-
length units called pages, each of which
consists of a block of words that occupy
contiguous locations in the main memory.
• A page cannot be too small or too large.
• The virtual memory mechanism bridges the
size and speed gaps between the main
memory and secondary storage – similar to
cache.
Address Translation
• The page table information is used by the
MMU for every access, so it is supposed to be
with the MMU.
• However, since MMU is on the processor chip
and the page table is rather large, only small
portion of it, which consists of the page table
entries that correspond to the most recently
accessed pages, can be accommodated within
the MMU.
• Translation Lookaside Buffer (TLB)
[Figure 5.28. Use of an associative-mapped TLB. The virtual address from
the processor is split into a virtual page number and an offset. Each TLB
entry holds a virtual page number, control bits, and a page frame in
memory. The incoming virtual page number is compared (=?) with the TLB
entries: yes (a hit) yields the page frame, which is combined with the
offset to form the physical address in main memory; no (a miss) means the
entry must be fetched from the page table.]
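The translation flow of Figure 5.28 can be sketched as follows (the 4 KB page size and the dictionary representation are assumptions for illustration):

```python
# Sketch of TLB-assisted address translation: the virtual page number is
# looked up in the TLB; on a hit the page frame replaces the page number;
# on a miss the full page table in memory is consulted and the TLB is
# updated. The 4 KB page size is an assumption for illustration.
PAGE_SIZE = 4096

tlb = {}          # virtual page number -> page frame (small, fast subset)
page_table = {}   # full mapping, kept in main memory

def translate(virtual_address: int) -> int:
    vpn, offset = divmod(virtual_address, PAGE_SIZE)
    if vpn in tlb:                  # TLB hit
        frame = tlb[vpn]
    else:                           # TLB miss: consult the page table
        frame = page_table[vpn]     # (a missing entry would be a page fault)
        tlb[vpn] = frame
    return frame * PAGE_SIZE + offset   # page frame combined with offset

page_table[5] = 9
print(translate(5 * PAGE_SIZE + 123))  # 9*4096 + 123 = 36987
```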


TLB
• The contents of TLB must be coherent with
the contents of page tables in the memory.
• Translation procedure.
• Page fault
• Page replacement
• Write-through is not suitable for virtual
memory.
• Locality of reference in virtual memory
END
