
Unit 4: Memory Organization
What is Cache Memory
• Cache memory is crucial for speeding up data access in computer systems.
• Cache memory is a high-speed buffer between RAM and the CPU.
• Its purpose is to store frequently accessed data for faster retrieval.
• Think of it as a quick-access zone, optimizing overall system performance.
Cache Hierarchy Overview
• Multi-level cache hierarchy: L1, L2, L3, and L4.
• Each level serves a specific purpose in optimizing data access.
• Next, we delve into the characteristics of each cache level.
L1 Cache - Primary Cache
• L1 cache is embedded directly on the CPU chip.
• Small but extremely fast, holding the most frequently used instructions and data.
• It's the first line of defense for quick data access.
L2 Cache - Secondary Cache
• L2 cache is located between L1 cache and RAM.
• Larger than L1, providing more storage for frequently accessed data.
• Acts as a secondary layer to capture additional important information.
L3 Cache - Shared Cache
• L3 cache is shared among multiple CPU cores.
• Larger than L1 and L2, facilitating data sharing among cores.
• Works at a higher level to optimize data access across the entire processor.
L4 Cache - System-Level Cache
• L4 cache is shared at the system level, possibly across multiple processors.
• The largest cache, focusing on optimizing data access at a broader system level.
• Aims to enhance collaboration and information sharing across the entire system.
Benefits of Cache Memory
• Faster data access and reduced latency lead to improved system performance.
• Cache memory contributes to power efficiency by minimizing trips to slower main memory.
• An essential component for enhancing the overall efficiency of computer systems.
Secondary/Auxiliary Memory
• Definition: Secondary/Auxiliary Memory is a type of non-volatile, long-term storage that complements primary memory.
• Importance: Essential for storing large amounts of data permanently, even when the computer is powered off.
• Overview: Secondary memory acts as a bridge between the CPU and long-term storage, enabling efficient data management.
Characteristics of Secondary Memory
• Non-volatile Nature: Retains data even when power is off.
• Larger Storage Capacity: Capable of storing vast amounts of data permanently.
• Slower Access Times: Although slower than primary memory, it provides a means for long-term data storage.
Types of Secondary Memory
• Hard Disk Drives (HDDs)
  • Technology: Spinning disks with magnetic storage.
  • Common Use Cases: Operating system installation, data storage.
• Solid State Drives (SSDs)
  • Technology: Flash memory.
  • Advantages: Faster access times, durability.
Types of Secondary Memory (Contd.)
• Optical Drives
  • CD, DVD, and Blu-ray drives.
  • Read and Write Capabilities: Used for data distribution and storage.
• USB Drives (Flash Drives)
  • Portable and Convenient: Easily transportable for data transfer.
  • Applications: Backup, data sharing.
Magnetic Tape
• Overview: Historical use in data storage.
• Current Applications: Commonly used for data backup and archival purposes.
• Advantages: Cost-effective for large-scale data storage.
Cloud Storage
• Definition: Storage of data on remote servers accessible through the internet.
• Advantages: Accessibility from anywhere, scalability based on needs.
• Popular Services: Examples such as Google Drive, Dropbox, and Microsoft OneDrive.
Challenges and Considerations
• Security Concerns: Risks of unauthorized access and data breaches.
• Data Retrieval Speed: Depending on the type of secondary memory, retrieval speed may vary.
• Storage Lifespan: Considerations for data degradation and longevity.
Resistive RAM (ReRAM/RRAM)
• Resistive RAM (ReRAM, also known as RRAM) works by changing the resistance of materials.
• An electric current is applied to a material, changing the resistance of that material.
• The resistance state can then be measured. This behavior is that of a 'memristor' and relies on the principle of hysteresis.
Phase Change Memory (PCM)
• PCM is a non-volatile memory technology leveraging phase change properties.
• Mechanism involves transitions between amorphous and crystalline states for data storage.
• Advantages include high speed, long endurance, non-volatility, and scalability.
• Applications span Storage Class Memory (SCM), in-memory computing, and IoT devices.
Spin-Transfer Torque RAM (STT-RAM)
• STT-RAM is a type of non-volatile memory technology based on the spin of electrons.
• Operation involves manipulating the spin to write and read data.
• Distinguished by fast access times, low power consumption, and scalability.
Key Features and Advantages
• Spin-Transfer Torque Effect: Utilizes electron spin to transfer angular momentum for writing data.
• Fast Access Times: Enables quick read and write operations, improving overall system performance.
• Low Power Consumption: Consumes less power compared to traditional memory technologies.
• Scalability: Well-suited for scaling down to smaller technology nodes.
SSD (solid-state drive)
An SSD, or solid-state drive, is a type of storage device used in computers. This non-volatile storage media stores persistent data on solid-state flash memory. SSDs replace traditional hard disk drives (HDDs) in computers and perform the same basic functions as a hard drive. But SSDs are significantly faster in comparison. With an SSD, the device's operating system will boot up more rapidly, programs will load quicker, and files can be saved faster.

A traditional hard drive consists of a spinning disk with a read/write head on a mechanical arm called an actuator. An HDD reads and writes data magnetically. The magnetic properties, however, can lead to mechanical breakdowns.
SSDs read and write data to an underlying set of interconnected flash
memory chips. These chips use floating gate transistors (FGTs) to hold
an electrical charge, which enables the SSD to store data even when it
is not connected to a power source. Each FGT contains a single bit of
data, designated either as a 1 for a charged cell or a 0 if the cell has no
electrical charge.

SSDs use three main types of memory: single-, multi- and triple-level cells. Single-level cells can hold one bit of data at a time -- a one or zero. Single-level cells (SLCs) are the most expensive form of SSD, but are also the fastest and most durable. Multi-level cells (MLCs) can hold two bits of data per cell and offer more storage in the same amount of physical space as an SLC. However, MLCs have slower write speeds. Triple-level cells (TLCs) can hold three bits of data in a cell. Although TLCs are cheaper, they also have slower write speeds and are less durable than other memory types.
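The trade-off above comes down to bits stored per cell, which directly sets raw capacity. A minimal sketch (the cell count and function name are illustrative, not vendor figures):

```python
# Illustrative only: maps each cell type described above to its bits per cell.
BITS_PER_CELL = {"SLC": 1, "MLC": 2, "TLC": 3}

def capacity_bytes(num_cells, cell_type):
    """Raw capacity in bytes implied by the number of flash cells."""
    return num_cells * BITS_PER_CELL[cell_type] // 8

cells = 8_000_000_000  # hypothetical die with 8 billion cells
for kind in ("SLC", "MLC", "TLC"):
    print(kind, capacity_bytes(cells, kind), "bytes")
```

The same die stores three times as much data as TLC than as SLC, which is why TLC is cheaper per byte despite its slower writes and lower endurance.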
Memory Address Map
• A memory address map is a pictorial representation of the assigned address space for each chip in the system.
• To demonstrate an example, assume that a computer system needs 512 bytes of RAM and 512 bytes of ROM.
• Each RAM chip has 128 bytes and needs seven address lines, while the ROM has 512 bytes and needs nine address lines.
Memory Connection to CPU
• Address space assignment to each memory chip.
• Example: 512 bytes of RAM and 512 bytes of ROM.
• RAM and ROM chips are connected to a CPU through the data and address buses.
• The low-order lines in the address bus select the byte within the chips, and other lines in the address bus select a particular chip through its chip select inputs.
Memory Address Map
• The hexadecimal address column assigns a range of hexadecimal-equivalent addresses to each chip.
• Lines 8 and 9 provide four distinct binary combinations that specify which RAM chip is chosen.
• When line 10 is 0, the CPU selects a RAM chip; when it is 1, it selects the ROM.
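The chip-select scheme above can be sketched as a small address decoder. The bit positions follow the map (line 10 is bit 9, lines 8-9 are bits 7-8, lines 1-7 are bits 0-6); the function name is illustrative:

```python
# Decode a 10-bit address into (chip type, chip number, byte offset),
# following the address map above: 4 x 128-byte RAM chips, 1 x 512-byte ROM.
def decode(addr):
    if (addr >> 9) & 1:                  # line 10 = 1 -> ROM selected
        return ("ROM", 0, addr & 0x1FF)  # 9 low bits address 512 bytes
    chip = (addr >> 7) & 0b11            # lines 8-9 pick RAM chip 0..3
    return ("RAM", chip, addr & 0x7F)    # 7 low bits address 128 bytes

print(decode(0b0_01_0000101))  # RAM chip 1, byte 5
print(decode(0b1_000000101))   # ROM, byte 5
```

Each address activates exactly one chip, which is the point of the map: the high-order lines partition the address space without overlap.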
Memory Connection to CPU
ASSOCIATIVE MEMORY
• Content-addressed or associative memory refers to a memory organization in which the memory is accessed by its content (as opposed to an explicit address).
• Also called Content Addressable Memory (CAM).
• Searching becomes easy because all words are compared in parallel.
Block representation of an Associative memory
• The functional registers like the argument register A and key
register K each have n bits, one for each bit of a word. The
match register M consists of m bits, one for each memory
word.
• The words which are kept in the memory are compared in
parallel with the content of the argument register.
• The key register (K) provides a mask for choosing a particular
field or key in the argument word. If the key register contains
a binary value of all 1's, then the entire argument is compared
with each memory word.
• The cells present inside the memory array are marked by the
letter C with two subscripts. The first subscript gives the word
number and the second specifies the bit position in the word.
For instance, the cell Cij is the cell for bit j in word i.

• A bit Aj in the argument register is compared with all the bits


in column j of the array provided that Kj = 1. This process is
done for all columns j = 1, 2, 3......, n.
• If a match occurs between all the unmasked bits of the
argument and the bits in word i, the corresponding bit Mi in
the match register is set to 1. If one or more unmasked bits of
the argument and the word do not match, Mi is cleared to 0.
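The hardware compares all words in parallel; a word-at-a-time Python sketch of the same masked match (illustrative, not hardware-accurate timing) is:

```python
# Compute the match register M for a CAM: M[i] = 1 iff word i agrees
# with argument A on every bit position where key register K has a 1.
def cam_match(words, A, K):
    # XOR exposes mismatching bits; AND with K keeps only unmasked ones.
    return [0 if (w ^ A) & K else 1 for w in words]

words = [0b1010, 0b1111, 0b0010]
A, K = 0b1010, 0b1111          # K all 1's: compare the entire argument
print(cam_match(words, A, K))   # only word 0 matches
K = 0b0011                      # mask: compare only the two low bits
print(cam_match(words, A, K))   # words 0 and 2 now match
```

Setting K to all 1's reproduces the full-word comparison described above; a partial mask selects a key field within the argument word.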
Selecting memory bank
The important points to consider when selecting a memory
bank:

Application Requirements:
• Understand the specific needs of the application or system.
• Consider capacity, speed, and type of memory based on
application demands.
Capacity and Size:
• Determine required storage capacity.
• Consider physical size constraints of the memory.
Speed and Performance:
• Assess the speed requirements for data access and transfer.
• Balance speed with power consumption.
Volatility and Persistence:
• Decide on volatile or non-volatile memory based on data retention needs.
Power Consumption:
• Evaluate power requirements and energy efficiency.
Reliability and Endurance:
• Assess data integrity, reliability, and the number of read/write cycles.
Principle of Locality
• The Principle of Locality is a fundamental concept in
computer science and computer architecture. It refers to the
tendency of a program to access a relatively small portion of
its address space at any given time. There are two main
aspects of the Principle of Locality:
Temporal Locality:
• Definition: Temporal locality, also known as the "recently
used" principle, states that if a particular memory location is
accessed, it is likely to be accessed again in the near future.
• Example: In a loop, if a variable is accessed in one iteration,
it's highly probable that it will be accessed in subsequent
iterations.
Spatial Locality:
• Definition: Spatial locality, also
known as the "closeness" principle,
states that if a particular memory
location is accessed, nearby memory
locations are also likely to be
accessed in the near future.
• Example: When iterating through an
array, accessing one element makes
it likely that neighboring elements
will be accessed soon.
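Both kinds of locality show up in the access trace of the loop examples above. A toy sketch (names are illustrative):

```python
# Trace the locations a simple array-walking loop touches: the accumulator
# is reused every iteration (temporal locality) and the array elements are
# visited in adjacent order (spatial locality).
def trace_accesses(n):
    trace = []
    for i in range(n):
        trace.append(f"a[{i}]")  # spatial: neighbouring elements in order
        trace.append("total")    # temporal: same location each iteration
    return trace

print(trace_accesses(3))
```

A cache exploits exactly this pattern: "total" stays resident (temporal), and fetching a[0] brings its neighbours into the same cache line (spatial).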
Cache mapping techniques
The transformation of data from main memory to cache memory is referred to as a mapping process. There are three types of mapping:
• Associative mapping
• Direct mapping
• Set-associative mapping
Associative mapping
• The fastest and most flexible cache organization uses an associative memory.
• The associative memory stores both the address and data of the memory word.
• This permits any location in cache to store any word from main memory.
• The address value of 15 bits is shown as a five-digit octal number, and its corresponding 12-bit word is shown as a four-digit octal number.
Associative mapping
• A CPU address of 15 bits is placed in the argument register and the associative memory is searched for a matching address.
• If the address is found, the corresponding 12-bit data is read and sent to the CPU.
• If not, the main memory is accessed for the word.
• If the cache is full, an address-data pair must be displaced to make room for a pair that is needed and not presently in the cache.
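The lookup-then-displace behavior above can be sketched with a Python dict, which is itself content-addressed. The capacity and displacement policy here are illustrative assumptions:

```python
# Associative cache sketch: stores full (address -> data) pairs, so any
# slot can hold any word from main memory.
cache = {}
CAPACITY = 4  # tiny, for illustration

def read(addr, main_memory):
    if addr in cache:                 # match found: data goes to the CPU
        return cache[addr]
    data = main_memory[addr]          # miss: access main memory
    if len(cache) >= CAPACITY:
        cache.pop(next(iter(cache)))  # displace one address-data pair
    cache[addr] = data
    return data

mem = {0o01000: 0o3450}
print(read(0o01000, mem))  # first access misses and fills the cache
print(read(0o01000, mem))  # second access hits
```

Real designs use a specific replacement policy (FIFO, LRU, random); the pop-the-first-entry rule here just stands in for "displace some pair".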
Associative mapping
Direct Mapping
• Associative memory is expensive compared to RAM.
• In the general case, there are 2^k words in cache memory and 2^n words in main memory.
• The n-bit memory address is divided into two fields: k bits for the index and n-k bits for the tag field.
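The index/tag split above is just a bit-field extraction. A sketch, assuming for the example n = 15 and k = 9 (a 32K-word main memory and a 512-word cache):

```python
# Split an n-bit address into (tag, index) for a direct-mapped cache:
# low k bits select the cache word, high n-k bits form the tag.
def split_address(addr, n=15, k=9):
    index = addr & ((1 << k) - 1)  # low k bits
    tag = addr >> k                # remaining n-k bits
    return tag, index

tag, index = split_address(0o02777, n=15, k=9)
print(oct(tag), oct(index))  # 0o2 0o777
```

Every main-memory word maps to exactly one cache location (its index), which is what makes direct mapping cheap but inflexible.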
Block diagram of a direct-mapped cache.
Set-Associative Mapping
• The disadvantage of direct mapping is that two words with the same index in their address but with different tag values cannot reside in cache memory at the same time.
• Set-associative mapping is an improvement over direct mapping in that each word of cache can store two or more words of memory under the same index address.
• Each index address refers to two data words and their associated tags.
• Each tag requires six bits and each data word has 12 bits, so the word length is 2 × (6 + 12) = 36 bits.
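A minimal sketch of one two-way set as described above; the class name and the FIFO eviction choice are illustrative assumptions:

```python
# One set of a 2-way set-associative cache: two (tag, data) pairs share
# the same index, so two words with equal index but different tags can
# be resident at once -- fixing the direct-mapping limitation above.
class TwoWaySet:
    def __init__(self):
        self.ways = []              # up to two (tag, data) pairs

    def lookup(self, tag):
        for t, d in self.ways:
            if t == tag:
                return d            # hit: tag matched in this set
        return None                 # miss

    def fill(self, tag, data):
        if len(self.ways) == 2:
            self.ways.pop(0)        # evict oldest pair (FIFO, illustrative)
        self.ways.append((tag, data))

s = TwoWaySet()
s.fill(0o01, 0o3450)
s.fill(0o02, 0o5670)                # same index, different tag: both fit
print(s.lookup(0o01), s.lookup(0o02))
```

A full cache would hold one such set per index value and choose the set with the index bits of the address.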
Set-Associative Mapping
PERFORMANCE OF CACHE
• All memory accesses are directed first to the cache.
• If the word is in the cache, access the cache to provide it to the CPU.
• If the word is not in the cache, bring a block (or a line) including that word to replace a block now in the cache.
• How can we know if the required word is there?
• If a new block is to replace one of the old blocks, which one should we choose?
Performance of Cache Memory System
• Hit ratio (h): percentage of memory accesses satisfied by the cache memory system.
• Te: effective memory access time in the cache memory system.
• Tc: cache access time.
• Tm: main memory access time.

• Te = Tc + (1 - h) Tm

• Example: Tc = 0.4 s, Tm = 1.2 s, h = 0.85
• Te = 0.4 + (1 - 0.85) × 1.2 = 0.58 s
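The worked example above can be checked directly from the formula:

```python
# Effective access time: every access pays Tc; misses (fraction 1 - h)
# additionally pay the main-memory time Tm.
def effective_access_time(tc, tm, h):
    return tc + (1 - h) * tm

te = effective_access_time(0.4, 1.2, 0.85)
print(round(te, 2))  # 0.58, matching the example
```

Note how sensitive Te is to the hit ratio: raising h from 0.85 to 0.95 drops Te from 0.58 to 0.46, which is why cache design focuses on maximizing hits.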
CACHE WRITE
(Figure: caches on the processor chip.)
