
Advanced Computer Architecture

CSD-411

Department of Computer Science and Engineering


National Institute of Technology Hamirpur
Hamirpur, Himachal Pradesh - 177005
ABOUT ME: DR. MOHAMMAD AHSAN

• PhD – National Institute of Technology Hamirpur (H.P.)
• M.Tech – National Institute of Technology Hamirpur (H.P.)
• Qualified UGC NET June-2015 and UGC NET Nov-2017 for Assistant Professor
• Qualified GATE 2012, GATE 2013, and GATE 2021
• Experience: NIT Hamirpur and NIT Andhra Pradesh

Cache Memory: a small, fast memory that acts as a buffer for dynamic
random access memory (DRAM).
• Cache is built using a different memory technology, static random access
memory (SRAM).
• SRAM is faster and less dense than DRAM.
• It is a volatile memory – it retains data only while it is receiving power.
• Cache memory was introduced by Maurice Wilkes.
• Need for cache memory: to bridge the speed gap between the CPU and main memory (RAM).
Memory Hierarchy Design



• Why do we have so many memories? Why can't we have a single memory that is
both the biggest and the fastest?

No single technology is both the largest and the fastest: fast memory (SRAM) is
expensive and low-density, while large memory (DRAM, disk) is slow. A hierarchy
of memories gives effective data access: the illusion of a large, fast memory at
a reasonable cost.


• What is stored in the cache? Copies of recently referenced blocks of main
memory.


• During the execution of a program, when the CPU encounters a memory-reference
instruction, it generates a memory request and first refers to the cache memory.
• If the data is available in the cache memory, the operation is a hit and the
requested data is transferred to the CPU in the form of words.
• If the operation is a miss in the cache memory, the reference is forwarded to
the main memory (MM).
• When the operation is a hit in the MM, the data is transferred to the cache
memory in the form of blocks, and from the cache memory to the CPU in the form
of words. Otherwise, the reference is forwarded to the secondary memory (SM).
• The operation is always a hit in the SM, and data is transferred from the SM
to the MM in the form of pages, from the MM to the cache memory in the form of
blocks, and from the cache memory to the CPU in the form of words.
• According to the hierarchy design, data is transferred from the higher levels
to the lower levels, so the data at the lower levels is always a subset of the
data at the higher levels. This property is called inclusion.
• Note: even if the CPU requests a single word, the complete block is
transferred to the cache memory.
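The three-level lookup described above can be sketched as a small simulation. This is purely illustrative: the names, sizes, and the simplification that SM-to-MM transfers happen at block rather than page granularity are assumptions, not from the slides.

```python
# Illustrative sketch of the cache -> MM -> SM lookup flow.
# Each level is modelled as a dict mapping block number -> list of words.
# (Simplification: SM-to-MM transfers are shown per block, not per page.)

def access(word_addr, cache, main_memory, secondary, block_size=4):
    """Return the requested word, moving blocks up the hierarchy on a miss."""
    block_no = word_addr // block_size
    if block_no in cache:                       # hit in cache: deliver a word
        return cache[block_no][word_addr % block_size]
    if block_no in main_memory:                 # hit in MM: copy a block up
        cache[block_no] = main_memory[block_no]
        return cache[block_no][word_addr % block_size]
    # The operation is always a hit in the SM: copy to MM, then to cache.
    main_memory[block_no] = secondary[block_no]
    cache[block_no] = main_memory[block_no]
    return cache[block_no][word_addr % block_size]

secondary = {b: [b * 4 + w for w in range(4)] for b in range(16)}
cache, main_memory = {}, {}
print(access(5, cache, main_memory, secondary))   # misses everywhere, returns 5
print(access(6, cache, main_memory, secondary))   # same block: cache hit, returns 6
```

Note that the inclusion property falls out of this flow: every block in `cache` was copied through `main_memory`, so the lower level always holds a subset of the higher one.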
• Cache operation is based on the principle of locality.

• Principle of Locality
• Temporal locality: recently accessed items are likely to be accessed again in
the near future.
• Spatial locality: items whose addresses are near one another tend to be
referenced close together in time.
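Both kinds of locality appear in an ordinary summation loop. A minimal illustration (the trace-recording code is mine, not from the slides):

```python
# Record which array elements a simple loop touches.
data = list(range(8))
trace = []

total = 0                        # 'total' is touched every iteration:
for i in range(len(data)):       # temporal locality
    trace.append(("data", i))    # consecutive indices: spatial locality
    total += data[i]

# Neighbouring addresses follow each other immediately, so a block
# fetched for data[0] also serves data[1], data[2], ...
print(trace[:4])   # [('data', 0), ('data', 1), ('data', 2), ('data', 3)]
```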
Cache Pre-fetch Policy

• Fetch: loading content from memory when it is requested/required.
• Pre-fetch: loading content before it is requested by the CPU.
• How can the required content be fetched by the cache memory before the CPU
requests it?
• The cache memory predicts what may be requested by the processor in the near
future.
• It predicts by using the locality of reference.
• If the CPU is currently requesting word "X", the same word "X" may be
requested again (temporal locality), or the words near "X" may be requested in
the near future (spatial locality).
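A simple next-block prefetcher based on spatial locality could be sketched as follows. The one-block-ahead policy and all names here are illustrative assumptions, not a policy stated in the slides:

```python
# Sketch of a next-block prefetch policy (illustrative only).

def lookup_with_prefetch(block_no, cache, memory):
    """On a miss, fetch the requested block and pre-fetch its successor."""
    hit = block_no in cache
    if not hit:
        cache[block_no] = memory[block_no]
        nxt = block_no + 1              # spatial locality: the neighbouring
        if nxt in memory:               # block is likely to be requested next
            cache[nxt] = memory[nxt]    # pre-fetch before the CPU asks
    return hit

memory = {b: f"block-{b}" for b in range(8)}
cache = {}
print(lookup_with_prefetch(0, cache, memory))   # False: compulsory miss
print(lookup_with_prefetch(1, cache, memory))   # True: block 1 was pre-fetched
```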
• If the CPU-requested content is supplied by the cache: hit.
• Otherwise: cache miss/fault.
• The three C's of cache misses: compulsory misses, capacity misses, and
conflict misses.

Placement Policy/Mapping Techniques:


• Mapping: the technique of bringing data from main memory to cache memory is
known as cache mapping.
• The mapping techniques can be classified as:
i. Direct Mapping (DM)
ii. Associative Mapping (AM)
iii. Set-Associative Mapping (SAM)
• Direct Mapping: in this technique, the required block is copied from main
memory to cache memory based on a mapping function.
• Mapping function: cache line number = (main memory block number) mod (number
of cache lines).
• The direct cache controller interprets the CPU-generated address as three
fields: TAG | LINE OFFSET | WORD OFFSET.
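The field extraction can be sketched for assumed sizes (16 words per block, 128 cache lines; both parameters are illustrative, not from the slides):

```python
WORDS_PER_BLOCK = 16    # assumed block size in words
CACHE_LINES = 128       # assumed number of cache lines

def split_address(addr):
    """Split a word address into (tag, line, word) fields for direct mapping."""
    word = addr % WORDS_PER_BLOCK
    block = addr // WORDS_PER_BLOCK
    line = block % CACHE_LINES          # line = block mod number-of-lines
    tag = block // CACHE_LINES          # remaining high-order bits
    return tag, line, word

print(split_address(0))     # (0, 0, 0)
print(split_address(2053))  # block 128 maps to line 0 again: (1, 0, 5)
```

The second example shows why conflicts arise: blocks 0 and 128 both map to line 0 and are distinguished only by their tags.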
• The line offset is decoded directly by the address logic of the cache memory,
and the corresponding line is enabled.
• The tag stored in the enabled line is compared with the CPU-generated tag.
• If the address TAG equals the cache TAG, the DM cache signals a hit and,
based on the word offset, the requested word is passed to the CPU.
• Hit latency: very low (only a single tag comparison is needed).
• Conflict misses: high.
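The tag comparison and the conflict behaviour can be sketched with a tiny direct-mapped cache over block numbers (4 lines assumed; illustrative only):

```python
class DirectMappedCache:
    """Tiny direct-mapped cache of block numbers (illustrative sketch)."""

    def __init__(self, num_lines=4):
        self.num_lines = num_lines
        self.tags = [None] * num_lines   # one tag per line; None = empty line

    def access(self, block_no):
        line = block_no % self.num_lines     # line enabled by the line offset
        tag = block_no // self.num_lines
        if self.tags[line] == tag:           # address TAG == cache TAG: hit
            return "hit"
        self.tags[line] = tag                # miss: the new block evicts the
        return "miss"                        # block occupying this line

c = DirectMappedCache()
print([c.access(b) for b in (0, 4, 0, 4)])
# ['miss', 'miss', 'miss', 'miss']: blocks 0 and 4 share line 0,
# so they keep evicting each other (conflict misses)
```

This is exactly why conflict misses are high in a direct-mapped cache despite its very low hit latency: each block has only one possible line.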