Computer Architecture
Lecture 4
Memory
• Memory comes in many types
• No single memory technology can fulfil all the requirements of a computer system
• Memories exhibit the widest range of:
  • Technology
  • Organization
  • Performance
  • Cost
Memory System: Hierarchy
• Trade-offs:
  • Faster access time = greater cost per bit
  • Greater capacity = smaller cost per bit
  • Greater capacity = slower access time
• Solution: rely on more than one memory technology and employ a memory hierarchy
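The benefit of a hierarchy can be made concrete with the standard average-access-time calculation. The sketch below uses illustrative timings (1 ns cache, 100 ns main memory), not figures from the lecture:

```python
# Average access time for a two-level hierarchy: hits are served by the
# cache; misses pay the cache probe plus the main-memory access.
def average_access_time(hit_ratio, cache_ns, memory_ns):
    return hit_ratio * cache_ns + (1 - hit_ratio) * (cache_ns + memory_ns)

# With a 95% hit ratio, the hierarchy behaves close to cache speed
# (about 6 ns on average) despite memory being 100x slower.
print(average_access_time(0.95, 1, 100))
```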
Memory System: Levels
• Cache memory is designed to combine the speed of expensive (high-speed) memory with the capacity of large, less expensive memory
• Logical cache:
  • Stores data using virtual addresses
  • The processor accesses it directly, without address translation
  • Faster cache access
  • Must be flushed on each process switch
• Physical cache:
  • Stores data using physical addresses produced by the Memory Management Unit (MMU)
Memory System: Cache Size
• Cost:
  • More cache is more expensive
• Speed:
  • More cache gives faster effective speed
  • But searching a larger cache takes more time
Memory System: Mapping Function
• Direct:
  • Simplest technique
  • Maps each block of main memory into only one possible cache line
• Associative:
  • Permits each main memory block to be loaded into any line of the cache
  • To determine whether a block is in the cache or not, the cache control logic must examine every line
• Set Associative:
  • Compromise between the previous two techniques
Memory System: Direct Mapping Example
[Figure: a 6-bit address, e.g. 000101, split into tag, line, and word fields]
• Pros:
  • Simple
  • Inexpensive
• Cons:
  • Each block has a fixed cache location
  • If a program repeatedly accesses two blocks that map to the same line, the cache miss rate becomes very high
• Victim cache:
  • A small buffer that remembers recently evicted lines, so a block that is fetched again soon can be recovered without going to main memory
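The direct-mapping address split can be sketched in a few lines. The geometry below (2 offset bits, 2 line bits) is a hypothetical choice that matches the 6-bit example address, not a size stated in the lecture:

```python
# Direct mapping: the address is split into tag | line index | word offset.
def split_address(addr, offset_bits=2, index_bits=2):
    offset = addr & ((1 << offset_bits) - 1)                  # word within the block
    index = (addr >> offset_bits) & ((1 << index_bits) - 1)   # the one cache line this block may occupy
    tag = addr >> (offset_bits + index_bits)                  # distinguishes blocks sharing that line
    return tag, index, offset

# Address 000101 (binary): tag 0, line 1, word 1
print(split_address(0b000101))
```

Because the line index is fixed by the address, two blocks with the same index but different tags always evict each other, which is the thrashing case noted above.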
Memory System: Associative Mapping
[Figure: associative mapping example. Address 0000 0000 0000 0000 0000 0000 has tag 000000h; address 0000 0001 0100 0000 1010 1010 has tag 0140AAh]
• Too expensive: every line's tag must be compared
• The tag is too large, since it must identify the full block address
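The cost of associative mapping comes from having to examine every line. A minimal sketch (the line structure and tag values are illustrative; 0140AAh is the tag from the example above):

```python
# Fully associative lookup: the tag is compared against every cache line.
# In hardware this is done in parallel by one comparator per line, which
# is what makes the approach expensive.
def lookup(cache_lines, tag):
    for line in cache_lines:                      # examine every line
        if line["valid"] and line["tag"] == tag:
            return line["data"]                   # hit
    return None                                   # miss: fetch from main memory

cache = [
    {"valid": True, "tag": 0x0140AA, "data": b"block"},
    {"valid": False, "tag": 0x000000, "data": None},
]
print(lookup(cache, 0x0140AA))  # hit
print(lookup(cache, 0x000000))  # miss (valid bit clear)
```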
Memory System: Set Associative Mapping
[Figure: main memory divided into blocks; a memory address is decoded into a tag and a set number, and the tag is compared against each line of the selected set]
• To look up a memory address: decode its set number to select one set, then compare its tag with the tag of every line in that set only
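The set-associative decode combines the two earlier schemes: an index (the set number) like direct mapping, and a tag search like associative mapping, but only within one set. The geometry below (4-byte blocks, 2 sets) is hypothetical:

```python
# Set-associative decode: tag | set number | word offset.
def decode(addr, offset_bits=2, set_bits=1):
    offset = addr & ((1 << offset_bits) - 1)                 # word within the block
    set_no = (addr >> offset_bits) & ((1 << set_bits) - 1)   # which set to search
    tag = addr >> (offset_bits + set_bits)                   # compared against lines in that set
    return tag, set_no, offset

# Address 000101 (binary): search set 1 for tag 0, word offset 1
print(decode(0b000101))
```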
• Write through:
  • Simplest technique
  • All write operations are made to main memory as well as to the cache
  • Generates substantial memory traffic and may cause a bottleneck
• Write back:
  • Minimizes memory writes
  • Updates are made only in the cache; main memory is updated when the line is replaced
  • Requires more complex circuitry
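The traffic difference between the two policies can be sketched with a toy model (the class and counters are illustrative, not from the lecture):

```python
# Toy comparison of write policies, counting writes sent to main memory.
class Cache:
    def __init__(self, write_back):
        self.write_back = write_back
        self.lines = {}           # tag -> (data, dirty bit)
        self.memory_writes = 0    # traffic generated toward main memory

    def write(self, tag, data):
        if self.write_back:
            self.lines[tag] = (data, True)   # mark dirty, defer the memory write
        else:
            self.lines[tag] = (data, False)
            self.memory_writes += 1          # write through: memory updated every time

    def evict(self, tag):
        _, dirty = self.lines.pop(tag)
        if dirty:
            self.memory_writes += 1          # write back pays only on eviction

wt, wb = Cache(write_back=False), Cache(write_back=True)
for _ in range(10):                          # ten writes to the same line
    wt.write(0, "x")
    wb.write(0, "x")
wb.evict(0)
print(wt.memory_writes, wb.memory_writes)    # 10 vs 1 memory writes
```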
Memory System: Line Size
• A block of data is retrieved and placed in the cache; it contains the required word plus data from nearby locations
• A larger block size means more potentially useful data is fetched
• A larger block size initially increases the hit ratio
• Beyond a point, an even larger block size decreases the hit ratio, because the probability of reusing the extra information drops
• Two reasons:
  • A larger block size reduces the number of blocks that fit in the cache
  • As the block becomes bigger, each additional word is farther from the required word, so it is less likely to be needed
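The first of the two effects is simple arithmetic: with a fixed total cache size, enlarging the line shrinks the number of lines available. A sketch with a hypothetical 1 KB cache:

```python
# Fixed cache capacity: larger lines mean fewer of them.
cache_bytes = 1024  # hypothetical 1 KB cache
for line_size in (16, 32, 64, 128):
    print(f"line size {line_size:>3} B -> {cache_bytes // line_size} lines")
```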
Memory System: Multilevel Caches
• Either a dedicated cache each for instructions and data (split), or one cache shared by both instructions and data (unified)
• Advantages of a unified cache:
  • Higher hit rate
  • Automatically balances the load between instructions and data
  • Simpler design (only one cache)
[Table: cache organization in Intel and ARM processors]
Memory System: Useful Links
• https://www.youtube.com/watch?v=B4P9UNoEwRQ&ab_channel=venkatesanramachandran
• https://www.youtube.com/watch?v=VePK5TNgQU8&ab_channel=GateLecturesbyRavindrababuRavula
• https://www.youtube.com/watch?v=U6gf2PRBmQY&ab_channel=JacobSchrum
• https://www.youtube.com/watch?v=B-EMkzv2AHE&ab_channel=JacobSchrum
• https://www.youtube.com/watch?v=OGDEsD3hdbk&list=RDCMUCCKhH1p0tj1frvcD70tEyDg&index=2&ab_channel=JacobSchrum
• https://course.ccs.neu.edu/com3200/parent/NOTES/cache-basics.html#:~:text=The%20tag%20is%20kept%20to,not%20need%20to%20access%20RAM.