
Lecture 26

The Complete Memory Hierarchy

The Virtual Memory System


[Figure: the page table maps each virtual page number to either a physical page in memory or an address on disk storage; a valid bit per entry records whether the page is currently resident in physical memory.]

Page Size, Page Table and Disk Space


- Large pages help to amortize disk access time.
- Large pages may lead to wasted memory (internal fragmentation).
- Small pages make the page table bigger (a rough sizing sketch follows).
- A large virtual address space means more space on disk may be used for virtual memory.
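To put numbers on the page-table point: a small sketch, assuming a 32-bit virtual address space and 4-byte page table entries (both assumptions chosen only for illustration), computes the flat page table size for a few page sizes.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        const uint64_t vspace   = 1ULL << 32;              /* 4 GB virtual address space */
        const uint64_t pte_size = 4;                        /* bytes per page table entry */
        const uint64_t page_sizes[] = { 1024, 4096, 16384 };

        for (int i = 0; i < 3; i++) {
            uint64_t pages = vspace / page_sizes[i];        /* number of virtual pages    */
            uint64_t table = pages * pte_size;              /* flat page table size       */
            printf("page size %6llu B -> page table %5llu KB\n",
                   (unsigned long long)page_sizes[i],
                   (unsigned long long)(table >> 10));
        }
        return 0;
    }

Halving the page size doubles the number of pages and therefore the size of a flat page table, which is why small pages make the page table bigger.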

Handling Page Faults


Note: a cache miss is handled by the control unit (hardware), while a page fault is handled by an exception handler (software).

Algorithm (sketched in code after this list):
1. An exception is raised; the address of the faulting instruction is loaded into the EPC.
2. Locate the page on disk.
3. Is there space for loading a new page into physical memory? If not, choose a page to replace.
4. Write the replaced page back to disk if necessary.
5. Read the new page into memory.
6. Restart the instruction.
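A toy simulation of these steps, with all sizes and the FIFO policy chosen only for illustration (no real operating system handler looks like this):

    #include <stdio.h>

    #define VPAGES 8
    #define FRAMES 4

    typedef struct { int valid, dirty, frame; } pte_t;

    static pte_t page_table[VPAGES];
    static int   frame_to_vpage[FRAMES];
    static int   used_frames = 0, fifo_next = 0;

    static void handle_page_fault(int vpn) {
        int frame;
        if (used_frames < FRAMES) {                  /* space in physical memory?   */
            frame = used_frames++;
        } else {
            frame = fifo_next;                       /* choose a page to replace    */
            fifo_next = (fifo_next + 1) % FRAMES;
            int victim = frame_to_vpage[frame];
            if (page_table[victim].dirty)
                printf("  write vpage %d back to disk\n", victim);
            page_table[victim].valid = 0;
        }
        printf("  read vpage %d from disk into frame %d\n", vpn, frame);
        page_table[vpn] = (pte_t){ .valid = 1, .dirty = 0, .frame = frame };
        frame_to_vpage[frame] = vpn;
    }

    static void access(int vpn, int is_write) {
        if (!page_table[vpn].valid) {
            printf("page fault on vpage %d\n", vpn);
            handle_page_fault(vpn);
        }
        if (is_write) page_table[vpn].dirty = 1;
    }

    int main(void) {
        int refs[] = { 0, 1, 2, 3, 0, 4, 1, 5 };
        for (int i = 0; i < 8; i++) access(refs[i], i % 2);
        return 0;
    }

Steps 1 and 6 (saving the EPC and restarting the instruction) are left implicit here, since they belong to the exception mechanism rather than to the handler body.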

Implementing Page Replacement


Schemes:
- FIFO
- Least Recently Used (LRU)
- Most Recently Used (MRU)
- Random
- ...

Exact LRU is hard to implement, but it can be approximated in a number of ways; second-chance is a commonly cited approximation (a sketch follows).
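A minimal sketch of second-chance (also called the clock algorithm), assuming a small fixed set of frames with one reference bit each:

    #include <stdio.h>
    #include <stdbool.h>

    #define NFRAMES 4

    /* Sweep a circular list of frames: a frame whose reference bit is set gets a
     * second chance (bit cleared, hand advances); the first frame found with the
     * bit clear is the victim. */
    static bool ref_bit[NFRAMES];
    static int  hand = 0;

    int choose_victim(void) {
        for (;;) {
            if (!ref_bit[hand]) {              /* not recently used: evict this frame */
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            ref_bit[hand] = false;             /* recently used: give a second chance */
            hand = (hand + 1) % NFRAMES;
        }
    }

    int main(void) {
        ref_bit[0] = ref_bit[1] = true;                 /* frames 0 and 1 recently used */
        printf("victim: frame %d\n", choose_victim());  /* prints: victim: frame 2      */
        return 0;
    }

Each memory access sets the referenced frame's bit, so the sweep evicts a frame only after it has gone untouched for a full revolution of the hand.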

Address Translation for Virtual Memory


[Figure: a 32-bit virtual address is split into a virtual page number (bits 31-12) and a page offset (bits 11-0). Translation replaces the 20-bit virtual page number with an 18-bit physical page number; the page offset passes through unchanged, giving a 30-bit physical address.]

Example (the bit widths it implies are checked in the snippet below):
- Page size = 4 KB = 2^12 bytes
- Physical memory size = 2^18 pages = 2^30 bytes = 1 GB
- Virtual memory size = 2^32 bytes = 4 GB
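These sizes imply 12 offset bits, a 20-bit virtual page number (32 - 12), and an 18-bit physical page number (30 - 12); the snippet below only checks that arithmetic.

    #include <stdio.h>

    int main(void) {
        unsigned offset_bits = 12;        /* page size 4 KB = 2^12 bytes */
        unsigned va_bits     = 32;        /* 4 GB of virtual memory      */
        unsigned pa_bits     = 30;        /* 1 GB of physical memory     */

        printf("virtual page number:  %u bits\n", va_bits - offset_bits);     /* 20     */
        printf("physical page number: %u bits\n", pa_bits - offset_bits);     /* 18     */
        printf("physical pages:       %u\n", 1u << (pa_bits - offset_bits));  /* 262144 */
        return 0;
    }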

Mapping Mechanism for Virtual Memory


[Figure: mapping mechanism. The page table register points to the page table of the running program (each program has its own). The 20-bit virtual page number indexes the table; each entry holds a valid bit and an 18-bit physical page number. If the valid bit is 0, the page is not present in memory. On a valid entry, the physical page number is concatenated with the 12-bit page offset to form the 30-bit physical address.]
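A minimal sketch of that lookup in C, assuming a flat one-level page table whose entries hold only a valid bit and a physical page number (real entries also carry dirty, reference, and protection bits); the table contents in main are made up for the example:

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct { bool valid; uint32_t ppn; } pte_t;

    static pte_t *page_table;                 /* base address = page table register */

    /* Returns true and fills *paddr on success; false means a page fault. */
    bool translate(uint32_t vaddr, uint32_t *paddr) {
        uint32_t vpn = vaddr >> 12;                      /* 20-bit virtual page number */
        if (!page_table[vpn].valid)
            return false;                                /* valid bit 0: page on disk  */
        *paddr = (page_table[vpn].ppn << 12) | (vaddr & 0xFFF);
        return true;
    }

    int main(void) {
        page_table = calloc(1u << 20, sizeof *page_table);  /* one entry per virtual page */
        page_table[0x00403] = (pte_t){ .valid = true, .ppn = 0x2A7F1 };

        uint32_t pa;
        if (translate(0x00403A24, &pa))
            printf("physical address 0x%08X\n", pa);         /* prints 0x2A7F1A24 */
        else
            printf("page fault\n");
        return 0;
    }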

Translation Lookaside Buffer


Some important points to notice:
- Page tables reside somewhere in physical memory.
- Physical memory accesses are slow; that is why we have a cache.
- To translate every virtual address into a physical address we need to access the page table, which costs an extra memory access on every reference.

To improve performance, we create another cache:

- The TLB (Translation Lookaside Buffer) is a cache containing frequently accessed entries of the page table.
- To translate a virtual address, we look for the page table information in the TLB first; if it is not there, something like a cache miss happens and the page table in memory must be consulted.

If the TLB is a cache, how does one determine its degree of associativity?
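One way to frame the question: below is a sketch of a fully associative TLB, in which a lookup compares the virtual page number against every entry (the entry count and fields are assumptions for illustration). A direct-mapped or set-associative TLB would instead use some low bits of the virtual page number as an index and compare only within one set.

    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_ENTRIES 16                  /* small and fully associative (assumed) */

    typedef struct {
        bool     valid;
        uint32_t vpn;                       /* tag: the full virtual page number */
        uint32_t ppn;                       /* cached translation                */
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_ENTRIES];

    /* Returns true on a TLB hit; a miss means the page table must be walked. */
    bool tlb_lookup(uint32_t vpn, uint32_t *ppn) {
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn) {     /* compare the tag */
                *ppn = tlb[i].ppn;
                return true;
            }
        }
        return false;
    }

    int main(void) {
        tlb[0] = (tlb_entry_t){ .valid = true, .vpn = 0x00403, .ppn = 0x2A7F1 };
        uint32_t ppn;
        if (tlb_lookup(0x00403, &ppn))
            printf("TLB hit: physical page 0x%X\n", ppn);
        else
            printf("TLB miss: walk the page table\n");
        return 0;
    }

On a miss, hardware or a software miss handler walks the page table and refills one of the entries.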

[Figure: TLB and page table together. Each TLB entry holds a valid bit, a tag (the virtual page number), and the physical page address of a recently used translation. On a TLB miss, the full page table in physical memory is consulted; its entries point either to a page in physical memory or to an address on disk storage.]

[Figure: TLB and cache accessed in sequence. The virtual address is split into a 20-bit virtual page number and a 12-bit page offset. The virtual page number is compared against the TLB entries (valid, dirty, tag, physical page number). On a TLB hit, the physical page number and the page offset form the physical address, which the cache then interprets as a 16-bit tag, a 14-bit cache index, and a 2-bit byte offset; a cache hit returns 32 bits of data.]

Processing a Memory Reference


[Flowchart: processing a memory reference. The virtual address first goes through a TLB access. A TLB miss raises a TLB miss exception; a TLB hit produces the physical address. For a read, the cache is then probed: a cache hit delivers the data to the CPU, while a cache miss stalls the processor. For a write, the write access bit is checked first: if it is off, a write protection exception is raised; otherwise the data is written into the cache, the tag is updated, and the data and the address are placed in the write buffer.]
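The same flow as a toy model in C: TLB, page table state, and cache are reduced to tiny arrays, and the exceptions are just printouts (every size and field here is an assumption for illustration):

    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    enum { PAGE_BITS = 12, TLB_N = 4, LINES = 8 };

    typedef struct { bool valid; uint32_t vpn, ppn; bool writable; } tlb_e;
    typedef struct { bool valid; uint32_t tag, data; } line_e;

    static tlb_e  tlb[TLB_N];
    static line_e cache[LINES];

    void reference(uint32_t vaddr, bool is_write, uint32_t wdata) {
        uint32_t vpn = vaddr >> PAGE_BITS, ppn = 0;
        bool hit = false, writable = false;

        for (int i = 0; i < TLB_N; i++)                        /* TLB access          */
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                hit = true; ppn = tlb[i].ppn; writable = tlb[i].writable;
            }
        if (!hit) { printf("TLB miss exception\n"); return; }  /* handled in software */

        uint32_t paddr = (ppn << PAGE_BITS) | (vaddr & 0xFFF);
        uint32_t idx = (paddr >> 2) % LINES, tag = paddr / (4 * LINES);

        if (!is_write) {                                       /* try to read the cache */
            if (cache[idx].valid && cache[idx].tag == tag)
                printf("cache hit: deliver 0x%08X to the CPU\n", cache[idx].data);
            else
                printf("cache miss stall\n");
        } else if (!writable) {
            printf("write protection exception\n");
        } else {                                               /* write path            */
            cache[idx] = (line_e){ true, tag, wdata };
            printf("write 0x%08X, update tag, place in write buffer\n", wdata);
        }
    }

    int main(void) {
        tlb[0] = (tlb_e){ true, 0x00403, 0x2A7F1, false };
        reference(0x00403A24, false, 0);     /* read:  cache miss stall            */
        reference(0x00403A24, true,  7);     /* write: write protection exception  */
        reference(0x00500000, false, 0);     /* TLB miss exception                 */
        return 0;
    }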

Cache, TLB, and Virtual Memory


Cache  TLB   Virtual memory  Possible?
Miss   Miss  Miss            Yes: the TLB misses, then a page fault occurs; after the page is brought in, the data is still not in the cache.
Miss   Miss  Hit             Yes: the TLB misses, but the page is in memory, though the data is not in the cache.
Miss   Hit   Miss            Impossible: cannot have a TLB hit if the page is not in memory.
Miss   Hit   Hit             Yes: the data is not in the cache, but the page is in memory and we have a TLB hit.
Hit    Miss  Miss            Impossible: the data cannot be in the cache if the page is not in memory.
Hit    Miss  Hit             Yes: the TLB misses, but the entry is in the page table; when we retry, we find the data in the cache.
Hit    Hit   Miss            Impossible: cannot have a TLB hit if the page is not in memory.
Hit    Hit   Hit             Yes: TLB hit, so the page is in memory, and the data is in the cache.

Cache, VM, Processes and Context Switch


Facts:
- There is only one TLB, shared by all running processes.
- There is only one cache memory, shared by all running processes.
- All running processes share the CPU (and FPU) registers.

When the CPU switches from one process to another, we must (a toy sketch follows):
- Save the register contents,
- Update the page table register,
- Flush the TLB (unless TLB entries are tagged with a process id),
- Flush the cache if it is virtually addressed (remember that a cache which stores physical addresses stays valid across the switch, so it need not be flushed).
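A toy sketch of those steps (the structures are invented for illustration; a real kernel saves far more state and uses privileged instructions for the TLB and the page table register):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    enum { NREGS = 32, TLB_N = 4 };

    typedef struct { uint32_t regs[NREGS]; uint32_t page_table_reg; } context_t;
    typedef struct { int valid; uint32_t vpn, ppn; } tlb_e;

    static uint32_t cpu_regs[NREGS];
    static uint32_t page_table_reg;
    static tlb_e    tlb[TLB_N];

    void context_switch(context_t *old, const context_t *next) {
        memcpy(old->regs, cpu_regs, sizeof cpu_regs);    /* save register contents     */
        old->page_table_reg = page_table_reg;

        memcpy(cpu_regs, next->regs, sizeof cpu_regs);   /* restore the next process   */
        page_table_reg = next->page_table_reg;           /* update page table register */

        for (int i = 0; i < TLB_N; i++)                  /* flush the TLB (no process-id tags here) */
            tlb[i].valid = 0;
        /* a cache tagged with physical addresses keeps its contents; a virtually
           addressed cache would have to be flushed here as well */
    }

    int main(void) {
        context_t a = {0}, b = { .page_table_reg = 0x1000 };
        context_switch(&a, &b);
        printf("page table register is now 0x%X\n", page_table_reg);
        return 0;
    }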


The Memory Hierarchy


[Figure: the complete memory hierarchy. The processor has a split level-1 cache with separate instruction and data caches (the Harvard arrangement), backed by a unified level-2 cache, physical memory reached over the bus, and disk.]


Fine Tuning a Program for Memory Performance

If you know how data is organized in memory, you can write your code in a way that minimizes cache misses and page faults.

Row-order traversal (matches the row-major layout C uses):

    for (i = 0; i < rows; i++)
        for (j = 0; j < columns; j++)
            A[i][j] = A[i][j] + i;

Column-order traversal (jumps a whole row ahead on every access):

    for (j = 0; j < columns; j++)
        for (i = 0; i < rows; i++)
            A[i][j] = A[i][j] + i;

[Figure: memory layout of A, from A[0][0], A[0][1] at low addresses up through A[1][0], A[1][1], A[2][0], A[2][1] at high addresses.]

For a large array, the first version walks through memory sequentially and reuses each cache line and page it touches, while the second touches a different line (and possibly a different page) on every access.
