Design of Reconfigurable Cache Memory Using Verilog HDL
December 2018
Abstract—Verilog Hardware Description Language is used to design a cache memory that supports direct mapping and set-associative mapping; the set-associative cache is further organized as two-way, four-way, and eight-way. In this cache memory architecture, the mapping technique can be varied using a controller unit, which increases access speed and optimizes power by disabling unused cache memory set blocks.

Keywords—Cache memory, Mapping Controller unit, hit.
I. INTRODUCTION
In today's computers, the speed of memory is very important and increasingly in demand. Cache memory plays an important role in increasing this speed. The block diagram of the cache memory system is shown in Fig.1: the cache is placed between the processor and main memory, and it contributes directly to the performance of the CPU.
This paper presents a cache memory architecture in which the mapping technique can be varied using a cache way controller unit. Section I explains the basic working of the cache memory. Section II describes the working of the different cache mapping techniques. Section III explains the implementation of the cache way controller unit. Simulation results for each mapping technique are shown in Section IV.
Fig.1: Basic Cache Memory Block Diagram.

Cache memory was introduced to reduce the speed gap between the CPU and main memory, because the data transfer speed of main memory is low compared with the processing speed of the CPU [3]. Preloading data from a main memory address into the cache before that data is accessed exploits locality of reference [1].

Different mapping techniques are used to access data from the cache and speed up the performance of the CPU [4], [5]. In set-associative mapping, cache set selection bits are used to access a particular cache memory set [6]. A random cache replacement policy is used in this design [3]: whenever a cache miss occurs, the corresponding cache memory line is replaced with the block data of the corresponding main memory address.

Main Memory Design: a main memory of 512 MB with a block size of 64 bytes is mapped to the cache memory. The address fields are derived as follows (a sketch of this address split follows the list):
• The total number of main memory lines is main memory size / block size = 512 MB / 64 bytes = 8388608 lines.
• Representing main memory lines 0 to 8388607 requires 23 main memory index bits, i.e., CPU address bits [28:6].
• The remaining 6 address bits [5:0] are used as offset bits.
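A minimal Verilog sketch of this address split is given below; the module and signal names are illustrative and not taken from the paper's source.

// Splits the 29-bit CPU address into the main memory line index and
// the byte offset: 512 MB / 64 B = 2^23 = 8388608 lines.
module mm_addr_split (
    input  wire [28:0] cpu_addr,  // 29-bit CPU byte address
    output wire [22:0] mm_index,  // main memory line index, bits [28:6]
    output wire [5:0]  offset     // byte offset within a block, bits [5:0]
);
    assign mm_index = cpu_addr[28:6];
    assign offset   = cpu_addr[5:0];
endmodule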
II. CACHE MAPPING TECHNIQUES

The different ways of mapping main memory to cache memory are as follows:
A. Direct mapping
B. Two-way mapping
C. Four-way mapping
D. Eight-way mapping

A. Direct Mapping

In the direct mapping technique, mapping a main memory of 512 MB with a block size of 64 bytes to a 512 Kbyte cache memory requires 29 CPU address bits.
The address bits of the direct-mapped cache memory are shown in Fig.2. Offset address bits [5:0] represent the byte address within the cache line selected by index address bits [18:6]. The tag together with the cache line index represents the mapped main memory address line.

Fig.2: Direct Cache Memory Address bits.
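A minimal sketch of the resulting direct-mapped tag check follows; the tag field is assumed to occupy the remaining address bits [28:19], all names are illustrative, and the data array and miss-refill logic are omitted.

// Direct-mapped tag check: 512 Kbyte / 64 bytes = 8192 lines,
// indexed by bits [18:6]; the tag is assumed to be bits [28:19].
module direct_map_lookup (
    input  wire [28:0] cpu_addr,
    output wire        hit       // 1: data can be read from the cache line
);
    wire [12:0] index = cpu_addr[18:6];
    wire [9:0]  tag   = cpu_addr[28:19];

    reg [9:0] tag_ram   [0:8191]; // stored tag per cache line
    reg       valid_ram [0:8191]; // valid bit per cache line

    // Hit when the addressed line is valid and its stored tag matches.
    assign hit = valid_ram[index] & (tag_ram[index] == tag);
endmodule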
B. Two Way Mapping

Each cache memory set is selected using CPU address bit [18], as shown in Table 1.

Table 1. Two-Way Mapping Cache Set selection bit.

CPU Address bit [18] | Cache Memory Bank          | Tag Bank
0                    | Cache memory-0 [256 Kbyte] | Tag-0
1                    | Cache memory-1 [256 Kbyte] | Tag-1
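A minimal sketch of this set selection is shown below; the bank names follow Table 1, and the active-high enable interface is an assumption of the sketch.

// CPU address bit [18] picks exactly one of the two 256-Kbyte banks,
// so the other bank can stay disabled to save power.
module two_way_select (
    input  wire [28:0] cpu_addr,
    output wire        set0_en,  // enables Cache memory-0 / Tag-0
    output wire        set1_en   // enables Cache memory-1 / Tag-1
);
    assign set0_en = ~cpu_addr[18];
    assign set1_en =  cpu_addr[18];
endmodule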
C. Four Way Mapping

Fig.7: Four way Cache Memory Address bits.

The write and read operations are similar to two-way mapping: if CPU address bits [28:17] match the stored cache tag bits, it is a hit and the data is read from cache memory; otherwise it is a miss, the data is read from main memory, and the cache memory is simultaneously updated with the missed memory block. (A sketch of this hit check follows Table 2.)

Each cache memory set is selected using CPU address bits [18:17], as shown in Table 2.

Table 2. Four-Way Mapping Cache Set selection bits.

CPU Address bits [18:17] | Cache Memory Bank          | Tag Bank
00                       | Cache memory-0 [128 Kbyte] | Tag-0
01                       | Cache memory-1 [128 Kbyte] | Tag-1
10                       | Cache memory-2 [128 Kbyte] | Tag-2
11                       | Cache memory-3 [128 Kbyte] | Tag-3

Here, for each CPU address only one cache set, of size 128 Kbyte, is enabled, and the search is faster than in two-way mapping because the cache set size is smaller.
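A minimal sketch of the four-way hit check follows; the one-hot enable encoding, the valid bit, and the port names are assumptions of the sketch, and the miss-refill path is omitted.

// Bits [18:17] enable one of the four 128-Kbyte sets (Table 2); the tag
// stored in the enabled set is compared against CPU address bits [28:17].
module four_way_hit (
    input  wire [28:0] cpu_addr,
    input  wire [11:0] stored_tag, // tag read from the enabled set's tag bank
    input  wire        valid,      // valid bit of the addressed line
    output wire [3:0]  set_en,     // one-hot enable for sets 0..3
    output wire        hit         // 1: read from cache; 0: refill from main memory
);
    assign set_en = 4'b0001 << cpu_addr[18:17];
    assign hit    = valid & (stored_tag == cpu_addr[28:17]);
endmodule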
D. Eight Way Mapping

In eight-way mapping, each cache memory set is selected using CPU address bits [18:16], as shown in Table 3.

Table 3. Eight-Way Mapping Cache Set selection bits.

CPU Address bits [18:16] | Cache Memory Bank         | Tag Bank
000                      | Cache memory-0 [64 Kbyte] | Tag-0
001                      | Cache memory-1 [64 Kbyte] | Tag-1
010                      | Cache memory-2 [64 Kbyte] | Tag-2
011                      | Cache memory-3 [64 Kbyte] | Tag-3
100                      | Cache memory-4 [64 Kbyte] | Tag-4
101                      | Cache memory-5 [64 Kbyte] | Tag-5
110                      | Cache memory-6 [64 Kbyte] | Tag-6
111                      | Cache memory-7 [64 Kbyte] | Tag-7

Here, for each CPU address only one cache set, of size 64 Kbyte, is enabled, and the search is faster than in two-way and four-way mapping because of the smaller cache set size.
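The eight-way selection can be written as a 3-to-8 decoder; a minimal sketch follows, in which the stand-by handling of the disabled banks is an assumption.

// One-hot 3-to-8 decoder on CPU address bits [18:16] (Table 3):
// only the addressed 64-Kbyte set is active at any time.
module eight_way_set_enable (
    input  wire [2:0] set_sel,  // CPU address bits [18:16]
    output reg  [7:0] bank_en   // bank_en[i] enables Cache memory-i
);
    always @* begin
        bank_en = 8'b0;
        bank_en[set_sel] = 1'b1;
    end
endmodule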
III. IMPLEMENTATION

Fig.9 shows the cache way controller unit. The way controller unit has two control bits that are used to change the mapping technique according to the designer's requirement.

Fig.9: Cache Way controller unit.

The mapping selected by the way controller unit is shown in Table 4.

Table 4. Cache Mapping Selection bits.

Way Control Bits | Way of Mapping
00               | Direct
01               | Two way
10               | Four way
11               | Eight way
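A minimal sketch of such a way controller, combining Table 4 with the set selection bits of Sections II.B to II.D, is given below; the one-hot output encoding is an assumption, and the paper's actual controller interface is not reproduced here. Only the enabled set needs to be active, which matches the stand-by power optimization described in the conclusion.

// Two control bits (Table 4) choose the mapping and decide which CPU
// address bits act as the cache set selection field.
module way_controller (
    input  wire [1:0]  way_ctrl, // 00 direct, 01 two-way, 10 four-way, 11 eight-way
    input  wire [28:0] cpu_addr,
    output reg  [7:0]  set_en    // one-hot enable for up to eight cache sets
);
    always @* begin
        case (way_ctrl)
            2'b00: set_en = 8'b0000_0001;                    // direct: one 512-Kbyte array
            2'b01: set_en = 8'b0000_0001 << cpu_addr[18];    // two-way: bit [18]
            2'b10: set_en = 8'b0000_0001 << cpu_addr[18:17]; // four-way: bits [18:17]
            2'b11: set_en = 8'b0000_0001 << cpu_addr[18:16]; // eight-way: bits [18:16]
        endcase
    end
endmodule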
IV. SIMULATION RESULTS

Two Way Cache Mapping: Two separate cache memories, each of size 256 Kbyte, are used, as shown in Fig.11.

Four Way Cache Mapping: In Fig.14, four separate cache memories, each with a cache set of size 128 Kbyte, are simulated.

V. CONCLUSION

In two-way set-associative mapping, the 512 Kbyte cache is divided into two equal cache sets of 256 Kbyte [2*256 Kbyte]; similarly, 4-way mapping uses [4*128 Kbyte] and 8-way mapping uses [8*64 Kbyte]. The main advantage of the set-associative design is that only one cache set is enabled while the other cache sets are disabled, so data is accessed faster, and keeping the unused cache memory set blocks in stand-by mode optimizes power.
REFERENCES