Conference Paper · July 2019


DOI: 10.1109/ICWT47785.2019.8978218



Least Recently Frequently Used
Replacement Policy in Named Data Network

Nana Rachmana Syambas, Hamonangan Situmorang, Made Adi Paramartha Putra
School of Electrical Engineering and Informatics
Bandung Institute of Technology, Bandung, Indonesia
nana@stei.itb.ac.id, monang@stei.itb.ac.id, mdparamartha95@students.itb.ac.id

Abstract—Internet user growth has increased rapidly in the last few years, and with it the variety of content requested by users. Network performance is therefore an important aspect of delivering requested content well. Named Data Network (NDN) is a new development of the Information-Centric Network (ICN). NDN brings a new paradigm to networking by enabling a cache system in the NDN router and by using names as network addresses instead of IP addresses. Router caching effectively improves network performance, because a user's request can be served instantly by the nearest router connected to the same NDN network instead of fetching the content directly from the producer. The caching behaviour of an NDN router is controlled by a replacement policy. In this paper, we implement the LRFU algorithm as a new replacement policy in ndnSIM and compare it with the LRU and Priority-FIFO replacement policies under variations of content store size, interest rate, and grid topology size. From the simulation results, we found that the LRFU replacement policy achieved a 3.36% higher hit rate than LRU and a 5.78% higher hit rate than Priority-FIFO.

Keywords—Named Data Network; LRFU; Content Store; Replacement Policy

I. INTRODUCTION

Massive worldwide network demand is shrinking the remaining allocation of the IP-based network. This phenomenon is caused by the high growth of internet users and by the invention of the Internet of Things (IoT), whose devices each need an allocated IP address. Based on the growth of the past decade, the IP address network will not be able to support every request generated by users or by devices in the IoT network.

Information-Centric Networking (ICN) has been introduced as a new generation of the internet, since the network paradigm and architecture of ICN differ from those of the IP-based network. An ICN router can cache data requested by one user and use it to fulfill another user's request. Currently, dominant services such as YouTube, Netflix, Amazon, and iTunes effectively implement ICN over IP. However, ICN over an IP network is inefficient and insecure, because an information-centric overlay is a poor match for the IP layer beneath it.

Named Data Network (NDN) is a new evolution of the ICN network. NDN's improvements are its naming system for addresses and its router caching capability. By enabling name-based addressing, NDN leaves the address space for any device wide open. Currently, the naming system in NDN uses a hierarchical scheme.

Compared with an IP-based network, NDN uses name prefixes instead of IP prefixes. Moreover, NDN supports stateful forwarding, while IP-based networks support stateless forwarding [1]. NDN uses a content store to provide caching capabilities. The NDN caching mechanism is strictly controlled by the replacement policy, so the replacement policy must be managed wisely. In this paper, we compare replacement policies in NDN using ndnSIM.

This paper consists of five sections. The first section is this Introduction. Related work in the NDN research area is described in the second section. The system design of the LRFU algorithm and its implementation are detailed in the third section. Finally, the results are described in the fourth section and the conclusion in the fifth.

II. RELATED WORK

As a new network paradigm, NDN is a wide-open research area. This section details related research on NDN and the LRFU replacement policy.

A performance comparison between NDN and an IP-based network is detailed in [2]. The comparison uses the Indonesian Palapa Ring topology and compares performance per area. The results show that NDN achieved higher throughput and lower delay than the IP-based network.

The effect of scaling the content store size in an NDN router on the delay CDF is evaluated in [3]. The results show that a bigger content store produces a lower delay CDF. Moreover, a router without a content store (CS_size = 0) incurred an 80% higher delay CDF.

A replacement policy for NDN based on hierarchical popularity is proposed by L. YingQi et al. [4]. The comparison covers the Popularity Content, LRU, and LFU algorithms, with content popularity generated using Zipf

978-1-7281-4796-3/19/$31.00 ©2019 IEEE

Authorized licensed use limited to: KUMOH NATIONAL INSTITUTE OF TECHNOLOGY. Downloaded on September 10,2021 at 12:22:53 UTC from IEEE Xplore. Restrictions apply.
Mandelbrot. The results show that the Popularity Content replacement policy achieved a higher hit rate ratio than LRU and LFU.
Some research already targets the LRFU algorithm itself. D. Lee et al. [5] proposed the LRFU algorithm as a new caching policy with the CRF value as its weighting function. The proposed algorithm was compared with LRU and achieved a better hit rate ratio. Furthermore, in 2001 D. Lee et al. [6] improved the algorithm and compared LRFU with LRU and LFU using variations of the cache block size and of λ, the control parameter of the weighting function.
Moreover, the LRFU replacement policy is compared with LRU under content store size variation in [7]. The results evaluate the hit rate ratio and the time complexity of each algorithm and show that LRFU achieved a higher hit rate ratio than LRU. However, the complex weighting function of LRFU leads to higher time complexity: the LRFU replacement policy runs in O(log₂ d_threshold), versus O(1) for LRU.

In this paper, our main contribution is to implement LRFU as a replacement policy in NDN using ndnSIM and to evaluate its hit rate ratio against the LRU and Priority-FIFO replacement policies under variations of content store size, interest rate, and grid topology size.

III. SYSTEM DESIGN

This section describes the Named Data Network, how the LRFU algorithm works, and the LRFU implementation in ndnSIM.

A. Named Data Network

Funded by the U.S. National Science Foundation (NSF), NDN became one of five research projects under the NSF Future Internet Architecture program in 2010. NDN is an evolution of CCN, which was started by Van Jacobson at Xerox. The NDN project code base originally comes from CCNx and was forked to support the NSF architecture in 2013 [8].

Fig. 1. NDN router structure [8]

Router caching capability and name-based addressing are the main advantages of NDN over an IP-address network. As shown in Fig. 1, an NDN router is divided into three main functions: the Pending Interest Table (PIT), a list of unsatisfied interests; the Forwarding Information Base (FIB), the routing and forwarding function; and the Content Store (CS), which stores interest data.

Fig. 2. NDN router process [1]

The process that occurs inside the NDN router is detailed in Fig. 2. The user sends an interest to the nearest router, and the CS checks whether any stored data matches the requested interest. If the data is not available in the CS, the interest packet is forwarded to the PIT, and a new PIT entry is created so the interest can be forwarded via the FIB. After the requested data is received, the PIT entry related to the interest is deleted, the data is stored in the CS, and finally the data is forwarded to the user.

In this paper, we focus on the content store characteristics of NDN. The content store is ruled by a replacement policy, which acts as the decision maker for the data it stores. Currently, the replacement policy implementation for the NDN Forwarding Daemon (NFD) content store in ndnSIM only supports the LRU and Priority-FIFO algorithms [9].

B. LRFU Algorithm

The Least Recently Frequently Used replacement policy was introduced by D. Lee et al. in 1996 [5] and tested on FreeBSD. LRFU is a combination of the LRU and LFU algorithms, since decision making in LRFU uses the recency factor of LRU and the frequency characteristic of LFU. The LRFU weighting function is called the Combined Recency and Frequency (CRF) value.

F(x) = (1/2)^(λx)    (1)

Formula (1) shows the weighting function behind the LRFU CRF value. For λ = 0, the weighting function becomes F(x) = 1, which is the LFU extreme. However, for λ = 1 the function becomes F(x) = (1/2)^x, the LRU extreme. As detailed before, LRFU is a combined replacement policy between LRU and LFU with 0 < λ < 1, where x is the time difference between references to an interest.

The LRFU algorithm is structured as a heap list plus a linked list: the CRF values of the data in the CS are compared in the heap list, while inside the linked list the comparison is FIFO, which means the content that has not been referenced for the longest time has the highest chance of being deleted from the linked list.

Fig. 3. LRFU replacement policy scenarios [5]

Fig. 3 details the three scenarios that can occur in the LRFU replacement policy. The first scenario, Fig. 3.a, occurs when new data enters a full CS: it replaces the data with the lowest CRF value in the heap list. That lowest-CRF entry is moved into the linked list, and the oldest content, which has not been referenced for the longest time, is removed.

The second scenario, shown in Fig. 3.b, occurs when the referenced data is located in the heap list. The referenced data updates its own CRF value and then compares it with every entry in the heap list. The final step is restoring the heap list structure based on CRF.

The last scenario, Fig. 3.c, occurs when the referenced data is located in the linked list. The steps are to update the referenced data's CRF value, swap it with the lowest-CRF entry in the heap list, and restore the heap list. In the second and third scenarios no data is removed, because the referenced data already resides in the cache.

C. LRFU Implementation in ndnSIM

The NFD content store only supports the LRU and Priority-FIFO replacement policies. To use LRFU, we implement the complete LRFU logic, inspired by D. Lee et al. [5], as a new replacement policy and synchronize it with the rest of the ndnSIM modules.

TABLE 1. LRFU pseudocode overview

1.  if it already in content store cache
2.  then
3.      CRF_new(it) = F(0) + F(t_c − LAST(it)) * CRF_last(it)
4.      LAST(it) = t_c
5.      RestoreHeap()
6.  else
7.      CRF_new(it) = F(0)
8.      LAST(it) = t_c
9.      RestoreHeap()
10.     if CsSize > CsSizeLimit
11.     then
12.         evictOne()
13.     end if
14. end if

A general description of the LRFU pseudocode applied in this paper is given in Table 1. The CRF value of new data in LRFU is set to F(0) = 1. When the content store size exceeds its limit, the evictOne() function is executed to evict the entry least likely to be referenced again, that is, the one with the lowest CRF value.

Fig. 4. LRFU replacement policy module

The module correlation for the implemented LRFU replacement policy in ndnSIM is detailed in Fig. 4. cs-policy-lrfu.cpp is linked with cs.cpp in order to synchronize incoming interests generated by the user. cs-policy-lrfu.hpp is linked with cs-policy.hpp for the interest-handling function, and also with ndn::time to obtain the time difference (x) for the CRF calculation. Moreover, core/logger is used to trace the entire process of the LRFU replacement policy.

D. Simulation Parameters & Scenarios

To measure replacement policy performance, we set the following parameters on the NDN links and routers.

TABLE 2. Simulation parameters

No | Parameter            | Value
1  | Data Rate            | 1 Mbps
2  | Link Delay           | 10 milliseconds
3  | Forwarding Strategy  | Best-route
4  | Routing Strategy     | ndn Global Routing
5  | Consumer Helper      | Zipf Mandelbrot
6  | Payload Size         | 1024 bytes
7  | Simulation Time      | 300 seconds
8  | LRFU λ value         | 0.1

The parameters used for the simulations in this paper are detailed in Table 2. We simulate the NDN on a 3x3 grid topology with several scenarios: content store size variation, interest rate variation, and grid topology size variation.

TABLE 3. Simulation scenarios

No | Variation          | Value
1  | Content Store Size | 10, 20, 30, 40, and 50
2  | Interest Rate      | 10, 30, 50, 75, and 100
3  | Grid Size          | 3x3, 4x4, and 5x5

We divided the simulation scenarios by the three parameters in Table 3: content store size, interest rate, and grid size. The first scenario varies the content store size on a 3x3 grid topology with an interest rate of 10. The second varies the interest rate with a content store size of 10 on a 3x3 grid topology. The last varies the grid size with a content store size of 10 and an interest rate of 10.

IV. RESULTS

Three simulation scenarios are evaluated in this paper. The following paragraphs describe the hit rate ratio achieved by LRFU compared with the LRU and Priority-FIFO replacement policies under content store, interest rate, and grid size variation.

Fig. 5. Content store variation result

The result of the first parameter variation is shown in Fig. 5. LRFU outperforms LRU with a 1.12% higher hit rate and Priority-FIFO with a 3.18% higher hit rate. As the content store grows, the hit rate gap between the three replacement policies narrows. Based on this data, the LRFU algorithm achieves better network performance with a relatively small content store.

Fig. 6. Interest rate variation result

The results of the interest rate variation scenario are detailed in Fig. 6. A higher interest rate produces a higher hit rate for LRFU, LRU, and Priority-FIFO alike. Across all interest rate variations generated by the user, the LRFU replacement policy gained a 41.85% hit rate, versus 37.75% for LRU and 35.16% for Priority-FIFO.

Fig. 7. Grid size variation result

Similarly, under grid size variation, LRFU achieved a 4.85% higher hit rate than LRU and a 7.43% higher hit rate than Priority-FIFO. However, a bigger grid topology slightly reduces the hit rate ratio.

V. CONCLUSION

Based on the simulation results, we conclude that the LRFU replacement policy achieves a higher hit rate ratio than the LRU and Priority-FIFO replacement policies. These results support our previous research on an LRFU approach implemented in C++. In this research, we also found that LRFU performance depends on the allocated content store size: when the content store exceeds 50 blocks, the hit rate gap between LRFU and LRU/Priority-FIFO narrows, with LRFU achieving a 47.53% hit rate against 46.40% for LRU and 44.34% for Priority-FIFO.

The interest rate scenario shows that the LRFU replacement policy outperforms LRU by 4.1% and Priority-FIFO by 6.69% in hit rate, and a higher interest rate yields a higher hit rate for all tested replacement policies. For the grid size variation scenario, the LRFU replacement policy achieved a 4.85% higher hit rate than LRU and 7.43% higher than Priority-FIFO; a larger grid topology produces a slightly lower hit rate ratio.

For future research, our focus is simulating the LRFU algorithm on a larger-scale NDN topology and calculating the delay CDF achieved by the network.

REFERENCES

[1] D. Saxena, V. Raychoudhury, N. Suri, C. Becker, and J. Cao, "Named Data Networking: A survey," Comput. Sci. Rev., vol. 19, pp. 15–55, 2016.
[2] M. N. D. Satria, F. H. Ilma, and N. R. Syambas, "Performance comparison of named data networking and IP-based networking in palapa ring network," Proc. 3rd Int. Conf. Wireless and Telematics (ICWT 2017), pp. 43–48, 2018.
[3] H. Situmorang, N. R. Syambas, and T. Juhana, "The effect of scaling the size of topology and content stored on the Named Data Networking," Proc. 10th Int. Conf. Telecommunication Systems, Services, and Applications (TSSA 2016), pp. 16–21, 2017.
[4] L. YingQi, Y. Meiju, and L. Ru, "A cache replacement strategy based on hierarchical popularity in NDN," Int. Conf. Ubiquitous and Future Networks, pp. 159–161, 2018.
[5] D. Lee et al., "LRFU (Least Recently/Frequently Used) Replacement Policy: A Spectrum of Block Replacement Policies," Technical Report, 1996.
[6] D. Lee et al., "LRFU: A Spectrum of Policies that Subsumes the Least Recently Used and Least Frequently Used Policies," IEEE Trans. Comput., vol. 50, no. 12, pp. 1352–1361, 2001.
[7] G. Petrus B.K., "Simulation and Study of Replacement Strategy in NDN," Master's Thesis, Bandung Institute of Technology, 2018.
[8] "Named Data Networking: Motivation & Details." [Online]. Available: https://named-data.net/project/archoverview/. [Accessed: 22-Nov-2018].
[9] ndnSIM, "ndnSIM 2.6 documentation." [Online]. Available: http://ndnsim.net/2.6/doxygen/index.html. [Accessed: 21-Mar-2019].