
A SCALABLE MULTICASTING WITH CACHE CONSISTENCY MAINTENANCE IN MANET

Gowdhami.D1, Janagiraman.S2, Prabhakaran.D3, Deepan.M4

1 Final Year Student of M.E Network Engineering, Department of Information Technology, VelTech MultiTech Dr Rangarajan Dr Sakunthla Engineering College, Chennai-600062. gme.gowdhami@gmail.com
2 Final Year Student of M.E Network Engineering, Department of Information Technology, VelTech MultiTech Dr Rangarajan Dr Sakunthla Engineering College, Chennai-600062. Jani28cse@gmail.com
3 Final Year Student of M.E Network Engineering, Department of Information Technology, VelTech MultiTech Dr Rangarajan Dr Sakunthla Engineering College, Chennai-600062. prabha619@gmail.com
4 Final Year Student of M.E Network Engineering, Department of Information Technology, VelTech MultiTech Dr Rangarajan Dr Sakunthla Engineering College, Chennai-600062. deepudeepan7@gmail.com

ABSTRACT -- This paper presents a cache consistency scheme based on a previously proposed architecture for caching database data in MANETs. The original caching scheme stores the queries submitted by requesting nodes in special nodes, called query directories (QDs), and uses these queries to locate the data (responses) stored in the nodes that requested them, called caching nodes (CNs). The consistency scheme is server-based: control mechanisms adapt the process of caching a data item, and of updating it by the server, to the item's popularity and its update rate at the server. The system implements methods to handle disconnections of QD and CN nodes from the network and to control how the cache of each node is updated or discarded when it returns to the network. Estimates for the average response time of node requests and the average node bandwidth utilization are derived in order to determine the gains (or costs) of employing our scheme in the MANET. Moreover, ns2 simulations were performed to measure several parameters, such as the average data request response time, cache update delay, hit ratio, and bandwidth utilization. The results demonstrate the advantage of the proposed scheme over existing systems.

Index Terms—Data caching, cache consistency, invalidation, server-based approach, MANET.

1 INTRODUCTION

In a mobile ad hoc network (MANET), data caching is essential as it reduces contention in the network, increases the probability of nodes getting desired data, and improves system performance [1], [5]. The major issue that faces cache management is the maintenance of data consistency between the client cache and the server [5]. In a MANET, all messages sent between the server and the cache are subject to network delays, thus impeding consistency by download delays that are considerably noticeable and more severe in wireless mobile devices.

All cache consistency algorithms are developed with the same goal in mind: to increase the probability of serving data items from the cache that are identical to those on the server. A large number of such algorithms have been proposed in the literature, and they fall into three groups: server invalidation, client polling, and time to live (TTL). With server invalidation, the server sends a report upon each update to the client. Two examples are the Piggyback server invalidation [19] and the Invalidation report [31] mechanisms. In client polling, like the Piggyback cache validation of [20], a validation request is initiated according to a schedule. If the copy is up to date, the server informs the client that the data have not been modified; else the update is sent to the client. Finally, with TTL algorithms, a server-assigned TTL value (e.g., T) is stored alongside each data item d in the cache. The data d are considered valid until T time units pass since the cache update. Usually, the first request for d submitted by a client after the TTL expiration will be treated as a miss and will cause a trip to the server to fetch a fresh copy of d. Many algorithms were proposed to determine TTL values, including the fixed TTL approach [17], adaptive TTL [11], and Squid's LM-factor [29]. TTL-based consistency algorithms are popular due to their simplicity, sufficiently good performance, and flexibility to assign TTL values for individual data items [4]. However, TTL-based algorithms, like client polling algorithms, are weakly consistent, in contrast to server invalidation schemes that are generally strongly consistent. According to [11], with strong consistency algorithms, users are served strictly fresh data items, while with weak algorithms, there is a possibility that users may get inconsistent (stale) copies of the data.

This work describes a server-based scheme implemented on top of the COACS caching architecture we proposed in [2]. In COACS, elected query directory (QD) nodes cache submitted queries and use them as indexes to data stored in the nodes that initially requested them (CN nodes). Since COACS did not implement a consistency strategy, the system described in this paper fills that void and adds several improvements: 1) enabling the server to be aware of the cache distribution in the MANET, 2) making the cached data items consistent with their version at the server, and 3) adapting the cache update process to the data update rate at the server relative to the request rate by the clients. With these changes, the overall design provides a complete caching system in which the server sends to the clients selective updates that adapt to their needs and reduces the average query response time. Next, Section 2 describes the proposed system, while Section 3 analyzes its performance using key measures. Section 4 discusses the experimental results, Section 5 reviews the related literature, and Section 6 concludes the paper.
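The TTL-based validation just described can be captured in a minimal Python sketch; the class and function names here are illustrative, not part of the paper or any particular system.

```python
import time

class TTLCacheEntry:
    """A cached data item with a server-assigned TTL, as described above."""
    def __init__(self, data, ttl_seconds):
        self.data = data
        self.ttl = ttl_seconds
        self.cached_at = time.monotonic()  # time of the last cache update

    def is_valid(self, now=None):
        # The entry is valid until `ttl` time units pass since the cache update.
        now = time.monotonic() if now is None else now
        return (now - self.cached_at) <= self.ttl

def lookup(cache, key, fetch_from_server, ttl_seconds=30.0):
    """Serve from cache while fresh; the first request after TTL expiry is a
    miss and causes a trip to the server for a fresh copy."""
    entry = cache.get(key)
    if entry is not None and entry.is_valid():
        return entry.data                       # hit: fresh cached copy
    data = fetch_from_server(key)               # miss: fetch from the server
    cache[key] = TTLCacheEntry(data, ttl_seconds)
    return data
```

A fixed-TTL policy corresponds to passing the same `ttl_seconds` for every item; adaptive schemes would instead compute it per item.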
TABLE 1
Packets Used in COACS After Integrating SSUM

2 SMART SERVER UPDATE MECHANISM (SSUM)

SSUM is a server-based approach that avoids many issues associated with push-based cache consistency approaches. Specifically, traditional server-based schemes are not usually aware of what data items are currently cached, as they might have been replaced or deleted from the network due to node disconnections. Also, if the server data update rate is high relative to the nodes' request rate, unnecessary network traffic would be generated, which could increase the packet dropout rate and cause longer delays in answering node queries. SSUM reduces wireless traffic by tuning the cache update rate to the request rate for the cached data.

Basic Operations

Before detailing the operations of SSUM, we list in Table 1 the messages used in the COACS architecture, as we refer to them in the paper. Given that no consistency mechanism was implemented in COACS, it was necessary to introduce four additional messages.

In SSUM, the server autonomously sends data updates to the CNs, meaning that it has to keep track of which CNs cache which data items. This can be done using a simple table in which an entry consists of the id of a data item (or query) and the address of the CN that caches the data. A node that desires a data item sends its request to its nearest QD. If this QD finds the query in its cache, it forwards the request to the CN caching the item, which, in turn, sends the item to the requesting node (RN). Otherwise, it forwards it to its nearest QD that has not received the request yet. If the request traverses all QDs without being found, a miss occurs and the request gets forwarded to the server, which sends the data item to the RN. In the latter case, after the RN receives the confirmation from the last traversed QD that it has cached the query, it becomes a CN for this data item, associates the address of this QD with the item, and then sends a Server Cache Update Packet (SCUP) to the server, which, in turn, adds the CN's address to the data item in its memory. This setup allows the server to send updates to the CNs directly whenever the data items are updated.

Fig. 1 illustrates a few data request and update scenarios that are described below. In the figure, the requesting nodes (RNs) submit queries to their nearest QDs, as shown in the cases of RN1, RN2, and RN3. The query of RN1 was found in QD1, and so the latter forwarded the request to CN1, which returned the data directly to the RN. However, the query of RN2 was not found in any of the QDs, which prompted the last searched (QD1) to forward the request to the server, which, in turn, replied to RN2, which became a CN for this data afterward. The figure also shows data updates (key-data pairs) sent from the server to some of the CNs.
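The server-side bookkeeping described above (a table mapping each data item or query id to the address of the CN caching it, populated by SCUP messages) can be sketched as follows; the `SSUMServer` name and the `send` callback are illustrative assumptions, not identifiers from the paper.

```python
class SSUMServer:
    """Sketch of the server-side table described above: one entry per cached
    item, mapping the item (or query) id to the caching node's address."""
    def __init__(self, database, send):
        self.database = database  # item_id -> current value at the server
        self.cn_table = {}        # item_id -> address of the CN caching it
        self.send = send          # send(address, message): network primitive (assumed)

    def handle_scup(self, item_id, cn_address):
        # Server Cache Update Packet: a new CN registers itself for an item.
        self.cn_table[item_id] = cn_address

    def update_item(self, item_id, new_value):
        # On a data update, push the fresh value directly to the registered
        # CN, if any, so the cached copy stays consistent with the server.
        self.database[item_id] = new_value
        cn = self.cn_table.get(item_id)
        if cn is not None:
            self.send(cn, (item_id, new_value))
```

The `send` callback stands in for whatever unicast primitive the network layer provides; items with no registered CN simply generate no update traffic.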

Fig. 1. Scenarios for requesting and getting data in the COACS architecture.
3 ANALYSIS

We evaluate our scheme in terms of bandwidth and query response time gains. These are the differences between the corresponding measures when no cache updating is in place and when SSUM is employed. Requests for data in the ad hoc network and data updates at the server are assumed to be random processes. The bandwidth gain Gb and the response time gain Gt are influenced by the number of data requests issued by requesting nodes relative to the number of data updates that occur at the server. In the remainder of the paper, we refer to no cache updating as NCU.
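In symbols, the two gains just defined are the NCU-minus-SSUM differences of the corresponding measures (the notation for the per-scheme measures is assumed here for illustration, since the extracted text omits the original equations):

```latex
G_b = B_{\mathrm{NCU}} - B_{\mathrm{SSUM}}, \qquad
G_t = T_{\mathrm{NCU}} - T_{\mathrm{SSUM}},
```

where B denotes the average bandwidth consumption and T the average query response time under the indicated scheme, so a positive gain means SSUM outperforms no cache updating.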

Fig. 3. Illustration of the behavior of SSUM

Starting with C2S2, SSUM suspends the server updates to the RN, thus acting like NCU and resulting in a zero bandwidth gain. The cache entry would have been outdated upon getting the RN's next request, thus causing a cache miss. This causes a traversal of all QDs, then a transmission to and from the server, a transmission back to the requesting node, and then a packet sent to a QD asking it to cache the query. In C2S1, the requests are served from the cache. To compute the gain, we take a time period that includes K update periods such that K > M. The cache entry will be updated, and hence, an update packet is sent to the CN from the server, plus the request sent from the RN to the QD caching the query and then forwarded to the CN holding the data, and finally, the reply from the CN. Thus, the bandwidth gains for C2S1 and C2S2 are stated as shown in (2).
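As a rough illustration of this accounting, the sketch below counts one message per transmission described in the text, ignoring packet sizes and per-hop costs; it is an assumed simplification for intuition only, not the paper's equation (2).

```python
def messages_ncu_miss(num_qds):
    """NCU miss path described above: traversal of all QDs, a transmission to
    and from the server, a transmission back to the requesting node, and a
    packet asking a QD to cache the query."""
    return num_qds + 2 + 1 + 1

def messages_ssum_cached(num_requests, num_updates):
    """SSUM path: one update packet from the server to the CN per update,
    plus, per request, the RN-to-QD request, the forward to the CN holding
    the data, and the CN's reply."""
    return num_updates * 1 + num_requests * 3

def message_gain(num_qds, num_requests, num_updates):
    """Messages saved by SSUM relative to NCU over the same period (a count
    sketch only; the actual bandwidth gains are given by equation (2))."""
    ncu = num_requests * messages_ncu_miss(num_qds)
    return ncu - messages_ssum_cached(num_requests, num_updates)
```

With this accounting, SSUM wins whenever requests are frequent relative to updates, which matches the intuition behind the C2S1/C2S2 distinction.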
TABLE 2
Summary of Simulation Parameters’ Values

4 PERFORMANCE EVALUATION

We used the ns2 software to implement the SSUM system and to also simulate the Updated Invalidation Report (UIR) mechanism [10], in addition to a version of it that is implemented on top of COACS, so that a presumably fairer comparison with SSUM is done. To start with, we list the default values of the main simulation parameters in Table 2.

Compared Systems

In addition to SSUM, we simulated two other systems: UIR and C_UIR. C_UIR is a version of UIR [10] that we implemented on top of COACS (with the parameters' values given above). It should be noted, though, that several differences exist between the implementation of UIR in our simulation environment and that used in [10]. This resulted in producing different results than those reported in [10]. Some of the differences are: the number of data items at the server was set to 1,000 in [10] and 10,000 in our simulations. The value used for the Zipf parameter was 1 in [10] and 0.5 in our case. We did not divide the database into hot and cold subsets, but instead, we relied on Zipf to naturally determine the access frequency of data. The size of the data item was randomly varied in our simulations between 1 and 10 KB, while in [10], it was fixed to 1 KB. Also, we set the IR interval L to 10 s and replicated the UIR four times within each IR interval, while in [10], L was set to 20 s. Hence, the simulation parameters in our environment put more stress on the network and are more realistic.

We analyze the performance of SSUM, UIR, and C_UIR from four different perspectives. The first two are the query delay and the update delay. In this regard, it is important to note that in SSUM, the server unicasts the actual data items to the CNs, while in UIR and C_UIR, it broadcasts the IDs of the updated items, not their contents. Another major difference is that in C_UIR, if the CN receives the request from the RN after the update takes place at the server but before the CN receives the update report, the RN will receive a stale copy of the data item, while, as was explained in Section 2, SSUM implements a provision that fixes this issue. We stress that the update delay of UIR and C_UIR as presented in Section 4 is the delay until the item's ID reaches the node, not the item itself. If we were to incorporate the delay of fetching the data item from the server, the update delay would greatly increase. The third performance measure is the hit ratio, and finally, the fourth measure is the average bandwidth consumption per node. The latter was computed by recording the total number of messages of each packet type issued, received, or passed by a node, multiplying it by the respective message size, and then dividing by the number of nodes and the simulation time. In the following sections, we vary some of the key parameters of the simulation environment and observe the effects on the performance of the three systems.

5 LITERATURE REVIEW

Several cache consistency (invalidation) schemes have been proposed in the literature for MANETs. In general, these schemes fall into three categories: 1) pull- or client-based, where a caching node (CN) asks for updates from the server, 2) push- or server-based, where the server sends updates to the CN, and 3) cooperative, where both the CN and the server cooperate to keep the data up to date. In general, pull-based strategies achieve smaller query delay times at the cost of higher traffic load, whereas push-based strategies achieve lower traffic load at the cost of larger query delays. Cooperative-based strategies tend to be halfway between both ends. In this section, we restrict our literature survey to server-based strategies so as to provide a comparative study in relationship to the approach of our proposed system.

6 CONCLUSIONS AND FUTURE WORK

In this paper, we presented a novel mechanism for maintaining cache consistency in a mobile ad hoc network. Our approach was built on top of the COACS architecture for caching data items in MANETs and searching for them. We analyzed the performance of the system through a gain-loss mathematical model and then evaluated it while comparing it with the Updated Invalidation Report
mechanism. The presented results illustrated the advantage of our proposed system in most scenarios.

The evaluation results confirmed our analysis of the scalability of the system (Section 2). That is, they indicate that even when the node density increases or the node request rate goes up, the query request delay in the system is either reduced or remains practically unaffected, while the cache update delay experiences a moderate rise. Yet, even if a higher update delay means that the probability of nodes getting stale data increases, the proposed system includes a provision for keeping track of such nodes and supplying them with fresh data within a delta time limit, as described in Section 2. Hence, it can be concluded that the system can scale to a moderately large network even when nodes are requesting data frequently. Moreover, the increase in the node speed and disconnection rate also only affected the cache update delay and, to a lesser extent, the traffic in the network. This reflects the robustness of the architecture and its ability to cope with dynamic environments.

For future work, we propose investigating the effects of other cache placement strategies and of cache replication on performance. The COACS system, and its successor, SSUM, did not implement cache replication: only one copy of the version at the server is maintained in the MANET. By adding replication to the design, the query delay is expected to go down, but the overhead cost will surely go up. Concerning cache placement, as was described in [2], the node that first requests a data item not found in the network becomes the CN for it. This simple strategy may not necessarily be optimal, but if it is more likely for that same node or other neighboring nodes to request the same data item in the future, then our strategy could be advantageous. In any case, other strategies, like trying to distribute the cached data as uniformly as possible in the network, could be studied. Another important topic for future consideration is the security of the system. The design we presented in this paper did not take into consideration the issues of node and network security. Since nodes in SSUM need to cooperate with each other in order to answer a query, it is important that they trust each other. Also, since QDs are the central component of the system, they could be the target of malicious attacks. For this reason, we present some ideas that improve the security of the system and leave their implementation to a future paper.

REFERENCES

[1] H. Artail, H. Safa, K. Mershad, Z. Abou-Atme, and N. Sulieman, "COACS: A Cooperative and Adaptive Caching System for MANETs," IEEE Trans. Mobile Computing, vol. 7, no. 8, pp. 961-977, Aug. 2008.
[2] H. Artail and K. Mershad, "MDPF: Minimum Distance Packet Forwarding for Search Applications in Mobile Ad Hoc Networks," IEEE Trans. Mobile Computing, vol. 8, no. 10, pp. 1412-1426, Oct. 2009.
[3] N.A. Boudriga and M.S. Obaidat, "Fault and Intrusion Tolerance in Wireless Ad Hoc Networks," Proc. IEEE Wireless Comm. and Networking Conf. (WCNC), vol. 4, pp. 2281-2286, 2005.
[4] J. Cao, Y. Zhang, L. Xie, and G. Cao, "Consistency of Cooperative Caching in Mobile Peer-to-Peer Systems over MANETs," Proc. Third Int'l Workshop Mobile Distributed Computing, vol. 6, pp. 573-579, 2005.
[5] H. Jin, J. Cao, and S. Feng, "A Selective Push Algorithm for Cooperative Cache Consistency Maintenance over MANETs," Proc. Third IFIP Int'l Conf. Embedded and Ubiquitous Computing, Dec. 2007.
[6] W. Li, E. Chan, Y. Wang, and D. Chen, "Cache Invalidation Strategies for Mobile Ad Hoc Networks," Proc. Int'l Conf. Parallel Processing, Sept. 2007.
[7] M.N. Lima, A.L. dos Santos, and G. Pujolle, "A Survey of Survivability in Mobile Ad Hoc Networks," IEEE Comm. Surveys and Tutorials, vol. 11, no. 1, pp. 66-77, First Quarter 2009.
[8] W. Stallings, Cryptography and Network Security, fourth ed. Prentice Hall, 2006.
[9] D. Zhou and T.H. Lai, "An Accurate and Scalable Clock Synchronization Protocol for IEEE 802.11-Based Multihop Ad Hoc Networks," IEEE Trans. Parallel and Distributed Systems, vol. 18, no. 12, pp. 1797-1808, Dec. 2007.
