Performance of Route Caching Strategies in Dynamic Source Routing
Department of Electrical & Computer Engineering and Computer Science, University of Cincinnati, Cincinnati, OH 45221. E-mail: {mmarina,sdas}@ececs.uc.edu

Abstract

On-demand routing protocols for mobile ad hoc networks utilize route caching in different forms to reduce routing overheads as well as to improve route discovery latency. For route caches to be effective, they need to adapt to frequent topology changes. Using an on-demand protocol called Dynamic Source Routing (DSR), we study the problem of keeping caches up-to-date in dynamic ad hoc networks. Previous studies have shown that cache staleness in DSR can significantly degrade performance. We present and evaluate three techniques to improve cache correctness in DSR, namely wider error notification, a route expiry mechanism with adaptive timeout selection, and the use of negative caches. Simulation results show that the combination of the proposed techniques not only substantially improves both application and cache performance but also reduces overheads.

1 Introduction

A mobile ad hoc network is a mobile, multi-hop wireless network with no stationary infrastructure. Dynamic topologies due to mobility, together with limited bandwidth and battery power, make the routing problem in ad hoc networks more challenging than in traditional wired networks. A key to designing efficient routing protocols for such networks lies in keeping the routing overhead minimal. A class of on-demand routing protocols (e.g., DSR [10], AODV [13], TORA [12]) attempts to reduce routing overhead by maintaining routes only between nodes taking part in data communication. In these protocols, the source discovers routes on demand by initiating a route discovery process. This process typically involves network-wide flooding of a route request and waiting for a route reply. Caching provides a mechanism for generating a route reply from an intermediate node en route to the destination. An intermediate node generating a reply also quenches the route request flood at that node and helps reduce the routing overhead to a large extent. Replies from caches also bring down the route discovery latency. This is similar to processor caches, which reduce access latency as well as bandwidth demand on the memory bus. Route caches are distributed across different nodes over the entire network. Leveraging caches in mobile ad hoc networks brings up the challenge of keeping the distributed caches up-to-date in the presence of frequent route changes. Utilizing cached information without robust mechanisms to keep it up-to-date can actually degrade performance, making caches counter-productive. Our goal in this paper is to develop and analyze effective caching strategies for the best overall performance.

The majority of work related to route caches in mobile ad hoc networks has focused on Dynamic Source Routing (DSR) [10, 1], an on-demand protocol that uses source routing and makes aggressive use of route caches. However, the current specification of DSR lacks a mechanism to determine the relative freshness of routes in the route caches, or even to purge all stale routes from route caches effectively. Several performance studies [11, 6, 3] have observed that caches in DSR can frequently report invalid routes, which affects performance negatively. In this paper, we reason further about the effects of caching on the performance of DSR, and we present and evaluate three techniques to keep caches up-to-date.

The remainder of the paper is organized as follows. The following section gives an overview of DSR. In Section 3, we first point out some drawbacks of the existing caching model in DSR and then present techniques to overcome them. We evaluate the proposed techniques in Section 4, and follow with related work and conclusions.
2 Overview of DSR

When a node in the ad hoc network attempts to send a data packet to a destination for which it does not already know a route, it uses a route discovery process to dynamically determine one. Route discovery works by flooding the network with route request (also called query) packets. Each node receiving the request for the first time rebroadcasts it, unless it is the destination or it has a route to the destination in its route cache. Such a node replies to the request with a route reply packet that is routed back to the original source. Route request and reply packets are also source routed: the request builds up the path traversed so far, and the reply routes itself back to the source by traversing this path backwards. The route carried back by the reply packet is cached at the source for future use. If any link on a source route is broken (detected by link-layer feedback on the failure of an attempted data transmission over the link), a route error packet is generated. The route error is unicast back to the source using the portion of the route traversed so far, erasing all entries in the route caches along the way that contain the broken link. A new route discovery must be initiated by the source if the route is still needed and no alternate route is available in the cache.

Several optimizations to this basic protocol have been proposed, and were evaluated to be very effective by the authors of the protocol [11]. They are as follows. (i) Salvaging: an intermediate node can use an alternate route from its own cache when a data packet meets a broken link on its source route. (ii) Gratuitous route repair: a source node receiving a route error piggybacks the error on the following route request. This helps clean up the caches of other nodes in the network that may have the broken link in one of their cached source routes.
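The basic cache maintenance just described can be sketched in a few lines. The following is an illustrative toy model of our own (names and structure are not from the DSR specification): a path cache holding source routes as node-ID tuples, a lookup that returns the shortest cached sub-route, and a route-error handler that truncates cached routes at the broken link.

```python
# Toy sketch of a DSR-style path cache (illustrative, not the real protocol code).

class PathCache:
    def __init__(self):
        self.routes = []            # each route: a tuple of node IDs, e.g. ("S", "A", "B", "D")

    def add(self, route):
        route = tuple(route)
        if route not in self.routes:
            self.routes.append(route)

    def lookup(self, src, dst):
        """Return the shortest cached sub-route from src to dst, or None."""
        best = None
        for r in self.routes:
            if src in r and dst in r:
                i, j = r.index(src), r.index(dst)
                if i < j:
                    sub = r[i:j + 1]
                    if best is None or len(sub) < len(best):
                        best = sub
        return best

    def on_route_error(self, u, v):
        """Truncate every cached route at the broken link (u, v)."""
        pruned = []
        for r in self.routes:
            for k in range(len(r) - 1):
                if (r[k], r[k + 1]) == (u, v):
                    r = r[:k + 1]   # keep only the prefix before the broken link
                    break
            if len(r) >= 2:         # a one-node remnant is no longer a route
                pruned.append(r)
        self.routes = pruned
```

An intermediate node answering a route query from its cache corresponds to `lookup` succeeding; processing a route error corresponds to `on_route_error` truncating the matching entries.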
(iii) Promiscuous listening: when a node overhears a packet not addressed to itself, it checks whether the packet could be routed via itself to obtain a shorter route. If so, the node sends a gratuitous route reply to the source of the route with this new, better route. Promiscuous listening also helps a node learn different routes without directly participating in the routing process. (iv) Non-propagating route requests: a node can perform a non-propagating (one-hop) route discovery before resorting to a network-wide flood. This optimization reduces routing overhead when a neighbor's cache has a route to the intended destination or the destination itself is a neighbor.
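The shorter-route check behind gratuitous replies can be sketched as follows. The function name and index convention are ours, under the assumption that the overhearing node sees the packet's full source route and the position of its next hop.

```python
# Illustrative sketch of the shorter-route check a node might perform when it
# promiscuously overhears a source-routed packet (our naming, not DSR's).

def shorter_route_via(overheard_route, next_hop_index, me):
    """If `me` appears later in the overheard source route than the packet's
    current position, the intermediate hops can be cut out. Returns the
    shortened route the node could advertise in a gratuitous route reply,
    or None if no shortening is possible."""
    route = list(overheard_route)
    if me not in route:
        return None
    k = route.index(me)
    # Hearing the transmission means `me` is within radio range of the
    # node currently forwarding the packet (route[next_hop_index - 1]).
    if k > next_hop_index:          # at least one hop would be skipped
        return route[:next_hop_index] + route[k:]
    return None
```

For example, if node C overhears the packet while it is being forwarded from A to B on the route S-A-B-C-D, the hops through B can be cut, yielding S-A-C-D.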
3 Caching Strategies in DSR

Because a route discovery typically returns several route replies, the source learns many alternate routes to the destination, all of which are cached. Alternate routes are useful in case the primary (shortest) route breaks. In addition, any intermediate node on a route learns routes to the source, the destination, and the other intermediate nodes on that route. Thus, a large amount of routing information is gathered and cached with just a single query-reply cycle. These cached routes may be used in replying to subsequent route queries. Replies from caches provide dual performance advantages. First, they reduce route discovery latency. Second, without replies from caches the route query flood would reach all nodes in the network (a request storm); cached replies quench the query flood early, saving routing overhead. However, without an effective mechanism to remove stale cache entries, caches may contain routes that are invalid, and route replies may then carry stale routes. Attempted data transmissions using stale routes incur overheads, generate additional error packets, and can pollute other caches when a packet with a stale route is forwarded or snooped on. Previous studies (e.g., [11, 6, 3]) have identified the stale route problem and its harmful effects. In the following, we identify three main trouble spots in the DSR protocol that are the root cause of the stale cache problem.
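How much routing state a single source route yields can be illustrated concretely. The helper below is a hypothetical sketch of our own; the reverse-direction routes assume bidirectional links.

```python
# Sketch of the routing state one source route yields to a node on it:
# a route to every other node on the path (illustrative helper, our naming).

def routes_learned(route, me):
    """All sub-routes starting at `me` that a node can cache from a single
    source route passing through it."""
    route = list(route)
    i = route.index(me)
    learned = []
    # forward sub-routes: me -> every later node on the path
    for j in range(i + 1, len(route)):
        learned.append(route[i:j + 1])
    # reverse sub-routes: usable only when links are bidirectional
    for j in range(i):
        learned.append(list(reversed(route[j:i + 1])))
    return learned
```

For the route S-A-B-D, node A learns routes to B, to D, and (over bidirectional links) back to S, i.e., three cache entries from one query-reply cycle.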
Incomplete error notification: when a link breaks, route errors are not propagated to all caches that have an entry containing the broken link. Instead, the route error is unicast only to the source whose data packet triggered detection of the link breakage via link layer feedback, so only a limited number of caches are cleaned. The failure information is also propagated by piggybacking it onto subsequent route requests from the source, but since route requests may not propagate network-wide (because of replies from caches), many caches may remain unclean.

No expiry: there is no mechanism to expire stale routes. If not cleaned explicitly by the error mechanism, stale entries stay in the cache forever.

Quick pollution: there is no way to determine the freshness of route information. For example, even after a stale cache entry is erased by a route error, a subsequent in-flight data packet carrying the same stale route can put that entry right back in. This possibility increases at high data rates, as there will be many in-flight data packets upstream carrying the stale route. The problem is compounded by the liberal use of snooping: stale routes are picked up by any node overhearing any transmission, so cache pollution can propagate fairly quickly.
Wider Error Notification - This technique is based on the idea that bad news should be propagated fast and wide. To increase the speed and extent of error propagation, route errors are transmitted as broadcast packets at the MAC (medium access control) layer. Initially, the node that detects the link breakage (e.g., via link layer feedback) broadcasts a route error packet containing the broken link information. Upon receiving a route error, a node updates its route cache so that all source routes containing the broken link are truncated at the point of failure. A node receiving a route error rebroadcasts it further only if it has a cached route containing the broken link and that route was used before in packets forwarded by the node. Note that with this scheme route errors reach all the sources in a tree fashion, starting from the point of failure. In effect, route error information is efficiently disseminated to all the nodes that forwarded packets along the broken route, and to the neighbors of such nodes that may have acquired the (broken) route through snooping.

Timer-based Route Expiry - Recall that link breakage is detected only by link layer feedback, when an attempted data transmission fails. Thus the loss of a route goes undetected if there is no attempt to use that route. A more proactive, timer-based approach can clean up such routes. It is based on the hypothesis that routes are valid only for a specific amount of time (the timeout period) from their last use. Each node in a cached route now has an associated timestamp of last use, updated each time the cached route, or part of it, is seen in a unicast packet being forwarded by the node. Portions of cached routes unused in the past τ interval are pruned.

The benefit of this approach depends critically on the proper selection of the timeout period τ. A very small timeout may cause many unnecessary route invalidations, while a very large one may defeat the purpose of the technique. Although well-chosen static values can be found for a given network, a single timeout for all nodes may not be appropriate in all scenarios and for all network sizes. Therefore, a dynamic mechanism is desirable that allows each node to choose timeout values independently, based on its observed route stability.

We propose a heuristic for adaptive selection of timeouts locally at each node, based on the average route lifetime and the time between link breaks seen by the node. When a cached route breaks due to link breakage (known via link layer feedback) or upon receipt of a route error, the lifetime of the broken route is computed as the time elapsed since it was last entered in the cache. The average route lifetime λ is obtained from the lifetimes of all routes broken in the past. The time t_l of the latest link breakage seen by the node is also maintained. Using this information, the timeout period is calculated as

    τ = max(λ, α · (t − t_l)),

where t is the current time and α a constant.

When route breaks occur uniformly in time, the average route lifetime itself provides a good estimate of τ. However, when many route breaks occur in short bursts with large separations in time, the average route lifetime does not accurately predict τ during the periods with no route breaks. Hence, the second term in the above equation corrects the estimate during such intervals. The value of τ is computed periodically and used to expire stale entries from the cache. In our experiments, τ is computed every half second, after which the route cache is checked for stale entries; the minimum τ is limited to one second.

Negative Caches - To improve error handling in DSR, caching of negative information has already been suggested [1]. We make use of this idea in the following way. Every node caches the broken links it has seen recently via link layer feedback or route error packets. If, within an interval Δt of creating such an entry, the node is asked to forward a packet whose source route contains the broken link, (i) the packet is dropped and (ii) a route error packet is generated. In addition, the negative cache is always checked for broken links before a new entry is added to the route cache. Essentially, the route cache and the negative cache are mutually exclusive with respect to the links present in them, which prevents the cache pollution problem. In our experiments, we used a fixed-size negative cache with a FIFO replacement policy; entries are expired after Δt = 10 seconds.

4 Performance Evaluation

We use a detailed simulation study to evaluate the effectiveness of the caching techniques described in the last section. Their performance is compared with the base DSR protocol. In the following, we first describe the simulation environment and the performance metrics used, and then present and analyze the simulation results.

Traffic sources remain active until the end of the simulation. Simulations are run for 500 simulated seconds. Each data point represents an average of five runs with identical traffic models but different randomly generated mobility scenarios. Identical mobility and traffic scenarios are used across all protocol variations.
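The adaptive timeout heuristic described above can be sketched as follows. The class name and constants (α = 1, a one-second floor) are illustrative assumptions, not the exact values used in the experiments.

```python
# Sketch of per-node adaptive timeout selection: tau is driven by the mean
# lifetime of broken routes, corrected upward during quiet periods with no
# link breaks. Constants are illustrative assumptions.

class AdaptiveTimeout:
    def __init__(self, alpha=1.0, min_timeout=1.0):
        self.alpha = alpha
        self.min_timeout = min_timeout  # floor on tau, in seconds
        self.lifetimes = []             # lifetimes of routes that broke
        self.last_break = None          # time of latest link breakage seen

    def on_route_break(self, entered_at, now):
        """Record the lifetime of a route that just broke."""
        self.lifetimes.append(now - entered_at)
        self.last_break = now

    def timeout(self, now):
        """tau = max(mean lifetime, alpha * time since last break), floored."""
        if not self.lifetimes:
            return None                 # no history yet: do not expire anything
        mean_life = sum(self.lifetimes) / len(self.lifetimes)
        quiet = now - self.last_break
        return max(mean_life, self.alpha * quiet, self.min_timeout)
```

A node would recompute `timeout(now)` periodically (every half second in the experiments) and prune any cached route portion unused for longer than the returned value.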
Figure 1: Packet delivery fraction, average delay, and normalized overhead as a function of the timeout period (1-100 sec), comparing no timeout, static timeout, and adaptive timeout.
The first set of results studies the effect of the timeout period on performance, and the next two sets study the effect of mobility and load on the different caching strategies. Fig. 1 shows the packet delivery fraction, average delay, and normalized overhead for static timeout values ranging from 1 to 50 seconds. These results correspond to a pause time of 0 seconds (constant mobility) and a fixed packet rate of 3 packets/sec. The performance of the base DSR (no timeout) and of the adaptive timeout mechanism is also shown. As expected, performance with a very small timeout of 1 second is significantly worse, worse even than with no timeout at all. All three metrics improve as the timeout period increases up to 10 seconds (the optimal timeout period for this network) and degrade with further increases. Base DSR performs worse in all cases except when the timeout is too small. The adaptive timeout selection performs similarly to a well-chosen static timeout, which validates its effectiveness. In the rest of the performance comparisons, we show results only for adaptive timeout selection when considering the timer-based route expiry technique.

We now study the performance of the three caching techniques (see Section 3) as a function of mobility. Pause time is varied between 0 and 500 seconds; a pause time of 0 seconds indicates constant mobility, while a pause time of 500 seconds means no mobility. The packet rate is fixed at 3 packets/sec as before. We look at the performance of DSR with wider errors, adaptive route expiry, and negative caches independently, in comparison with the base DSR. We also consider the variant of DSR with all three techniques combined (referred to as DSR+ in the plots). Fig. 2 shows the delivery fraction, delay, and overheads with varying pause times. DSR performs worse on all metrics than the other variants except at high pause times, while DSR+ delivers superior performance overall.
At constant mobility, in particular, DSR+ gives an improvement of about 16% and 22% in packet delivery and overheads, respectively, and about a 40% reduction in average delay. These improvements show that the combination of the three techniques effectively removes stale cache entries. Still, at most 85% of the packets are delivered at lower pause times. This is justified to a large extent by the very high rate of link breaks at low pause times, and by the fact that the majority of packet losses are due to packets dropped at intermediate nodes for lack of a route. Recall that in the DSR model we used, only traffic sources buffer packets; a packet is dropped at an intermediate node if the source route in the packet breaks and there is no alternate route in the local cache to salvage with. Among the three caching techniques taken independently, adaptive route expiry shows the most improvement, negative caches the least, and wider errors lies in between.

Figure 2: Packet delivery fraction, average delay, and normalized overhead as a function of pause time (0-500 sec) for Base DSR, Wider Errors, Negative Caches, Adaptive Route Expiry, and DSR+.

Table 3: Cache-related metrics for Base DSR, Negative Caches, Wider Errors, Adaptive Route Expiry, and DSR+ at a pause time of 0 seconds.

The performance of negative caches is as expected, since they prevent cache pollution, which is prevalent only at high data rates. Regarding the relative performance of wider errors and adaptive route expiry: the former removes only entries containing broken links discovered via link layer feedback, while the latter, through its proactive mechanism, can also remove unused stale route entries. Table 3 shows the behavior of the different techniques in terms of cache-related metrics, corresponding to the results for the pause time of 0 seconds shown in Fig. 2. The cache performance of the different protocol variations shows the same relative behavior as before. The effect of load on the various caching strategies at constant mobility is shown in Fig. 4. As before, DSR+ outperforms the base DSR, and the performance of the individual techniques lies between these two extremes.
5 Related Work
A few performance studies have looked at the effects of route caches on DSR's performance. Maltz et al. [11] evaluated the benefit of replies from caches on route discovery latency and routing overhead. In their simulation scenarios, they found cache hit rates to be around 55%, and argued that this low hit rate was consistent with the on-demand philosophy of maintaining only routes that are needed. They also observed the percentage of good replies to be around 59% in a 50-node network. Since the packet delivery ratios in their scenarios were above 90%, they concluded that, despite the low percentage of good cache replies, route maintenance in DSR was able to cope and deliver good performance. However, later performance studies have indicated that this is not always the case: the performance impact of stale routes depends largely on the type of traffic (e.g., TCP), the load in the network, the network size, etc. For instance, it was shown in [6, 7] that stale routes in DSR can significantly degrade TCP performance; for a single TCP connection they even found the TCP throughput to be much better without replies from caches. Mechanisms similar to those studied in this paper, such as suitable timeouts and negative caches, were suggested in [6] to overcome the stale route problem. Similar observations were made in [3], where the authors noted that the lack of mechanisms to determine the freshness of routes, and the absence of cache expiry, were important factors in the poor performance of DSR in large networks and under high load. More recently, the effects of cache structure, cache capacity, cache timeouts, and mobility patterns on the performance of DSR were studied in [8]. The main difference between our work and [8] is that we focus on the performance impact of cache correctness in DSR, while they put more emphasis on cache organization. They studied the performance of DSR using a range of timeout values and made an observation similar to ours, i.e., well-tuned static timers perform as well as adaptive timers. The route expiry mechanism they investigated differs from our technique in the underlying cache structure: they used an expiry mechanism based on link caches, while our scheme uses a path cache¹ as used in [2].

Figure 4: Throughput, average delay, and normalized overhead as a function of offered load (50-450 Kbits/sec) at constant mobility, for Base DSR, Wider Errors, Negative Caches, Adaptive Route Expiry, and DSR+.
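The distinction between link caches and path caches can be made concrete. The sketch below is our own illustration: a link cache can stitch together links learned from different packets into a route it never saw whole, while a pure path cache answers only from complete stored paths.

```python
# Illustrative contrast of the two cache organizations (our toy code):
# a link cache stores individual links and answers queries by graph search;
# a path cache stores complete source routes.

from collections import defaultdict, deque

class LinkCache:
    def __init__(self):
        self.adj = defaultdict(set)     # directed adjacency over cached links

    def add_link(self, u, v):
        self.adj[u].add(v)

    def route(self, src, dst):
        """Shortest route by BFS over the cached link graph, or None."""
        parent = {src: None}
        q = deque([src])
        while q:
            u = q.popleft()
            if u == dst:
                path = []
                while u is not None:
                    path.append(u)
                    u = parent[u]
                return path[::-1]
            for v in self.adj[u]:
                if v not in parent:
                    parent[v] = u
                    q.append(v)
        return None

class WholePathCache:
    def __init__(self):
        self.paths = []                 # complete routes, stored as tuples

    def add_path(self, path):
        self.paths.append(tuple(path))

    def route(self, src, dst):
        """Return a route only if some stored path covers src..dst whole."""
        for p in self.paths:
            if src in p and dst in p and p.index(src) < p.index(dst):
                return list(p[p.index(src):p.index(dst) + 1])
        return None
```

Given the routes S-A and A-D learned from two different packets, the link cache can answer a query for S to D, while the path cache cannot; the price is that graph search may also assemble routes from links of very different ages.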
6 Conclusions
Most on-demand protocols for mobile ad hoc networks employ some form of route caching to reduce route discovery latency as well as routing overhead. In dynamic ad hoc networks, route changes can occur frequently due to node mobility. Unless route caches adapt well to these frequent route changes, they may affect performance adversely. We used the Dynamic Source Routing protocol (DSR) to study the effect of cache correctness on the performance of the routing protocol. This work was motivated by prior studies that observed performance degradation of DSR due to stale caches. DSR is a good candidate protocol for studying caching strategies as it uses route caching aggressively; however, we expect any protocol that uses caching even moderately to benefit from our study. An example is AODV [13], which uses caching indirectly when intermediate nodes generate route replies. We evaluated three techniques to improve caching performance in DSR, namely wider error notification, a route expiry mechanism with adaptive timeout selection, and negative caches. The techniques primarily focus on effective removal of stale cache entries and on preventing cache pollution. Our simulation results show that the combination of these techniques improves packet delivery by about 15% at high mobility relative to the base DSR, and consistently outperforms the base DSR in terms of average delay and overheads. A significant improvement (around 70%) in the quality of cache replies is also achieved. Our future work will concentrate on modifying the caching model in DSR so that the relative freshness of cached routes can be determined. We will also explore incorporating the techniques proposed in this paper into other on-demand routing protocols.

¹ Link caches store a set of individual links, organized as a graph data structure. In contrast, path caches store a set of complete paths (sequences of links), each starting at the caching node.
Acknowledgements
This work is partially supported by NSF CAREER grant ACI-0096186, NSF networking research grant ANI-0096264, and Ohio Board of Regents computing research enhancement funds at the University of Cincinnati. We would also like to thank the reviewers for their comments.
References

[1] J. Broch, D. Johnson, and D. Maltz. The dynamic source routing protocol for mobile ad hoc networks. IETF Internet Draft, draft-ietf-manet-dsr-03.txt, Oct. 1999 (work in progress).
[2] J. Broch, D. Maltz, D. Johnson, Y.-C. Hu, and J. Jetcheva. A performance comparison of multi-hop wireless ad hoc network routing protocols. In Proceedings of ACM/IEEE MOBICOM '98, pages 85-97, October 1998.
[3] S. R. Das, C. E. Perkins, and E. M. Royer. Performance comparison of two on-demand routing protocols for ad hoc networks. In Proceedings of IEEE INFOCOM 2000, March 2000.
[4] IEEE Standards Department. Wireless LAN medium access control (MAC) and physical layer (PHY) specifications, IEEE Standard 802.11-1997, 1997.
[5] K. Fall and K. Varadhan (Eds.). ns notes and documentation, 1999. Available from http://www-mash.cs.berkeley.edu/ns/.
[6] G. Holland and N. H. Vaidya. Analysis of TCP performance over mobile ad hoc networks. In Proceedings of ACM/IEEE MOBICOM '99, pages 219-230, Seattle, August 1999.
[7] G. Holland and N. H. Vaidya. Impact of routing and link layers on TCP performance in mobile ad hoc networks. In Proceedings of IEEE WCNC 1999, September 1999.
[8] Y.-C. Hu and D. Johnson. Caching strategies in on-demand routing protocols for wireless ad hoc networks. In Proceedings of ACM/IEEE MOBICOM 2000, pages 231-242, August 2000.
[9] P. Johansson, T. Larsson, N. Hedman, and B. Mielczarek. Routing protocols for mobile ad-hoc networks - a comparative performance analysis. In Proceedings of ACM/IEEE MOBICOM '99, pages 195-206, August 1999.
[10] D. Johnson and D. Maltz. Dynamic source routing in ad hoc wireless networks. In T. Imielinski and H. Korth, editors, Mobile Computing, chapter 5. Kluwer Academic, 1996.
[11] D. Maltz, J. Broch, J. Jetcheva, and D. Johnson. The effects of on-demand behavior in routing protocols for multi-hop wireless ad hoc networks. IEEE Journal on Selected Areas in Communications, 17(8), August 1999.
[12] V. D. Park and M. S. Corson. A highly adaptive distributed routing algorithm for mobile wireless networks. In Proceedings of IEEE INFOCOM '97, April 1997.
[13] C. E. Perkins and E. M. Royer. Ad hoc on-demand distance vector routing. In Proceedings of the 2nd IEEE Workshop on Mobile Computing Systems and Applications, pages 90-100, Feb. 1999.
[14] B. Tuch. Development of WaveLAN, an ISM band wireless LAN. AT&T Technical Journal, 72(4):27-33, July/Aug. 1993.