
Improving Performance of Wireless Networks Using Network Coding and Back-Pressure Scheduling

A Senior Honours Thesis by Shyam Sunder Kumar

Submitted to the Office of Honours Programs, Texas A&M University, in partial fulfilment of the requirements of the

UNIVERSITY UNDERGRADUATE RESEARCH FELLOWS

April 2009

Major: Electrical Engineering

ABSTRACT

Improving Performance of Wireless Networks Using Network Coding and Back-Pressure Scheduling

Shyam Sunder Kumar
Department of Electrical and Computer Engineering, Texas A&M University

Fellows Advisor: Dr. Alexander Sprintson, Department of Electrical and Computer Engineering
Graduate Mentor: Mr. Vinith Reddy, Department of Electrical and Computer Engineering

Since the seminal ideas of network coding were first introduced by R. Ahlswede et al. in [1], there has been a great deal of interest in the topic. There have been a wide variety of discussions on the potential uses of the idea, but few studies based on practical implementations. This led to the proposal of COPE, a network module designed by S. Katti et al. [5] to fit into existing wireless LAN stacks. The practical nature of COPE has been received with relative optimism in the research community. In our own research, we tested COPE to look for possible enhancements. We observed from simulations that COPE had particular scope for improvement in finding more coding opportunities. To achieve this, we evaluated the benefits of utilising max-weight scheduling schemes in conjunction with COPE. This thesis presents our implementation of this system and demonstrates that max-weight scheduling indeed helps COPE find more coding opportunities.

Table of Contents

LIST OF FIGURES

CHAPTER 1  INTRODUCTION
  1.1  Current Wireless Architecture and Innovations
  1.2  Motivation
  1.3  Contribution

CHAPTER 2  BACKGROUND
  2.1  Network Coding
       2.1.1  On the Practicality of Network Coding
  2.2  OPNET
       2.2.1  The Layers in the Node Model
  2.3  COPE Architecture
       2.3.1  What COPE Achieves
  2.4  Back-Pressure

CHAPTER 3  SYSTEM IMPLEMENTATION IN OPNET
  3.1  Implementation of COPE in OPNET
       3.1.1  Coding Opportunities
       3.1.2  Shortcomings of COPE in OPNET
  3.2  The Back-Pressure System Embedded in COPE
       3.2.1  Data Structures
       3.2.2  Packet Headers
       3.2.3  Coding Combinations

CHAPTER 4  SIMULATIONS AND ANALYSES
  4.1  Discovering the Simulation Environment
  4.2  Cross
       4.2.1  Analysing the Results
  4.3  Chain
       4.3.1  Run-1
       4.3.2  Analysing Run-1
       4.3.3  Run-2
       4.3.4  Analysing Run-2

CHAPTER 5  CONCLUSIONS AND FUTURE WORK

APPENDIX A  The Workings of XOR
APPENDIX B  Packet Combination and Decoding
APPENDIX C  On Opportunistic Listening

REFERENCES

List of Figures

2.1  Comparison of contemporary forwarding techniques versus network coding
2.2  Comparison of Node Models with and without COPE
2.3  Position of COPE and Header Details
2.4  Discussion on Backpressure
3.1  Backpressure: Contents of the Destination Table
3.2  Backpressure: Contents of the Peer Table
4.1  Cross Topology
4.2  Cross - BP Enabled Coding Opportunities (Node Set 0-1-2)
4.3  Cross - BP Disabled Coding Opportunities (Node Set 0-1-2)
4.4  Cross - BP Enabled Coding Opportunities (Node Set 3-1-4)
4.5  Cross - BP Disabled Coding Opportunities (Node Set 3-1-4)
4.6  Chain Topology
4.7  Chain - BP Enabled Coding Opportunities (Node Set 0-1-2)
4.8  Chain - BP Disabled Coding Opportunities (Node Set 0-1-2)
4.9  Chain - BP Enabled Coding Opportunities (Node Set 4-0-1)
4.10 Chain - BP Disabled Coding Opportunities (Node Set 4-0-1)
4.11 Chain - BP Enabled Coding Opportunities (Node Set 0-1-2)
4.12 Chain - BP Disabled Coding Opportunities (Node Set 0-1-2)
4.13 Chain - BP Enabled Coding Opportunities (Node Set 0-1-2)
A.1  The Truth Table for XOR
C.1  Opportunistic Listening

CHAPTER 1 Introduction:
Wireless communication plays an important role in modern society. Individuals, businesses, and government agencies depend on wireless networks for reliable communication, including voice and data transmission. Due to the low cost of wireless infrastructure, wireless networks have become very popular in the developing world, with a special interest in rural area development. Traditionally, wireless technology has been employed to bridge the gap between mobile users and established communication infrastructures. Products based on the IEEE 802.11 standard connect users to local area networks and to the Internet, while GSM and CDMA systems link mobile users to the switched voice networks.

1.1 Current Wireless Architecture and Innovations

The 802.11 standard was developed using many wired network principles as its basis; the main focus of the standard was to utilise air as a transmission medium. However, the standard fails to exploit some basic features of the wireless medium that have been demonstrated to be useful [1]. One such feature is the broadcast nature of wireless networks; contemporary research suggests that wireless networks can be improved on this front [1,5,7]. Another important thrust is to modify current scheduling techniques to better suit a wireless medium. The problem with current scheduling techniques is that they do not effectively handle the interference that occurs when two nodes transmit simultaneously [13], which results in loss or corruption of information.

Over the past few years, there has been some ground-breaking research addressing these issues. One example is the advent of the theoretical concept of network coding. Network coding focuses on combining packets together and cleverly routing them to satisfy multiple clients [1]. Although network coding is a promising theoretical idea, there have been few attempts to take the concept into practice. One practical implementation is a new layer called COPE [5]. Using network coding as its basis, COPE provides opportunities to increase network throughput. COPE is a promising first step with potential to build on.

1.2 Motivation

To study COPE, we implemented a simulation environment in OPNET. Our tests suggested that COPE fails to interact smoothly with certain aspects of current wireless transmission scheduling schemes. This thesis presents our work in improving this predicament. To help COPE interact more fruitfully with the current set-up, we make use of the back-pressure / max-weight scheduling scheme. The technical reasons for pursuing this scheme are explained in later sections (sections 2.4 and 3.2). However, it is advantageous to take a quick look at it from an overall system point of view. Constrained by current scheduling techniques, COPE is unable to meet its goals because it cannot find multiple packets that need to be sent to multiple destinations. This is a result of the fact that, currently, information is sent out as and when an opportunity is found. For COPE to be successful, packets destined to different places need to be collected at one point and then combined together. An important note is that this should be achieved without causing congestion. Back-pressure scheduling is an effective scheduling scheme that does just this: it intrinsically holds packets back from being transmitted until there is sufficient load at the output queues. We present a detailed reasoning in section 3.2.

1.3 Contributions

This thesis shows that embedding back-pressure scheduling in COPE enhances the working of COPE. Simulation studies are presented to show that COPE finds more opportunities to combine packets. The implications of this are, at least theoretically, significant. Combining packets in a transmission increases the average number of packets sent per transmission. This results in fewer transmissions for the same amount of data, which in turn results in faster networks. Furthermore, fewer transmissions also reduce the power consumed by the antenna.

CHAPTER 2 Background
2.1 Network Coding:
Network coding was first introduced in the paper titled Network Information Flow [1] by Ahlswede et al. Since then, the idea has been of interest to various research groups working on topics ranging from wireless networks [3] to improving routability in VLSI circuits [2]. The innovative idea of network coding is to combine data at intermediate nodes in a flow such that the receiving nodes can decode this information. The following example briefly explains the relevant features of network coding (see figure 2.1). In this example, Alice and Bob are two wireless network users who want to exchange packets. Alice has packet A and wants packet B; Bob has packet B and wants packet A. Both Alice and Bob can hear the router and the router can hear them. However, packets from the two nodes need to pass through the router to reach their respective destinations. Contemporary transmission techniques require Alice and Bob to send these packets to the router serially, and the router then forwards the packets to Bob and Alice in two further transmissions (as illustrated in figure 2.1.a). Network coding demonstrates that the router could instead combine (using XOR, represented as ⊕) these two packets and transmit the combined packet. This results in the two packets A and B being sent out in one transmission. This capability alone improves the throughput (average rate of successful packet delivery) of wireless networks.

[Figure 2.1.a: Traditional Method — Alice and Bob each send their packet to the router, and the router forwards each packet separately. Figure 2.1.b: Network Coding — the router broadcasts A ⊕ B to both Alice and Bob.]

Figure 2.1 Comparison of contemporary forwarding techniques versus network coding.

There are some assumptions made in this example. Firstly, both Alice and Bob are assumed to keep copies of A and B in their packet pools (even after they have transmitted them), which allows them to recover packets B and A, respectively, from A ⊕ B. This is a reasonable assumption to make, as Alice and Bob have just sent out these packets. Appendices A and B discuss the XOR operator and illustrate how XOR enables combining and decoding of packets. Secondly, we assume that both Alice and Bob can hear transmissions from the router simultaneously. This feature, also called broadcasting, is intrinsically available in our target networks, wireless networks, so this assumption is valid.
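As a concrete illustration of the exchange described above, the following minimal Python sketch simulates the relay exchange and counts transmissions with and without coding. It is not part of the original COPE code; the packet contents and helper names are hypothetical.

```python
# Minimal sketch of the Alice-Bob relay example (hypothetical values and names).
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

packet_a = b"\x0b"          # Alice's packet A
packet_b = b"\x04"          # Bob's packet B

# Traditional forwarding: 2 uplink transmissions + 2 separate downlink forwards.
traditional_transmissions = 4

# Network coding: 2 uplink transmissions + 1 broadcast of A xor B.
coded = xor_bytes(packet_a, packet_b)
network_coding_transmissions = 3

# Each endpoint decodes using the copy it kept in its packet pool.
assert xor_bytes(coded, packet_a) == packet_b   # Alice recovers B
assert xor_bytes(coded, packet_b) == packet_a   # Bob recovers A
print(traditional_transmissions, network_coding_transmissions)
```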

2.1.1 On the Practicality of Network Coding:

As discussed in the previous section, network coding promises significant throughput improvements over contemporary wireless communications. The idea of network coding is very simple: packets required by more than one client are combined together and distributed to satisfy multiple clients. However, a practical implementation of this concept raises a variety of issues [6,8]. Firstly, in order to ensure the successful decoding of a coded packet, transmitting nodes need comprehensive information about the intended destinations. This problem can be dealt with by using packet pools that store information about the packet repositories of other nodes in the network, although the pool size must be limited to respect the memory available. Given this information, there are many possible coding schemes that can be employed, which leads to the issue of how a transmitting node decides the best possible combination of coded packets [6]. Maximising the amount of information sent out (a coded packet containing the most packets that can be successfully decoded by its recipients) requires the transmitting node to run through all possible coding schemes, a process that takes exponential time [7]. Many heuristic algorithms have been suggested to compute a good coded-packet combination [7,9]. Other issues include compatibility with current communications systems. Later sections (section 2.3) discuss a novel experimental network module called COPE that was designed explicitly to apply network coding ideas to contemporary wireless systems. The next section introduces the simulator used in our experiments.
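To make the exponential search concrete, the sketch below enumerates every subset of the output queue and keeps the largest combination that every intended recipient can decode. It is an illustration only; the set representation and function names are assumptions, and it is not the heuristic used in COPE or in [7,9].

```python
from itertools import combinations

def recipients_can_decode(subset, next_hop, pools):
    """COPE-style rule: the next hop of each packet must already hold every
    other packet in the XOR combination, so it can cancel them out."""
    for p in subset:
        hop = next_hop[p]
        if any(q not in pools[hop] for q in subset if q != p):
            return False
    return True

def best_combination(queue, next_hop, pools):
    """Brute force: try every subset of the output queue, largest first.
    Exponential in the queue length -- the point made in the text."""
    for size in range(len(queue), 1, -1):
        for subset in combinations(queue, size):
            if recipients_can_decode(subset, next_hop, pools):
                return subset
    return None  # no coding opportunity; send packets one at a time

# Hypothetical example: a relay holds p1 (for node A) and p2 (for node B);
# A overheard p2 and B overheard p1, so p1 xor p2 is decodable by both.
queue = ["p1", "p2"]
next_hop = {"p1": "A", "p2": "B"}
pools = {"A": {"p2"}, "B": {"p1"}}
print(best_combination(queue, next_hop, pools))   # ('p1', 'p2')
```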

2.2 OPNET:

The choice of simulator for running experiments and testing ideas can be a tricky issue; a simulator needs a set of features such as efficiency and ease of use. The following qualities of OPNET stand out. OPNET is a powerful commercial network simulation environment created by OPNET Technologies, and is used by commercial enterprises such as Comcast Cable, Deutsche Telekom and others [11]. OPNET has node models for many network devices such as servers, routers, workstations and other components manufactured by the likes of Cisco, 3M and others. It has facilities to model transmitter/receiver models for antennae, satellite communication models for inter-city global communications, adversary models for wireless channel security analysis, and many more network functionalities. OPNET's feature-rich simulator also has an extremely versatile kernel platform which enables users to manipulate node models to test very specific cases. Users can also define custom flow models, giving very comprehensive control over simulation studies. The feature of interest in our project is that standard network node models implemented by OPNET may be modified to implement novel, untested and theoretical node models, which can then be tested for compliance, operability and potential benefits when added onto current standards. We use the current wireless-workstation model based on the 802.11 standard to embed COPE. However, modifying the existing 802.11 implementation requires the user to understand the workings of the layers immediately attached to COPE (see figure 2.2 for the structure of the node models in OPNET). In our case, these layers are the MAC and ARP layers.

Figure 2.2 Comparison of Node Models with and without COPE (figures 2.2.a and 2.2.b). The model in 2.2.a represents the Wireless_Lan_Workstation_Adv model available as a standard OPNET node model. Model 2.2.b represents the same model with the COPE layer embedded. Observe that the names of the ARP and MAC processes in the original model have been changed; this was done to enable the ARP and MAC layers to recognise the new COPE layer.

2.2.1 The Layers in the Node Model:

Application: This layer controls the generation of data; it is where data begins its journey and where information finally ends up. The application level is usually governed by protocols such as HTTP, SMTP and others, and in OPNET this layer simulates those controls.
TCP: OPNET uses TCP for wireless, with the option to use one of several TCP flavours such as TCP Reno and TCP Vegas.
IP: The IP layer in OPNET implements the wireless standard for IPv4.

The remaining layers have been modified specifically to house the COPE and Back-Pressure modules. Their implementations are discussed comprehensively in section 3.

2.3 COPE Architecture:

COPE was designed as a practical incarnation of the until-then largely theoretical network coding idea. COPE was proposed by Sachin Katti et al. in their paper XORs in The Air: Practical Wireless Network Coding [5]. In this paper, the authors describe this new layer for the 802.11 wireless standard and discuss the results of their experimental implementation of COPE in MIT's roof-top network. In this thesis we include a brief discussion of the various features that COPE offers, the intricacies of the model relevant to the goals of the project, and a summary of their results. This discussion is relevant because our aim was to improve the performance of our own simulator for COPE. As mentioned earlier, we achieved this by using ideas from the max-weight scheduling process discussed by Shakkottai and Srikant in [10].

2.3.1 What COPE Achieves:

As a practical set-up, COPE primarily aims to incorporate network coding within the current 802.11 wireless standard. To achieve this, it exploits the broadcast nature of wireless networks. As a result of adding broadcasting to the toolbox, COPE brings in opportunistic listening and opportunistic coding of packets as well (these concepts are explained in Appendices B and C). Furthermore, to simplify the problem of determining the best coding combination, COPE requires coded packets to be decoded after every hop. This means that every node using COPE only needs to store information about packets available at nodes in its clique set, that is, peers that can hear the source node's transmissions. It is important to note that this single-hop coding requires packet decoding to take place at every node in the clique set. In certain scenarios a neighbour is not the intended receiver; in this case it discards the packet only if it is unable to decode it, but if it can extract any information, it uses this new information to update its packet pool. Additionally, implementing a practical set-up for a new theoretical idea in a pre-existing architecture requires many changes or additions to the packet structure. To this end, COPE was implemented as a new layer between the IP and MAC layers of 802.11, with its own packet header (see figure 2.3 for the packet format of this module). In this header, COPE stores information in four blocks. The first block holds information about the total number of packets contained in the XORed packet, the Packet_IDs of each of these packets and their respective next hops. The second block plays a crucial role in the working of COPE: it is used to send reception reports of overheard packets that exist in the packet pool of the source node. This information is added to the packet pool of the receiving node and is also used during the coding-combination process to determine which packets should be coded together.

COPE Packet Format (figure 2.3):

Block 1 - Packets available in the XORed packet: ENCODED_NUM, then a PKT_ID and NEXT_HOP entry for each packet in the combination.
Block 2 - Reception Reports (RR) of overheard packets: the number of RRs, then SRC_IP, LAST_PKT and a BIT-MAP for each reported source.
Block 3 - Acknowledgements: the number of acknowledgements and the packet sequence number, then NEIGHBOR, LAST_ACK and an ACK-MAP for each neighbour.
Block 4 - PACKET INFORMATION: the (coded or uncoded) payload.

Position of COPE in the stack: Higher Layers (Application level et al.) - TCP / UDP - IP - COPE - MAC - Physical Layers (Transmitter/Receiver).

Figure 2.3 Position of COPE and Header Details

The third block contains acknowledgements for packets received from the different intended neighbours. These acknowledgements are asynchronous in nature and are stored together in an ACK pool; they are then compiled together and passed up to TCP. ACKs are not sent directly to TCP because TCP would interpret non-sequential ACKs as network congestion. The fourth and final block contains the payload itself, whether in coded or uncoded form.
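The four header blocks described above can be expressed as the following Python data structures. This is an illustration of the header layout only; the field and class names are our own and do not correspond to the actual COPE or OPNET source.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CodedPacketEntry:          # block 1: one entry per packet in the XOR
    pkt_id: int
    next_hop: str                # address of the intended next hop

@dataclass
class ReceptionReport:           # block 2: overheard packets from one source
    src_ip: str
    last_pkt: int                # ID of the last packet heard from src_ip
    bit_map: int                 # bitmap of packets heard before last_pkt

@dataclass
class AckEntry:                  # block 3: asynchronous ACKs per neighbour
    neighbor: str
    last_ack: int
    ack_map: int

@dataclass
class CopeHeader:
    encoded: List[CodedPacketEntry] = field(default_factory=list)
    reports: List[ReceptionReport] = field(default_factory=list)
    acks: List[AckEntry] = field(default_factory=list)
    payload: bytes = b""         # block 4: coded or uncoded data

    @property
    def encoded_num(self) -> int:
        return len(self.encoded)
```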

2.4 Backpressure:

Wireless networks, unlike their wire-line counterparts, have always faced an issue in scheduling links: when one node is transmitting, another node may not be able to transmit because their signals would interfere. This causes many collisions in wireless networks. The need for better scheduling techniques to address this issue was first pointed out by Tassiulas et al. in [13], and the max-weight (back-pressure) scheduling algorithm is shown to be an optimal scheduling technique [13, 10]. We now demonstrate how Back Pressure (BP) controls transmission scheduling using a simple example. Consider three nodes, node1, node2 and node3, and two flows f1 and f2, where f1 flows from node1 to node3 and f2 flows from node3 to node1. Node1 and node3 are not connected by a wireless link and must relay information through node2. This results in the following scenario (figure 2.4).

Figure 2.4 Discussion on Backpressure. This figure shows the queues in the different nodes that are required for backpressure calculations. The two flows in this scenario are f1 from node1 to node3 and f2 from node3 to node1. Note that node2 simply acts as a relay for the two flows and is not a source itself.

Now, node1 will have an output queue with packets destined to node3 and, conversely, node3 has packets destined to node1 in its output queue. Observe that node2 has packets destined to both node1 and node3 in its output queue. Given the queue length information, Back Pressure (a.k.a. max-weight) is calculated as:
Back Pressure: BP_s^d = Q_s^d - Q_p^d

where d is the destination, s is the source node, p is the next hop for that packet, and Q_b^a is the queue length at node b of packets destined to node a.

Observe that in a larger network, each source node will have multiple BP values, one for each destination. The Back Pressure value of a queue reflects the backlog of packets the node needs to transmit: the higher the Back Pressure value for a queue, the more information needs to be sent. Using this fact, nodes with higher Back Pressure values in a clique can be given a smaller contention window at the MAC level; a smaller contention window gives the node a higher chance of capturing the medium and hence reduces the load in its queues. Conversely, a node with relatively empty queues will have lower BP values, will be assigned a larger contention window, and is therefore less likely to capture the medium. Thus, BP intrinsically controls the scheduling of nodes in a wireless network and acts as a distributed congestion control scheme. The mapping from backpressure to contention window size proposed in [14] is given by the following transformation:

CW_avg = (CW_max - CW_min) * (BP_max - BP) / (BP_max - BP_min) + CW_min

where CW_min and CW_max are user-defined lower and upper bounds on the number of contention window slots, BP_max and BP_min are the maximum and minimum backpressure values in the clique of the transmitting node, and BP is the backpressure of the transmitting node. Observe that the resulting contention window is inversely related to the backpressure value of the node. This transformation is used in our backpressure-embedded COPE module to control how packets are transmitted.
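As a concrete illustration of the two formulas above, the sketch below computes per-destination backpressure values at a relay and maps the relay's largest BP to a contention window size. The queue numbers are hypothetical and the code is not taken from the OPNET implementation.

```python
def backpressure(q_src: int, q_next_hop: int) -> int:
    """BP_s^d = Q_s^d - Q_p^d: local backlog minus the next hop's backlog."""
    return q_src - q_next_hop

def contention_window(bp, bp_min, bp_max, cw_min=31, cw_max=255):
    """Transformation from [14]: higher backpressure -> smaller contention window."""
    if bp_max == bp_min:                       # avoid division by zero in a quiet clique
        return cw_max
    return (cw_max - cw_min) * (bp_max - bp) / (bp_max - bp_min) + cw_min

# Relay node2 from the example: queues of packets destined to node1 and node3,
# compared against the (empty) queues at the destinations themselves.
bp_to_node1 = backpressure(q_src=8, q_next_hop=0)    # 8
bp_to_node3 = backpressure(q_src=3, q_next_hop=0)    # 3
node2_bp = max(bp_to_node1, bp_to_node3)

# Suppose the clique-wide minimum and maximum BP values are 1 and 8.
print(contention_window(node2_bp, bp_min=1, bp_max=8))   # -> 31.0 (smallest window)
```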

CHAPTER 3 System Implementation in OPNET


3.1 Implementation of COPE in OPNET:
COPE was implemented in OPNET as a research project by Vinith Reddy in the summer of 2008. This was done in order to have a more accessible simulation environment in which to test new ideas or improvements to the COPE module originally presented in [5]. With this aim, COPE was built with particular attention to maintaining the principles and design of the original COPE module. However, a few changes were made for the sake of convenience; the change that affects the functioning of the back-pressure addition is described here. While selecting packets from the virtual output queue, COPE maintains a probability for packets available in the packet pool (recall that the packet pool stores information about overheard packets). This probability enables COPE to estimate the ability of the receiving nodes to successfully decode the coded packets. In [5], it is proposed to calculate the overall coded-packet probability as the product of the individual packet probabilities. The final coded packet is then the combination of the packets with the highest probabilities, as long as this coded packet has a probability of successful decoding above a threshold value. However, the current implementation of COPE in OPNET does not dynamically update these probabilities based on any factor. COPE instead allows for a possible future addition of this idea but, for the present, uses a constant probability of 1. This implies that any packet in the packet pool is always considered for coding.

3.1.1 Coding Opportunities:

The term coding opportunities is the metric we use to measure the success of our experiments. Coding opportunities is the number of chances the COPE layer in a particular node gets to combine packets. As discussed above, if the output queue does not have packets that can satisfy multiple recipients, or if the packet pools indicate that successful decoding is not probable for any coding combination, then the COPE layer acts like a normal wireless node: it abandons the opportunistic coding philosophy and sends packets to one node at a time. A high value of coding opportunities indicates that COPE actively finds ways to combine packets so as to satisfy multiple peers. This higher value in turn reduces the overall number of transmissions required to deliver the same number of packets, which indirectly implies a rise in the throughput of the network.

3.1.2 Shortcomings of COPE in OPNET and Aim of Experiments:

This implementation of COPE in OPNET was simulated under constant bit-rate conditions. A constant bit-rate ensures a constant availability of packets in the output queues, which should in theory provide nodes with opportunities to combine packets. However, simulations (graphs shown in the simulations section) have shown that this COPE implementation provides too few coding opportunities to be worth the CPU processing required to code and decode packets. The reason behind this failure, as introduced earlier, is the fact that current 802.11 scheduling schemes attempt to flush packets out of the queues as soon as a chance arrives. This results in relay nodes lacking packets destined to multiple recipients in their queues. This means we need an overhaul of the scheduling scheme such that packets are controlled so as to benefit COPE. The following section discusses the max-weight scheduling scheme, a potential fix to this problem.
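The sketch below illustrates how a coding-opportunity counter of the kind used as our metric might be maintained at a relay. It only checks for packets with distinct next hops and ignores the decoding-probability check discussed above; the queue representation and names are our own and are not taken from the OPNET model.

```python
# Hypothetical coding-opportunity counter at a relay node.
coding_opportunities = 0

def try_to_code(output_queue, next_hop):
    """Return a set of packets with distinct next hops that could be XORed,
    or None if the node must fall back to plain forwarding."""
    global coding_opportunities
    by_hop = {}
    for pkt in output_queue:
        by_hop.setdefault(next_hop[pkt], pkt)      # keep one packet per distinct next hop
    if len(by_hop) >= 2:                           # packets for at least two peers
        coding_opportunities += 1
        return list(by_hop.values())
    return None

queue = ["p1", "p2", "p3"]
hops = {"p1": "node0", "p2": "node2", "p3": "node0"}
print(try_to_code(queue, hops), coding_opportunities)   # ['p1', 'p2'] 1
```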

3.2 The Back-Pressure System Embedded in COPE:

This section focuses on the details of the back-pressure implementation in conjunction with COPE. We implemented the back-pressure system in OPNET using principles from a previous implementation in NS2 by Anthony Halley [14]. We use this implementation because it can be added onto COPE easily; in particular, it enables us to put both COPE and backpressure in the same layer rather than having a separate layer for back pressure. This avoids many of the inter-layer communication issues we would otherwise face.

3.2.1 Data Structures:

There are two major data structures that we maintain to store information, which is ultimately used to calculate specific node-destination backpressure values. Firstly, we have a structure to store information about a particular destination. This structure contains the IP address of the destination, a queue containing the Packet IDs of each packet destined to it, and the BP value associated with this source-destination combination. Individual objects of this data structure are stored in a table (figure 3.1).

[Figure 3.1 fields: Destn. IP | Qlength | Packet_ID Q | BP]

Figure 3.1 Backpressure: Contents of the Destination Table

[Figure 3.2 fields: MAC addr. | Destn ID-Qlength Q | Max BP]

Figure 3.2 Backpressure: Contents of the Peer Table

The second quantity needed for calculating back-pressure values is the length of the queue to the same destination at the next hop. For this purpose, we create another data structure containing the peer's MAC address, a queue holding the various destination IDs with their queue lengths, and the maximum back-pressure value over all node-destination combinations at the next hop. This maximum back-pressure value is used to compute the minimum and maximum clique back-pressure values, which are ultimately required for setting contention window sizes in the MAC layer. These objects are collated in a peer table (figure 3.2).

3.2.2 Packet Headers:

When packets are transmitted, an additional block of header information is added to the packet. This header information is extracted by the receiving node to compile its peer table. Elements in this header block therefore consist of the destination IDs, the corresponding queue length information and the maximum BP of the source node. This modifies the overall COPE packet as shown in figure 3.4.

Figure 3.4 Backpressure in COPE and the new COPE header format
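The destination table, the peer table and the extra backpressure header block described in sections 3.2.1 and 3.2.2 can be sketched as follows. This is an illustrative Python rendering only; the names and types are assumptions and do not mirror the OPNET C structures.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DestinationEntry:                 # one row of the destination table (figure 3.1)
    dest_ip: str
    packet_ids: List[int] = field(default_factory=list)   # packets queued for dest_ip
    bp: int = 0                                            # BP for this source-destination pair

    @property
    def qlength(self) -> int:
        return len(self.packet_ids)

@dataclass
class PeerEntry:                        # one row of the peer table (figure 3.2)
    mac_addr: str
    dest_qlengths: Dict[str, int] = field(default_factory=dict)  # destination ID -> queue length
    max_bp: int = 0                                              # peer's maximum backpressure

@dataclass
class BackpressureHeaderBlock:          # extra block appended to the COPE header
    dest_qlengths: Dict[str, int]       # this node's queue length per destination
    max_bp: int                         # this node's maximum BP, for clique min/max estimates

def update_bp(entry: DestinationEntry, peers: Dict[str, PeerEntry], next_hop_mac: str) -> int:
    """BP = own queue length minus the next hop's queue length for the same destination."""
    peer_q = peers[next_hop_mac].dest_qlengths.get(entry.dest_ip, 0)
    entry.bp = entry.qlength - peer_q
    return entry.bp
```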

3.2.3 Coding Combinations:

Obtaining the right coding combinations is probably the most important part of the whole exercise. If the coding combinations are not suitable, the result is poor packet decoding rates and a fall in throughput. It is therefore important for backpressure to use the right scheme. The general back-pressure scheduling scheme extends naturally to network coding: in this extension, a coded packet serving multiple recipients may be considered a hyperlink. Applying the utility maximisation framework outlined in [10], the result can be interpreted as follows: the back pressure associated with a coded packet is the sum of the back-pressure values of each individual node-destination pair that the coded packet serves. This means that, to maximise the back pressure of a given node, the packet combination should consist of packets going to all the destinations with high BP values, constrained only by the threshold probability of successful decoding. However, since the COPE module does not have an active probability update scheme, the back-pressure add-on is implemented in a modified manner. In this implementation, the packet at the head of the destination queue with the highest BP is de-queued and passed to the COPE encoding process, which uses information from the packet pools to decide the coding combination. At no point does this process check that the combination remains above the successful-decode threshold probability. However, the maximum BP for the source is still the sum of all the constituent node-destination BP values.
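The modified selection rule described above can be sketched as follows. This is a simplified illustration with hypothetical names, no probability check and a toy stand-in for the COPE encoder; it is not the OPNET code itself.

```python
from typing import Dict, List

def cope_encode(seed_dest: str, dest_queues: Dict[str, List[int]]):
    """Toy encoder: greedily take one packet from every other non-empty queue."""
    return [(d, q.pop(0)) for d, q in dest_queues.items() if d != seed_dest and q]

def select_and_code(dest_queues: Dict[str, List[int]],
                    dest_bp: Dict[str, int]) -> dict:
    """De-queue the head packet of the highest-BP destination queue and hand it
    to the COPE encoder; the coded packet's BP is the sum of the BPs it serves."""
    best_dest = max(dest_bp, key=dest_bp.get)      # highest-backpressure destination
    head_packet = dest_queues[best_dest].pop(0)

    # Stand-in for the COPE encoding process, which would consult the packet
    # pools to add packets for other destinations to the XOR combination.
    combination = cope_encode(best_dest, dest_queues)

    served = {best_dest} | {d for d, _ in combination}
    coded_bp = sum(dest_bp[d] for d in served)     # BP of the coded packet
    return {"packets": [head_packet] + [p for _, p in combination],
            "backpressure": coded_bp}

queues = {"node0": [11, 12], "node2": [21], "node4": []}
bp = {"node0": 5, "node2": 2, "node4": 0}
print(select_and_code(queues, bp))   # {'packets': [11, 21], 'backpressure': 7}
```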

CHAPTER 4 Simulations and Analyses:


4.1 Simulation Environment:

The simulation environment consists of two main components: the network topology and the flows. The flow type used for these simulations is a Constant Bit-Rate (CBR) flow, which mimics UDP behaviour. Two topologies are used. Other relevant parameters to keep track of are listed below, followed by a short sketch collecting them together.

1. Contention Window: For certain experiments, we change the contention window sizes to observe the differences in results.
2. Inter-request Time: The amount of time between requests. For all simulations, this value was set to a constant of 0.5 s.
3. Packets per Request: The number of packets sent per request was set to a constant of 1 packet per request.
4. Packet Size: The size of the packet generated per request, set at a constant of 512 bytes per packet.
5. Output Queue Size: The number of packets an output queue in the COPE layer can store. If more packets arrive, they are dropped.
6. Packet Pool Size: The number of packets that can be held in the overheard packet pool. This quantity affects the rate of successful decoding.
7. CBR: The flow model that we use. CBR, or Constant Bit-Rate, provides a steady flow of packets into the nodes, with a constant packet size, a constant number of requests per second and a constant inter-request time.
8. Coding Centre: The node where coding can take place. A coding centre needs to have at least two peer nodes.

In the following sections, we discuss the flows and the results with and without active back-pressure calculations.
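For reference, the constant parameters listed above can be collected into a single configuration object, as in the following sketch. The dictionary keys are our own naming; the per-scenario contention window ranges are those given where each topology is described.

```python
# Common simulation parameters used across the scenarios (values from the list above).
BASE_CONFIG = {
    "flow_model": "CBR",            # constant bit-rate, UDP-like traffic
    "inter_request_time_s": 0.5,    # time between packet requests
    "packets_per_request": 1,
    "packet_size_bytes": 512,
    "output_queue_size": 32,        # packets held per COPE output queue
    "packet_pool_size": 100,        # overheard packets cached per node
}

# Per-run overrides: the contention window range is the only setting we vary.
CROSS_RUN = dict(BASE_CONFIG, contention_window=(31, 255))
CHAIN_RUN_1 = dict(BASE_CONFIG, contention_window=(31, 255))
CHAIN_RUN_2 = dict(BASE_CONFIG, contention_window=(31, 1023))
```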

4.2 Cross:

This scenario consists of five stationary wireless nodes placed in the formation of a cross, as shown in figure 4.1. There is a node at the centre of the cross and four corner nodes. The corner nodes can hear transmissions only from the centre node. There are a total of four active information flows in the scenario, depicted by coloured arrows.
Output Queue Size: 32 packets
Packet Pool Size: 100 packets
Contention Window: 31..255 slots

Figure 4.1 Cross Topology. The flows are shown by the arrows. Observe that there is only one coding centre (node 1). Also note that the corner nodes can hear transmissions from the central node only.

4.2.1 Analysing the Results:

The graphs in figure 4.2 and figure 4.4 show that when the back-pressure calculations are enabled, node 1 is able to find a time-average of around 0.4 coding opportunities that node 0 and node 2 can successfully decode, and similarly around 0.3 coding opportunities that node 3 and node 4 can successfully decode. Figure 4.3 and figure 4.5, on the other hand, plot the same values for scenarios where the back-pressure calculations are disabled. The decodable coding opportunities fall dramatically and are effectively 0 for all four nodes. This shows that back pressure is successful in providing COPE with more opportunities to combine packets in a cross topology.

Figure 4.2 Cross - BP Enabled Coding Opportunities (Node Set 0-1-2). The graph shows the curves for successfully decoded packets (red and yellow) and failed decoding attempts (cyan and dark blue). These quantities are plotted as an average per time interval. The green line, on top, is 1 whenever there is a coding opportunity in the central node.

Figure 4.3 Cross - BP Disabled Coding Opportunities (Node Set 0-1-2). The graph shows the curves for successfully decoded packets (red and yellow) and failed decoding attempts (cyan and dark blue). These quantities are plotted as an average per time interval. The green line, on top, is 1 whenever there is a coding opportunity in the central node. As we can see, when backpressure calculations are not used, there are fewer coding opportunities in the central node and the rate of successful decoding in the corner nodes is similarly low.

Figure 4.4 Cross - BP Enabled Coding Opportunities (Node Set 3-1-4). The graph shows the curves for successfully decoded packets (yellow and green) and failed decoding attempts (cyan and red). These quantities are plotted as an average per time interval. The blue line, on top, is 1 whenever there is a coding opportunity in the central node. In this case, failures average higher than successes; this is most likely due to small packet-pool sizes.

Figure 4.5 Cross - BP Disabled Coding Opportunities (Node Set 3-1-4). The graph shows the curves for successfully decoded packets (red and yellow) and failed decoding attempts (cyan and dark blue). These quantities are plotted as an average per time interval. The green line, on top, is 1 whenever there is a coding opportunity in the central node. As we can see, when backpressure calculations are not used, there are fewer coding opportunities in the central node and the rate of successful decoding in the corner nodes is similarly low.

4.3 Chain:

This scenario consists of five stationary wireless nodes placed in the formation of a chain, as shown in figure 4.6. There is a node at the centre of the chain and two on either side. Nodes can only hear adjacent nodes.
Output Queue Size: 32 packets
Packet Pool Size: 100 packets

[Figure 4.6 shows the five chain nodes: Node 0, Node 1, Node 2, Node 3 and Node 4.]

Figure 4.6 Chain Topology. The flows are shown by the arrows. There are three coding centres: Node 0, Node 1 and Node 2.

In the cross topology, there was only one coding centre (node 1). This scenario is designed to test the COPE-BP pair for its robustness in cases where there is more than one coding centre next to each other. With two runs using different contention window settings, this topology is also used to test the effect of larger contention windows on coding opportunities as well as on successful decoding rates. Analysis of the two runs follows.

4.3.1 Run-1: Contention Window: 31..255

Figure 4.7 Chain - BP Enabled Coding Opportunities (Node Set 0-1-2). The graph shows the curves for successfully decoded packets (yellow and red) and failed decoding attempts (cyan and blue). These quantities are plotted as an average per time interval. The red line, on top, is 1 whenever there is a coding opportunity in node 1.

Figure 4.8 Chain - BP Disabled Coding Opportunities (Node Set 0-1-2). The graph shows the curves for successfully decoded packets (red and yellow) and failed decoding attempts (cyan and dark blue). These quantities are plotted as an average per time interval. The green line, on top, is 1 whenever there is a coding opportunity in the central node. As we can see, when backpressure calculations are not used, there are fewer coding opportunities.

4.3.2 Analysing Run-1:

In this scenario, we focus on two details: first, how much more successful decoding back pressure provides to the nodes, and second, the difference between successful and failed decoding attempts. The first set of nodes we focus on is node 0, node 1 and node 2; in this combination, the coding centre is node 1. Comparing figure 4.7 and figure 4.8, it is quickly clear that back pressure maintains its ability to provide significantly more successfully decoded coding opportunities (~0.45) than COPE by itself. However, when we look at figure 4.7 closely, we notice that the rate of failed decoding attempts (0.4) is not far below the rate of successful ones. As discussed in an earlier section, if the decoding process fails, throughput decreases because the packets must be re-transmitted in sequence rather than in combination. This is a potential drawback, as the losses seem to nearly nullify the gains. The nodes node 1, node 0 and node 4, with the coding centre at node 0, demonstrate a similar predicament (figures 4.9 and 4.10): back pressure is again more successful than COPE by itself, but once again the number of failed decoding attempts is too high. This is indeed a cause for concern. Another interesting observation is that node 2 finds no coding opportunities and does not assume the role of a coding centre; no plots are shown for it because there were no opportunities at all. From this run we learn that, although back pressure may provide an overall benefit in terms of successful coding opportunities, it is still hampered by the inability to decode. The second run examines whether stretching the contention window limits could help.

Figure 4.9 Chain - BP Enabled Coding Opportunities (Node Set 4-0-1). The graph shows the curves for successfully decoded packets (yellow and red) and failed decoding attempts (cyan and green). These quantities are plotted as an average per time interval. The blue line, on top, is 1 whenever there is a coding opportunity in the central node.

Figure 4.10 Chain - BP Disabled Coding Opportunities (Node Set 4-0-1). The graph shows the curves for successfully decoded packets (red and yellow) and failed decoding attempts (cyan and dark blue). These quantities are plotted as an average per time interval. The green line, on top, is 1 whenever there is a coding opportunity in the central node. As we can see, when backpressure calculations are not used, there are fewer coding opportunities.

4.3.3 Run-2: Contention Window: 31..1023

Figure 4.11 Chain - BP Enabled Coding Opportunities (Node Set 0-1-2). The graph shows the curves for successfully decoded packets (yellow and red) and failed decoding attempts (cyan and blue). These quantities are plotted as an average per time interval. The red line, on top, is 1 whenever there is a coding opportunity in node 1.

Figure 4.12 Chain - BP Disabled Coding Opportunities (Node Set 0-1-2). The graph shows the curves for successfully decoded packets (red and yellow) and failed decoding attempts (cyan and dark blue). These quantities are plotted as an average per time interval. Once again, when backpressure calculations are not used, there are fewer coding opportunities.

Figure 4.13 Chain - BP Enabled Coding Opportunities (Node Set 0-1-2). This graph shows the curves for coding opportunities in node 0 and node 2 (blue and cyan, the latter overlapped with blue), the successful decoding attempts (green) and the failed decoding attempts (red) in node 1. It shows that all three nodes (0, 1 and 2) found coding opportunities with a larger contention window range. This implies that in the previous run there were still collisions that prevented node 2 from finding coding opportunities.

4.3.4 Analysing Run-2:

Run-2 differs from Run-1 in just one setting: the contention window limits are set to 31..1023 slots. In Run-2 we aim to study the effect of a larger contention window setting on the ratio of successful to failed decoding. Looking at figure 4.11, we can see that successful decoding stabilises at around 0.5, while the failed-decoding average stabilises at 0.3. This shows that increasing the window not only increases the successful decoding average (by 0.05) but also decreases the failure rate (by 0.15). Additionally, we now have coding opportunities at node 2 (as shown by figure 4.13). This overall change illustrates that the contention-window limits play an important role in the success of backpressure.

CHAPTER 5 Conclusions and Future Work:

This work was aimed at testing the advantages that the max-weight scheduling scheme could bring to the COPE architecture. To test this, we embedded new functionality into the COPE layer to calculate backpressure values at every node. Once this was implemented, we ran simulations to quantify any advantages it offers, using two test topologies with constant bit-rate flows. The test results clearly show that COPE embedded with back pressure holds a lot of promise for improving the throughput of a wireless network. The success rates attained in the chain and cross topologies are far greater than what was achieved when using COPE by itself. However, there is still a lot more work that can be done to offer further benefits to COPE. Firstly, we should aim at using a probability scheme to decide on coding combinations; this alone could reduce the failure rate. Furthermore, the current BP-COPE model is inclined to perform better under constant bit-rate (CBR) settings, which are UDP-like. True TCP flows have not been tested, and this avenue of testing will be an important next step. To this end, the TCP congestion control schemes could perhaps be modified to be in sync with the new backpressure capability of the node.

Appendix-A
The Workings of XOR:
The Exclusive-OR (XOR) operator plays an important role in our work: it is the operator of choice for combining packets together. This appendix discusses the functioning of XOR and how XOR is used to combine and decode packets. XOR is a Boolean operator symbolised by ⊕ and is defined by the following truth table:

A  B  A ⊕ B
0  0    0
0  1    1
1  0    1
1  1    0

Figure A.1 The Truth Table for XOR

Appendix-B
Packet Combination and Decoding:
XOR is a bitwise operator: given two Boolean variables A and B, XOR is applied to the bits of these variables one by one. Packets in communication networks are strings of 1s and 0s. The following example discusses packet combination and its decoding:

Let there be two packets, A and B, each of size 4 bits, with A = 1011 and B = 0100.
Combined packet: P_combined = A ⊕ B = 1011 ⊕ 0100 = 1111.
Let the receiver of P_combined have packet A. Decoding P_combined to recover packet B assumes the availability of packet A:
P_combined ⊕ A = 1111 ⊕ 1011 = 0100, which is packet B.
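The worked example above can be reproduced with a few lines of Python; this sketch operates on the same 4-bit values and is purely illustrative.

```python
# Reproduce the worked example: combine two 4-bit packets with XOR and decode.
A = 0b1011
B = 0b0100

combined = A ^ B                      # P_combined = A xor B = 0b1111
recovered_B = combined ^ A            # receiver holding A recovers B
recovered_A = combined ^ B            # receiver holding B recovers A

assert recovered_B == B and recovered_A == A
print(format(combined, "04b"), format(recovered_B, "04b"))   # 1111 0100
```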

Appendix-C
On Opportunistic Listening
Opportunistic listening is a distinctive feature of wireless networks that is a consequence of the broadcast nature of wireless transmission. In a wireless network, several surrounding nodes can receive each packet transmitted by a source node [5]. Thus, broadcasting a packet to all neighbours requires the same amount of energy as transmitting a packet to an individual neighbour. The neighbours that are not the intended destination can cache these overheard packets for short periods of time (figure C.1).

Figure C.1 Opportunistic Listening. Broadcasting allows neighbouring nodes to overhear packets addressed to other nodes and to store this information temporarily. This storage serves as a cache of packets from other nodes, giving each node better knowledge of its neighbours' packet pools.

Opportunistic listening can also increase the robustness of wireless networks: the non-receiving neighbours can re-transmit a cached packet if the original source fails. Opportunistic listening is very useful in dynamic on-field sensor networks [12].
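A minimal sketch of the kind of short-lived cache described above is given below; the expiry time and structure are hypothetical and are not taken from COPE.

```python
import time

class OverheardCache:
    """Cache of overheard packets, each kept only for a short period."""
    def __init__(self, ttl_seconds: float = 2.0):
        self.ttl = ttl_seconds
        self.packets = {}            # pkt_id -> (payload, time overheard)

    def overhear(self, pkt_id: int, payload: bytes) -> None:
        self.packets[pkt_id] = (payload, time.time())

    def get(self, pkt_id: int):
        """Return the payload if it is still fresh, otherwise drop it."""
        entry = self.packets.get(pkt_id)
        if entry is None:
            return None
        payload, heard_at = entry
        if time.time() - heard_at > self.ttl:
            del self.packets[pkt_id]
            return None
        return payload

cache = OverheardCache(ttl_seconds=2.0)
cache.overhear(7, b"payload-of-packet-7")
print(cache.get(7))                  # b'payload-of-packet-7' while still fresh
```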

References:

[1] R. Ahlswede, N. Cai, S. R. Li, and R. W. Yeung. Network Information Flow. IEEE Transactions on Information Theory, 2000.
[2] K. Gulati, N. Jayakumar, S. Khatri, A. Sprintson. Network Coding for Routability Improvement in VLSI. In Proceedings of the 2006 IEEE/ACM International Conference on Computer-Aided Design, 2006.
[3] M. A. R. Chaudhry, S. Y. El Rouayheb, A. Sprintson. On the Minimum Number of Transmissions in Single-Hop Wireless Coding Networks. In IEEE Information Theory Workshop, 2007.
[4] The Network Coding Home Page. http://www.ifp.uiuc.edu/~koetter/NWC/ as of 20 April 2009.
[5] J. Crowcroft, W. Hu, D. Katabi, S. Katti, M. Medard, H. Rahul. XORs in The Air: Practical Wireless Network Coding. In ACM SIGCOMM, 2006.
[6] C. Fragouli, J.-Y. Le Boudec, J. Widmer. Network Coding: An Instant Primer. ACM SIGCOMM Computer Communication Review, 2006.
[7] M. A. R. Chaudhry and A. Sprintson. Efficient Algorithms for Index Coding. In Infocom'08 Student Workshop, 2008.
[8] M. Wang, B. Li. How Practical is Network Coding? In Proceedings of the 14th IEEE International Workshop on Quality of Service, June 2006.
[9] V. Aggarwal, M. Kim, M. Médard, U.-M. O'Reilly. IEEE Military Communications Conference, 2007.
[10] S. Shakkottai and R. Srikant. Network Optimization and Control. NOW Publications, Vol. 2, No. 3, pp. 271-379, 2007.
[11] OPNET Technologies, Inc. Clients. http://www.opnet.com/corporate/clients.html as of 19 April 2009.
[12] C. Westphal. Opportunistic Routing in Dynamic Ad Hoc Networks: The OPRAH Protocol. In IEEE International Conference on Mobile Adhoc and Sensor Systems, October 2006.
[13] L. Tassiulas, A. Ephremides. Stability Properties of Constrained Queueing Systems and Scheduling Policies for Maximum Throughput in Multihop Radio Networks. IEEE Transactions on Automatic Control, 1992.
[14] A. J. Halley. A Simple Distributed Backpressure-Based Scheduling and Congestion Control System. UIUC graduate thesis, 2006.
