1.1 Introduction About the MANET
CHAPTER 1
INTRODUCTION
MANET nodes are equipped with wireless transmitters and receivers using antennas
which may be omnidirectional (broadcast), highly directional (point-to-point), possibly steerable,
or some combination thereof. At a given point in time, depending on the nodes' positions and
their transmitter and receiver coverage patterns, transmission power levels and co-channel
interference levels, wireless connectivity in the form of a random, multihop graph or "ad hoc"
network exists between the nodes. This ad hoc topology may change with time as the nodes
move or adjust their transmission and reception parameters.
Dynamic topologies: Nodes are free to move arbitrarily; thus, the network topology, which is typically multihop, may change randomly and rapidly at unpredictable times, and
may consist of both bidirectional and unidirectional links.
Energy-constrained operation: Some or all of the nodes in a MANET may rely on batteries
or other exhaustible means for their energy. For these nodes, the most important system
design criterion may be energy conservation.
Limited physical security: Mobile wireless networks are generally more prone to physical
security threats than are fixed-cable networks. The increased possibility of eavesdropping,
spoofing, and denial-of-service attacks should be carefully considered. Existing link
security techniques are often applied within wireless networks to reduce security threats.
As a benefit, the decentralized nature of network control in MANETs provides additional
robustness against the single points of failure of more centralized approaches.
The major challenge at the MAC sublayer is the hidden terminal problem.
In the case of the hidden terminal problem, a packet collision happens at the
intended receiver if there is transmission from a hidden terminal. As shown in
Figure 1.4, when node A transmits a frame to node B, node C (a hidden terminal) is
not aware of the transmission due to its distance from node A. If node C
simultaneously transmits a frame to node B, a collision occurs at node B.
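The geometry just described can be sketched in a few lines, assuming a simple circular-range radio model; the coordinates, the 250 m range, and the function names here are illustrative only:

```python
# Sketch of the hidden-terminal geometry (hypothetical coordinates and range).
import math

RANGE = 250.0  # assumed radio range in metres

def in_range(p, q):
    """True if two nodes can hear each other under a circular-range model."""
    return math.dist(p, q) <= RANGE

def hidden_terminals(nodes, sender, receiver):
    """Nodes that can interfere at the receiver but cannot sense the sender."""
    return [n for n, pos in nodes.items()
            if n not in (sender, receiver)
            and in_range(pos, nodes[receiver])      # close enough to collide at receiver
            and not in_range(pos, nodes[sender])]   # too far to carrier-sense the sender

# A transmits to B; C is out of A's range but inside B's range, as in Figure 1.4.
nodes = {"A": (0, 0), "B": (200, 0), "C": (400, 0)}
print(hidden_terminals(nodes, "A", "B"))  # C is hidden from A
```

Under this toy model, C cannot carrier-sense A's transmission, so a frame from C to B collides at B exactly as the text describes.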
Many MAC protocols have been proposed to mitigate the adverse effects of
the hidden terminal problem through collision avoidance. Most collision avoidance
schemes (such as the carrier sense multiple access with collision avoidance
(CSMA/CA) employed by the distributed coordination function (DCF) component
of the MAC sublayer of the IEEE 802.11 standard) are sender-initiated, including
layer handles the addressing problem locally. If a packet passes the network
boundary, we need another addressing system to help distinguish the source
and destination systems. The network layer adds a header to the packet
coming from the upper layer that, among other things, includes the logical
addresses of the sender and receiver.
Routing. When independent networks or links are connected to create
the packet. As we will see in later chapters, router B uses its routing table to find
that the next hop is router E. The network layer at B, therefore, sends the packet to
the network layer at E. The network layer at E, in turn, sends the packet to the
network layer at F.
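The next-hop forwarding just described (router B consulting its routing table to reach F via E) can be sketched as follows; the table entries and helper names are hypothetical:

```python
# Minimal next-hop forwarding sketch (hypothetical topology: B -> E -> F).
routing_table = {
    "B": {"F": "E"},  # at router B, packets destined for F go next to E
    "E": {"F": "F"},  # E delivers directly to F
}

def route(packet, here):
    """Follow next-hop entries until the packet reaches its destination."""
    path = [here]
    while here != packet["dst"]:
        here = routing_table[here][packet["dst"]]
        path.append(here)
    return path

# The network-layer header carries the logical source and destination addresses.
packet = {"src": "A", "dst": "F"}
print(route(packet, "B"))  # ['B', 'E', 'F']
```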
LIST OF ABBREVIATIONS:
9. ACK ACKNOWLEDGEMENT
EXISTING WORK:
become available and so in this paper we investigate the behaviour of TCP traffic in a
systems are equipped with an Atheros 802.11a/b/g PCI card with an external antenna. The
system hardware configuration is summarised in Table I. All nodes, including the AP, use a
Linux 2.6.8.1 kernel and a version of the MADWiFi [6] wireless driver modified to allow
us to adjust the 802.11e AIFS and TXOP parameters. Specific vendor features on the
wireless card, such as turbo mode, are disabled. All of the tests are performed using the
802.11b physical maximal transmission rate of 11 Mbit/s with RTS/CTS disabled. We
have increased the size of the TCP buffers to ensure that we see true AIMD behaviour
(with small TCP buffers TCP congestion control is effectively disabled as the TCP
congestion window is determined by the buffer size rather than the network capacity). We
have also carried out tests investigating the impact of the size of interface and driver
queues and obtain similar results for a range of settings. In what follows we examine the unfairness between competing
TCP upload flows and between competing upload and download flows in 802.11 WLANs.
A. Gross unfairness between the throughput achieved by competing flows is evident. The
source of this highly undesirable behaviour is rooted in the interaction between the MAC
layer contention mechanism (that enforces fair access to the wireless channel) and the TCP
transport layer flow and congestion control mechanisms (that ensure reliable transfer and
match source send rates to network capacity). At the transport layer, to achieve reliable
data transfers TCP receivers return acknowledgement (ACK) packets to the data sender
confirming safe arrival of data packets. During TCP uploads, the wireless stations queue
data packets to be sent over the wireless channel to their destination and the returning TCP
ACK packets are queued at the wireless access point (AP) to be sent back to the source
station. TCP’s operation implicitly assumes that the forward (data) and reverse (ACK) paths between a
source and destination have similar packet transmission rates. The basic 802.11 MAC
layer, however, enforces station-level fair access to the wireless channel. That is, n stations
competing for access to the wireless channel are each able to secure approximately a 1/n
share of the total available transmission opportunities. Hence, if we have n wireless stations and one AP,
each station (including the AP) is able to gain only a 1/(n + 1) share of transmission opportunities.
By allocating an equal share of packet transmissions to each wireless node, with TCP
uploads the 802.11 MAC allows n/(n + 1) of transmissions to be TCP data packets yet only 1/(n + 1) (the
AP’s share of medium access) to be TCP ACK packets. For larger numbers of stations, n,
this MAC layer action leads to substantial forward/reverse path asymmetry at the transport
layer. Asymmetry in the forward and reverse path packet transmission rate is a known
source of poor TCP performance in wired networks. Asymmetry in the forward and
reverse path packet transmission rate that leads to significant queueing and dropping of
TCP ACKs can disrupt the TCP ACK clocking mechanism, hinder congestion window
growth and induce repeated timeouts. With regard to the latter, a timeout is invoked at a
TCP sender when no progress is detected in the arrival of data packets at the destination -
this may be due to data packet loss (no data packets arrive at the destination), TCP ACK
packet loss (safe receipt of data packets is not reported back to the sender), or both. TCP
flows with only a small number of packets in flight (e.g. flows which have recently started
or which are recovering from a timeout) are much more susceptible to timeouts than flows
with large numbers of packets in flight since the loss of a small number of data or ACK
packets is then sufficient to induce a timeout. Hence, when ACK losses are frequent a
situation can easily occur where a newly started TCP flow loses the ACK packets
associated with its first few data transmissions, inducing a timeout. The ACK packets
associated with the data packets retransmitted following the timeout can also be lost,
leading to further timeouts (with associated doubling of the retransmit timer) and so
creating a persistent situation where the flow is completely starved for long periods.
B. Unfairness between TCP upload and download flows
Asymmetry also exists between competing upload and download TCP flows that can create unfairness. This is illustrated
where it can be seen that upload flows achieve significantly greater throughput than
competing download flows. Suppose we have nu upload flows and nd download flows.
Since download flows must all be transmitted via the AP, the download flows in aggregate achieve
roughly the same rate as a single TCP upload flow. That is, roughly 1/(nu + 1) of the channel
bandwidth is allocated to the download flows and nu/(nu+1) allocated to the uploads. As
the number nu of upload flows increases, gross unfairness between uploads and downloads
can result.
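The shares derived above can be checked with a few lines of arithmetic; the function name is ours and the numbers are illustrative:

```python
# Station-level MAC fairness shares (sketch; follows the 1/(nu + 1) argument above).
from fractions import Fraction

def shares(n_up):
    """Channel share of all downloads combined vs. all uploads combined
    when the AP is one of n_up + 1 contending transmitters."""
    down = Fraction(1, n_up + 1)   # the AP carries every download flow
    up = Fraction(n_up, n_up + 1)  # each upload station gets its own 1/(nu + 1)
    return down, up

for n_up in (1, 5, 10):
    d, u = shares(n_up)
    print(f"{n_up} uploads: downloads get {d}, uploads get {u}")
```

With 10 uploads the downloads collectively receive only 1/11 of the channel, which is the gross unfairness the text describes.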
Previous proposals for mitigating the unfairness between TCP flows competing over 802.11 WLANs work within the constraint of the current 802.11 MAC,
resulting in complex adaptive schemes requiring online measurements and, perhaps, per
packet processing. We instead consider how the additional flexibility present in the new
802.11e MAC might be employed to alleviate transport layer unfairness. Two issues must be addressed to resolve TCP’s
performance problems: asymmetry between the
TCP data and TCP ACK paths that disrupts the TCP congestion control mechanism, and
network level asymmetry between TCP upload and download flows. Symmetry can be
restored between the TCP data and TCP ACK paths by configuring the MAC such that
TCP ACKs effectively have unrestricted access to the wireless medium. Recall that in
802.11e the MAC parameter settings are made on a per-class basis. Hence, we collect TCP
ACKs into a single class (i.e. queue them together in a separate queue) and confine
prioritisation to this class. The rationale for this approach rests on the transport layer
behaviour. Namely, allowing TCP ACKs unrestricted access to the wireless channel does
not lead to the channel being flooded. Instead, it ensures that the volume of TCP ACKs is
regulated by the transport layer rather than the MAC layer. In this way the volume of TCP
ACKs will be matched to the volume of TCP data packets, thereby restoring
forward/reverse path symmetry at the transport layer. When the wireless hop is the
bottleneck, data packets will be queued at wireless stations for transmission and packet
drops will occur there, while TCP ACKs will pass freely with minimal queuing i.e. the
standard TCP semantics are recovered. In the case of competing TCP upload and download
flows, recall that the primary source of unfairness arises from the fact that if we have nu uploads and nd
downloads then the download flows roughly win only a 1/(nu + 1) share of the available
transmission opportunities. (In our tests, packet classification is carried out based on packet size.)
This suggests that to restore fairness we need to prioritise the download traffic. To this end, we
consider using the 802.11e MAC parameters detailed in Table II. Here, the 802.11e AIFS
and CWmin parameters are used to prioritise TCP ACKs. A small value of AIFS and
CWmin yields near strict prioritisation of TCP ACKs at the AP. A larger value of CWmin is
used at the wireless stations in order to reduce contention between competing TCP ACKs.
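The per-class treatment of TCP ACKs relies on classifying packets into queues; as noted above, classification in the tests is based on packet size. A minimal sketch, where the size threshold and the names are assumptions:

```python
# Sketch of size-based packet classification into 802.11e access classes,
# as in the tests described above (the threshold value is an assumption).
ACK_SIZE_THRESHOLD = 100  # bytes; bare TCP ACKs are small (assumed cutoff)

def classify(packet_len):
    """Map a frame to the high-priority (TCP ACK) or normal (data) queue."""
    return "ack_queue" if packet_len <= ACK_SIZE_THRESHOLD else "data_queue"

print(classify(66))    # a bare TCP ACK
print(classify(1500))  # a full-size data packet
```

Frames placed in the ACK queue would then be sent with the small AIFS/CWmin values from Table II, while data frames keep the default parameters.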
The TXOP packet bursting mechanism in 802.11e provides a straightforward and fine
grained mechanism for prioritising TCP download data packets. By transmitting nd packets
at each transmission opportunity, it can be immediately seen that we restore the nd/(nu + nd) fair share to the
TCP download traffic. Note that the number nd of distinct destination stations can be
determined by the AP from its own queue, avoiding any requirement for monitoring of the wireless medium activity itself. The effect is to
dynamically track the number of active TCP download stations and always ensure the fair share. The approach accommodates
both bursty, short-lived traffic such as HTTP and long-lived traffic such as FTP in a
straightforward and consistent manner (see later for examples). Revisiting the earlier tests,
the impact of the proposed prioritisation approach is evident: fairness is restored between the
competing TCP flows. The 802.11e MAC parameter settings used in this example (with an
11 Mbps PHY) for both TCP uploads and downloads are summarized. Although space
restrictions prevent us from including the additional results, we have measured similar
behaviour for varying numbers of upload and download stations and for situations where the number of uploads is not the same
as the number of downloads, confirming the effectiveness of the proposed solution. The
performance of the proposed approach with short-lived TCP flows is also illustrated: here we
model a client-server application where each user opens TCP upload flows (client
seconds. The average time to completion of a client-server session is plotted versus the
number of wireless stations. As we would expect the MAC load to increase linearly with
the number of users, we normalise by dividing by the number of users. It can be seen that
in 802.11b the normalised completion time remains constant until about 15 users and then
increases rapidly.
These differences include that the mean service rate is dependent on the level of channel contention and
that packet inter-service times vary stochastically due to the random nature of CSMA/CA.
We consider WLANs configured with the Access Point (AP) acting as a wireless router between the WLAN and
the Internet. TCP traffic is of particular importance in such WLANs as it currently carries
the great majority (more than 90% [8]) of network traffic. The effects of buffer-related issues in
WLANs have received little attention in the literature. Exceptions include work showing that
appropriate buffer sizing can restore TCP upload/download fairness, and work investigating TCP
performance with fixed AP buffer sizes and 802.11e. The present paper,
which extends our previous work on buffer sizing for voice traffic, is to our knowledge the
first work focussing on how to tune buffer sizes for TCP traffic in 802.11e WLANs.
Router buffers are traditionally sized with two primary objectives in mind.
Due to the nature of TCP, Internet traffic tends to be bursty. Should too many
packets arrive in a sufficiently short interval of time, then a router may lack the capacity to
process all of the packets immediately. The first job of the router buffer is to mitigate
packet losses due to bursts by accommodating these packets in a buffer until they can be
serviced.
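The burst-absorbing role of the buffer can be illustrated with a toy model; all of the numbers and names below are illustrative, not measurements:

```python
# Toy illustration of a buffer absorbing a packet burst (assumed numbers).
def drops(burst, buffer_size, service_per_tick, ticks):
    """Packets dropped when `burst` packets arrive at once and the queue
    then drains at `service_per_tick` packets per tick."""
    queued = min(burst, buffer_size)   # what the buffer can hold
    dropped = burst - queued           # overflow is lost immediately
    for _ in range(ticks):
        queued = max(0, queued - service_per_tick)
    return dropped

print(drops(burst=50, buffer_size=30, service_per_tick=5, ticks=10))  # 20 dropped
print(drops(burst=50, buffer_size=64, service_per_tick=5, ticks=10))  # 0 dropped
```

A buffer smaller than the burst loses the excess outright; a sufficiently large one holds the burst until the link can service it.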
The AIMD congestion control algorithm used by TCP reduces the number of
packets in flight by half on detecting network congestion. If router buffers are too small,
this backoff action will cause them to empty, with a corresponding reduction in link
utilisation. Two features distinguish buffer behaviour in 802.11e WLANs. Firstly, the mean service rate at a wireless station is strongly
dependent on the level of channel contention and thus on the number of active stations and
their load. Secondly, even when the network load is fixed, the packet inter-service times at
a station are not fixed but vary stochastically due to the random nature of the CSMA/CA
operation. These facts affect statistical multiplexing and buffer backlog behaviour, and thus
the choice of buffer sizes. In this paper we study the impact of these differences and find
that they lead naturally to a requirement for adaptation of buffer size in response to
changing network conditions. We propose an adaptive buffer sizing algorithm for 802.11e WLANs.
We consider scenarios where the Access Point (AP) acts as a wireless router
between the WLAN and the wired Internet. Upload flows are from wireless stations in the
WLAN to server(s) in the Internet, while downloads are from wired server(s) to stations in
the WLAN. At the MAC layer, we use IEEE 802.11g parameters as shown in Table I. The
bandwidth between the AP and server(s) is 100 Mbps. TCP Reno with SACK is used. We
note that in WLANs, TCP ACK packets without any prioritisation can be easily
queued/dropped due to the fact that the basic 802.11 DCF ensures that stations win a
roughly equal number of transmission opportunities. For example consider n stations each
carrying one TCP upload flow. The TCP ACKs are transmitted by the AP. While the data
packets for the n flows have an aggregate n/(n + 1) share of the transmission opportunities
the TCP ACKs for the n flows have only a 1/(n+1) share. Issues of this sort are known to
degrade TCP performance significantly as queuing and dropping of TCP ACKs disrupt the
TCP ACK clocking mechanism. (802.11e has been approved as an IEEE
standard and much of the 802.11e functionality is already available in WLAN devices. The
proposed algorithm, however, is also applicable to legacy 802.11 DCF, although without
TCP ACK prioritisation fairness between competing TCP flows cannot be guaranteed.)
We address this problem using 802.11e. At the AP and each station we treat
TCP ACKs as a separate traffic class, collecting them into a queue which is assigned high
priority. In contrast to wired networks, the mean service rate at a wireless station is not
fixed but instead depends upon the level of channel contention and the network load. This is
evident when the number of competing upload flows (with one upload flow per wireless station) is
varied. Similarly to wired networks, the throughput always increases monotonically with
the buffer size, reaching a maximum above a threshold buffer size. It can also be seen that
the download throughput falls as the number of competing uploads increases. The variation
in throughput can be substantial, e.g., the maximum throughput changes from 14 Mbps to
1.25 Mbps as the number of competing uploads changes from 0 to 10. While a large fixed buffer ensures high
throughput, this comes at the cost of high latency. For example, it can be seen from Fig. 1
that when a fixed buffer size of 338 packets is used (which in this example ensures maximum throughput), the
latency experienced by the download flow is about 300 ms with no uploads but rises to
around 2s with 10 contending upload stations. This occurs because TCP’s congestion
control algorithm probes for bandwidth until packet loss occurs and so download flows
will tend to fill a buffer of any size. Moreover, the mean queueing delay of a buffer of
size Q with mean service rate B is Q/B. Hence, the queueing delay at the AP depends on
the service rate, which in turn depends on the number of contending wireless stations and
their offered load. For a fixed-size buffer, a decrease in the service rate by a factor b thus increases the queueing delay by the same factor b.
Conversely, sizing the buffer to achieve lower latency across all network conditions comes
at the cost of reduced throughput, e.g., a buffer size of 30 packets ensures latency of 200-
300ms for up to 10 contending upload stations but when there are no contending uploads
the throughput of a download flow is only about 75% of the maximum achievable. In
addition to variations in the mean service rate, we also note that the random nature of
802.11 operations leads to short time-scale stochastic fluctuations in service rate. This is
fundamentally different from wired networks and directly impacts buffering behaviour.
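The Q/B relation above can be made concrete with a small sketch; the service rates below are hypothetical stand-ins for lightly and heavily contended channels, not measured values:

```python
# Mean queueing delay Q/B for a full buffer of Q packets served at B pkt/s.
def queueing_delay(q_packets, service_rate_pps):
    """Seconds for a full buffer of q_packets to drain at service_rate_pps."""
    return q_packets / service_rate_pps

# Hypothetical service rates for the 338-packet AP buffer from the text:
fast = queueing_delay(338, 1100.0)  # lightly contended channel
slow = queueing_delay(338, 169.0)   # heavily contended channel
print(f"{fast:.2f} s vs {slow:.2f} s")
```

The same fixed buffer that drains in roughly a third of a second on an idle channel takes seconds to drain under heavy contention, which is why fixed sizing cannot serve both regimes.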
Stochastic fluctuations in service rate can lead to early queue overflow and reduced link
utilisation even when the buffer is sized at the bandwidth-delay product (BDP); in the example considered the BDP is 31 packets. However, it can be seen that at this buffer size the achieved download
throughput is only about 60% of the maximum – a buffer size of at least 70 packets is
required to achieve 100% throughput. The stochastic fluctuations in service rate lead to a
requirement to increase the buffer size above the BDP in order to accommodate their impact.
This requirement could be quantified using statistical arguments, but we do not pursue this further here due to space constraints. We
note, however, that a simple but effective approach is to over-provision by a fixed number of packets.
Since no single fixed buffer size is suited to a range of network conditions, we consider the use of an adaptive
buffer sizing strategy. We note that a wireless station can readily measure its own service
rate by observation of the inter-service time, i.e., the time between packets arriving at the
head of the network interface queue (at time ts) and being successfully transmitted (at time te,
as indicated by correct receipt of the corresponding MAC ACK). Note that this
measurement can be readily implemented in real devices and incurs only a minor
computational burden.
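The inter-service-time measurement described above might be sketched as follows; the timestamps and the helper name are illustrative:

```python
# Estimating a station's service rate from inter-service times (sketch).
# ts[i]: packet i reaches the head of the interface queue;
# te[i]: its MAC ACK is received (successful transmission).
def mean_service_rate(ts, te):
    """Packets per second, from matched queue-head and completion times."""
    intervals = [e - s for s, e in zip(ts, te)]
    return len(intervals) / sum(intervals)

ts = [0.000, 0.010, 0.025]  # hypothetical timestamps (seconds)
te = [0.008, 0.022, 0.033]
print(round(mean_service_rate(ts, te), 1))  # roughly 107 pkt/s here
```

Because both timestamps are already visible to the driver, the estimate needs no channel sniffing, matching the claim that it is cheap to implement.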
This will decrease the buffer size when the service rate falls and increase the buffer
size when the service rate rises, so as to maintain an approximately constant queueing
delay of T seconds. This effectively regulates the buffer size to remain equal to the BDP as
the mean service rate varies. To account for the impact of the stochastic nature of the
service rate on buffer size requirements (see comments at the end of the previous section),
we modify this update rule to include a fixed over-provisioning amount a. Based on these
and other measurements, we have found that a value of a equal to 40 packets works well across a wide range of network conditions. We now illustrate the
effectiveness of this simple adaptive algorithm: we plot the throughput percentage
and smoothed RTT of download flows as the number of download and upload flows is
varied. It can be seen that the adaptive algorithm maintains high throughput efficiency.
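The adaptive rule described above (buffer tracking the bandwidth-delay product as the measured service rate varies, plus the fixed over-provision a = 40 packets) can be sketched as follows; the target drain time T and the function name are assumptions:

```python
# Adaptive buffer sizing sketch: keep the buffer near the measured BDP
# plus a fixed over-provision (a = 40 packets, per the text; the target
# queueing delay T is an assumed value).
A_PACKETS = 40   # over-provision for stochastic service-rate fluctuations
T_TARGET = 0.2   # assumed target queueing delay in seconds

def buffer_size(service_rate_pps):
    """Buffer that drains in roughly T_TARGET seconds at the measured rate."""
    return int(service_rate_pps * T_TARGET) + A_PACKETS

print(buffer_size(1100.0))  # lightly loaded channel -> larger buffer
print(buffer_size(160.0))   # heavy contention -> smaller buffer
```

As the measured service rate falls under contention the buffer shrinks, holding queueing delay roughly constant instead of letting it balloon as a fixed buffer would.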
# =====================================================
# NS-2 simulation script (reconstructed skeleton; portions
# truncated in the source are noted in comments)
# =====================================================
$ns_ use-newtrace

# Creating Topology
create-god $val(nn)

# Node Creation
$node_($i) random-motion 0

# For mobility
source setdest50.tcl

set f 1
while {$f} {
    if {$a == 0} {
        set f 1
    } else {
        set f 0
    }
}

# For NodeSpeed
set f 1
while {$f} {
    if {$a == 0} {
        set f 1
    } else {
        set f 0
    }
}

# Traffic setup (truncated in the source): a procedure builds a CBR
# agent and ends with "return $cbr"; the sink is declared "global sink";
# neighbour checks use "if {$d <= 250}" (the 250 m radio range) and
# "if {$nd2 != $nd1}", closing the $nbr, $btp and $tmp files.
source bcast.tcl

#-----------------------Graph-----------------------------#
# (truncated in the source: graph output guarded by "if {$rec1 != 0}")

# Finish Procedure
proc finish {} {
    global ns_ namTracefile tp1 dp1 pr1
    $ns_ flush-trace
    close $tp1
    close $dp1
    close $pr1
    close $namTracefile
    exit 0
}

$ns_ run