QUALITY OF SERVICE IMPROVEMENT IN WIRELESS SENSOR NETWORKS
A PROJECT REPORT
Submitted by
LAKSHMI.C
SUJARITHA.S
EMAYAWATHY.R
DEEPIKA.D
BACHELOR OF ENGINEERING
In
APRIL 2015
ANNA UNIVERSITY : CHENNAI 600 025
BONAFIDE CERTIFICATE
SIGNATURE SIGNATURE
PROFESSOR
ABSTRACT

We modify one of the most prominent wireless sensor network protocols, LEACH, by introducing an efficient cluster head replacement scheme and dual transmitting power levels. Our modified protocol, MODLEACH, is compared with LEACH on the basis of cluster head formation, throughput and network life. Afterwards, hard and soft thresholds are implemented on MODLEACH and the resulting protocols are compared in terms of throughput and energy utilization.
LIST OF TABLES
LIST OF FIGURES
1 INTRODUCTION
1.1 Wireless Sensor Network
1.2 WSN Applications
1.3 WSN Architecture
1.4 WSN Challenges
1.5 QoS in WSN
2 LITERATURE REVIEW
2.1 Literature survey
2.2 Motivation
2.3 Proposed Idea
2.4 Organization of Thesis
3 HIERARCHICAL AND ROUTING PROTOCOLS
4 LEACH PROTOCOL OPTIMIZATION
5 SOFTWARE IMPLEMENTATION
5.1 Analysis in Qualnet
5.2 Implementation in Matlab
5.3 Code Specifications
6 PERFORMANCE ANALYSIS
7 CONCLUSION AND FUTURE WORK
REFERENCES
The WSN is built of “nodes” – from a few to several hundreds or even thousands,
where each node is connected to one (or sometimes several) sensors. Each such
sensor network node has typically several parts: a radio transceiver with an
internal antenna or connection to an external antenna, a microcontroller, an
electronic circuit for interfacing with the sensors and an energy source, usually
a battery or an embedded form of energy harvesting. A sensor node might vary in
size from that of a shoebox down to the size of a grain of dust, although
functioning “motes” of genuine microscopic dimensions have yet to be created.
The cost of sensor nodes is similarly variable, ranging from a few to hundreds of
dollars, depending on the complexity of the individual sensor nodes. Size and cost
constraints on sensor nodes result in corresponding constraints on resources such
as energy, memory, computational speed and communications bandwidth. The
topology of the WSNs can vary from a simple star network to an advanced multi-
hop wireless mesh network. The propagation technique between the hops of the
network can be routing or flooding. In computer science and telecommunications,
wireless sensor networks are an active research area.
Area monitoring:
In area monitoring, the WSN is deployed over a region where some phenomenon
is to be monitored. For example, in military applications, sensors are used to detect enemy intrusion.
Health care monitoring:
The medical applications include body-area networks that can collect information about an individual's health, fitness, and energy expenditure.
Air pollution monitoring:
Wireless sensor networks have been deployed in several cities to monitor the concentration of dangerous gases for citizens. These deployments can take advantage of ad hoc wireless links rather than wired installations, which also makes them more mobile for taking readings in different areas.
Forest fire detection:
A network of sensor nodes can be installed in a forest to detect when a fire has started. The nodes can be equipped with sensors to measure the temperature, humidity and gases produced by fire in the trees or vegetation.
Natural disaster prevention:
Wireless sensor networks can act effectively to mitigate the consequences of natural disasters such as floods. Wireless nodes have been deployed successfully in rivers where changes in water level must be monitored in real time.
Data logging:
Wireless sensor networks are also used to collect data for monitoring environmental conditions; this can range from monitoring the temperature in a fridge to the water level in overflow tanks in nuclear power plants.
Water quality monitoring:
Monitoring the quality and level of water includes many activities, such as checking the quality of underground or surface water and safeguarding a country's water infrastructure for the benefit of both humans and animals. It may also be used to prevent the wastage of water.
The basic wireless sensor network architecture can be represented by the following figure. The topology of WSNs can vary from a simple star network to an advanced multi-hop wireless mesh network.
The major features of WSNs that challenge QoS provisioning are discussed below.
1. Resource Constraints
In WSNs, sensor nodes are usually low-cost, low-power, small devices that are
equipped with only limited data processing capability, transmission rate, battery
energy, and memory. Due to the limitation on transmission power, the available
bandwidth and the radio range of the wireless channel are often limited. In
particular, energy conservation is critically important for extending the lifetime of
the network, because it is often infeasible or undesirable to recharge or replace the
batteries attached to sensor nodes once they are deployed. In the presence of such resource constraints, the network QoS may
suffer from the unavailability of computing and/or communication resources. As a
consequence, some data transmissions will possibly experience large delays,
resulting in low level of QoS. Due to the limited memory size, data packets may be
dropped before the nodes successfully send them to the destination. Therefore, it is
of critical importance to use the available resources in WSNs in a very efficient
way.
2. Network Dynamics
Node mobility is an intrinsic nature of many applications such as, among others,
intelligent transportation, assisted living, urban warfare, planetary exploration, and
animal control. During runtime, new sensor nodes may be added; the state of a
node is possibly changed to or from sleeping mode by the employed power
management mechanism; some nodes may even die due to exhausted battery
energy. All of these factors may potentially cause the network topologies of WSNs
to change dynamically. Dealing with the inherent dynamics of WSNs requires QoS
mechanisms to work in dynamic and even unpredictable environments. In this
context, QoS adaptation becomes necessary; that is, WSNs must be adaptive and
flexible at runtime with respect to changes in available resources. For example,
when an intermediate node dies, the network should still be able to guarantee real-
time and reliable communication by exploiting appropriate protocols and
algorithms.
3. Mixed Traffic
Diverse applications may need to share the same WSN, inducing both periodic and
aperiodic data. This feature will become increasingly evident as the scale of WSNs
grows. Some sensors may be used to create the measurements of certain physical
variables in a periodic manner for the purpose of monitoring and control.
Meanwhile, some others may be deployed to detect critical events. Furthermore,
disparate sensors for different kinds of physical variables, e.g., temperature,
humidity, location, and speed, generate traffic flows with different characteristics.
This feature of WSNs necessitates the support of service differentiation in QoS
management.
Quality of Service (QoS) routing is one of the key factors that enable WSN applications to obtain controllable, differentiated service and fully efficient paths for transferring information, which can balance and extend power utilization.
It is envisioned that WSNs will become pervasive in our daily lives, for example,
in our homes, offices, and cars. They promise to revolutionize the way we
understand and manage the physical world, just as the Internet transformed how we
interact with one another. Ultimately, they will be connected to the Internet in
order to achieve global information sharing. This technical trend is driving WSNs
to provide QoS support because they have to satisfy the service requirements of
various applications. From an end user’s perspective, real-world WSN applications
have their specific requirements on the QoS of the underlying network
infrastructure. For instance, in a fire handling system, sensors need to report the occurrence of a fire to actuators in a timely and reliable fashion; the actuators equipped with water sprinklers must then react by a certain deadline so that the situation does not become uncontrollable.
Conceptually, QoS can be regarded as the capability to provide assurance that the
service requirements of applications can be satisfied. Depending on the type of
target application, QoS in WSNs can be characterized by reliability, timeliness,
robustness, availability, and security, among others. Some QoS parameters may be
used to measure the degree of satisfaction of these services, such as throughput,
network lifetime, delay, jitter, and packet delivery ratio.
Throughput is the effective amount of data transported within a certain period of time, also specified as bandwidth in some situations. In general, the higher the throughput of the network, the better the performance of the system.
Network lifetime can be defined as the time until the first node dies. The easiest indicator of this metric to capture is the maximum per-node load, where a node's load
corresponds to the number of packets sent from or routed through the given node.
Clearly, the network setup that minimizes the maximum node load is the one that
will ensure the maximum network lifetime.
Packet Delivery Ratio (PDR) is the ratio of the packets received to the packets sent. A protocol with a higher packet delivery ratio is considered to perform better.
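The three metrics above can be sketched as simple computations. The following Python fragment is purely illustrative: the function names and the traffic figures in the usage lines are our own assumptions, not part of any simulator.

```python
# Sketch of the QoS metrics defined above, computed from hypothetical
# per-node traffic counts. All names and numbers are illustrative.

def throughput(bits_delivered, interval_s):
    """Effective amount of data transported per unit time (bits/s)."""
    return bits_delivered / interval_s

def packet_delivery_ratio(packets_received, packets_sent):
    """Ratio of packets received at the sink to packets sent."""
    return packets_received / packets_sent

def max_node_load(per_node_packets):
    """Maximum per-node load, the easiest-to-capture lifetime indicator:
    the setup minimizing this value maximizes network lifetime."""
    return max(per_node_packets)

if __name__ == "__main__":
    print(throughput(1_200_000, 60))        # bits delivered over 60 s
    print(packet_delivery_ratio(950, 1000)) # 950 of 1000 packets arrived
    print(max_node_load([120, 340, 95]))    # heaviest-loaded node
```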
Thus, provision of QoS in WSNs is very challenging due to two main problems:
(1) the usually severe limitations of WSN nodes, such as the ones related to their
energy, computational and communication capabilities, in addition to the large-
scale nature of WSNs; (2) most QoS properties are interdependent, in a way that
improving one of them may degrade others. These negative facts force system
designers to try to achieve the best trade-offs between QoS metrics. In this project,
a mechanism that improves a few QoS parameters of a WSN system at the same time is proposed.
CHAPTER 2: LITERATURE REVIEW
Goran Horvat, Drago Zagar and Davor Vinko presented a simulation to show that
deployment parameters (coverage area, number of nodes and TX power)
substantially affected the QoS of a network. Thus, by choosing optimal
deployment parameters it was possible to maximize QoS in a network.
Bhupinder Kaur and Sakshi Kaushal analysed the existing routing protocols
namely AODV, AOMDV, DSDV and M-DART by using different metrics like
packet delivery ratio, throughput, end to end delay and energy efficiency to ensure
which routing protocol provided a better QoS guarantee. It was inferred that
AODV protocol outperformed all the other three protocols in terms of energy
efficiency because in AODV only a few nodes are active during the transmission
due to the reactive and multi-path nature of the protocol.
Junguo Zhang and Wenbin LI simulated the LEACH protocol using the NS-2 network simulator and identified the drawbacks of that protocol in clustering.
They deduced that if the number of cluster-heads were too small, the meaning of
layering would be lost and if the number was too large, the cluster heads would
directly communicate with the distal sink nodes. This would lead to higher
transmission power which could lead to higher energy consumption by the entire
network.
Jing Feng, Xiaoxing YU, Zijun LIU and Cuihan WANG aimed to improve the power efficiency and QoS performance of WSNs. To address the availability requirements of WSNs, the ideas of gradient and hierarchy from the DD and LEACH protocols respectively were combined. The combined protocol achieved significant improvements in power utilization and timing performance in sensor networks.
2.4 ORGANIZATION OF THESIS
The thesis is divided into seven chapters. The first chapter presents the introduction
to Wireless Sensor Networks (WSN) and QoS in WSNs. Chapter 2 is the literature
survey that was done prior to the start of the project. Chapter 3 gives an insight into
the hierarchical and routing protocols. Chapter 4 is the detailed description of our
proposed MODLEACH protocol. Chapter 5 discusses the challenges, approaches,
specifications and the platform used for implementing the project. Chapter 6
contains the comparative performance analysis of the modified protocols and the
result of our project. Chapter 7 provides the conclusion and suggests ideas for
related future work. Following these chapters are the references.
CHAPTER 3: HIERARCHICAL AND ROUTING PROTOCOLS
Routing protocols define a set of rules governing the journey of message packets from source to destination in a network. In a MANET, there are different types of routing protocols, each applied according to the network circumstances.
Proactive routing protocols are also called table-driven routing protocols. In these, every node maintains a routing table which contains information about the network topology, even when no traffic requires it. This feature, although useful for datagram traffic, incurs substantial signalling traffic and power consumption. The routing tables are updated periodically and whenever the network topology changes. Proactive protocols are not suitable for large networks, as the routing table of every node must maintain an entry for each and every other node. The number of routing tables maintained varies from protocol to protocol. Well-known proactive routing protocols include DSDV, OLSR and WRP.
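The table-driven idea can be sketched as follows. This is a minimal illustration in the style of DSDV's sequence-number rule, not an implementation of any of the protocols named above; all identifiers are hypothetical.

```python
# Minimal sketch of a DSDV-style proactive routing table: every node keeps
# an entry (next hop, hop count, sequence number) for every destination,
# refreshed by periodic and topology-triggered updates.

routing_table = {}  # destination -> (next_hop, hops, seq_no)

def update_entry(dest, next_hop, hops, seq_no):
    """Accept an update if it carries a fresher sequence number, or an
    equal sequence number with a shorter route (the DSDV preference)."""
    current = routing_table.get(dest)
    if current is None or seq_no > current[2] or \
            (seq_no == current[2] and hops < current[1]):
        routing_table[dest] = (next_hop, hops, seq_no)

update_entry("node7", "node3", 2, 10)
update_entry("node7", "node5", 3, 10)   # ignored: same seq, longer route
update_entry("node7", "node5", 3, 12)   # accepted: fresher sequence number
print(routing_table["node7"])
```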
Route maintenance: Owing to the dynamic topology of the network, route failures arise between nodes due to link breakage and similar causes, so route maintenance is performed. Reactive protocols have an acknowledgement mechanism through which route maintenance is possible.
One of the first and most popular clustering protocols proposed for WSNs was
LEACH (Low Energy Adaptive Clustering Hierarchy). It is probably the first
dynamic clustering protocol which addressed specifically the WSNs needs, using
homogeneous stationary sensor nodes randomly deployed, and it still serves as the
basis for other improved clustering protocols for WSNs. It is a hierarchical,
probabilistic, distributed, one-hop protocol, with main objectives (a) to improve the
lifetime of WSNs by trying to evenly distribute the energy consumption among all
the nodes of the network and (b) to reduce the energy consumption in the network
nodes (by performing data aggregation and thus reducing the number of
communication messages). It forms clusters based on the received signal strength
and also uses the CH nodes as routers to the BS. All the data processing such as
data fusion and aggregation are local to the cluster. LEACH forms clusters by
using a distributed algorithm, where nodes make autonomous decisions without
any centralized control. All nodes have a chance to become CHs to balance the
energy spent per round by each sensor node. Initially a node decides to be a CH
with a probability “p” and broadcasts its decision. Specifically, after its election,
each CH broadcasts an advertisement message to the other nodes and each one of
the other (non-CH) nodes determines a cluster to belong to, by choosing the CH
that can be reached using the least communication energy (based on the signal
strength of each CH message).
The role of being a CH is rotated periodically among the nodes of the cluster to
balance the load. The rotation is performed by having each node choose a random number between 0 and 1 and compare it against a threshold "T". The clusters are formed dynamically in each round and the time to perform each round is also selected randomly. Generally,
LEACH can provide a quite uniform load distribution in one-hop sensor networks.
Moreover, it provides a good balancing of energy consumption by random rotation
of CHs. Furthermore, the localized coordination scheme used in LEACH provides
better scalability for cluster formation, whereas the better load balancing enhances
the network lifetime. However, despite the generally good performance, LEACH
has also some clear drawbacks. Because the decision on CH election and rotation
is probabilistic, there is still a good chance that a node with very low energy gets
selected as a CH. Due to the same reason, it is possible that the elected CHs will be
concentrated in one part of the network (good CHs distribution cannot be
guaranteed) and some nodes will not have any CH in their range. Also, the CHs are
assumed to have a long communication range so that the data can reach the BS
directly. This is not always a realistic assumption because the CHs are usually
regular sensors and the BS is often not directly reachable to all nodes. Moreover,
LEACH forms in general one-hop intracluster and intercluster topology where
each node should transmit directly to the CHs and thereafter to the BS, thus
normally it cannot be used effectively on networks deployed in large regions.
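The election and rotation mechanism described above can be sketched in a few lines. The threshold expression is the one commonly given in the LEACH literature; the parameter values in the usage lines are illustrative.

```python
import random

# Sketch of LEACH cluster-head election: a node that has not been CH in
# the last 1/p rounds draws a random number in [0, 1) and becomes CH if
# it falls below the threshold T(n) commonly quoted for LEACH.

def leach_threshold(p, current_round, was_ch_recently):
    """T(n) = p / (1 - p * (r mod 1/p)) for eligible nodes, else 0."""
    if was_ch_recently:
        return 0.0
    return p / (1 - p * (current_round % round(1 / p)))

def elects_itself(p, current_round, was_ch_recently, rng=random.random):
    """A node's autonomous, probabilistic CH decision."""
    return rng() < leach_threshold(p, current_round, was_ch_recently)

# With p = 0.1 the threshold grows from 0.1 at round 0 to 1.0 at round 9,
# so every eligible node becomes CH roughly once per 10-round cycle.
print(leach_threshold(0.1, 0, False))
print(leach_threshold(0.1, 9, False))
```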
CHAPTER 4: LEACH PROTOCOL OPTIMIZATION
1. All nodes are assigned the same amplification energy, i.e. the amount of energy required for a transmission to occur between the sender and the receiver; thus the cluster heads are given the same energy as ordinary nodes. Since a cluster head has to perform data aggregation and transmit all the information sent by the various nodes to the base station, it requires more energy than an ordinary node. But since the same energy is given to all nodes, the cluster heads tend to lose their energy faster, and the life of the network decreases.
2. Secondly, in LEACH the cluster head is changed every round, but a node that has been a cluster head cannot become the cluster head again for the next 1/p rounds, where p is the election probability. Thus there is a chance that a node with less energy than the present cluster head becomes the new cluster head, which reduces the efficiency of the network.
Here two different transmission energy levels are assigned to the nodes. All member nodes are assigned a particular energy level and all the cluster heads are assigned a higher one, because intra-cluster communication requires lower energy, whereas communication between the cluster head and the base station requires higher energy. Thus the routing overhead decreases and the number of packets transmitted increases. This in turn increases the packet delivery ratio and the throughput, which improves the performance of the network.
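The dual-power idea can be illustrated with a first-order radio model sketch. All constants and distances below are our own illustrative assumptions, not values from the project's simulations.

```python
# Sketch of the dual-transmission-power idea: intra-cluster traffic uses a
# low amplification energy, CH-to-base-station traffic a high one.
# The first-order radio model and every constant here are illustrative.

E_ELEC = 50e-9         # electronics energy per bit (J/bit), assumed
E_AMP_INTRA = 10e-12   # low amplifier energy (J/bit/m^2), node -> CH
E_AMP_CH = 100e-12     # high amplifier energy (J/bit/m^2), CH -> BS

def tx_energy(bits, distance, e_amp):
    """Energy to transmit `bits` over `distance` metres."""
    return bits * (E_ELEC + e_amp * distance ** 2)

# A member node reaching its CH at 20 m spends far less than a CH
# reaching the base station at 200 m, so assigning the single high
# amplification energy to every node would waste energy intra-cluster.
member = tx_energy(4000, 20, E_AMP_INTRA)
head = tx_energy(4000, 200, E_AMP_CH)
print(member < head)
```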
In our project the concept of thresholding is taken from TEEN (Threshold sensitive Energy Efficient sensor Network protocol), the first protocol developed specifically for reactive networks.
Functioning of TEEN
In this scheme, at every cluster change time, the cluster-head broadcasts two attributes to its members: a hard threshold and a soft threshold.
Hard Threshold (HT): This is a threshold value for the sensed attribute. It is the absolute value of the attribute beyond which the node sensing this value must switch on its transmitter and report to its cluster head.
Soft Threshold (ST): This is a small change in the value of the sensed attribute
which triggers the node to switch on its transmitter and transmit.
The nodes sense their environment continuously. The first time a parameter from
the attribute set reaches its hard threshold value, the node switches on its
transmitter and sends the sensed data. The sensed value is stored in an internal variable in the node, called the sensed value (SV). The nodes will next transmit
data in the current cluster period, only when both the following conditions are true:
1. The current value of the sensed attribute is greater than the hard threshold.
2. The current value of the sensed attribute differs from SV by an amount equal to
or greater than the soft threshold.
Whenever a node transmits data, SV is set equal to the current value of the sensed
attribute. Thus, the hard threshold tries to reduce the number of transmissions by
allowing the nodes to transmit only when the sensed attribute is in the range of
interest. The soft threshold further reduces the number of transmissions by eliminating all the transmissions which might have otherwise occurred when there is little or no change in the sensed attribute once the hard threshold has been crossed.
Important Features
1. Time-critical data reaches the user almost instantaneously, so this scheme is eminently suited for time-critical data sensing applications.
2. Message transmission consumes much more energy than data sensing. So, even
though the nodes sense continuously, the energy consumption in this scheme can
potentially be much less than in the proactive network, because data transmission
is done less frequently.
3. The soft threshold can be varied, depending on the criticality of the sensed
attribute and the target application.
4. A smaller value of the soft threshold gives a more accurate picture of the network, at the expense of increased energy consumption. Thus, the user can
control the trade-off between energy efficiency and accuracy.
5. At every cluster change time, the attributes are broadcast afresh, so the user can change them as required.
This protocol is best suited for time critical applications such as intrusion detection
and explosion detection.
CHAPTER 5: SOFTWARE IMPLEMENTATION
In this project the Qualnet simulator is mainly used for analyzing the
considered routing protocols that can be used for the development of an improved
LEACH protocol. The few routing protocols that are being analyzed here include
Bellman Ford, DSR, AODV and DYMO since these protocols have been shown to
give promising results considering the large scale deployment area pertaining to a
WSN scenario.
After setting the required parameters, the routing protocol is varied from DSR to AODV to DYMO and the simulation is run for different scenarios containing varying numbers of sensor nodes (50, 100, …, etc.). In each case the values of three main parameters, Throughput, End-to-End Delay and PDR (Packet Delivery Ratio), are noted down.
From the results it was found that AODV and DYMO have comparatively higher throughput and lower end-to-end delay than the other two routing protocols. Hence these two routing protocols were selected for incorporation into the LEACH protocol.
After including the routing block in the hierarchical protocol, results were obtained for the parameters PDR, Throughput and End-to-End delay and compared with the corresponding parameters obtained for the conventional hierarchical protocol, and graphical plots were generated.
FIGURE 5 ‘No of Nodes Vs Throughput’ for Hierarchical and Flat protocol
The above results clearly suggest that the hierarchical protocol with incorporated AODV, and incidentally DYMO, performs better than the Flat protocol.
5.2 MATLAB IMPLEMENTATION
The results obtained in Qualnet helped in deciding which routing protocol
would be more suited for the performance optimization of the LEACH protocol.
Initially both AODV and DYMO were combined with a basic hierarchical protocol
wherein the required hierarchical node-to-cluster head-to-base station routing was
configured using the Statics.in file available within the Qualnet software’s source
file. So the primitive design was necessarily a hierarchical routing achieved via
static routing. Proceeding to the next level in the protocol development involved
incorporating both AODV and DYMO protocols into the LEACH protocol
individually. Firstly this required developing a code for LEACH that could be
implemented in Qualnet and secondly to alter the ‘back-end’ coding of AODV and
DYMO available in the Qualnet source code file to combine them individually
with the LEACH protocol.
Though the code for LEACH was written in the back-end of Qualnet and the combinations of AODV & LEACH and DYMO & LEACH were completed, every time the simulation was run the software kept crashing. To resolve this, the developers of the Qualnet software were contacted for their views on the project and a solution to the continued software crashes. Their opinion was that implementing the proposed combined protocol requires higher coding capabilities, and that tampering with the original source could lead to eventual system crashes.
Comparing the throughput plots, it can be inferred that Modified LEACH with Soft Threshold (MODLEACH-ST) shows relatively better throughput performance than the other three protocols. Though there are minor fluctuating variations between LEACH, MODLEACH and MODLEACH-HT, MODLEACH-ST shows progressively greater improvement than the others, which further supports its high performance.
The cluster head count plot analysis clearly reveals that the number of cluster heads formed per time instant is higher for MODLEACH-ST than for any other protocol. Also, its cluster head replacements last for more 'rounds', or instants, than in all the other three protocols. The latter conclusion corroborates the point made earlier that the network lifetime of MODLEACH-ST is higher than that of any of the other three protocols.
CHAPTER 7: CONCLUSION AND FUTURE WORK
7.1 CONCLUSION
In this work, we give a brief discussion of the emergence of cluster-based routing in wireless sensor networks. We also propose MODLEACH, a new variant of LEACH that can further be utilized in other clustering routing protocols for better efficiency. MODLEACH minimizes network energy consumption through efficient cluster head replacement after the very first round and dual transmitting power levels for intra-cluster and cluster-head-to-base-station communication. In MODLEACH, a cluster head is replaced only when its energy falls below a certain threshold, minimizing the routing load of the protocol. Hence, the cluster head replacement procedure involves the residual energy of the cluster head at the start of each round. Further, soft and hard thresholds are implemented on MODLEACH to compare the performance of these protocols in terms of throughput and energy utilization.
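The cluster head replacement rule summarized above can be sketched as a single residual-energy test. The threshold fraction and the energy values below are our own illustrative assumptions.

```python
# Sketch of the MODLEACH cluster-head replacement rule: after the first
# round, a CH keeps its role as long as its residual energy stays above
# a threshold; otherwise an ordinary election is held for that cluster.
# The threshold fraction is illustrative.

def keep_cluster_head(residual_energy, initial_energy, fraction=0.5):
    """Retain the current CH while its residual energy exceeds a set
    fraction of its energy at the start of the round, avoiding the
    routing overhead of an unnecessary re-election."""
    return residual_energy > fraction * initial_energy

print(keep_cluster_head(0.8, 1.0))  # CH retained, no re-election needed
print(keep_cluster_head(0.3, 1.0))  # energy too low: trigger replacement
```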