
ABSTRACT

Energy constraints pose great challenges to wireless sensor networks (WSNs) with battery-powered nodes, and reducing energy consumption often introduces additional data-delivery latency. Sleep/wake scheduling is an essential consideration in sensor network applications, and an optimal sleep/wake scheduling strategy minimizes computation and communication overhead. In this paper, a new distributed scheduling approach based on an artificial neural network (ANN) is presented in order to reduce energy consumption and to achieve low latency for WSNs. The ANN is used as a classifier. This pilot study had two main aims: the first was to utilize only the wake-up signal for automatic sleep/wake stage detection; the second was to investigate which features are the most effective in detecting the sleep and wake phases in healthy and non-healthy nodes.


CHAPTER 1

INTRODUCTION

1.1 Overview of the Project

Due to recent technological advances, the manufacturing of small, low power, low cost and highly integrated sensors has become technically and economically
feasible. These sensors are generally equipped with sensing, data processing and
communication components. Such sensors can be used to measure conditions in
the environment surrounding them and then transform these measurements into
signals. The signals can be processed further to reveal properties about objects
located in the vicinity of the sensors. The sensors then send these data, usually via
a radio transmitter, to a command center (also known as a “sink” or a “base
station”) either directly or via several relaying sensors. A large number of these
sensors can be networked in many applications that require unattended operation,
hence producing a wireless sensor network (WSN). Currently, there are various
applications of WSNs, including target tracking, health care, data collection,
security surveillance and distributed computing.

A sensor cannot receive or transmit any packets when it is sleeping, i.e., in sleep state. A sensor in sleep state consumes very little energy.

A sensor can receive and transmit packets when it is awake, i.e., in wake-up
state. A sensor in wake-up state consumes much more energy compared to sleep
state.

Sensors adjust the sleeping time length and the awake time length in each period in order to save energy and meanwhile guarantee the efficient transmission of packets.
Typically, WSNs contain hundreds or thousands of sensors which have the ability to communicate with each other. The energy of each sensor is limited and its battery is usually not rechargeable, so the energy consumption of each sensor has to be minimized to prolong the lifetime of WSNs. Major sources of energy waste are idle listening, collision, overhearing and control overhead. Among these, idle listening is a dominant factor in most sensor network applications. There are several ways to prolong the lifetime of WSNs, e.g., efficient deployment of sensors, optimization of WSN coverage, and sleep/wake-up scheduling.

We present a sleep/wake schedule protocol for minimizing end-to-end delay for event driven multi-hop wireless sensor networks.
scheduling schemes, our proposed algorithm performs scheduling that is dependent
on traffic loads. Nodes adapt their sleep/wake schedule based on traffic loads in
response to three important factors, (a) the distance of the node from the sink node,
(b) the importance of the node's location from connectivity's perspective, and (c) if
the node is in the proximity where an event occurs. Using these heuristics, the
proposed scheme reduces end-to-end delay and maximizes the throughput by
minimizing the congestion at nodes having heavy traffic load. Simulations are
carried out to evaluate the performance of the proposed protocol by comparing it with the S-MAC and Anycast protocols. Simulation results demonstrate that the proposed protocol significantly reduces the end-to-end delay and improves other QoS parameters, such as average energy per packet, average delay, packet loss ratio, throughput, and coverage lifetime.

1.2 Aim of the Project

In this paper, we focus on sleep/wake-up scheduling. Sleep/wake-up scheduling, which aims to minimize idle listening time, is one of the fundamental research
problems in WSNs. Specifically, research into sleep/wake-up scheduling studies
how to adjust the ratio between sleeping time and awake time of each sensor in
each period. When a sensor is awake, it is in an idle listening state and it can
receive and transmit packets. However, if no packets are received or transmitted
during the idle listening time, the energy used during the idle listening time is
wasted. Such waste should certainly be minimized by adjusting the awake time of
sensors, which is the aim of sleep/wake-up scheduling. Recently, many
sleep/wake-up scheduling approaches have been developed. These approaches
roughly fall into three categories: 1) on-demand wake-up approaches; 2)
synchronous wake-up approaches; and 3) asynchronous wake-up approaches.

1.3 Wireless Sensor Network

Wireless sensor network (WSN) refers to a group of spatially dispersed and dedicated sensors for monitoring and recording the physical conditions of the
environment and organizing the collected data at a central location. WSNs measure
environmental conditions like temperature, sound, pollution levels, humidity, wind,
and so on.

A mobile wireless sensor network (MWSN) can simply be defined as a wireless sensor network (WSN) in which the sensor nodes are mobile. MWSNs are
a smaller, emerging field of research in contrast to their well-established
predecessor. MWSNs are much more versatile than static sensor networks as they
can be deployed in any scenario and cope with rapid topology changes. However,
many of their applications are similar, such as environment monitoring or
surveillance. Commonly, the nodes consist of a radio transceiver and a
microcontroller powered by a battery, as well as some kind of sensor for detecting
light, heat, humidity, temperature, etc.
These are similar to wireless ad hoc networks in the sense that they rely on
wireless connectivity and spontaneous formation of networks so that sensor data
can be transported wirelessly. Sometimes they are called dust networks, referring
to minute sensors as small as dust. Smart Dust is a UC Berkeley project sponsored by DARPA, and Dust Networks Inc. is one of the early companies that produced
wireless sensor network products. WSNs are spatially distributed autonomous
sensors to monitor physical or environmental conditions, such as temperature,
sound, pressure, etc. and to cooperatively pass their data through the network to a
main location. The more modern networks are bi-directional, also enabling control
of sensor activity. The development of wireless sensor networks was motivated by
military applications such as battlefield surveillance; today such networks are used
in many industrial and consumer applications, such as industrial process
monitoring and control, machine health monitoring, and so on.

The WSN is built of "nodes" – from a few to several hundreds or even thousands, where each node is connected to one (or sometimes several) sensors.
Each such sensor network node has typically several parts: a radio transceiver with
an internal antenna or connection to an external antenna, a microcontroller, an
electronic circuit for interfacing with the sensors and an energy source, usually a
battery or an embedded form of energy harvesting. A sensor node might vary in
size from that of a shoebox down to the size of a grain of dust, although
functioning "motes" of genuine microscopic dimensions have yet to be created.
The cost of sensor nodes is similarly variable, ranging from a few to hundreds of
dollars, depending on the complexity of the individual sensor nodes. Size and cost
constraints on sensor nodes result in corresponding constraints on resources such
as energy, memory, computational speed and communications bandwidth. The
topology of the WSNs can vary from a simple star network to an advanced multi-
hop wireless mesh network. The propagation technique between the hops of the
network can be routing or flooding.

In computer science and telecommunications, wireless sensor networks are an active research area with numerous workshops and conferences arranged each year, for example IPSN, SenSys, and EWSN.

Fig 1.1 Multi-hop wireless sensor network architecture

Area monitoring is a common application of WSNs. In area monitoring, the WSN is deployed over a region where some phenomenon is to be monitored. A military example is the use of sensors to detect enemy intrusion; a civilian example is the geo-fencing of gas or oil pipelines.

Broadly speaking, there are two sets of challenges in MWSNs; hardware and
environment. The main hardware constraints are limited battery power and low
cost requirements. The limited power means that it's important for the nodes to be
energy efficient. Price limitations often demand low complexity algorithms for
simpler microcontrollers and use of only a simplex radio. The major environmental
factors are the shared medium and varying topology. The shared medium dictates
that channel access must be regulated in some way. This is often done using a
medium access control (MAC) scheme, such as carrier sense multiple access
(CSMA), frequency division multiple access (FDMA) or code division multiple
access (CDMA). The varying topology of the network comes from the mobility of
nodes, which means that multihop paths from the sensors to the sink are not stable.

Routing

Since there is no fixed topology in these networks, one of the greatest challenges is routing data from its source to the destination. Generally these
routing protocols draw inspiration from two fields; WSNs and mobile ad hoc
networks (MANETs). WSN routing protocols provide the required functionality
but cannot handle the high frequency of topology changes. In contrast, MANET routing protocols can deal with mobility in the network, but they are designed for two-way communication, which in sensor networks is often not required.

Protocols designed specifically for MWSNs are almost always multihop and
sometimes adaptations of existing protocols. For example, Angle-based Dynamic
Source Routing (ADSR), is an adaptation of the wireless mesh network protocol
Dynamic Source Routing (DSR) for MWSNs. ADSR uses location information to
work out the angle between the node intending to transmit, potential forwarding
nodes and the sink. This is then used to ensure that packets are always forwarded
towards the sink. Also, Low Energy Adaptive Clustering Hierarchy (LEACH)
protocol for WSNs has been adapted to LEACH-M (LEACH-Mobile),[7] for
MWSNs. The main issue with hierarchical protocols is that mobile nodes are prone
to frequently switching between clusters, which can cause large amounts of
overhead from the nodes having to regularly re-associate themselves with different
cluster heads.

Another popular routing technique is to utilise location information from a GPS module attached to the nodes. This can be seen in protocols such as Zone
Based Routing (ZBR), which defines clusters geographically and uses the location
information to keep nodes updated with the cluster they're in. In comparison,
Geographically Opportunistic Routing (GOR), is a flat protocol that divides the
network area into grids and then uses the location information to opportunistically
forward data as far as possible in each hop.

Multipath protocols provide a robust mechanism for routing and therefore seem like a promising direction for MWSN routing protocols. One such protocol is
the query based Data Centric Braided Multipath (DCBM).

Furthermore, Robust Ad-hoc Sensor Routing (RASeR) and Location Aware Sensor Routing (LASeR) are two protocols that are designed specifically for high
speed MWSN applications, such as those that incorporate UAVs. They both take
advantage of multipath routing, which is facilitated by a 'blind forwarding'
technique. Blind forwarding simply allows the transmitting node to broadcast a packet to its neighbors; it is then the responsibility of the receiving nodes to decide
whether they should forward the packet or drop it. The decision of whether to
forward a packet or not is made using a network-wide gradient metric, such that
the values of the transmitting and receiving nodes are compared to determine
which is closer to the sink. The key difference between RASeR and LASeR is in
the way they maintain their gradient metrics; RASeR uses the regular transmission
of small beacon packets, in which nodes broadcast their current gradient, whereas
LASeR relies on taking advantage of geographical location information that is
already present on the mobile sensor node, which is likely the case in many
applications.

There are three types of medium access control (MAC) techniques: based on
time division, frequency division and code division. Due to the relative ease of
implementation, the most common choice of MAC is time-division-based, closely
related to the popular CSMA/CA MAC. The vast majority of MAC protocols that
have been designed with MWSNs in mind, are adapted from existing WSN MACs
and focus on low power consumption, duty-cycled schemes.

1.3.1 Characteristics

The main characteristics of a WSN include:

• Power consumption constraints for nodes using batteries or energy harvesting. Examples of suppliers are ReVibe Energy and Perpetuum
• Ability to cope with node failures (resilience)
• Some mobility of nodes (for highly mobile nodes see MWSNs)
• Heterogeneity of nodes
• Homogeneity of nodes
• Scalability to large scale of deployment
• Ability to withstand harsh environmental conditions
• Ease of use
• Cross-layer design

1.3.4 NS2 – Wireless Networks

The wireless network can be created in NS2 using the components of a mobile node and its configurations in every layer. Nodes can be deployed either
randomly or in a deterministic manner in flat-grid network space. Mobility model
of the nodes can be created and integrated in the simulation. Data communication
between nodes can be configured with transport and application layer agents that
are required to be attached to both sender and receiver nodes. Different types of
wireless networks such as Mobile Ad hoc Network (MANET), Vehicular ad hoc
Network (VANET), Wireless Sensor Network (WSN), Cognitive Radio Network
(CRN), Wireless Mesh Network, Cellular network, and Heterogeneous network
can be simulated subject to their own protocol specifications and configurations.

At the beginning of a wireless simulation, we need to define the type for each of these network components.

Additionally, we need to define other parameters like:

• The type of antenna
• The radio-propagation model
• The type of ad-hoc routing protocol used by mobile nodes, etc.

The physical layer facilitates the configuration of channel, interfaces, antenna type, signal propagation model, energy model, error model, and channel fading scheme. In successive layers, different types of interface queues (DropTail,
Priority Queue, CMUPriority queue, RED), MAC layer protocols (IEEE 802.11,
IEEE 802.15.4, IEEE 802.11p), Network layer protocols (AODV, DSR, DSDV,
AOMDV, TORA, OLSR, ZRP, ZBR, FSR, CGSR, CBRP, LEACH, PEGASIS,
GPSR, POR), Transport layer protocols (TCP and its variants, UDP, XCP, SCTP,
RTP, RTCP, LossMonitor), application layer protocols (FTP, CBR, Exponential,
Pareto, RealAudio, Video) can be applied.

Application layer agents have the options for packet size, data rate, data
transmission interval, start and stop time of data transmission. The node mobility
model can be created with specification of target location and speed. Nodes with
different communication ranges can be configured. Energy model can be created
with specification of initial energy, transmission, reception, idle and sleep power of
the nodes. Error model can be created with a random packet loss rate to simulate
network interference and fading.

1) Creation of Wireless Network using NS2 simulator

The wireless networking model can be created using Tool Command Language (TCL) script with a fixed number of nodes. The sample code discussed
below models the wireless network with 2 nodes. Nodes are configured with the
components of channel, networking interface, radio propagation model, Medium
Access Control (MAC) protocol, adhoc routing protocol, interface queue, link
layer, topography object, and antenna type. The wireless network with 2 nodes can
be viewed in the Network Animator (NAM) window after executing the file
sample1.tcl.
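
The script below is a minimal, illustrative sketch of what such a file might contain; the option values follow the standard ns-2 wireless examples, and the file names, positions and simulation length are placeholders that may need adjustment for a particular ns-2 installation.

# sample1.tcl (illustrative sketch) - two-node wireless network in ns-2
set ns [new Simulator]
set tracefd [open sample1.tr w]
$ns trace-all $tracefd
set namfd [open sample1.nam w]
$ns namtrace-all-wireless $namfd 500 400

set topo [new Topography]
$topo load_flatgrid 500 400          ;# 500 m x 400 m flat grid
create-god 2                         ;# General Operations Director for 2 nodes

# Configure the mobile-node stack: routing, MAC, queue, antenna, propagation, etc.
$ns node-config -adhocRouting AODV \
                -llType LL \
                -macType Mac/802_11 \
                -ifqType Queue/DropTail/PriQueue \
                -ifqLen 50 \
                -antType Antenna/OmniAntenna \
                -propType Propagation/TwoRayGround \
                -phyType Phy/WirelessPhy \
                -channelType Channel/WirelessChannel \
                -topoInstance $topo \
                -agentTrace ON -routerTrace ON -macTrace OFF

for {set i 0} {$i < 2} {incr i} {
    set node_($i) [$ns node]
    $node_($i) random-motion 0       ;# positions are set explicitly below
}
$node_(0) set X_ 100.0; $node_(0) set Y_ 100.0; $node_(0) set Z_ 0.0
$node_(1) set X_ 300.0; $node_(1) set Y_ 200.0; $node_(1) set Z_ 0.0

proc finish {} {
    global ns tracefd namfd
    $ns flush-trace
    close $tracefd
    close $namfd
    exit 0
}
$ns at 10.0 "finish"
$ns run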

2) Creating Random/dynamic topology in NS2

The dynamic topology can be created using the rand function in Tool Command Language (TCL) script with a fixed number of nodes. The nodes can be deployed in an area of X × Y, and each node is assigned a random location within X and Y using the rand function. In a dynamic topology, the neighbors of each node vary with the location of that particular node. The code segment in the sample2.tcl file demonstrates a dynamic topology in a wireless network with 2 nodes deployed in an area of 500 m × 400 m.
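
A possible fragment for this random placement, continuing the naming used in the previous sketch (the area size and node count are illustrative), is:

# sample2.tcl fragment (illustrative sketch) - random node placement
set val(x)  500                      ;# width of the deployment area (m)
set val(y)  400                      ;# height of the deployment area (m)
set val(nn) 2                        ;# number of nodes
for {set i 0} {$i < $val(nn)} {incr i} {
    # rand() returns a value in [0,1); scale it to the area dimensions
    $node_($i) set X_ [expr rand() * $val(x)]
    $node_($i) set Y_ [expr rand() * $val(y)]
    $node_($i) set Z_ 0.0
}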

3) Dynamic wireless network using NS2 simulator

The dynamic wireless network in NS2 can be modeled using the rand function in Tool Command Language (TCL) script. The number of nodes in the network varies dynamically during the runtime, and the dynamic wireless network allocates a dynamic location for each node. The sample3.tcl file shows a dynamic network in which the number of nodes is specified during execution and the nodes are deployed in an area of 500 m × 500 m.
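
One way to let the node count be specified at execution time (for example, "ns sample3.tcl 10") is to read it from the script arguments; the fragment below is a hedged sketch of that idea:

# sample3.tcl fragment (illustrative sketch) - node count supplied on the command line
if {$argc >= 1} {
    set val(nn) [lindex $argv 0]     ;# "ns sample3.tcl 10" creates 10 nodes
} else {
    set val(nn) 5                    ;# fallback when no argument is supplied
}
create-god $val(nn)                  ;# the GOD object must know the node count
for {set i 0} {$i < $val(nn)} {incr i} {
    set node_($i) [$ns node]
    $node_($i) set X_ [expr rand() * 500]
    $node_($i) set Y_ [expr rand() * 500]
    $node_($i) set Z_ 0.0
}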

5) Data transmission between the nodes

In a wireless network, nodes communicate using a communication model that consists of a TCP agent, a TCPSink agent, and an FTP application. The sender node
is attached to the TCP agent while the receiver node is attached to the TCPSink
agent. The connection between TCP agent and TCPSink agent is established using
the keyword “connect”. Transport agent (TCP) and application (FTP) are
connected using the keyword “attach-agent”. TCP agent sends data to TCPSink
agent. On receiving the data packet, TCPSink agent sends the acknowledgement to
the TCP agent that in turn processes the acknowledgements and adjusts the data
transmission rate. Lost packets are interpreted as a sign of congestion. The tcl
script in sample5.tcl demonstrates the communication between the nodes using
TCP protocol.
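
A minimal sketch of this communication model, reusing the node names from the earlier examples, is:

# sample5.tcl fragment (illustrative sketch) - FTP over TCP between node_(0) and node_(1)
set tcp  [new Agent/TCP]
set sink [new Agent/TCPSink]
$ns attach-agent $node_(0) $tcp      ;# sender side
$ns attach-agent $node_(1) $sink     ;# receiver side
$ns connect $tcp $sink               ;# establish the TCP connection

set ftp [new Application/FTP]
$ftp attach-agent $tcp               ;# FTP traffic generator on top of TCP

$ns at 1.0 "$ftp start"
$ns at 9.0 "$ftp stop"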

Links in NS are used to provide connectivity between nodes (routers). A link is not implemented as a class, but as a part of the Simulator object.

6) Mobility model

In a wireless network, the mobility of a node from one location to another can be enabled using the keyword “setdest” in Tool Command Language (TCL) script. The specification of a node’s target location includes the x-coordinate and the y-coordinate along with the speed. The target location of a node should lie within the network area. The tcl script in sample6.tcl shows the mobility model of the nodes in an area of 500 m × 500 m.
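
As a brief illustration, movements can be scheduled as follows (times, coordinates and speeds are placeholders; the targets must remain inside the configured area):

# sample6.tcl fragment (illustrative sketch) - node mobility with setdest
# Move node_(0) to (250, 300) at 10 m/s starting at t = 2 s
$ns at 2.0  "$node_(0) setdest 250.0 300.0 10.0"
# A second movement, back towards the corner of the area, at 5 m/s
$ns at 20.0 "$node_(0) setdest 50.0 50.0 5.0"
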
7) Energy model

Initial random energy can be bound to all nodes at the start of the simulation. We have added the energy breakdown of each state in the traces to support detailed energy analysis. In
addition to the total energy, now users will be able to see the energy consumption
in different states at a given time. In a wireless network, the energy model is one of
the optional attributes of a node. The energy model denotes the level of energy in a
mobile node. The components required for designing an energy model include
initialEnergy, txPower, rxPower, and idlePower. The “initialEnergy” represents the
level of energy the node has at the initial stage of the simulation. “txPower” and
“rxPower” denote the energy consumed for transmitting and receiving the
packets. If the node is a sensor, the energy model should include a special
component called “sensePower”. It denotes the energy consumed during the
sensing operation. The tcl script in sample7.tcl illustrates the energy model in the
wireless network with three nodes that are deployed in the area of 500m x 500m.
Each node is assigned 10 Joules as “initialEnergy”.
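
These components map onto ns-2 node-config options. The sketch below is illustrative: the power values are placeholders, -sleepPower is an additional option of the stock energy model, and -sensePower is assumed to be available only with sensor-network patches rather than the stock distribution.

# sample7.tcl fragment (illustrative sketch) - attaching an energy model to the nodes
# Power values (W) and the 10 J initial energy are placeholders.
$ns node-config -energyModel EnergyModel \
                -initialEnergy 10.0 \
                -txPower 0.9 \
                -rxPower 0.5 \
                -idlePower 0.45 \
                -sleepPower 0.001
# -sensePower <value> may additionally be given when a sensor-network patch
# that models a sensing state is installed; it is not part of stock ns-2.
for {set i 0} {$i < 3} {incr i} {
    set node_($i) [$ns node]         ;# each node starts with initialEnergy = 10 J
}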

8) Transmission Range

The communication range of a node in the wireless network is generally represented as a circle. If the receiver node lies within the transmission range of a
sender node, it can receive all the packets. If not, it loses all the packets. Both the
sender and receiver node represent the transmission range as an ideal circle. If the
receiver node is located near the edge of the communication range of the sender, it
can only probabilistically receive the packets. The default transmission range of a
node in the wireless network is 250m. A user can set the transmission range based
on the application’s requirement. This can be achieved by assigning an appropriate
value of the receiving threshold (RXThresh_) during the execution. Sample9.tcl
script allows setting two different transmission ranges (500, 200) for two different
groups of nodes in the network.
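
In practice this is done by setting the receiving threshold of the wireless physical layer before the corresponding nodes are created. The threshold values below are rough, assumed figures for the TwoRayGround model (the default 3.652e-10 corresponds to about 250 m); exact values should be computed with the threshold utility shipped with ns-2.

# sample9.tcl fragment (illustrative sketch) - two node groups with different ranges
# Group 1: larger range (~500 m); a lower RXThresh_ accepts weaker signals
Phy/WirelessPhy set RXThresh_ 2.28e-11
for {set i 0} {$i < 5} {incr i} {
    set node_($i) [$ns node]
}
# Group 2: smaller range (~200 m); a higher RXThresh_ rejects weaker signals
Phy/WirelessPhy set RXThresh_ 8.92e-10
for {set i 5} {$i < 10} {incr i} {
    set node_($i) [$ns node]
}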

6) Wireless Sensor Network (WSN) in NS2

A wireless sensor network (WSN) consists of a large number of small sensor nodes that are deployed in the area in which a phenomenon is to be monitored. In a wireless sensor network, the energy model is one of the optional attributes of a node. The energy model denotes the level of energy in a mobile node. The components required for designing an energy model include initialEnergy, txPower, rxPower, and idlePower. The “initialEnergy” represents the level of energy the node has at the initial stage of simulation. “txPower” and “rxPower” denote the energy consumed for transmitting and receiving packets. If the node is a sensor, the energy model should include a special component called “sensePower”, which denotes the energy consumed during the sensing operation. Apart from these components, it is important to specify the communication range (RXThresh_) and the sensing range (CSThresh_) of a node. The sample18.tcl script designs a WSN in which the sensor nodes are configured with different communication and sensing ranges. The base station is configured with the highest communication range. Data transmission is established between nodes using a UDP agent and CBR traffic.
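
The fragment below is an illustrative sketch of such a configuration; the threshold values, node identities and traffic parameters are placeholders, and CSThresh_ is used as the sensing (carrier-sense) threshold as described above.

# sample18.tcl fragment (illustrative sketch) - sensor nodes, base station, CBR over UDP
# Sensor nodes: default communication and sensing thresholds
Phy/WirelessPhy set RXThresh_ 3.652e-10      ;# ~250 m communication range
Phy/WirelessPhy set CSThresh_ 1.559e-11      ;# default carrier-sense threshold
for {set i 0} {$i < 4} {incr i} {
    set node_($i) [$ns node]
}

# Base station: lower RXThresh_, i.e. a larger communication range (assumed ~500 m)
Phy/WirelessPhy set RXThresh_ 2.28e-11
set bs [$ns node]

# CBR traffic from sensor node_(0) to the base station over UDP
set udp  [new Agent/UDP]
set null [new Agent/Null]
$ns attach-agent $node_(0) $udp
$ns attach-agent $bs $null
$ns connect $udp $null
set cbr [new Application/Traffic/CBR]
$cbr set packetSize_ 512                     ;# bytes
$cbr set interval_ 0.25                      ;# one packet every 0.25 s
$cbr attach-agent $udp
$ns at 1.0 "$cbr start"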

7) Energy consumption of the network

The energy model represents the energy level of nodes in the network. The energy model defined in a node has an initial value, which is the level of energy the node has at the beginning of the simulation; this is termed initialEnergy_. In the simulation, the variable “energy” represents the energy level of a node at any specified time, and the value of initialEnergy_ is passed as an input argument. A node loses a particular amount of energy for every packet transmitted and every packet received, so its current energy level decreases from initialEnergy_ over time. The energy consumption of a node at any time of the simulation can be determined by finding the difference between initialEnergy_ and the current energy value. If the energy level of a node reaches zero, it cannot receive or transmit any more packets. The amount of energy consumed by a node can be printed in the trace file, and the energy consumption of the whole network can be determined by summing the energy levels of all nodes in the network.
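
As a hedged illustration, the per-node and network-wide consumption can be computed by post-processing the trace file with a small Tcl script. This sketch assumes the simulation enabled the new wireless trace format ($ns use-newtrace), in which each line carries the node id in a -Ni field and the node's residual energy in a -Ne field; these field names should be checked against the traces produced by the installed ns-2 version.

# energy.tcl (illustrative sketch) - residual and consumed energy from an ns-2 trace
set fd [open sample7.tr r]
while {[gets $fd line] >= 0} {
    set fields [split $line]
    set idx_id [lsearch $fields "-Ni"]       ;# node id field (assumed new trace format)
    set idx_e  [lsearch $fields "-Ne"]       ;# residual energy field (assumed)
    if {$idx_id >= 0 && $idx_e >= 0} {
        # remember the most recent energy value seen for each node
        set residual([lindex $fields [expr $idx_id + 1]]) \
            [lindex $fields [expr $idx_e + 1]]
    }
}
close $fd

set initialEnergy 10.0                       ;# must match the simulation script
set total 0.0
foreach id [lsort [array names residual]] {
    set consumed [expr $initialEnergy - $residual($id)]
    puts "node $id consumed $consumed J"
    set total [expr $total + $consumed]
}
puts "network-wide consumption: $total J"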

1.4 On-Demand Wake-Up Approaches

In on-demand wake-up approaches, out-of-band signaling is used to wake up sleeping nodes on-demand. For example, with the help of a paging signal, a node
listening on a page channel can be woken up. As page radios can operate at lower
power consumption, this strategy is very energy efficient. However, it suffers from
increased implementation complexity.

1.5 Synchronous Wake-Up Approaches

In synchronous wake-up approaches, sleeping nodes wake up at the same time periodically to communicate with one another. Such approaches have to
synchronize neighboring nodes in order to align their awake or sleeping time.
Neighboring nodes start exchanging packets only within the common active time,
enabling a node to sleep for most of the time within an operational cycle without
missing any incoming packets. Synchronous wake-up approaches can reduce idle
listening time significantly, but the required synchronization introduces extra
overhead and complexity. In addition, a node may need to wake up multiple times
during a full sleep/wake-up period, if its neighbors are on different schedules.

1.6 Asynchronous Wake-Up Approaches


In asynchronous wake-up approaches, each node follows its own wake-up
schedule in the idle state. This requires that the wake-up intervals among neighbors
are overlapped. To meet this requirement, nodes usually have to wake up more
frequently than in synchronous wake-up approaches. The advantages offered by
asynchronous wake-up approaches include ease of implementation, low
message overhead for communication, and assurance of network connectivity even
in highly dynamic networks.

Most current studies use the technique of duty cycling to periodically alternate between awake and sleeping states. Here, duty cycle is the ratio between
the wake up time length in a predefined period and the total length of that period.
For example, suppose a period is 1 s and a node keeps awake for 0.3 s and keeps
asleep for 0.7 s in the period. Then, the duty cycle is 30% (or 0.3). The use of duty
cycling incurs a tradeoff between energy saving and packet delivery delay: a long
wake-up time may cause energy waste, while a short wake-up time may incur
packet delivery delay. However, in WSNs, both energy saving and packet delivery
delay are important. Because each node in WSNs is usually equipped with a non-rechargeable battery, energy saving is crucial for prolonging the lifetime of WSNs. Because long delays are unacceptable in some applications of WSNs, e.g., fire detection
and tsunami alarm, reducing packet delivery delay is crucial for the effectiveness
of WSNs. An intuitive solution to this tradeoff is to dynamically determine the
length of wake-up time.

1.7 Sleep/Wake Scheduling

Sleep/wake scheduling has been used to extend the network lifetime. Energy efficiency has an inherent tradeoff with delay; thus, in such sleep/wake scheduling strategies, maximization of the network lifetime is generally achieved at the expense of an increase in delay. In many delay-sensitive applications where real-time response is required, such delays cannot be tolerated. Generally, WSNs operate for a long time in idle mode and only occasionally send data. The energy consumption of listening to the idle channel is comparable to the energy consumption when sending or receiving, and much larger than the energy consumption of the sleep mode. To receive data, the receiver must be in a high-power state, for example, the active/listen state; in the sleep state, the radio is in a low-power mode with the receiving circuitry switched off. If the receiver operates at a 100% duty cycle, that is, its transceiver is always on, then it is able to receive the data at the cost of high energy consumption. To reduce the power consumption, low duty cycle operation is required. This fact is exploited by sleep/wake scheduling techniques, which aim to reduce the energy wasted in idle mode by designing low duty cycle operations. A variety of sleep/wake scheduling protocols have been proposed. Most of them use a periodic sleep/wake interval and provide effective energy conservation at the cost of delay and throughput. For example, for a source node to transmit data, it has to know the sleep/wake-up schedule of the neighbor node and has to wait for the neighbor to come into the active state. The same is repeated until the data reaches the final destination, resulting in considerable delays. This increase in delay is equal to the product of the number of intermediate forwarders and the length of the wake-up interval. Such an increase in end-to-end delay, incurred due to the latency-energy tradeoff, has the potential to become a major problem in many emerging delay-sensitive WSN applications, which require fast response and real-time control. The network lifetime can be extended by organizing the sensors into a maximal number of set covers that are activated successively. Only the sensors from the current active set are responsible for monitoring all targets and for transmitting the collected data, while all other nodes are in low-energy sleep mode. To save power, the wireless nodes are scheduled to alternate between active and sleep modes. The contribution of this paper is to introduce a new model for maximizing the network lifetime under the target coverage problem by organizing the sensor nodes, and to analyze its performance through simulation.

1.8 Sleep/Wake Schemes

Sleep/wake scheduling mechanisms can further be divided into three main types: on-demand, scheduled rendezvous, and asynchronous. In on-demand
protocols, a node wakes up only when some other node wants to talk to it. A low-power wake-up radio, in addition to the main radio, is used to address this issue, but it is restricted by geographic scalability; that is, the range of the wake-up radio is very limited. In the scheduled rendezvous approach, the nodes are scheduled to wake up at the same time as their neighbor nodes. In this way, a node can communicate with its neighbors, as nodes in the same locality have the same sleep/wake schedule. The issue with this scheme is that nodes may have to maintain multiple wake-up schedules. In the asynchronous approach, a node can wake up at any instant when it wants to communicate, and overlap between the wake intervals of the communicating nodes is ensured.

Fig 1.2 Sensor Nodes

A sensor node can be in one of four states: transmit, receive, idle and sleep. The idle state is when the transceiver is neither transmitting nor receiving, and the sleep mode is when the radio is turned off. The receive and idle modes may require as much energy as transmitting, while the sleep mode requires the least energy. The network lifetime can be extended by dividing the sensor nodes into a number of sets, such that each set completely covers all the targets. These sensor sets are activated successively, such that at any time instant only one set is active. The sensors from the active set are in an active state (e.g. transmit, receive or idle) and all other sensors are in the sleep state. If, while meeting the coverage requirements, sensor nodes alternate between the active and sleep states, the network lifetime increases.

1.9 Self Adaptive Activation

The solution proposed can dynamically determine the length of wake-up time by transmitting all messages in bursts of variable length and sleeping between
bursts. That solution can save energy, but it may aggravate packet delivery delay, because each node has to spend time accumulating packets in its queue before it transmits them in a burst. Another proposed solution enables senders to predict receivers’ wake-up times by using a pseudo-random wake-up scheduling approach. Then, if senders have packets to transmit, they can wake up shortly before the predicted wake-up time of the receivers, so the energy that senders would otherwise use for idle listening can be saved. In this case, senders do not have
to make the tradeoff, because their wake-up times are totally based on receivers’
wake-up times. Receivers still face the tradeoff, however, since a receiver’s wake-
up time relies on a pseudo-random wake-up scheduling function and different
selections of parameters in this function will result in different wake-up intervals.
In addition, before a sender can make a prediction about a receiver’s wake-up time,
the sender has to request the parameters in the receiver’s wake-up scheduling
function. This request incurs extra energy consumption.
In this paper, a self-adaptive sleep/wake-up scheduling approach is
proposed, which takes both energy saving and packet delivery delay into account.
This approach is an asynchronous one and it does not use the technique of duty
cycling. Thus, the tradeoff between energy saving and packet delivery delay can be
avoided. In most existing duty cycling based sleep/wake-up scheduling
approaches, the time axis is divided into periods, each of which consists of several
time slots. In each period, nodes adjust their sleep and wake up time, i.e., adjusting
the duty cycle, where each node keeps awake in some time slots while sleeps in
other time slots. In the proposed self-adaptive sleep/wake-up scheduling approach,
the time axis is directly divided into time slots. In each time slot, each node
autonomously decides to sleep or wake up. Thus, in the proposed approach, there
is no ‘cycle’ and each time slot is independent.

Fig 1.3 Formulation of the problem.

The proposed approach is not designed in conjunction with a specific packet routing protocol. This is because if a sleep/wake-up scheduling approach is designed in conjunction with a specific packet routing protocol, the scheduling approach may work well only with that routing protocol but may work less efficiently with other routing protocols. For example, in one existing work, the sleep/wake-up scheduling approach is designed together with a particular packet routing protocol. Their
scheduling approach uses staggered wake-up schedules to create unidirectional
delivery paths for data propagation to significantly reduce the latency of data
collection process. Their approach works very well if packets are delivered in the
designated direction, but it is not efficient when packets are delivered in other
directions.

Generally, the radio transceiver in a sensor node has three modes of operation (termed actions): 1) transmit; 2) listen; and 3) sleep. In transmit mode, the
the radio transceiver can transmit and receive packets. In listen mode, the
transmitter circuitry is turned off, so the transceiver can only receive packets. In
sleep mode, both receiver and transmitter are turned off. Typically, among these
actions, the power required to transmit is the highest, the power required to listen is
medium and the power required to sleep is much less compared to the other two
actions. The example provided in [20] shows these power levels: 81 mW for
transmission, 30 mW for listen, and 0.003 mW for sleep.
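
As a rough illustration based on these figures, a node that follows the 30% duty cycle of the example in Section 1.6 has an average listening power of about 0.3 × 30 mW + 0.7 × 0.003 mW ≈ 9.0 mW, i.e., roughly a 70% reduction compared with a node that listens continuously at 30 mW; the 81 mW transmission cost comes on top of this and depends on the traffic load.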

Recent years have seen tremendous advancement in wireless sensor networks due to reductions in development costs and improvements in hardware manufacturing. The past two to three decades have been marked by the rapid use of
wireless sensor networks in various fields. Wireless sensor networks are now used,
other than in military surveillance, in habitat monitoring, seismic activity
surveillance and are now even used in indoor applications. These wireless sensors
have provided us the tool to monitor an area of interest remotely. All one is
supposed to do is to deploy these sensors, aerially or manually, and then these
sensors which form the nodes of the network gather information from the area
under investigation. The information thus obtained is relayed back to the “main
server” or “base station” where the information is processed. Sensor nodes which
constitute the wireless network are autonomous nodes with a microcontroller, one
or more sensors, a transceiver, actuators and a battery for power supply. These
sensors have very little memory and perform only a small amount of processing on the information obtained. Apart from monitoring, collecting and transmitting data from one node to another and to the base station, these sensors communicate with other nodes following certain communication protocols; moreover, the processing unit regulates and controls the functionality of the other components of the sensor node. Nevertheless, memory operations are an overhead too, because the sensors are equipped with a battery which often is non-replaceable. Thus, an increase in processing implies that more energy is consumed, and hence the sensor lifetime decreases, thereby affecting the lifetime of the network. As mentioned earlier, the relaying of information is done by following a certain communication protocol. Usually the sensor has a transceiver that can act as both a transmitter and a receiver.

Sensors can communicate through transmission media spanning a vast range of the electromagnetic spectrum. A wireless sensor network is deployed in one of two
ways: planned and unplanned. In the planned method of deployment, a specific
number of sensors are placed in strategic points in predetermined manner. Here it
should be noted that the area to be monitored can be accessed physically thus the
cost is not a factor under such conditions. These nodes are placed using a
predetermined algorithm such that the area to be covered is maximized placing less
overhead on transmission and battery thereby enhancing the network lifetime. The
wireless sensor network faces various issues, one of which is coverage of the given area under limited energy. This problem of maximizing the network lifetime while satisfying the coverage and energy constraints is known as the Target Coverage Problem in wireless sensor networks. As the sensor nodes are battery driven, they have limited energy, and hence the main challenge becomes maximizing the coverage area while also ensuring a prolonged network lifetime. The sensor network is formed from small electronic devices possessing self-configuring capability that are either randomly deployed or manually positioned in huge numbers. It performs activities in several dimensions, for instance identifying the neighborhood, detecting the presence of targets, or monitoring environmental factors (motion, temperature, humidity, sound and other physical variables). However, owing to limited battery power, sensor networks demand energy-efficient solutions to enhance their performance.

The energy consumption problem, being the most visible challenge, is considered central to sensor network research. The processing of data, memory
accesses and input/output operations, all consume sensor energy. Sensor networks
are collection of sensor nodes which co-operatively send sensed data to base
station. As sensor nodes are battery driven, efficient utilization of power is essential in order to use the network for a long duration; hence it is necessary to reduce data traffic inside the sensor network and to reduce the amount of data that needs to be sent to the base station. Sensor nodes need less power for processing than for transmitting data, so it is preferable to perform in-network processing and reduce packet sizes.

Wireless sensor networks (WSNs) offer an increasingly attractive method of data gathering in distributed system architectures and dynamic access via wireless connectivity. The key advantage of using these small devices to monitor the environment is that they do not require infrastructure, such as mains electricity for power supply or wired lines for Internet connections, to collect data, nor do they need human interaction while being deployed. For one application, the quality of service depends upon how information is transferred from one node to another, while for others the delay in transmission has to be minimized. The quality-of-service parameter here is that the number of target points covered in the area under surveillance is to be maximized while taking into account the limited energy supply of the sensors. Basically, it is ensured that every sensor monitors at least one target and that the sensors operate in covers. Each cover is scheduled to work in turn while the other sensors remain in sleep mode. Thus, when a particular cover runs out of energy, another cover is activated to monitor the area, and hence the network lifetime is maximized. Network lifetime is the amount of time during which each target is covered by at least one sensor that obtains data and transmits it back to the base station. The main concern, or bottleneck, is the limited amount of battery energy available, since the sensors are often deployed in hostile situations where it is very difficult to replace the battery. The objective is to maximize the number of targets monitored before the sensors exhaust their energy.
CHAPTER 2

LITERATURE SURVEY

2.1 Y. Xiao et al., “Tight performance bounds of multihop fair access for MAC
protocols in wireless sensor networks and underwater sensor networks,”
IEEE Trans. Mobile Comput., vol. 11, no. 10, pp. 1538–1554, Oct. 2012.

This paper investigates the fundamental performance limits of medium access control (MAC) protocols for particular multihop, RF-based wireless sensor
networks and underwater sensor networks. A key aspect of this study is the
modeling of a fair-access criterion that requires sensors to have an equal rate of
underwater frame delivery to the base station. Tight upper bounds on network
utilization and tight lower bounds on the minimum time between samples are
derived for fixed linear and grid topologies. The significance of these bounds is
two-fold: First, they hold for any MAC protocol under both single-channel and
half-duplex radios; second, they are provably tight. For underwater sensor
networks, under certain conditions, we derive a tight upper bound on network
utilization and demonstrate a significant fact that the utilization in networks with
propagation delay is larger than that in networks with no propagation delay. The
challenge of this work about underwater sensor networks lies in the fact that the
propagation delay impact on underwater sensor networks is difficult to model.
Finally, we explore bounds in networks with more complex topologies.
Fundamental performance limitations must be well understood when
establishing a network protocol in order to ensure that the protocol is appropriate
for a particular network design choice. For example, in a bandwidth constrained
system, one might rule out channelization to support the implementation of full
duplex communications because they prefer to use contention-based or
coordinated-access-based protocols, even when the first option may actually be
more efficient. An inappropriate protocol can result in a network which cannot
sustain expected traffic loads. It is important to study the fundamental performance
limitations of wireless sensor networks (WSNs), as establishing the performance
bounds of a network protocol is necessary for determining whether the protocol is
appropriate for a particular network design choice. The wireless sensor networks
(either RF-based sensor networks or acoustic underwater sensor networks)
considered in this paper are multihop: each sensor node performs sensing,
transmission, and relay. All data frames are sent to a dedicated data-collection
node, called the base station, that is responsible for relaying the frames to a
dislocated command center over a radio or wired link.

2.2 S. Zhu, C. Chen, W. Li, B. Yang, and X. Guan, “Distributed optimal consensus filter for target tracking in heterogeneous sensor networks,” IEEE Trans. Cybern., vol. 43, no. 6, pp. 1963–1976, Dec. 2013.

This paper is concerned with the problem of filter design for target tracking
over sensor networks. Different from most existing works on sensor networks, we
consider the heterogeneous sensor networks with two types of sensors different on
processing abilities (denoted as type-I and type-II sensors, respectively). However,
questions of how to deal with the heterogeneity of sensors and how to design a
filter for target tracking over such kind of networks remain largely unexplored. We
propose in this paper a novel distributed consensus filter to solve the target
tracking problem. Two criteria, namely, unbiasedness and optimality, are imposed
for the filter design. The so-called sequential design scheme is then presented to
tackle the heterogeneity of sensors. The minimum principle of Pontryagin is
adopted for type-I sensors to optimize the estimation errors. As for type-II sensors,
the Lagrange multiplier method coupled with the generalized inverse of matrices is
then used for filter optimization. Furthermore, it is proven that convergence
property is guaranteed for the proposed consensus filter in the presence of process
and measurement noise. Simulation results have validated the performance of the
proposed filter. It is also demonstrated that the heterogeneous sensor networks with
the proposed filter outperform the homogenous counterparts in light of reduction in
the network cost, with slight degradation of estimation performance.

To locate and track a moving target is crucial for many applications such as
robotics, surveillance, monitoring, and security for large-scale complex
environments. In such scenarios, a number of sensors can be employed in order to
improve the tracking accuracy and increase the size of the surveillance area in a
cooperative manner. Basically, these sensors have modest capabilities of sensing,
computation, and multihop wireless communication. Equipped with these
capabilities, the sensors can self-organize to form a network that is capable of
sensing and processing spatial and temporal dense data in the monitored area.

2.3 G. Acampora, D. J. Cook, P. Rashidi, and A. V. Vasilakos, “A survey on ambient intelligence in healthcare,” Proc. IEEE, vol. 101, no. 12, pp. 2470–2494, Dec. 2013.

Ambient Intelligence (AmI) is a new paradigm in information technology aimed at empowering people's capabilities by means of digital environments that
are sensitive, adaptive, and responsive to human needs, habits, gestures, and
emotions. This futuristic vision of daily environment will enable innovative
human-machine interactions characterized by pervasive, unobtrusive, and
anticipatory communications. Such innovative interaction paradigms make AmI
technology a suitable candidate for developing various real life solutions, including
in the healthcare domain. This survey will discuss the emergence of AmI
techniques in the healthcare domain, in order to provide the research community
with the necessary background. We will examine the infrastructure and technology
required for achieving the vision of AmI, such as smart environments and wearable
medical devices. We will summarize the state-of-the-art artificial intelligence (AI)
methodologies used for developing AmI system in the healthcare domain,
including various learning techniques (for learning from user interaction),
reasoning techniques (for reasoning about users' goals and intentions), and
planning techniques (for planning activities and interactions). We will also discuss
how AmI technology might support people affected by various physical or mental
disabilities or chronic disease. Finally, we will point to some of the successful case
studies in the area and we will look at the current and future challenges to draw
upon the possible future research paths.

A multifunction handheld device, like the one used for sensing and data analysis in the Star Trek series, monitors your health status in a continuous manner, diagnoses any possible health conditions, has a conversation with you to persuade you to change your lifestyle for maintaining better health, and communicates with your doctor, if needed. The device might even be embedded into your regular clothing fibers in the form of very tiny sensors, and it might communicate with other devices around you, including the variety of sensors embedded into your home to monitor your lifestyle. For example, you might be alerted about the lack of a healthy diet based on the items present in your fridge and based on what you are eating outside regularly. This might seem like science fiction for now, but many researchers in the field of Ambient Intelligence (AmI) expect such scenarios to be part of our daily life in the not-so-far future.

2.4 Y. Yao, Q. Cao, and A. V. Vasilakos, “EDAL: An energy-efficient, delay-aware, and lifetime-balancing data collection protocol for heterogeneous wireless sensor networks,” IEEE/ACM Trans. Netw., vol. 23, no. 3, pp. 810–823, Jun. 2015, doi: 10.1109/TNET.2014.2306592.

Our work in this paper stems from our insight that recent research efforts on
open vehicle routing (OVR) problems, an active area in operations research, are
based on similar assumptions and constraints compared to sensor networks.
Therefore, it may be feasible that we could adapt these techniques in such a way
that they will provide valuable solutions to certain tricky problems in the wireless
sensor network (WSN) domain. To demonstrate that this approach is feasible, we
develop one data collection protocol called EDAL, which stands for Energy-
efficient Delay-aware Lifetime-balancing data collection. The algorithm design of
EDAL leverages one result from OVR to prove that the problem formulation is
inherently NP-hard. Therefore, we proposed both a centralized heuristic to reduce
its computational overhead and a distributed heuristic to make the algorithm
scalable for large-scale network operations. We also develop EDAL to be closely
integrated with compressive sensing, an emerging technique that promises
considerable reduction in total traffic cost for collecting sensor readings under
loose delay bounds. Finally, we systematically evaluate EDAL to compare its
performance to related protocols in both simulations and a hardware testbed.

In recent years, wireless sensor networks (WSNs) have emerged as a new category of networking systems with limited computing, communication, and
storage resources. A WSN consists of nodes deployed to sense physical or
environmental conditions for a wide range of applications, such as environment
monitoring, scientific observation, emergency detection, field surveillance, and
structure monitoring. In these applications, prolonging the lifetime of WSN and
guaranteeing packet delivery delays are critical for achieving acceptable quality of
service.

2.5 S. H. Semnani and O. A. Basir, “Semi-flocking algorithm for motion control of mobile sensors in large-scale surveillance systems,” IEEE Trans. Cybern., vol. 45, no. 1, pp. 129–137, Jan. 2015.

The ability of sensors to self-organize is an important asset in surveillance sensor networks. Self-organization implies self-control at the sensor level and
coordination at the network level. Biologically inspired approaches have recently
gained significant attention as a tool to address the issue of sensor control and
coordination in sensor networks. These approaches are exemplified by the two
well-known algorithms, namely, the Flocking algorithm and the Anti-Flocking
algorithm. Generally speaking, although these two biologically inspired algorithms
have demonstrated promising performance, they expose deficiencies when it
comes to their ability to maintain simultaneous robust dynamic area coverage and
target coverage. These two coverage performance objectives are inherently
conflicting. This paper presents Semi-Flocking, a biologically inspired algorithm
that benefits from key characteristics of both the Flocking and Anti-Flocking
algorithms. The Semi-Flocking algorithm approaches the problem by assigning a
small flock of sensors to each target, while at the same time leaving some sensors
free to explore the environment. This allows the algorithm to strike a balance
between robust area coverage and target coverage. Such balance is facilitated via
flock-sensor coordination. The performance of the proposed Semi-Flocking
algorithm is examined and compared with two other flocking-based algorithms
once using randomly moving targets and once using a standard walking pedestrian
dataset. The results of both experiments show that the Semi-Flocking algorithm
outperforms both the Flocking algorithm and the Anti-Flocking algorithm with
respect to the area of coverage and the target coverage objectives. Furthermore, the
results show that the proposed algorithm demonstrates shorter target detection time
and fewer undetected targets than the other two flocking-based algorithms.

2.6 B. Fu, Y. Xiao, X. Liang, and C. L. P. Chen, “Bio-inspired group modeling and analysis for intruder detection in mobile sensor/robotic networks,” IEEE Trans. Cybern., vol. 45, no. 1, pp. 103–115, Jan. 2015.

Although previous bio-inspired models have concentrated on invertebrates (such as ants), mammals such as primates with higher cognitive function are
valuable for modeling the increasingly complex problems in engineering.
Understanding primates' social and communication systems and applying what is
learned from them to engineering domains is likely to inspire solutions to a number
of problems. This paper presents a novel bio-inspired approach to determine group
size by researching and simulating primate society. Group size does matter for both
primate society and digital entities. It is difficult to determine how to group mobile
sensors/robots that patrol in a large area when many factors are considered such as
patrol efficiency, wireless interference, coverage, inter/intragroup communications,
etc. This paper presents a simulation-based theoretical study on patrolling
strategies for robot groups with the comparison of large and small groups through
simulations and theoretical results.

Mobile robots equipped with sensors are able to cooperatively work together
via wireless communication technologies in order to achieve and obtain
surveillance teaming as well as task accomplishments in a large, complex field.
The major communication challenge in a large and complex field is that the number of mobile sensors is insufficient to maintain a constantly available network for intra- and inter-group communication. While each group may be able to
maintain communication within the group at all times, a complete path for constant
end-to-end data communication for any pairs of source and destination in different
groups may not exist. There are always unmonitored locations due to the limited
number of mobile sensors/robots that cannot monitor and cover the whole field. In
order to solve such a problem, the mobile robots/sensors need to patrol the entire
field in order to cover it completely. Unfortunately, we are uncertain as to how to
group the robots/sensors to achieve a low cost. The size of the robot/sensor groups
could be either large or small. It is not easy to intuitively determine which
grouping size is more efficient. A similar choice exists in primate society as
well. Rhesus macaques and titi monkeys are two kinds of primates that usually live
in groups in order to supervise their territory, defend against intruders, and search
for food. Rhesus macaques live in large groups that normally contain 10–80
individuals, regardless of habitat type. The members in the group communicate via
facial expressions, body postures, and vocal communication. Communication
within each group is complicated because of the large number of members in the
group. Titi monkeys, however, live in small groups that only consist of the parents
and their offspring. Each group of titi monkeys contains a total of 2–7 animals.

2.7 Y. Zhao, Y. Liu, Z. Duan, and G. Wen, “Distributed average computation for multiple time-varying signals with output measurements,” Int. J. Robust Nonlin. Control, vol. 26, no. 13, pp. 2899–2915, 2016.

We present a distributed discontinuous control algorithm for a team of agents to track the average of multiple time-varying reference signals with
bounded derivatives. We use tools from non-smooth analysis to analyze the
stability of the system. For time-invariant undirected connected network
topologies, we prove that the states of all agents will converge to the average of the
time-varying reference signals with bounded derivatives in finite time provided
that the control gain is properly chosen. The validity of this result is also
established for scenarios with switching undirected connected network topologies.
For time-invariant directed network topologies with a directed spanning tree, we
show that all agents will still reach a consensus in finite time, but the convergent
value is generally not the average of the time-varying reference signals with
bounded derivatives. Simulation examples are presented to show the validity of the
above results.

In the past, researchers have been working on consensus problems with different properties of graphs, types of agent dynamics, and analysis tools. Here,
the states of all agents usually converge to the average or the weighted average of
the initial conditions of these agents, which is a constant value. In particular, finite-
time consensus algorithms have attracted much attention due to their better disturbance rejection and robustness against uncertainties. Normalized and signed versions of gradient dynamical systems were introduced, and two results on finite-time convergence and the second-order information of the Lyapunov functions were derived. The finite-time consensus problem has been studied with continuous state feedback for undirected and directed graphs, and a general framework has been developed for designing semi-stable protocols in dynamical networks to achieve coordination tasks in finite time. In addition, finite-time consensus problems for discrete-time systems have also been considered.

In a consensus problem, when there exists a dynamic leader (e.g., an agent


that moves by itself regardless of the other agents) or a time varying reference
signal, the consensus problem becomes a coordinated tracking problem. Here, the
objective is that the states of all agents track the state of the dynamic leader or the
time-varying reference signal. A coordinated tracking problem was studied for a group of
autonomous agents, and the results were extended to second-order systems. A
distributed discrete-time coordinated tracking problem was also considered, where a
team of agents communicating with their local neighbors at discrete-time instants
tracks a time-varying reference signal available to only a subset of the team members.
Coordinated tracking problems were further studied where the reference velocity is
available to only one agent while the other agents estimate the reference velocity
with an adaptive design.

2.8 Y. Zhao, Z. Duan, G. Wen, and G. Chen, “Distributed finite-time tracking
of multiple non-identical second-order nonlinear systems with settling time
estimation,” Automatica, vol. 64, pp. 86–93, Feb. 2016.

This paper investigates the distributed finite-time consensus tracking


problem for a group of autonomous agents modeled by multiple non-identical
second-order nonlinear systems. First, a class of distributed finite-time protocols
are proposed based on the relative position and relative velocity measurements. By
providing a topology-dependent Lyapunov function, it is shown that distributed
consensus tracking can be achieved in finite time under the condition that the
nonlinear errors between the leader and the followers are bounded. Then, a new
class of observer-based algorithms are designed to solve the finite-time consensus
tracking problem without using relative velocity measurements. The main
contribution of this paper is that, by computing the value of the Lyapunov function
at the initial point, the finite settling time can be theoretically estimated for second-
order multi-agent systems with the proposed control protocols. Finally, the
effectiveness of the analytical results is illustrated by an application in low-Earth-
orbit spacecraft formation flying.

During the past two decades, distributed cooperative control of autonomous


agents has emerged as a new research direction and received increasing interest in
different fields with the advent of wireless communication networks and powerful
embedded systems. Research on this topic aims to understand how various group
behaviors emerge as a result of local interactions among individuals. Distributed
cooperative control has applications in a wide range of areas, such as attitude
synchronization, state consensus, formation flying, and cooperative surveillance. In
a distributed cooperative control system, a group of autonomous agents, by
coordinating with each other via communication or sensing networks, can perform
certain challenging tasks which cannot be well accomplished by a single agent. As
one of the important and fundamental research issues for multi-agent systems,
consensus problem has been extensively studied over the past few years. The
objective is to develop distributed control policies using only local relative
information to ensure that the states of the agents reach an agreement on certain
quantities of interest. A pioneering work on consensus was attributed to Olfati-
Saber and Murray (2004), where a general framework of the consensus problem
for networks of integrators was proposed. According to the number of leaders in
the network, existing consensus algorithms can be roughly categorized into two
classes, namely, leaderless consensus and leader-following consensus. The latter is
also called the distributed tracking problem, where the objective is to drive the
states of the followers to track those of the leader. Seminal works on the consensus
tracking problem with integrator-type dynamics have also been reported.

2.9 M. Li, Z. Li, and A. V. Vasilakos, “A survey on topology control in wireless
sensor networks: Taxonomy, comparative study, and open issues,” Proc.
IEEE, vol. 101, no. 12, pp. 2538–2557, Dec. 2013.

The wireless sensor network (WSN) technology spawns a surge of


unforeseen applications. The diversity of these emerging applications represents
the great success of this technology. A fundamental performance benchmark of
such applications is topology control, which characterizes how well a sensing field
is monitored and how well each pair of sensors is mutually connected in WSNs.
This paper provides an overview of topology control techniques. We classify
existing topology control techniques into two categories: network coverage and
network connectivity. For each category, a range of existing protocols and
techniques is presented with a focus on blanket coverage, barrier coverage,
sweep coverage, power management, and power control, five rising aspects that
have attracted significant research attention in recent years. In this survey, we
emphasize the basic principles of topology control to convey the state of the art,
and we explore future research directions in new open areas and propose a series of
design guidelines under this topic.

The past several years have witnessed the great success of wireless sensor


networks (WSNs). As an emerging and promising technology, WSNs have been
widely used in a variety of long-term and critical applications, including event
detection, target tracking, environment and habitat monitoring, localization, safety
navigation, and so on. A sensor network usually consists of hundreds or even
thousands of sensor nodes, which are typically self-organized in a multihop
fashion. By working together, sensor nodes coordinate to finish a common task.

2.10 W. Ye, J. Heidemann, and D. Estrin, “An energy-efficient MAC


protocol for wireless sensor networks,” in Proc. IEEE INFOCOM, New
York, NY, USA, Jun. 2002, pp. 1567–1576.

This paper proposes S-MAC, a medium-access control (MAC) protocol


designed for wireless sensor networks. Wireless sensor networks use battery-
operated computing and sensing devices. A network of these devices will
collaborate for a common application such as environmental monitoring. We
expect sensor networks to be deployed in an ad hoc fashion, with individual nodes
remaining largely inactive for long periods of time, but then becoming suddenly
active when something is detected. These characteristics of sensor networks and
applications motivate a MAC that is different from traditional wireless MACs such
as IEEE 802.11 in almost every way: energy conservation and self-configuration
are primary goals, while per-node fairness and latency are less important. S-MAC
uses three novel techniques to reduce energy consumption and support self-
configuration. To reduce energy consumption in listening to an idle channel, nodes
periodically sleep. Neighboring nodes form virtual clusters to auto-synchronize on
sleep schedules. Inspired by PAMAS, S-MAC also sets the radio to sleep during
transmissions of other nodes. Unlike PAMAS, it only uses in-channel signaling.
Finally, S-MAC applies message passing to reduce contention latency for sensor-
network applications that require store-and-forward processing as data move
through the network. We evaluate our implementation of S-MAC over a sample
sensor node, the Mote, developed at the University of California, Berkeley. The
experiment results show that, on a source node, an 802.11-like MAC consumes 2-6
times more energy than S-MAC for traffic load with messages sent every 1-10 s.

Wireless sensor networking is an emerging technology that has a wide


range of potential applications including environment monitoring, smart spaces,
medical systems and robotic exploration. Such a network normally consists of a
large number of distributed nodes that organize themselves into a multi-hop
wireless network. Each node has one or more sensors, embedded processors and
low-power radios, and is normally battery operated. Typically, these nodes
coordinate to perform a common task.
CHAPTER 3

SYSTEM ANALYSIS

3.1 Existing System

Most existing studies use the technique of duty cycling to periodically


alternate between awake and sleeping states. In most existing duty cycling-based
sleep/wake-up scheduling approaches, the time axis is divided into periods, each of
which consists of several time slots.

In each period, nodes adjust their sleep and wake up time, i.e., adjusting the
duty cycle, where each node keeps awake in some time slots while sleeps in other
time slots. In the proposed self-adaptive sleep/wake-up scheduling approach, the
time axis is directly divided into time slots. In each time slot, each node
autonomously decides to sleep or wake up.

3.1.1 Disadvantages of Existing System

 Consumes more energy.
 High packet delivery latency.
 Decreases network lifetime.

3.2 Proposed System


In this paper, a self-adaptive sleep/wake-up scheduling approach is
proposed, which takes both energy saving and packet delivery delay into account.
This approach is an asynchronous one and it does not use the technique of duty
cycling.

This approach is the first one which does not use the technique of duty
cycling. Thus the tradeoff between energy saving and packet delivery delay, which
is incurred by duty cycling, can be avoided.

Unlike recent prediction-based approaches, where nodes have to exchange


information between each other, this approach enables nodes to approximate their
neighbors’ situation without requesting information from these neighbors.

A self-adaptive sleep/wake-up scheduling approach. This approach does not


use the technique of duty cycling. Instead, it divides the time axis into a number of
time slots and lets each node autonomously decide to sleep, listen or transmit in a
time slot. Each node makes a decision based on its current situation and an
approximation of its neighbors’ situations, where such approximation does not
need communication with neighbors. Through these techniques, the proposed
approach outperforms other related approaches. Most existing
approaches are based on the duty cycling technique and these researchers have
taken much effort to improve the performance of their approaches. Thus, duty
cycling is a mature and efficient technique for sleep/wake up scheduling. This
paper is the first one which does not use the duty cycling technique. Instead, it
proposes an alternative approach which is based on game theory and the
reinforcement learning technique.
Fig 3.1 Overview of the proposed approach

A and B are two neighboring nodes whose clocks may not be synchronized.
They make decisions at the beginning of each time slot autonomously and
independently without exchanging information. There are two points in the figure
which should be noted. First, for the receiver, if the length of a time slot is not long
enough to receive a packet, the length of the time slot will be extended
automatically until the packet is received successfully (see the first time slot of
node B). Second, when a node decides to transmit a packet in the current time slot
and the length of the time slot is longer than the time length required to transmit a
packet, the node will also decide when in the current time slot to transmit the
packet (see the third time slot of node B).
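To make these two rules concrete, the following Python fragment is a minimal sketch under assumed names (slot_length, time_to_receive, time_to_transmit) that are not part of the proposed protocol; it only illustrates how a slot could be stretched for reception and how a transmit instant could be picked inside a long slot.

import random

def effective_slot_length(slot_length, receiving, time_to_receive):
    # Rule 1: a receiving node extends the current slot until the packet is fully received.
    if receiving and time_to_receive > slot_length:
        return time_to_receive
    return slot_length

def choose_transmit_offset(slot_length, time_to_transmit):
    # Rule 2: a transmitting node whose slot is longer than the packet transmission
    # time also chooses when, inside the slot, to start transmitting.
    latest_start = max(0.0, slot_length - time_to_transmit)
    return random.uniform(0.0, latest_start)

For example, with an 8 ms slot and a 2 ms packet, the transmitter may start transmitting anywhere within the first 6 ms of the slot.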

A sleep/wake scheduling algorithm is an effective mechanism to prolong the
lifetime of the network. However, a sleep/wake scheduling protocol can result in
substantial delays, because a transmitting node needs to wait for its next-hop relay
node to wake up. These delays can be reduced by developing “Anycast”-based packet
forwarding schemes, where each node opportunistically forwards a packet to the
first neighboring node that wakes up among multiple candidate nodes.

Sleep scheduling is used to increase the battery lifetime of a sensor node. Sleep
scheduling is applied to nodes with low remaining power: when an event happens,
such a node moves from the passive state to the active state, and otherwise it should
remain in the passive state for as long as possible so that it can save energy.

3.2.1 Advantages of Proposed System

 The proposed approach provides a new way to study sleep/wake-up


scheduling in WSNs.
 Maximizes network lifetime.
 Improves packet delivery ratio.

3.3 Wake Scheduling

Wake-up scheduling is a challenging problem in wireless sensor networks. It


was recently shown that a promising approach for solving this problem is to rely
on reinforcement learning (RL). The RL approach is particularly attractive since it
allows the sensor nodes to coordinate through local interactions alone, without the
need of a central mediator or any form of explicit coordination. This article extends
previous work by experimentally studying the behavior of RL wake-up scheduling
on a set of three different network topologies, namely line, mesh and grid
topologies. The experiments are run using OMNET++, a state-of-the-art
network simulator. The obtained results show how simple and computationally
bounded sensor nodes are able to coordinate their wake-up cycles in a distributed
way in order to improve the global system performance. The main insight of these
experiments is to show that sensor nodes learn to synchronize if they have to
cooperate for forwarding data, and learn to desynchronize in order to avoid
interferences. This synchronization/desynchronization behavior, referred to for
short as (de)synchronicity, makes it possible to improve the message throughput even for
very low duty cycles.
A Wireless Sensor Network is a collection of densely deployed autonomous
devices, called sensor nodes, which gather data with the help of sensors. The
untethered nodes use radio communication to transmit sensor measurements to a
terminal node, called the sink. The sink is the access point of the observer, who is
able to process the distributed measurements and obtain useful information about
the monitored environment. Sensor nodes communicate over a wireless medium,
by using a multi-hop communication protocol that allows data packets to be
forwarded by neighboring nodes to the sink. A typical multi-hop communication
protocol is to rely on a shortest path tree with respect to the hop distance. Such a
tree is obtained by letting nodes broadcast packets after deployment in order
to identify their neighbors. The nodes then determine the neighbor node which is the
closest (in terms of hops) to the sink, and use it as the relaying node for the multi-
hop routing. This yields an example of multi-hop shortest-path routing, in which
forwarding takes place within the radio communication ranges of the sensor
nodes. Since communication is the most energy
expensive action, it is clear that in order to save energy, a node should turn off its
antenna (or go to sleep). However, when sleeping, the node is not able to send or
receive any messages, therefore it increases the latency of the network, i.e., the
time it takes for messages to reach the sink. High latency is undesirable in any real-
time application. On the other hand, a node does not need to listen to the channel
when no messages are being sent, since doing so wastes energy in vain. As a result, nodes
should determine on their own when they should be awake within a frame. This
behavior is called wake-up scheduling. Once a node wakes up, it remains active for
a predefined amount of time, called duty cycle.

Wake-up scheduling in wireless sensor networks is an active research


domain. A good survey on wake-up strategies in WSNs is available in the literature. The standard
approach is S-MAC, a synchronized medium access control (MAC) protocol for
WSN. In S-MAC, the duty-cycle is fixed by the user, and all sensor nodes
synchronize in such a way that their active periods take place at the same time.
This synchronized active period enables neighboring nodes to communicate with
one another. The use of routing then allows any pair of nodes to exchange
messages. By tuning the duty-cycle, wake-up scheduling therefore makes it possible to adapt
the use of sensor resources to the application requirements in terms of latency, data
rate and lifetime. Recently, we showed that the wake-up scheduling problem could
be efficiently tackled in the framework of multi-agent systems and reinforcement
learning. In wireless sensor networks, the sensor nodes can be seen as agents,
which have to logically self-organize in groups (or coalitions). The actions of
agents within a group need to be synchronized (e.g., for data forwarding), while at
the same time being desynchronized with the actions of agents in other groups
(e.g., to avoid radio interferences). We refer to this concept for short as
(de)synchronicity. Coordinating the actions of agents (i.e., sensor nodes) can
successfully be done using the reinforcement learning framework by rewarding
successful interactions (e.g., transmission of a message in a sensor network) and
penalizing the ones with a negative outcome (e.g., overhearing or packet
collisions). This behavior drives the nodes to repeat actions that result in positive
feedback more often and to decrease the probability of unsuccessful interactions.
Coalitions are formed when agents select the same successful actions. A key
feature of our approach is that no explicit notion of coalition is necessary. Rather,
these coalitions emerge from the global objective of the system, and agents learn
by themselves with whom they have to (de)synchronize (e.g. to maximize
throughput in a routing problem). Here desynchronization refers to the situation
where one agent’s actions (e.g. waking up the radio transmitter of a wireless node)
are shifted in time, relative to another, such that the (same) actions of both agents
do not happen at the same time. In this article, we extend our previous results by
illustrating the benefits of our self-adapting RL approach in three wireless sensor
networks of different topologies, namely line, mesh and grid. We show that nodes
form coalitions which reduce packet collisions and end-to-end latency,
even for very low duty cycles. This (de)synchronicity is achieved in a
decentralized manner, without any explicit communication, and without any prior
knowledge of the environment. Our simulations are implemented using OMNET++,
a state-of-the-art simulator. The remainder of this section presents the
reinforcement learning approach for solving the wake-up scheduling problem in
WSNs.

Communication in WSNs is achieved by means of networking protocols,


and in particular by means of the Medium Access Control (MAC) and the routing
protocols. The MAC protocol is the data communication protocol concerned with
sharing the wireless transmission medium among the network nodes. The routing
protocol determines where sensor nodes have to transmit their data so that
they eventually reach the sink. A vast amount of literature exists on these two
topics, and we sketch in the following the key requirements for the MAC and
routing protocols so that our reinforcement learning mechanism presented in
Section 2.2 can be implemented. We emphasize that these requirements are very
loose. We use a simple MAC protocol, inspired from S-MAC, that divides the time
into small discrete units, called frames. We further divide each frame into time
slots. The frame and slot duration are application dependent and in our case they
are fixed by the user prior to network deployment. The sensor nodes then rely on a
standard duty cycle mechanism, in which the node is awake for a predetermined
number of slots during each period. The duration of the awake period is fixed by
the user, while its position is initialized randomly within the frame for each node.
These active slots will be shifted as a result of the learning, which will coordinate
nodes’ wake-up schedules in order to ensure high data throughput and longer
battery life. Each node will learn to be in active mode when its parents and
children are awake, so that it forwards messages faster (synchronization), and stay
asleep when neighboring nodes on the same hop are communicating, so that it
avoids collisions and overhearing (desynchronization). The routing protocol is not
explicitly part of the learning algorithm and therefore any multi-hop routing
scheme can be applied without losing the properties of our approach. The
forwarding nodes need not be explicitly known, as long as they ensure that their
distance to the sink is lower than the sender. Communication is done using a
Carrier Sense Multiple Access (CSMA) protocol. Successful data reception is
acknowledged with an ACK packet. We would like to note that the
acknowledgment packet is necessary for the proper and reliable forwarding of
messages. Our algorithm does use this packet to indicate a “correct reception” in
order to formulate one of its reward signals. However, this signal is not crucial for
the RL algorithm and thus the latter can easily function without acknowledgment
packets. The use of reward signals is elaborated further below. It is noteworthy that
the communication partners of a node (and thus the formation of coalitions) are
influenced by the communication and routing protocols that are in use and not by
our algorithm itself. These protocols only implicitly determine the direction of the
message flow and not who will forward those messages, since nodes should find
out the latter by themselves. Depending on the routing protocol, coalitions (e.g.,
synchronized groups of nodes) logically emerge across the different hops, such that
there is, if possible, only one agent from a certain hop within a coalition. This
concept can be illustrated in the three different topologies, showing as an example
how coalitions form as a result of the routing protocol. Intuitively, nodes from one coalition need to
synchronize their wake-up schedules. As defined by the routing protocol messages
are not sent between nodes from the same hop, hence these nodes should
desynchronize (or belong to separate coalitions) to avoid communication
interference.

Each agent in the WSN uses a reinforcement learning (RL) algorithm to


learn an efficient wake-up schedule (i.e. when to remain active within the frame)
that will improve throughput and lifetime in a distributed manner. It is clear that
learning in multi-agent systems of this type requires careful exploration in order to
make the action-values of agents converge. We use a value iteration approach
similar to single-state Q-learning with an implicit exploration strategy. However,
our update scheme differs from that of traditional Q-learning. The battery power
required to run the algorithm is marginal compared to the communication costs and thus it is
neglected. The main challenge in such a decentralized approach is to define a
suitable reward function for the individual agents that will lead to an effective
emergent behavior as a group.

The actions of each agent are restricted to selecting a time window (or a
wake period) within a frame for staying awake. Since the size of these frames
remains unchanged and they constantly repeat throughout the network lifetime, our
agents use no notion of states, i.e. we say that our learning system is stateless (or
single-state). The duration of this wake period is defined by the duty cycle, fixed
by the user of the system. In other words, each node selects a slot within the frame
when its radio will be switched on for the duration of the duty cycle. Thus, the size
of the action space of each agent is determined by the number of slots within a
frame. In general, the more actions agents have, the slower the reinforcement
learning algorithm will converge. On the other hand, a small action space might
lead to suboptimal solutions and will impose an energy burden on the system.
Setting the right number of time slots within a frame requires a study in itself, which
we shall not undertake in this paper due to space restrictions. Every node stores a
“quality value” (or Q-value) for each slot within its frame. This value for each slot
indicates how beneficial it is for the node to stay awake during these slots for every
frame, i.e. what is an efficient wake-up pattern, given its duty cycle and
considering its communication history. When a communication event occurs at a
node (overheard, sent or received a packet) or if no event occurred during the wake
period (idle listening), that node updates the quality-value of the slot(s) when this
event happened.
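As a minimal sketch of this per-slot bookkeeping, and assuming a simple moving-average update with illustrative reward values (none of the constants below are taken from the paper), a node could maintain its slot qualities as follows.

NUM_SLOTS = 10           # slots per frame (assumed value)
ALPHA = 0.1              # learning rate (assumed value)

# hypothetical rewards for the possible communication events in a slot
REWARDS = {"received": 1.0, "sent": 1.0, "overheard": -0.5, "idle": -0.2}

q_values = [0.0] * NUM_SLOTS   # one quality value per slot of the frame

def update_slot_quality(slot, event):
    # Move the Q-value of the slot toward the reward of the observed event.
    q_values[slot] += ALPHA * (REWARDS[event] - q_values[slot])

def best_wake_slot():
    # Choose the slot with the highest learned quality as the next wake-up position.
    return max(range(NUM_SLOTS), key=lambda s: q_values[s])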

Wake-up scheduling moves a node into active mode so that it can transfer data
from one node to another. Wake-up scheduling can be classified into two kinds:
wake-up on demand and wake-up at a specific time. In wake-up on demand, a node
is set to sleep mode by default and moves from the passive state to the active state
only when some event happens or its participation is required. In wake-up at a
specific time, a node cannot stay in any mode for an infinite amount of time; a
mechanism with threshold values is used, and only when a threshold value is
reached does the node move to active mode. The sensor scheduling mechanism can
be accomplished by having sensors send their location information to the BS. The BS
executes the sensor scheduling algorithm and broadcasts the schedule indicating
when each node is active, and every sensor schedules itself for sleep/active
intervals. Nodes thus learn the sleep schedule of each of their neighbor nodes and
wake up to transmit only when they know their destination node is awake. An
Anycast packet-forwarding scheme has also been proposed, where each node has
multiple next-hop relaying nodes in a candidate set referred to as the forwarding
set. Thus, when a node has data to send, it does not need to wait for one specific
next-hop neighbor to wake up; rather, it forwards the packet to the first node that
wakes up in the forwarding set. This reduces the expected one-hop delay.
Their approach works very well if packets are delivered in the designated
direction, but it is not efficient when packets are delivered in other directions.

The contributions of this paper are summarized as follows.

1) To the best of our knowledge, this approach is the first one which
does not use the technique of duty cycling. Thus the tradeoff between energy
saving and packet delivery delay, which is incurred by duty cycling, can be
avoided. This approach can reduce both energy consumption and packet delivery
delay.

2) This approach can also achieve higher packet delivery ratios in


various circumstances compared to the benchmark approaches.

3) Unlike recent prediction-based approaches, where nodes have to


exchange information between each other, this approach enables nodes to
approximate their neighbors’ situation without requesting information from these
neighbors. Thus, the large amount of energy used for information exchange can be
saved.

Interaction between two neighboring nodes is modeled as a two-player,


three-action game, where two players, a row player and a column player,
represent two neighboring nodes and three actions mean transmit, listen, and sleep.
The three terms, player, node, and sensor, are used interchangeably in this paper.
Game theory is a mathematical technique which can be used to deal with
multiplayer decision making problems. During the decision-making process, there
may be conflict or cooperation among the multiple players. Such conflict or
cooperation can be easily modeled by game theory via properly setting the payoff
matrices and utility functions. In WSNs, there are conflict and cooperation among
sensors during many processes, such as packet routing and sleep/wake-up
scheduling. Thus, in this paper, game theory is used to deal with the sleep/wake-up
scheduling problem among sensors in WSNs. The game is defined by a pair of
payoff matrices

$$R = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix} \quad \text{and} \quad C = \begin{pmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \\ c_{31} & c_{32} & c_{33} \end{pmatrix}$$

where R and C specify the payoffs for the row player and the column player,
respectively. Each of the two players selects an action from the three available
actions. The joint action of the players determines their payoffs according to their
payoff matrices. If the row player and the column player select actions i and j,
respectively, the row player receives payoff rij and the column player obtains
payoff cij. The players can select actions stochastically based on a probability
distribution over their available actions. Let α1–α3 denote the probability for the
row player to choose actions 1–3, respectively, where α1+ α2 + α3 = 1. Let β1–β3
denote the probability for the column player to choose actions 1–3, respectively,
where β1 + β2 + β3 = 1. The row player’s expected payoff is


$$P_r = \sum_{1 \le i \le 3} \sum_{1 \le j \le 3} r_{ij}\,\alpha_i\,\beta_j$$

Let actions 1–3 denote transmit, listen, and sleep, respectively. The values of
those payoffs in the payoff matrices can be defined by the energy used by a node
(which is a negative payoff). In addition, if a packet is successfully transmitted, the
payoff of the transmitter/receiver is the energy, used to transmit/receive the packet,
plus a positive constant, U, say U = 98. Constant U is added to the energy
consumption, if and only if a packet is successfully transmitted. The payoff for
action sleep is −0.003 (the energy consumed during sleeping period) irrespective of
the opponent’s action, where the negative sign means that the energy is consumed.
The value of the constant U is larger than the energy used for transmitting or
receiving a packet. For example, if the row player has a packet to transmit and it
selects transmit and the column player selects listen, the packet can be successfully
transmitted. The payoffs for both players are positive, which can be calculated
using the energy they use to transmit/receive the packet plus the constant U. Then,
the row player gets payoff −81 + 98 = 17 and the column player obtains payoff −30
+ 98 = 68, where 81 and 30 are energy consumption for transmitting and receiving
a packet, respectively, and the negative sign means that the energy is consumed.
However, if the column player selects sleep, the packet cannot be successfully
transmitted. Then, the row player gets payoff −81 (the energy used for transmitting
a packet) and the column player gets payoff −0.003 (the energy used for sleeping).
It should be noted that if a node does not have a packet to transmit, it will not
select transmit.
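To make the payoff bookkeeping concrete, the following Python sketch evaluates the row player's expected payoff for the numbers used above (transmit cost 81, receive cost 30, sleep cost 0.003, bonus U = 98); the entries for joint actions that the text does not spell out (e.g., the idle-listening cost) are illustrative assumptions rather than values fixed by the approach.

# Actions are indexed as 0 = transmit, 1 = listen, 2 = sleep.
# Transmit succeeds only if the other node listens: -81 + 98 = 17 for the sender,
# -30 + 98 = 68 for the receiver; sleeping always costs 0.003.
# The idle-listening entries (-30) are assumptions for illustration only.
R = [
    [-81.0,   17.0,  -81.0],   # row transmits
    [ 68.0,  -30.0,  -30.0],   # row listens (receives only if the column transmits)
    [-0.003, -0.003, -0.003],  # row sleeps
]

def expected_payoff(payoffs, alpha, beta):
    # P = sum over i, j of payoffs[i][j] * alpha[i] * beta[j]
    return sum(payoffs[i][j] * alpha[i] * beta[j]
               for i in range(3) for j in range(3))

# Example: the row player transmits, listens, and sleeps with probabilities
# (0.5, 0.3, 0.2), while the column player listens with 0.6 and sleeps with 0.4.
print(expected_payoff(R, [0.5, 0.3, 0.2], [0.0, 0.6, 0.4]))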

In a time slot, each node is in one of several states which indicate the status
of its buffer. For example, if a node’s buffer can store three packets, there are four
possible states for the node: s0–s3, which imply that the node has 0–3 packets in its
buffer, respectively. The aim of each node is to find a policy π, mapping states to
actions, that can maximize the node’s long-run payoff. Specifically, for a node, π(s,
a) is a probability, based on which the node selects action a in current state s, and
π(s) is a vector which is a probability distribution over the available actions in
current state s. Thus, policy π is a matrix. For example, a node’s buffer can store
three packets, so the node have four states: s0–s3, as described above. Also, the
node has three actions: transmit, listen, and sleep, denoted as 1–3, respectively.
Hence, the policy of the node is the 4 × 3 matrix
$$\pi = \begin{pmatrix} \pi(s_0,1) & \pi(s_0,2) & \pi(s_0,3) \\ \pi(s_1,1) & \pi(s_1,2) & \pi(s_1,3) \\ \pi(s_2,1) & \pi(s_2,2) & \pi(s_2,3) \\ \pi(s_3,1) & \pi(s_3,2) & \pi(s_3,3) \end{pmatrix}$$
Here, the terms π(s, a) and α, β denote the probability of selecting an action.
π(s, a) takes states into consideration while α and β do not do so. α and β are used
only for description convenience of the model and the algorithms.
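As a small illustration of this representation, and assuming four buffer states and three actions as above, the policy can be stored as a table whose rows are initialized with the uniform distribution (1/3 per action); the sketch below is illustrative, not code from the paper.

STATES = ["s0", "s1", "s2", "s3"]          # number of packets in the buffer
ACTIONS = ["transmit", "listen", "sleep"]

# pi[s][a] is the probability of selecting action a in state s;
# each row starts as a uniform distribution (1/3 per action).
pi = {s: {a: 1.0 / len(ACTIONS) for a in ACTIONS} for s in STATES}

assert all(abs(sum(row.values()) - 1.0) < 1e-9 for row in pi.values())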

3.4 ANN-Based Self-Adaptive Sleep/Wake-up Algorithm

Based on the proposed model, we present a reinforcement learning


algorithm, which is employed by a player to learn its optimal actions through trial-
and-error interactions within a dynamic environment. The algorithm is called Q-
learning. Q-learning is one of the simplest reinforcement learning algorithms. Both
reinforcement learning and evolutionary computation are subfields of machine
learning. Reinforcement learning aims to solve sequential decision tasks through
trial and error interactions with the environment. In a sequential decision task, a
participant interacts with a dynamic system by selecting actions that affect state
transitions to optimize some reward function. Evolutionary algorithms are global
search techniques derived from Darwin’s theory of evolution by natural selection.
An evolutionary algorithm iteratively updates a population of potential solutions,
which are often encoded in structures called chromosomes.

Thus, the major difference between reinforcement learning and evolutionary


algorithms is that reinforcement learning is used by participants to maximize their
individual rewards while evolutionary algorithms are used to achieve global
optimization. Moreover, reinforcement learning algorithms are mainly
decentralized and participants need only local information, whereas evolutionary
algorithms are primarily centralized or require global information. In this paper,
WSNs are distributed environments and each sensor has only local information
about itself, so reinforcement learning is more suitable than evolutionary
algorithms to the sleep/wake-up scheduling problem. The benefit of reinforcement
learning is that a player does not need a teacher to learn how to solve a problem.

The only signal used by the player to learn from its actions in dynamic
environments is payoff (also known as reward), a number which tells the player if
its last action was good or not. Q-learning as the simplest reinforcement learning
algorithm is model-free, which means that players using Q-learning can act
optimally in Markovian domains without building overall maps of the domains.
During the learning process, a player takes an action in a particular state based on a
probability distribution over available actions. The higher the probability of an
action is, the more likely the action is to be taken. Then, the player evaluates
the consequence of the action, which the player just takes, based on the immediate
reward or penalty, which it receives by taking the action, and also based on the
estimate of the value of the state in which the action is taken. By trying all actions
in all states repeatedly, the player learns which action is the best choice in a
specific state.

First, a node selects an action based on a probability distribution over the


three actions: transmit, listen, or sleep, in the current state s. Second, the node
carries out the selected action and observes the immediate payoff and the new state
s′. Finally, the node adjusts the probability distribution over the three actions in
state s based on the payoff and the approximation of the interacted neighbor’s
policy. At the beginning of each time slot, Algorithm 1 repeats from line 3, except
for the first time slot where the algorithm starts at line 1. In line 1, a learning rate
determines to what extent the newly acquired information will override the old
information. The value of a learning rate is in the range [0, 1]. A factor of 0 means
that the node does not learn anything, while a factor of 1 means that the node
considers only the most recent information. A discount factor determines the
importance of future rewards. The value of a discount factor is in the range [0, 1].
A factor of 0 means that the node is myopic by only considering current rewards,
while a factor approaching 1 means that the node strives for a long-term high
reward. At the beginning of each time slot, a node has to decide in which mode it
will be in this time slot. The node thus selects an action based on the probability
distribution over its available actions in its current state. The initial probability
distribution can be set equally over available actions. For example, in this paper,
there are three actions. Initially, the probability of selecting each action can be set
to (1/3). Later, during the learning process, the probability distribution over the
actions will be updated based on the consequence of each action. If the selected
action is transmitting, the node needs to decide when to transmit the packet in the
time slot. The node then receives a payoff and reaches a new state. It updates the
Q-value of the selected action in its current state based on the received payoff and
the maximum Q-value in the new state. Here, Q-value, Q(s, a), is a reinforcement
of taking action a in state s. This information is used to reinforce the learning
process.
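A minimal sketch of this Q-value update is given below, assuming a standard single-step tabular Q-learning rule consistent with the description above; the discount factor and the initial learning rate are illustrative assumptions, not values from the paper.

# Minimal tabular Q-learning update with states s0..s3 and actions
# 0 = transmit, 1 = listen, 2 = sleep. GAMMA and LEARNING_RATE are assumed values.
GAMMA = 0.9
LEARNING_RATE = 0.5

NUM_STATES, NUM_ACTIONS = 4, 3
Q = [[0.0] * NUM_ACTIONS for _ in range(NUM_STATES)]   # Q(s, a), initialized arbitrarily

def q_update(s, a, payoff, s_next):
    # Q(s,a) <- (1 - lr) * Q(s,a) + lr * (payoff + gamma * max_a' Q(s',a'))
    best_next = max(Q[s_next])
    Q[s][a] = (1 - LEARNING_RATE) * Q[s][a] + LEARNING_RATE * (payoff + GAMMA * best_next)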

The formula is a value iteration update. Initially, Q-value is given arbitrarily


by the designer. This neighbor is the one that has interacted with the node, i.e.,
transmitted a packet to the node or received a packet from the node, in the current
time slot. Then, based on the approximation, the node updates its policy π(s, a) for
each available action. If the selected action is sleep, which means that the node
does not interact with another node in the current time slot, the node then updates
its policy π(s, a) for each available action based only on its average payoff. In line
12, the calculation of average payoff is based on the Q-value of an action times the
probability of selecting the action. Certainly, average payoff can also be calculated
using the sum of the payoffs of all actions divided by the total number of actions.
The former calculation method, however, is more efficient and is more widely used
than the latter one.

The probability of selecting each action is updated. The update of the


probability of selecting an action is derived using the current probability of
selecting the action plus the difference between the Q-value of the action and the
average payoff. If the Q-value of an action is larger than the average payoff, the
probability of selecting the action will be increased; otherwise, the probability will
be decreased. In line 15, the probability distribution π(s) is normalized to be a valid
distribution, where a∈A π(s, a) = 1 and each π(s, a) is within the range (0, 1). The
learning rate is decayed to guarantee the convergence of the algorithm as shown in
Theorem 3 in the supplementary material. The decay method is not unique.
Actually, any progressive decay methods can be used here.
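A minimal sketch of this policy update is given below, assuming the adjustment step is proportional to the gap between an action's Q-value and the average payoff and that the learning rate decays geometrically; the step size and the decay factor are illustrative assumptions, not values from the paper.

def update_policy(pi_s, q_s, step=0.1):
    # Raise the probability of actions whose Q-value exceeds the average payoff,
    # lower the others, then renormalize pi(s) to a valid distribution.
    avg_payoff = sum(p * q for p, q in zip(pi_s, q_s))   # expected payoff under pi(s)
    pi_s = [p + step * (q - avg_payoff) for p, q in zip(pi_s, q_s)]
    pi_s = [min(max(p, 1e-6), 1.0) for p in pi_s]        # keep each entry inside (0, 1)
    total = sum(pi_s)
    return [p / total for p in pi_s]                     # so that sum_a pi(s, a) = 1

def decay_learning_rate(lr, decay=0.995):
    # Progressively decay the learning rate to help the algorithm converge.
    return lr * decay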

Each node generates a packet at the beginning of each time slot based on a
predefined probability: the packet generation probability. As the state of a node is
determined by the number of packets in its buffer, the packet generation
probability directly affects the state of each node. Then, the action selection of
each node will be indirectly affected. The expiry time of a packet is based on
exponential distribution. The average size of a packet is 100 bytes, and the actual
size of a packet is based on normal distribution with variance equal to 10. In this
simulation, four packet generation probabilities are used: 0.2, 0.4, 0.6, and 0.8.
This setting is to evaluate the performance of these approaches in a network with
different number of transmitted packets. For packet routing, we use a basic routing
approach, gossiping. Gossiping is a slightly enhanced version of flooding where
the receiving node sends the packet to a randomly selected neighbour, which picks
another random neighbour to forward the packet to and so on, until the destination
or the maximum hop is reached. It should be noted that when the destination and
some other nodes are all in the signal range of the source, based on the routing
protocol, the source still relays a packet to one of its neighbors and this process
continues until the destination or the maximum hop is reached. The routing process
is not optimized in the simulation, as this paper focuses on sleep/wake-up
scheduling only. This routing protocol is not energy-efficient but it is easy to
implement. Because all of the sleep/wake-up scheduling approaches use the same
routing protocol in the simulation, the comparison among them is still fair.
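The following Python sketch illustrates the gossiping rule described above, assuming the topology is given as an adjacency list; the function and variable names are hypothetical, and the sketch ignores sleep/wake states, which the full simulation of course takes into account.

import random

def gossip_route(neighbors, source, destination, max_hops):
    # Forward the packet to a randomly chosen neighbor at each hop until the
    # destination is reached or the hop budget is exhausted (gossiping).
    node, path = source, [source]
    for _ in range(max_hops):
        if node == destination:
            return path                        # delivered
        node = random.choice(neighbors[node])  # random neighbor, even if the
        path.append(node)                      # destination itself is in range
    return path if node == destination else None   # None means delivery failed

# Example with a hypothetical 4-node topology given as an adjacency list.
topology = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(gossip_route(topology, source=0, destination=3, max_hops=10))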
Performance is measured by three quantitative metrics:

1) average packet delivery latency;

2) packet delivery ratio; and

3) average energy consumption.

The minimum time needed by nodes to transmit or receive a packet is about


2 ms, e.g., using the radio chip Chipcon CC2420. The three metrics are described
as follows.

1) Packet delivery latency is measured by the average time taken by each


delivered packet to be transmitted from the source to the destination. Note that
those packets, which do not reach the destination successfully, have also been
taken into account. Their delivery latency is the time interval, during which they
exist in the network.

2) Packet delivery ratio is measured by using the percentage of packets that


are successfully delivered from the source to the destination. Each packet comes
with a parameter, time-to-live (TTL), which is a positive integer. Once a packet is
transmitted from a sender to a receiver (no matter whether successfully or
unsuccessfully), the TTL of this packet subtracts 1. If the TTL of this packet
becomes 0 and it has not reached the destination, the delivery of this packet is a
failure.

3) Average energy consumption is calculated by dividing the total energy


consumption by the number of nodes in the network during a simulation run.
In this simulation, we set up the evaluation cases ourselves. The approaches used
for comparison in the simulation are from four different references which use
different evaluation cases. Thus, there are no common evaluation cases among
these references. Because all the evaluated approaches are tested in the same cases,
the comparison is still fair and the results are convincing. For the compared
approaches, the duty cycle is set to 5%. A full sleep/wake-up interval is set to 1 s.
As our approach does not use the duty cycle, the time length of a time slot in our
approach is set to 8 ms. Because the compared approaches do not use
reinforcement learning, they do not need update and thus do not need learning
rates. In each simulation case, each of these approaches has 200 repeats and each
repeat consists of 5000 s. The detailed simulation result data are given in the
supplementary material.
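For clarity, the three metrics could be computed from per-packet records roughly as sketched below; the record fields ('latency', 'delivered') are assumed names used only for illustration.

def compute_metrics(packets, total_energy, num_nodes):
    # packets: list of dicts with hypothetical fields 'latency' (seconds the packet
    # existed in the network) and 'delivered' (True/False).
    avg_latency = sum(p["latency"] for p in packets) / len(packets)        # metric 1
    delivery_ratio = sum(p["delivered"] for p in packets) / len(packets)   # metric 2
    avg_energy = total_energy / num_nodes                                  # metric 3
    return avg_latency, delivery_ratio, avg_energy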

For i in 1 to N
    Listen for a packet from the active SN
    If an AN is available
        For k in 1 to size of queue
            Start communication with the AN
            Compute the transmission history
            Update the queue value
        End for
    End if
    Get the transmission results
    For j in 1 to size of listen state
        Apply the transfer function of each neuron
        Compute the next schedule
    End for
End for
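One possible reading of this pseudocode is sketched below in Python, assuming a single layer of neurons that maps recent transmission history to a score per candidate wake-up slot; every name here (history, weights, bias) is hypothetical and the sketch is not taken from the paper.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def next_schedule(history, weights, bias):
    # Feed the recent transmission history (e.g., per-slot success/failure counts)
    # through one neuron per candidate slot and wake up in the highest-scoring slot.
    scores = [sigmoid(sum(w * h for w, h in zip(row, history)) + b)
              for row, b in zip(weights, bias)]
    return scores.index(max(scores))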

3.5 Expected Outcome

• This paper introduces a self-adaptive sleep/wake-up scheduling approach that
also does not use the technique of duty cycling.

• We expect that the performance improvement of the proposed approach,
compared with existing approaches, may not be large, but the proposed approach
provides a new way to study sleep/wake-up scheduling in WSNs.

3.6 Block Diagram

A large number of practical sensing and actuating applications require


immediate notification of rare but urgent events and also fast delivery of time
sensitive actuation commands. In this, we consider the design of efficient wakeup
scheduling schemes for energy constrained sensor nodes that adhere to the
bidirectional end-to-end delay constraints posed by such applications. We evaluate
several existing scheduling schemes and propose novel scheduling methods that
outperform existing ones. We also present a new family of wakeup methods, called
multi-parent schemes, which take a cross-layer approach where multiple routes for
transfer of messages and wakeup schedules for various nodes are crafted in
synergy to increase longevity while reducing message delivery latencies. We
analyze the power-delay and lifetime-latency tradeoffs for several wakeup methods
and show that our proposed techniques significantly improve the performance and
allow for much longer network lifetime while satisfying the latency constraints.

A packet scheduling scheme is proposed which aims at scheduling different


types of data packets, such as real time and non-real-time data packets at sensor
nodes with resource constraints in Wireless Sensor Networks. Most of the existing
packet-scheduling mechanisms of Wireless Sensor Networks use First Come First
Served (FCFS), non-preemptive priority and preemptive priority scheduling
algorithms. These algorithms result in long end-to-end data transmission delay,
high energy consumption, deprivation of high-priority real-time data packets, and
improper allocation of data packets to queues. Moreover, these
algorithms are not dynamic to the changing requirements of Wireless Sensor
Network applications since their scheduling policies are predetermined. In this
paper, each node has three levels of priority queues. Real-time packets are placed
into the highest-priority queue and can preempt data packets in other queues. Non-
real-time packets are placed into two other queues based on a certain threshold of
their estimated processing time. Leaf nodes have two queues for real-time and non-
real-time data packets since they do not receive data from other nodes and thus,
reduce end-to-end delay. The priority packet scheduling scheme outperforms
conventional schemes in terms of average data waiting time and end-to-end delay
and also reduces sensor energy consumption.
Fig 3.3 Block Diagram

3.7 Architecture
Fig 3.4 Architecture

CHAPTER 4

RESULT AND DISCUSSION


CHAPTER 5

SYSTEM SPECIFICATION
5.1 Hardware Requirement
SYSTEM : Core 2 DUO 2.4 GHz.

HARD DISK : 80 GB.

MONITOR : 15 VGA Color.

MOUSE : Logitech.
RAM : 2GB.

5.2 Software Requirement


OPERATING SYSTEM : Windows - 7.

FRAMEWORK : NS2

5.2.1 NS2

NS2 stands for Network Simulator Version 2. It is an open-source event-


driven simulator designed specifically for research in computer communication
networks. NS2 is an open-source simulation tool that runs on Linux. It is a discrete
event simulator targeted at networking research and provides substantial support
for simulation of routing, multicast protocols and IP protocols, such as UDP, TCP,
RTP and SRM over wired and wireless (local and satellite) networks. ns or the
network simulator (also popularly called ns-2, in reference to its current
generation) is a discrete event network simulator. It is popular in academia for its
extensibility (due to its open source model) and plentiful online documentation. ns
is popularly used in the simulation of routing and multicast protocols, among
others, and is heavily used in ad-hoc networking research. ns supports an array of
popular network protocols, offering simulation results for wired and wireless
networks alike. It can be also used as limited-functionality network emulator. ns is
licensed for use under version 2 of the GNU General Public License.

NS was built in C++ and provides a simulation interface through OTcl, an


object oriented dialect of Tcl. The user describes a network topology by writing
OTcl scripts, and then the main ns program simulates that topology with specified
parameters. NS2 makes use of a flat-earth model, in which it assumes that the
environment is flat without any elevations or depressions. However, the real world
does have geographical features such as valleys and mountains, which NS2 fails to
capture in its model.

Many researchers have proposed the addition of new models to NS2.


Shadowing Model in NS2 attempts to capture the shadow effect of signals in real
life, but does that inaccurately. NS2's shadowing model does not consider
correlations: a real shadowing effect has strong correlations between two locations
that are close to each other. Shadow fading should be modeled as a two
dimensional log-normal random process with exponentially decaying spatial
correlations.
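As an illustration (not NS2 code), spatially correlated log-normal shadowing for a pair of locations could be generated as sketched below, where the dB standard deviation and the decorrelation distance are assumed values.

import math, random

def correlated_shadowing(distance, sigma_db=4.0, decorrelation_dist=20.0):
    # Draw shadowing values (in dB) for two locations whose correlation decays
    # exponentially with their separation: rho = exp(-distance / decorrelation_dist).
    # A log-normal process corresponds to a Gaussian process in the dB domain.
    rho = math.exp(-distance / decorrelation_dist)
    s1 = random.gauss(0.0, sigma_db)
    s2 = rho * s1 + math.sqrt(1.0 - rho ** 2) * random.gauss(0.0, sigma_db)
    return s1, s2   # nearby locations receive strongly correlated values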

Features of NS2

 It is a discrete event simulator for networking research.


 It provides substantial support to simulate a range of protocols such as TCP, FTP,
UDP, HTTP and DSR.
 It simulates wired and wireless networks.
 It is primarily Unix based.
 Uses Tcl as its scripting language.
 OTcl: object-oriented support
 TclCL: C++ and OTcl linkage
 Discrete event scheduler

Basic Architecture

NS2 consists of two key languages: C++ and Object-oriented Tool


Command Language (OTcl). While the C++ defines the internal mechanism (i.e., a
backend) of the simulation objects, the OTcl sets up simulation by assembling and
configuring the objects as well as scheduling discrete events. The C++ and the
OTcl are linked together using TclCL.
Fig 5.1 Basic Architecture of NS

NS2 uses OTcl to create and configure a network, and uses C++ to run simulation.
All C++ code needs to be compiled and linked to create an executable file.

Use OTcl

- For configuration, setup, or one time simulation, or

- To run simulation with existing NS2 modules.

This option is preferable for most beginners, since it does not involve complicated
internal mechanism of NS2. Unfortunately, existing NS2 modules are fairly
limited. This option is perhaps not sufficient for most researchers.

Use C++

- When you are dealing with a packet, or – when you need to modify
existing NS2 modules.

This option perhaps discourages most of the beginners from using NS2. This book
particularly aims at helping the readers understand the structure of NS2 and feel
more comfortable in modifying NS2 modules.
Installing NS2 on windows 7

NS2 builds and runs under Windows using Cygwin. Cygwin provides a Linux-like
environment under Windows. System requirements: a computer with a C++
compiler. Building the full NS2 package requires a large amount of memory space,
approximately 250 MB.

Fig 5.2 1st step for installation process

1. Download Cygwin from following link https://www.cygwin.com/setup.exe

2. Run the downloaded setup.exe and you will see screen shown below click next.

3. Select option “Install From Internet”. If you have already downloaded the
package select “Install from local directory” and click next

4. Keep the default installation directory as “C:\cygwin” and click next

5. Keep default local package directory as your download folder and click next.
6. The next screen will ask for your Internet connection type; keep it as “Direct
connection” and click next. In the next screen, choose one site to download the
packages and click next.

7. In the next screen, Cygwin will allow you to select the packages you want to install.

8. Uncheck the option “Hide obsolete packages”, then click on the “view” button until
the word “category” changes to “Full”.

9. Once installation is complete create desktop icons if you need.

10. Cygwin installation is complete. Now you can run Cygwin from the desktop and see
its interface.

Fig 5.3 Last step for installation process

Tools for generating TCL Script for NS2

NS2 is a very common and widely used tool to simulate small and large area
networks. Tcl scripts are widely used in the NS-2 simulation tool. Tcl scripts are used
to set up a wired or wireless communication network, and these scripts are then run
via NS-2 to obtain the simulation results.

Several tools are available to design networks and generate Tcl scripts; some of
them are discussed below.

NS2 scenario Generator (NSG):

It is a Java-based tool that can run on any platform and can generate Tcl
scripts for wired and wireless scenarios for NS2. The main features of NSG are:

1. Creating wired and wireless nodes by drag and drop.

2. Creating simplex and duplex links for wired networks.

3. Creating grid, random and chain topologies.

4. Creating TCP and UDP agents; also supports TCP Tahoe, TCP Reno, TCP New-Reno and TCP Vegas.

5. Supports ad hoc routing protocols such as DSDV, AODV, DSR and TORA.

6. Supports FTP and CBR applications.

7. Supports node mobility.

8. Setting the packet size, start time of simulation, end time of simulation, transmission range and interference range in case of wireless networks, etc.

9. Setting other network parameters such as bandwidth, etc., for wireless
scenarios
Visual Network Simulator (VNS):

This tool is centered on capabilities of NSG. It also provides support to


Differentiated Services (DiffServ) scenarios and simple and intuitive set of icons to
represent the components of a network. Some features of VNS are given below:

1. Adding and configuration of links, agents and traffic sources.

2. Modeling network scenarios with support to multicast.

3. Selection of a dynamic routing protocol.

4. Definition of the simulation output as an animation and/or graphics.

5. Editing of the generated Tcl script.

6. Saving the defined simulation scenario.

NS 2 Workbench

Ns Bench makes NS-2 simulation development and analysis faster and


easier for students and researchers without losing the flexibility or expressiveness
gained by writing a script. Some features are:

1. Nodes, simplex/duplex links and LANs.

2. Agents: TCP, UDP, TCPSink, TCP/Fack, TCP/FullTcp, TCP/Newreno, TCP/Reno, TCP/Sack1, TCPSink/Sack1, TCPSink/DelAck, TCPSink/Sack1/DelAck, TCP/Vegas, Null Agent.

3. Applications/Traffic: FTP, Telnet, HTTP/Server, HTTP/Client, HTTP/Cache, webtraf, Traffic/CBR, Traffic/Pareto, Traffic/Exponential.

4. Services: Multicast, Packet Scheduling, RED, Diff-Serv.

5. Creating a "Groups" concept to compensate for "loops".

6. Scenario generator.

7. Link Monitors.

8. Loss Models.

9. Routing Protocols.

Advantages

1. Cheap- Does not require costly equipment

2. Complex scenarios can be easily tested.

3. Results can be quickly obtained – more ideas can be tested in a smaller


time frame.

4. Supported protocols

5. Supported platforms

6. Modularity

7. Popular

GloMoSim

Global Mobile Information System Simulator (GloMoSim) is a network


protocol simulation software that simulates wireless and wired network systems.
GloMoSim is designed using the parallel discrete event simulation capability
provided by Parsec, a parallel programming language. GloMoSim currently
supports protocols for a purely wireless network. It uses the Parsec compiler to
compile the simulation protocols.

In GloMoSim we are building a scalable simulation environment for


wireless and wired network systems. It is being designed using the parallel
discrete-event simulation capability provided by Parsec. GloMoSim currently
supports protocols for a purely wireless network. In the future, we anticipate
adding functionality to simulate a wired as well as a hybrid network with both
wired and wireless capabilities.

Most network systems are currently built using a layered approach that is
similar to the OSI seven layer network architecture. The plan is to build
GloMoSim using a similar layered approach. Standard APIs will be used between
the different simulation layers. This will allow the rapid integration of models
developed at different layers by different people.

This simulator is a simulation of a wireless sensor network. Such a network


is used to detect and report certain events across an expanse of a remote area - e.g.,
a battlefield sensor network that detects and reports troop movements. The idea
behind this network is that it can be deployed simply by scattering sensor units
across the area, e.g. by dropping them out of an airplane; the sensors should
automatically activate, self-configure as a wireless network with a mesh topology,
and determine how to send communications packets toward a data collector (e.g., a
satellite uplink.) Thus, one important feature of such a network is that collected
data packets are always traveling toward the data collector, and the network can
therefore be modeled as a directed graph (and every two connected nodes can be
identified as "upstream" and "downstream.")

A primary challenge of such a network is that all of the sensors operate on a


finite energy supply, in the form of a battery. (These batteries can be rechargeable,
e.g. by embedded solar panels, but the sensors still have a finite maximum power
store.) Any node that loses power drops out of the communications network, and
may end up partitioning the network (severing the communications link from
upstream sensors toward the data collector.) Thus, the maximum useful lifetime of
the network, in the worst case, is the minimum lifetime of any sensor.

NS2, perhaps the most widely used network simulator, has been extended with
some basic facilities for simulating sensor networks. However, one of the
problems of NS2 is its object-oriented design, which introduces much unnecessary
interdependency between modules. Such interdependency sometimes makes adding new
protocol models extremely difficult, a task mastered only by those intimately
familiar with the simulator. Being difficult to extend is not a major problem
for simulators targeted at traditional networks, where the set of popular
protocols is relatively small: Ethernet is widely used for wired LANs, IEEE
802.11 for wireless LANs, and TCP for reliable transmission over unreliable
media. For sensor networks, however, the situation is quite different. There are
no such dominant protocols or algorithms, and there are unlikely ever to be any,
because a sensor network is often tailored to a particular application with
specific features, and no single algorithm is likely to be optimal under all
circumstances.

5.2.2 Platform: Windows

Microsoft Windows is a group of several graphical operating system families,
all of which are developed, marketed, and sold by Microsoft. Each family caters to
a certain sector of the computing industry. Active Windows families include
Windows NT and Windows Embedded; these may encompass subfamilies, e.g.
Windows Embedded Compact (Windows CE) or Windows Server. Defunct
Windows families include Windows 9x, Windows Mobile and Windows Phone.

Microsoft introduced an operating environment named Windows on November
20, 1985, as a graphical operating system shell for MS-DOS in response to the
growing interest in graphical user interfaces (GUIs). Microsoft Windows came to
dominate the world's personal computer (PC) market with over 90% market share,
overtaking Mac OS, which had been introduced in 1984. Apple came to see
Windows as an unfair encroachment on their innovation in GUI development as
implemented on products such as the Lisa and Macintosh (eventually settled in
court in Microsoft's favor in 1993). On PCs, Windows is still the most popular
operating system. However, in 2014, Microsoft admitted losing the majority of the
overall operating system market to Android, because of the massive growth in
sales of Android smartphones. In 2014, the number of Windows devices sold was
less than 25% that of Android devices sold. This comparison however may not be
fully relevant, as the two operating systems traditionally target different platforms.
Still, figures for server use of Windows (which are comparable with competitors)
show about a one-third market share, similar to that for end-user use.

Microsoft, the developer of Windows, has registered several trademarks, each of
which denotes a family of Windows operating systems that targets a specific sector
of the computing industry. As of 2014, the following Windows families are being
actively developed:
Windows NT: Started as a family of operating systems with Windows NT 3.1,
an operating system for server computers and workstations. It now consists of three
operating system subfamilies that are released almost at the same time and share
the same kernel. It is almost impossible for someone unfamiliar with the subject to
identify the members of this family by name because they do not adhere to any
specific rule; e.g. Windows 7 and Windows 8.1 are members of this family but
Windows 3.1 is not.

Windows: The operating system for mainstream personal computers, tablets
and smartphones. The latest version is Windows 10. The main competitor of this
family is macOS by Apple Inc. for personal computers and Android for mobile
devices (c.f. Usage share of operating systems § Market share by category).

Windows Server: The operating system for server computers. The latest version
is Windows Server 2016. Unlike its client sibling, it has adopted a strong naming
scheme. The main competitor of this family is Linux. (c.f. Usage share of operating
systems § Market share by category)

Windows PE: A lightweight version of its Windows sibling meant to operate as
a live operating system, used for installing Windows on bare-metal computers
(especially on many computers at once), recovery or troubleshooting purposes. The
latest version is Windows PE 10.

Windows Embedded: Initially, Microsoft developed Windows CE as a general-
purpose operating system for every device that was too resource-limited to be
called a full-fledged computer. Eventually, however, Windows CE was renamed
Windows Embedded Compact and was folded under the Windows Embedded trademark,
which also consists of Windows Embedded Industry, Windows Embedded
Professional, Windows Embedded Standard, Windows Embedded Handheld and
Windows Embedded Automotive.

The following Windows families are no longer being developed:

Windows 9x: An operating system that targeted the consumer market. It was
discontinued because of suboptimal performance. (PC World called its last
version, Windows ME, one of the worst products of all time.) Microsoft now
caters to the consumer market with Windows NT.

Windows Mobile: The predecessor to Windows Phone, it was a mobile phone
operating system. The first version was called Pocket PC 2000; the third
version, Windows Mobile 2003, was the first to adopt the Windows Mobile
trademark. The last version is Windows Mobile 6.5.

Windows Phone: An operating system sold only to manufacturers of smartphones.
The first version was Windows Phone 7, followed by Windows Phone 8 and, finally,
Windows Phone 8.1. It was succeeded by Windows 10 Mobile.

Multilingual support
Multilingual support is built into Windows. The language for both the keyboard
and the interface can be changed through the Region and Language Control Panel.
Components for all supported input languages, such as Input Method Editors, are
automatically installed during Windows installation (in Windows XP and earlier,
files for East Asian languages, such as Chinese, and right-to-left scripts, such as
Arabic, may need to be installed separately, also from the said Control Panel).
Third-party IMEs may also be installed if a user feels that the provided one is
insufficient for their needs.
Interface languages for the operating system are free for download, but some
languages are limited to certain editions of Windows. Language Interface Packs
(LIPs) are redistributable and may be downloaded from Microsoft's Download
Center and installed for any edition of Windows (XP or later) – they translate most,
but not all, of the Windows interface, and require a certain base language (the
language which Windows originally shipped with). This is used for most languages
in emerging markets. Full Language Packs, which translate the complete
operating system, are only available for specific editions of Windows (Ultimate
and Enterprise editions of Windows Vista and 7, and all editions of Windows 8,
8.1 and RT except Single Language). They do not require a specific base language,
and are commonly used for more popular languages such as French or Chinese.
These languages cannot be downloaded through the Download Center, but are
available as optional updates through the Windows Update service (except
Windows 8).

The interface language of installed applications is not affected by changes in
the Windows interface language. Availability of languages depends on the
application developers themselves.

Windows 8 and Windows Server 2012 introduced a new Language Control
Panel where both the interface and input languages can be simultaneously changed,
and language packs, regardless of type, can be downloaded from a central location.
The PC Settings app in Windows 8.1 and Windows Server 2012 R2 also includes a
counterpart settings page for this. Changing the interface language also changes the
language of preinstalled Windows Store apps (such as Mail, Maps and News) and
certain other Microsoft-developed apps (such as Remote Desktop). The above
limitations for language packs are however still in effect, except that full language
packs can be installed for any edition except Single Language, which caters to
emerging markets.

Security

Consumer versions of Windows were originally designed for ease-of-use on
a single-user PC without a network connection, and did not have security features
built in from the outset. However, Windows NT and its successors are designed for
security (including on a network) and multi-user PCs, but were not initially
designed with Internet security in mind as much, since, when it was first developed
in the early 1990s, Internet use was less prevalent.

These design issues, combined with programming errors (e.g., buffer
overflows) and the popularity of Windows, mean that it is a frequent target of
computer worm and virus writers. In June 2005, Bruce Schneier's Counterpane
Internet Security reported that it had seen over 1,000 new viruses and worms in the
previous six months. In 2005, Kaspersky Lab found around 11,000 malicious
programs—viruses, Trojans, back-doors, and exploits written for Windows.

Microsoft releases security patches through its Windows Update service
approximately once a month (usually the second Tuesday of the month), although
critical updates are made available at shorter intervals when necessary. In versions
of Windows after and including Windows 2000 SP3 and Windows XP, updates can
be automatically downloaded and installed if the user selects to do so. As a result,
Service Pack 2 for Windows XP, as well as Service Pack 1 for Windows Server
2003, were installed by users more quickly than they otherwise might have been.

While the Windows 9x series offered the option of having profiles for
multiple users, they had no concept of access privileges, and did not allow
concurrent access; and so were not true multi-user operating systems. In addition,
they implemented only partial memory protection. They were accordingly widely
criticized for lack of security.

The Windows NT series of operating systems, by contrast, are true multi-
user, and implement absolute memory protection. However, a lot of the advantages
of being a true multi-user operating system were nullified by the fact that, prior to
Windows Vista, the first user account created during the setup process was an
administrator account, which was also the default for new accounts. Though
Windows XP did have limited accounts, the majority of home users did not change
to an account type with fewer rights – partially due to the number of programs
which unnecessarily required administrator rights – and so most home users ran as
administrator all the time.

Windows Vista changes this by introducing a privilege elevation system called
User Account Control. When logging in as a standard user, a logon session is
created and a token containing only the most basic privileges is assigned. In this
way, the new logon session is incapable of making changes that would affect the
entire system. When logging in as a user in the Administrators group, two separate
tokens are assigned. The first token contains all privileges typically awarded to an
administrator, and the second is a restricted token similar to what a standard user
would receive. User applications, including the Windows shell, are then started
with the restricted token, resulting in a reduced privilege environment even under
an Administrator account. When an application requests higher privileges or "Run
as administrator" is clicked, UAC will prompt for confirmation and, if consent is
given (including administrator credentials if the account requesting the elevation is
not a member of the administrators group), start the process using the unrestricted
token.

Windows Defender
On January 6, 2005, Microsoft released a beta version of Microsoft AntiSpyware,
based upon the previously released Giant AntiSpyware. On February 14,
2006, Microsoft AntiSpyware became Windows Defender with the release of Beta
2. Windows Defender is a freeware program designed to protect against spyware
and other unwanted software. Windows XP and Windows Server 2003 users who
have genuine copies of Microsoft Windows can freely download the program from
Microsoft's web site, and Windows Defender ships as part of Windows Vista and
7. In Windows 8, Windows Defender and Microsoft Security Essentials have been
combined into a single program, named Windows Defender. It is based on
Microsoft Security Essentials, borrowing its features and user interface. Although
it is enabled by default, it can be turned off to use another anti-virus solution.
Windows Malicious Software Removal Tool and the optional Microsoft Safety
Scanner are two other free security products offered by Microsoft.

CHAPTER 6

CONCLUSION
This paper introduced a self-adaptive sleep/wake-up scheduling approach.
This approach does not use the technique of duty cycling. Instead, it divides
the time axis into a number of time slots and lets each node autonomously decide
whether to sleep, listen or transmit in each time slot. Each node makes its
decision based on its current situation and an approximation of its neighbors'
situations, where this approximation does not require communication with the
neighbors. Through these techniques, the proposed approach outperforms related
approaches. Most existing approaches are based on the duty cycling technique,
and researchers have devoted much effort to improving their performance; duty
cycling is thus a mature and efficient technique for sleep/wake-up scheduling.
This paper is the first that does not use the duty cycling technique; instead,
it proposes an alternative approach based on game theory and reinforcement
learning. The performance improvement of the proposed approach over existing
approaches may not be large, but the proposed approach provides a new way to
study sleep/wake-up scheduling in WSNs. This paper primarily focuses on
theoretical study, so some assumptions are made. These assumptions are set to
simplify the discussion of our approach; without them, the discussion would
become extremely complex, which would harm the readability of this paper. The
problem addressed in this paper, however, is not itself simplified by these
assumptions.
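
To illustrate the general shape of such a slot-by-slot decision rule, the sketch below uses a simple Q-learning-style update in which the reward trades off energy spent against successful transmissions. It is only a minimal illustration of the reinforcement-learning idea, under assumed rewards and parameters; it is not the exact algorithm evaluated in this paper.

```python
import random

ACTIONS = ("sleep", "listen", "transmit")

class SlotScheduler:
    """Per-node, per-slot decision maker (illustrative only)."""
    def __init__(self, epsilon=0.1, alpha=0.2):
        self.q = {a: 0.0 for a in ACTIONS}   # value estimate per action
        self.epsilon = epsilon               # exploration rate
        self.alpha = alpha                   # learning rate

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(self.q, key=self.q.get)

    def update(self, action, reward):
        # Reward trades off energy spent against packets successfully delivered.
        self.q[action] += self.alpha * (reward - self.q[action])

# One simulated slot: transmitting pays off only if a neighbour is listening.
node = SlotScheduler()
action = node.choose()
neighbour_listening = random.random() < 0.5   # stand-in for the neighbour approximation
reward = {"sleep": 0.1,
          "listen": -0.2,
          "transmit": 1.0 if neighbour_listening else -1.0}[action]
node.update(action, reward)
print(action, node.q)
```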

6.1 Future Work

In our future work, a cross-layer, energy-efficient security mechanism will be
used to protect the network against denial-of-sleep attacks. The cross-layer
interaction between the network, MAC and physical layers will be exploited to
identify intruder nodes and to prevent the sensor nodes from being denied sleep.
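
As a purely illustrative sketch of one possible ingredient of such a mechanism (the threshold, window and message hooks below are assumptions, not a finished design), a node could count MAC-layer wake-up requests per neighbor and flag neighbors whose request rate far exceeds what legitimate traffic would produce:

```python
from collections import defaultdict

WAKEUP_LIMIT_PER_WINDOW = 6   # assumed upper bound for legitimate traffic

class DenialOfSleepMonitor:
    """Counts MAC-layer wake-up requests per neighbour within a time window."""
    def __init__(self):
        self.counts = defaultdict(int)

    def on_wakeup_request(self, neighbour_id):
        self.counts[neighbour_id] += 1

    def suspicious_neighbours(self):
        return [n for n, c in self.counts.items() if c > WAKEUP_LIMIT_PER_WINDOW]

    def reset_window(self):
        self.counts.clear()

monitor = DenialOfSleepMonitor()
for _ in range(10):
    monitor.on_wakeup_request("node_17")   # an intruder hammering the radio
monitor.on_wakeup_request("node_4")
print(monitor.suspicious_neighbours())     # ['node_17']
```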
