A Self-Adaptive Sleep Wake-Up Scheduling
low latency for WSN. The artificial neural network (ANN) was used as a classifier.
This pilot study consisted of three aims: the first aim was to utilize only the
wake-up signal for automatic sleep-wake stage detection; the second was
to investigate which features were the most effective in detecting the sleep-wake
INTRODUCTION
A sensor can receive and transmit packets when it is awake, i.e., in the wake-up
state. A sensor in the wake-up state consumes much more energy than one in the
sleep state.
Sensors adjust the sleeping time length and the awake time length in each
period in order to save energy and meanwhile guarantee the efficient transmission
of packets.
Typically, WSNs contain hundreds or thousands of sensors that can communicate
with each other. The energy of each sensor is limited and its battery is usually
non-rechargeable, so the energy consumption of each sensor has to be minimized
to prolong the lifetime of the WSN. The major sources of energy waste are
idle listening, collision, overhearing, and control overhead. Among these, idle
listening is the dominant factor in most sensor network applications. There are
several ways to prolong the lifetime of WSNs, e.g., efficient deployment of
sensors, optimization of WSN coverage, and sleep/wake-up scheduling.
Broadly speaking, there are two sets of challenges in MWSNs: hardware and
environment. The main hardware constraints are limited battery power and low
cost requirements. The limited power means that the nodes must be energy
efficient. Price limitations often demand low-complexity algorithms for
simpler microcontrollers and the use of only a simplex radio. The major environmental
factors are the shared medium and varying topology. The shared medium dictates
that channel access must be regulated in some way. This is often done using a
medium access control (MAC) scheme, such as carrier sense multiple access
(CSMA), frequency division multiple access (FDMA) or code division multiple
access (CDMA). The varying topology of the network comes from the mobility of
nodes, which means that multihop paths from the sensors to the sink are not stable.
Routing
Protocols designed specifically for MWSNs are almost always multihop and
sometimes adaptations of existing protocols. For example, Angle-based Dynamic
Source Routing (ADSR), is an adaptation of the wireless mesh network protocol
Dynamic Source Routing (DSR) for MWSNs. ADSR uses location information to
work out the angle between the node intending to transmit, potential forwarding
nodes and the sink. This is then used to ensure that packets are always forwarded
towards the sink. Similarly, the Low Energy Adaptive Clustering Hierarchy (LEACH)
protocol for WSNs has been adapted into LEACH-Mobile (LEACH-M) [7] for
MWSNs. The main issue with hierarchical protocols is that mobile nodes are prone
to frequently switching between clusters, which can cause large amounts of
overhead from the nodes having to regularly re-associate themselves with different
cluster heads.
There are three types of medium access control (MAC) techniques: based on
time division, frequency division and code division. Due to the relative ease of
implementation, the most common choice of MAC is time-division-based, closely
related to the popular CSMA/CA MAC. The vast majority of MAC protocols that
have been designed with MWSNs in mind are adapted from existing WSN MACs
and focus on low-power, duty-cycled schemes.
1.3.1 Characteristics
Application layer agents have options for packet size, data rate, data
transmission interval, and the start and stop time of data transmission. The node
mobility model can be created by specifying a target location and speed. Nodes
with different communication ranges can be configured. An energy model can be
created by specifying the initial energy and the transmission, reception, idle, and
sleep power of the nodes. An error model can be created with a random packet loss
rate to simulate network interference and fading.
A dynamic topology can be created using the rand function in a Tool Command
Language (TCL) script with a fixed number of nodes. The nodes can be deployed in
an area of X x Y, and each node is assigned a random location within X x Y
using the rand function. In a dynamic topology, the neighbors of each node vary
with the location of that particular node. The code segment in the sample2.tcl file
demonstrates a dynamic topology in a wireless network with 2 nodes deployed in
an area of 500 m x 400 m.
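The random-placement logic behind sample2.tcl can be sketched as follows. This is an illustrative Python translation of what the TCL rand-based deployment does; the function names, the seed, and the neighbour test are assumptions, not part of the report's scripts.

```python
import random

def deploy_nodes(num_nodes, width, height, seed=None):
    """Assign each node a uniformly random (x, y) position inside the area."""
    rng = random.Random(seed)
    return [(rng.uniform(0, width), rng.uniform(0, height))
            for _ in range(num_nodes)]

def neighbours(positions, node, comm_range):
    """Indices of nodes within communication range of `node` (excluding itself)."""
    x, y = positions[node]
    return [i for i, (nx, ny) in enumerate(positions)
            if i != node and (nx - x) ** 2 + (ny - y) ** 2 <= comm_range ** 2]

# Two nodes in a 500 m x 400 m area, as in sample2.tcl.
pos = deploy_nodes(2, 500, 400, seed=1)
```

Because positions are random, the neighbour lists change on every deployment, which is exactly why the topology is called dynamic.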
A dynamic wireless network in ns2 can also be modeled using the rand function in
a Tool Command Language (TCL) script. The number of nodes in the network
varies dynamically during the runtime, and each node is allocated a dynamic
location. The sample3.tcl file shows a dynamic network whose number of nodes is
specified during execution, with the nodes deployed in an area of 500 m x 500 m.
The energy model represents the energy level of the nodes in the network. The
energy model defined in a node has an initial value, the level of energy the node
has at the beginning of the simulation, termed initialEnergy_. In the simulation,
the variable "energy" represents the energy level of a node at any specified time.
The value of initialEnergy_ is passed as an input argument. A node loses a
particular amount of energy for every packet transmitted and every packet
received, so its energy level decreases over the simulation. The energy consumed
by a node at any time of the simulation can be determined as the difference
between the initialEnergy_ value and the current energy value. If the energy level
of a node reaches zero, it cannot receive or transmit any more packets. The amount
of energy consumed by a node can be printed in the trace file. The energy level of
the network can be determined by summing the energy levels of all nodes in the
network.
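The bookkeeping described above can be sketched in a few lines. This is a minimal Python illustration of the accounting, not the ns-2 API; the class and method names are invented for clarity.

```python
class EnergyModel:
    """Tracks a node's residual energy, mirroring ns-2's initialEnergy_ idea."""
    def __init__(self, initial_energy, tx_cost, rx_cost):
        self.initial_energy = initial_energy   # energy at simulation start
        self.energy = initial_energy           # current residual energy
        self.tx_cost = tx_cost                 # energy per packet transmitted
        self.rx_cost = rx_cost                 # energy per packet received

    def transmit(self):
        if self.alive():
            self.energy -= self.tx_cost

    def receive(self):
        if self.alive():
            self.energy -= self.rx_cost

    def consumed(self):
        """Energy used so far = initial energy minus current residual energy."""
        return self.initial_energy - self.energy

    def alive(self):
        """A node whose energy reaches zero can no longer transmit or receive."""
        return self.energy > 0

def network_energy(nodes):
    """Total residual energy of the network: sum over all nodes."""
    return sum(n.energy for n in nodes)
```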
Sleep/wake scheduling has been used to extend the network lifetime. Energy
efficiency has an inherent tradeoff with delay; thus, in such sleep/wake
scheduling strategies, maximization of the network lifetime is generally achieved
at the expense of increased delay. In many delay-sensitive applications where a
real-time response is required, such delays cannot be tolerated. Generally, WSNs
operate for a long time in idle mode and only occasionally send data. The energy
consumption of listening to an idle channel is comparable to the energy
consumption of sending or receiving, and much larger than the energy consumption
of the sleep mode. To receive data, the receiver must be in a high-power state, for
example the active/listen state, whereas in the sleep state the radio is in a
low-power mode with the receiving circuitry switched off. If the receiver operates
at a 100% duty cycle, that is, its transceiver is always on, then it can receive
the data, but at the cost of high energy consumption. To reduce power consumption,
low duty cycle operation is required. Sleep/wake scheduling techniques exploit
this fact and attempt to reduce the energy wasted in idle mode by designing low
duty cycle operations. A variety of sleep/wake scheduling protocols has been
proposed. Most of them use a periodic sleep/wake interval and provide effective
energy conservation at the cost of delay and throughput. For example, for a source
node to transmit data, it has to know the sleep/wake-up schedule of its neighbor
node and has to wait for the neighbor to come into the active state. The same is
repeated until the data reaches the final destination, resulting in considerable
delays. This increase in delay is equal to the product of the number of
intermediate forwarders and the length of the wake-up interval. Such an increase
in end-to-end delay, incurred due to the latency-energy tradeoff, has the
potential to become a major problem in many emerging delay-sensitive WSN
applications, which require fast response and real-time control. One way to extend
the network lifetime is to organize the sensors into a maximal number of set
covers that are activated successively. Only the sensors in the current active set
are responsible for monitoring all targets and for transmitting the collected
data, while all other nodes are in a low-energy sleep mode. To save power, the
wireless nodes are scheduled to alternate between active and sleep modes. The
contribution of this paper is to introduce a new model that maximizes the network
lifetime for the target coverage problem by organizing the sensor nodes, and to
analyze its performance through simulation.
A sensor node is in one of four states: transmit, receive, idle, and sleep. The
idle state is when the transceiver is neither transmitting nor receiving, and the
sleep state is when the radio is turned off. The receive and idle modes may
require as much energy as transmitting, while the sleep mode requires the least
energy. The network lifetime can be extended by dividing the sensor nodes into a
number of sets such that each set completely covers all the targets. These sensor
sets are activated successively, such that at any time instant only one set is
active. The sensors in the active set are in an active state (e.g., transmit,
receive, or idle) and all other sensors are in the sleep state. If, while meeting
the coverage requirements, sensor nodes alternate between the active state and
the sleep mode, the network lifetime increases.
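The set-cover idea can be sketched with a simple greedy heuristic. This is an illustrative Python sketch, not the algorithm proposed in the paper; `coverage` maps each sensor to the set of targets it observes, and the greedy order is an assumption.

```python
def is_cover(sensor_set, coverage, targets):
    """True if the sensors in `sensor_set` jointly cover every target."""
    covered = set()
    for s in sensor_set:
        covered |= coverage[s]
    return covered >= targets

def disjoint_set_covers(coverage, targets):
    """Greedily partition sensors into disjoint sets that each cover all targets.

    Each returned set can be activated alone while the rest of the network
    sleeps, so the lifetime scales with the number of covers found.
    """
    remaining = set(coverage)
    covers = []
    while True:
        current, covered = [], set()
        # Consider sensors that cover the most targets first.
        for s in sorted(remaining, key=lambda s: -len(coverage[s])):
            if not covered >= targets and coverage[s] - covered:
                current.append(s)
                covered |= coverage[s]
        if covered >= targets:
            covers.append(current)
            remaining -= set(current)
        else:
            return covers
```

With three sensors where 'c' alone covers both targets and 'a', 'b' cover one target each, this yields two disjoint covers, doubling the monitoring lifetime relative to keeping every sensor awake.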
LITERATURE SURVEY
2.1 Y. Xiao et al., “Tight performance bounds of multihop fair access for MAC
protocols in wireless sensor networks and underwater sensor networks,”
IEEE Trans. Mobile Comput., vol. 11, no. 10, pp. 1538–1554, Oct. 2012.
This paper is concerned with the problem of filter design for target tracking
over sensor networks. Unlike most existing works on sensor networks, we consider
heterogeneous sensor networks with two types of sensors that differ in processing
ability (denoted type-I and type-II sensors, respectively). However,
questions of how to deal with the heterogeneity of sensors and how to design a
filter for target tracking over such kind of networks remain largely unexplored. We
propose in this paper a novel distributed consensus filter to solve the target
tracking problem. Two criteria, namely, unbiasedness and optimality, are imposed
for the filter design. The so-called sequential design scheme is then presented to
tackle the heterogeneity of sensors. The minimum principle of Pontryagin is
adopted for type-I sensors to optimize the estimation errors. As for type-II sensors,
the Lagrange multiplier method coupled with the generalized inverse of matrices is
then used for filter optimization. Furthermore, it is proven that convergence
property is guaranteed for the proposed consensus filter in the presence of process
and measurement noise. Simulation results have validated the performance of the
proposed filter. It is also demonstrated that the heterogeneous sensor networks with
the proposed filter outperform the homogenous counterparts in light of reduction in
the network cost, with slight degradation of estimation performance.
To locate and track a moving target is crucial for many applications such as
robotics, surveillance, monitoring, and security for large-scale complex
environments. In such scenarios, a number of sensors can be employed in order to
improve the tracking accuracy and increase the size of the surveillance area in a
cooperative manner. Basically, these sensors have modest capabilities of sensing,
computation, and multihop wireless communication. Equipped with these
capabilities, the sensors can self-organize to form a network that is capable of
sensing and processing spatial and temporal dense data in the monitored area.
A multifunction handheld device used for sensing and data analysis in the Star
Trek series monitors your health status in a continuous manner, diagnoses
any possible health conditions, has a conversation with you to persuade you to
change your lifestyle for maintaining better health, and communicates with your
doctor, if needed. The device might even be embedded into your regular clothing
fibers in the form of very tiny sensors and it might communicate with other devices
around you, including the variety of sensors embedded into your home to monitor
your lifestyle. For example, you might be alarmed about the lack of a healthy diet
based on the items present in your fridge and based on what you are eating outside
regularly. This might seem like science fiction for now, but many researchers in
the field of Ambient Intelligence (AmI) expect such scenarios to be part of our
daily life in the not-so-distant future.
Our work in this paper stems from our insight that recent research efforts on
open vehicle routing (OVR) problems, an active area in operations research, are
based on similar assumptions and constraints compared to sensor networks.
Therefore, it may be feasible that we could adapt these techniques in such a way
that they will provide valuable solutions to certain tricky problems in the wireless
sensor network (WSN) domain. To demonstrate that this approach is feasible, we
develop one data collection protocol called EDAL, which stands for Energy-
efficient Delay-aware Lifetime-balancing data collection. The algorithm design of
EDAL leverages one result from OVR to prove that the problem formulation is
inherently NP-hard. Therefore, we propose both a centralized heuristic to reduce
its computational overhead and a distributed heuristic to make the algorithm
scalable for large-scale network operations. We also develop EDAL to be closely
integrated with compressive sensing, an emerging technique that promises
considerable reduction in total traffic cost for collecting sensor readings under
loose delay bounds. Finally, we systematically evaluate EDAL to compare its
performance to related protocols in both simulations and a hardware testbed.
Mobile robots equipped with sensors are able to cooperatively work together
via wireless communication technologies in order to achieve and obtain
surveillance teaming as well as task accomplishments in a large, complex field.
The major communication challenge in a large and complex field is that the number
of mobile sensors is insufficient for a constantly available network for intra-
and inter-group use. While each group may be able to
maintain communication within the group at all times, a complete path for constant
end-to-end data communication for any pairs of source and destination in different
groups may not exist. There are always unmonitored locations due to the limited
number of mobile sensors/robots that cannot monitor and cover the whole field. In
order to solve such a problem, the mobile robots/sensors need to patrol the entire
field in order to cover it completely. Unfortunately, we are uncertain as to how to
group the robots/sensors to achieve a low cost. The size of the robot/sensor groups
could be either large or small. It is not easy to determine intuitively which
grouping size is more efficient. A similar choice exists in primate society as
well. Rhesus macaques and titi monkeys are two kinds of primates that usually live
in groups in order to supervise their territory, defend against intruders, and search
for food. Rhesus macaques live in large groups that normally contain 10–80
individuals, regardless of habitat type. The members in the group communicate via
facial expressions, body postures, and vocal communication. Communication
within each group is complicated because of the large number of members in the
group. Titi monkeys, however, live in small groups that only consist of the parents
and their offspring. Each group of titi monkeys contains a total of 2–7 animals.
SYSTEM ANALYSIS
In each period, nodes adjust their sleep and wake up time, i.e., adjusting the
duty cycle, where each node keeps awake in some time slots while sleeps in other
time slots. In the proposed self-adaptive sleep/wake-up scheduling approach, the
time axis is directly divided into time slots. In each time slot, each node
autonomously decides to sleep or wake up.
This approach is the first that does not use the technique of duty cycling. Thus,
the tradeoff between energy saving and packet delivery delay, which is incurred
by duty cycling, can be avoided.
A and B are two neighboring nodes whose clocks may not be synchronized.
They make decisions at the beginning of each time slot autonomously and
independently without exchanging information. There are two points in the figure
which should be noted. First, for the receiver, if the length of a time slot is not long
enough to receive a packet, the length of the time slot will be extended
automatically until the packet is received successfully (see the first time slot of
node B). Second, when a node decides to transmit a packet in the current time slot
and the length of the time slot is longer than the time length required to transmit a
packet, the node will also decide when in the current time slot to transmit the
packet (see the third time slot of node B).
Sleep scheduling is used to increase the battery lifetime of a sensor node. Sleep
scheduling is applied to nodes that have low power; when some event happens, a
node has to move from the passive state to the active state. A node should
therefore stay in the passive state as long as possible so that it can save
energy.
The actions of each agent are restricted to selecting a time window (or a
wake period) within a frame for staying awake. Since the size of these frames
remains unchanged and they constantly repeat throughout the network lifetime, our
agents use no notion of states, i.e. we say that our learning system is stateless (or
single-state). The duration of this wake period is defined by the duty cycle, fixed
by the user of the system. In other words, each node selects a slot within the frame
when its radio will be switched on for the duration of the duty cycle. Thus, the size
of the action space of each agent is determined by the number of slots within a
frame. In general, the more actions agents have, the slower the reinforcement
learning algorithm will converge. On the other hand, a small action space might
lead to suboptimal solutions and will impose an energy burden on the system.
Setting the right number of time slots within a frame requires a study in itself,
which we shall not undertake in this paper due to space restrictions. Every node stores a
“quality value” (or Q-value) for each slot within its frame. This value for each slot
indicates how beneficial it is for the node to stay awake during these slots for every
frame, i.e. what is an efficient wake-up pattern, given its duty cycle and
considering its communication history. When a communication event occurs at a
node (overheard, sent or received a packet) or if no event occurred during the wake
period (idle listening), that node updates the quality-value of the slot(s) when this
event happened.
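The per-slot quality values can be sketched as a stateless learner. This is a minimal Python illustration; the epsilon-greedy selection and the concrete update rule are assumptions, since the text above does not fix them.

```python
import random

class WakeSlotLearner:
    """Stateless learner: one Q-value per slot in the frame."""
    def __init__(self, slots_per_frame, learning_rate=0.1):
        self.q = [0.0] * slots_per_frame
        self.lr = learning_rate

    def choose_wake_slot(self, epsilon=0.1, rng=random):
        """Epsilon-greedy choice of the slot to stay awake in next frame."""
        if rng.random() < epsilon:
            return rng.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda s: self.q[s])

    def update(self, slot, reward):
        """Move the slot's quality toward the observed reward, e.g. positive
        for a received packet, negative for idle listening or overhearing."""
        self.q[slot] += self.lr * (reward - self.q[slot])
```

A node that repeatedly receives packets in one slot will see that slot's Q-value rise and will keep waking up there, which is the intended communication-history effect.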
1) To the best of our knowledge, this approach is the first one which
does not use the technique of duty cycling. Thus the tradeoff between energy
saving and packet delivery delay, which is incurred by duty cycling, can be
avoided. This approach can reduce both energy consumption and packet delivery
delay.
        | r11 r12 r13 |              | c11 c12 c13 |
    R = | r21 r22 r23 |    and   C = | c21 c22 c23 |
        | r31 r32 r33 |              | c31 c32 c33 |
where R and C specify the payoffs for the row player and the column player,
respectively. Each of the two players selects an action from the three available
actions. The joint action of the players determines their payoffs according to their
payoff matrices. If the row player and the column player select actions i and j,
respectively, the row player receives payoff rij and the column player obtains
payoff cij. The players can select actions stochastically based on a probability
distribution over their available actions. Let α1–α3 denote the probability for the
row player to choose actions 1–3, respectively, where α1+ α2 + α3 = 1. Let β1–β3
denote the probability for the column player to choose actions 1–3, respectively,
where β1 + β2 + β3 = 1. The row player’s expected payoff is
    Pr = Σ (1 ≤ i ≤ 3, 1 ≤ j ≤ 3) rij αi βj
Let actions 1–3 denote transmit, listen, and sleep, respectively. The values of
those payoffs in the payoff matrices can be defined by the energy used by a node
(which is a negative payoff). In addition, if a packet is successfully transmitted, the
payoff of the transmitter/receiver is the energy used to transmit/receive the
packet plus a positive constant U, say U = 98. The constant U is added to the
energy consumption if and only if a packet is successfully transmitted. The payoff for
action sleep is −0.003 (the energy consumed during sleeping period) irrespective of
the opponent’s action, where the negative sign means that the energy is consumed.
The value of the constant U is larger than the energy used for transmitting or
receiving a packet. For example, if the row player has a packet to transmit and it
selects transmit and the column player selects listen, the packet can be successfully
transmitted. The payoffs for both players are positive, which can be calculated
using the energy they use to transmit/receive the packet plus the constant U. Then,
the row player gets payoff −81 + 98 = 17 and the column player obtains payoff −30
+ 98 = 68, where 81 and 30 are energy consumption for transmitting and receiving
a packet, respectively, and the negative sign means that the energy is consumed.
However, if the column player selects sleep, the packet cannot be successfully
transmitted. Then, the row player gets payoff −81 (the energy used for transmitting
a packet) and the column player gets payoff −0.003 (the energy used for sleeping).
It should be noted that if a node does not have a packet to transmit, it will not
select transmit.
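The payoff rules and the expected-payoff formula can be checked against the numbers in the text. This is an illustrative Python sketch; the cost of listening without receiving is assumed equal to the receive cost of 30, which the text does not state explicitly.

```python
E_TX, E_RX, E_SLEEP, U = 81, 30, 0.003, 98   # values taken from the text

def payoffs(row_action, col_action, row_has_packet=True):
    """Joint payoff (row, column) for one slot, per the rules described above."""
    if row_action == "transmit" and col_action == "listen" and row_has_packet:
        # Successful transmission: energy cost plus the success bonus U.
        return (-E_TX + U, -E_RX + U)
    cost = {"transmit": -E_TX, "listen": -E_RX, "sleep": -E_SLEEP}
    return (cost[row_action], cost[col_action])

def expected_payoff(R, alpha, beta):
    """Pr = sum over i, j of R[i][j] * alpha[i] * beta[j]."""
    return sum(R[i][j] * alpha[i] * beta[j]
               for i in range(3) for j in range(3))
```

For transmit/listen this reproduces the worked example: -81 + 98 = 17 for the row player and -30 + 98 = 68 for the column player.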
In a time slot, each node is in one of several states which indicate the status
of its buffer. For example, if a node’s buffer can store three packets, there are four
possible states for the node: s0–s3, which imply that the node has 0–3 packets in its
buffer, respectively. The aim of each node is to find a policy π, mapping states to
actions, that can maximize the node’s long-run payoff. Specifically, for a node, π(s,
a) is a probability, based on which the node selects action a in current state s, and
π(s) is a vector which is a probability distribution over the available actions in
current state s. Thus, policy π is a matrix. For example, if a node’s buffer can
store three packets, the node has four states: s0–s3, as described above. Also, the
node has three actions: transmit, listen, and sleep, denoted as 1–3, respectively.
Hence, the policy of the node is
Here, the terms π(s, a) and α, β denote the probability of selecting an action.
π(s, a) takes states into consideration, while α and β do not. α and β are used
only for descriptive convenience in the model and the algorithms.
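A policy of this shape can be written down directly. The probability values below are invented for illustration; only the row-stochastic structure (one distribution over the three actions per state) comes from the text.

```python
import random

# Hypothetical policy for a node with buffer size 3 (states s0..s3) and
# actions transmit, listen, sleep. Each row pi(s) sums to 1.
ACTIONS = ("transmit", "listen", "sleep")
policy = [
    [0.0, 0.3, 0.7],   # s0: empty buffer -- nothing to transmit
    [0.5, 0.3, 0.2],   # s1
    [0.7, 0.2, 0.1],   # s2
    [0.9, 0.1, 0.0],   # s3: full buffer -- transmit almost surely
]

def select_action(policy, state, rng=random):
    """Sample an action from the distribution pi(state)."""
    return rng.choices(ACTIONS, weights=policy[state])[0]
```

Note that state s0 assigns probability zero to transmit, matching the remark that a node without a packet never selects transmit.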
The only signal used by the player to learn from its actions in dynamic
environments is payoff (also known as reward), a number which tells the player if
its last action was good or not. Q-learning, one of the simplest reinforcement
learning algorithms, is model-free, which means that players using Q-learning can act
optimally in Markovian domains without building overall maps of the domains.
During the learning process, a player takes an action in a particular state based on a
probability distribution over available actions. The higher the probability of an
action is, the more likely the action is to be taken. Then, the player evaluates
the consequence of the action, which the player just takes, based on the immediate
reward or penalty, which it receives by taking the action, and also based on the
estimate of the value of the state in which the action is taken. By trying all actions
in all states repeatedly, the player learns which action is the best choice in a
specific state.
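The evaluation step described here is the standard Q-learning update. This is a generic sketch with illustrative alpha (learning rate) and gamma (discount factor); the paper's exact parameter values are not given in this excerpt.

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One model-free Q-learning step: move Q(s, a) toward the immediate
    reward plus the discounted value of the best action in the next state."""
    best_next = max(Q[next_state])
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
```

Repeating this update while trying all actions in all states is exactly the "learn by trying" process the paragraph describes: the table Q converges toward the value of each action in each state without any model of the environment.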
Each node generates a packet at the beginning of each time slot based on a
predefined probability: the packet generation probability. As the state of a node is
determined by the number of packets in its buffer, the packet generation
probability directly affects the state of each node. Then, the action selection of
each node will be indirectly affected. The expiry time of a packet is based on
exponential distribution. The average size of a packet is 100 bytes, and the actual
size of a packet is based on normal distribution with variance equal to 10. In this
simulation, four packet generation probabilities are used: 0.2, 0.4, 0.6, and 0.8.
This setting is used to evaluate the performance of these approaches in a network
with different numbers of transmitted packets. For packet routing, we use a basic
routing approach, gossiping. Gossiping is a slightly enhanced version of flooding
where the receiving node sends the packet to a randomly selected neighbor, which
picks another random neighbor to forward the packet to, and so on, until the
destination or the maximum hop count is reached. It should be noted that even when
the destination and some other nodes are all in the signal range of the source,
based on the routing protocol the source still relays the packet to one of its
neighbors, and this process continues until the destination or the maximum hop
count is reached. The routing process
is not optimized in the simulation, as this paper focuses on sleep/wake-up
scheduling only. This routing protocol is not energy-efficient but it is easy to
implement. Because all of the sleep/wake-up scheduling approaches use the same
routing protocol in the simulation, the comparison among them is still fair.
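The gossiping rule can be sketched as a random walk with a hop limit. This is an illustrative Python sketch; `neighbours` maps each node to its list of neighbours, and the uniform choice is the only policy.

```python
import random

def gossip_route(source, dest, neighbours, max_hops, rng=random):
    """Forward the packet to a uniformly random neighbour at each hop until
    the destination or the hop limit is reached; returns the path taken."""
    path = [source]
    node = source
    for _ in range(max_hops):
        if node == dest:
            break
        node = rng.choice(neighbours[node])
        path.append(node)
    return path
```

As the text notes, the walk may wander past nodes that are already in range of the destination; the sketch is deliberately unoptimized, since the comparison in the paper only requires that all scheduling approaches share the same routing.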
Performance is measured by three quantitative metrics:
For i in 1 to N
If AN available
End for
3.7 Architecture
Fig 3.4 Architecture
CHAPTER 4
SYSTEM SPECIFICATION
5.1 Hardware Requirement
SYSTEM : Core 2 DUO 2.4 GHz.
MOUSE : Logitech.
RAM : 2GB.
FRAMEWORK : NS2
5.2.1 NS2
Features of NS2
Basic Architecture
NS2 uses OTcl to create and configure a network, and uses C++ to run the
simulation. All C++ code needs to be compiled and linked to create an executable
file.
Use OTcl
This option is preferable for most beginners, since it does not involve the
complicated internal mechanisms of NS2. Unfortunately, the existing NS2 modules
are fairly limited, so this option is perhaps not sufficient for most researchers.
Use C++
- when you are dealing with a packet, or
- when you need to modify existing NS2 modules.
This option perhaps discourages most beginners from using NS2. This book
particularly aims at helping readers understand the structure of NS2 and feel
more comfortable modifying NS2 modules.
Installing NS2 on windows 7
NS2 builds and runs under Windows using Cygwin, which provides a Linux-like
environment under Windows. System requirements: a computer with a C++ compiler.
Building the full NS2 package requires approximately 250 MB of free space.
2. Run the downloaded setup.exe; on the screen that appears, click Next.
3. Select the option “Install from Internet”. If you have already downloaded the
packages, select “Install from Local Directory” and click Next.
5. Keep the default local package directory as your download folder and click Next.
6. The next screen asks for your Internet connection type; keep it as “Direct
Connection” and click Next. On the following screen, choose a site to download
the packages from and click Next.
7. On the next screen, Cygwin allows you to select the packages you want to install.
8. Uncheck the option “Hide obsolete packages”, then click the “View” button until
the word “Category” changes to “Full”.
10. The Cygwin installation is now complete; you can run Cygwin from the desktop
and see its interface.
NS2 is a very common and widely used tool to simulate small and large area
networks. Tcl scripts are widely used in the NS-2 simulation tool: they are used
to set up a wired or wireless communication network and are then run via NS-2 to
obtain the simulation results.
Several tools are available to design networks and generate TCL scripts; some of
them are discussed below.
NSG is a Java-based tool that can run on any platform and can generate TCL
scripts for wired and wireless scenarios for NS2. The main features of NSG are:
NS 2 Workbench
8. Link Monitors.
9. Loss Models.
Advantages
4. Supported protocols
5. Supported platforms
6. Modularity
7. Popular
GloMoSim
Most network systems are currently built using a layered approach that is
similar to the OSI seven layer network architecture. The plan is to build
GloMoSim using a similar layered approach. Standard APIs will be used between
the different simulation layers. This will allow the rapid integration of models
developed at different layers by different people.
NS2, perhaps the most widely used network simulator, has been extended to
include some basic facilities to simulate sensor networks. However, one of the
problems of ns2 is its object-oriented design, which introduces much unnecessary
interdependency between modules. Such interdependency sometimes makes the
addition of new protocol models extremely difficult, mastered only by those who
have intimate familiarity with the simulator. Being difficult to extend is not a
major problem for simulators targeted at traditional networks, for there the set of
popular protocols is relatively small. For example, Ethernet is widely used for
wired LAN, IEEE 802.11 for wireless LAN, TCP for reliable transmission over
unreliable media. For sensor networks, however, the situation is quite different.
There are no such dominant protocols or algorithms and there will unlikely be any,
because a sensor network is often tailored for a particular application with specific
features, and it is unlikely that a single algorithm can always be the optimal one
under various circumstances.
Windows Server: The operating system for server computers. The latest version
is Windows Server 2016. Unlike its client sibling, it has adopted a strong naming
scheme. The main competitor of this family is Linux.
Multilingual support
Multilingual support is built into Windows. The language for both the keyboard
and the interface can be changed through the Region and Language Control Panel.
Components for all supported input languages, such as Input Method Editors, are
automatically installed during Windows installation (in Windows XP and earlier,
files for East Asian languages, such as Chinese, and right-to-left scripts, such as
Arabic, may need to be installed separately, also from the said Control Panel).
Third-party IMEs may also be installed if a user feels that the provided one is
insufficient for their needs.
Interface languages for the operating system are free for download, but some
languages are limited to certain editions of Windows. Language Interface Packs
(LIPs) are redistributable and may be downloaded from Microsoft's Download
Center and installed for any edition of Windows (XP or later) – they translate most,
but not all, of the Windows interface, and require a certain base language (the
language which Windows originally shipped with). This is used for most languages
in emerging markets. Full Language Packs, which translate the complete operating
system, are only available for specific editions of Windows (the Ultimate and
Enterprise editions of Windows Vista and 7, and all editions of Windows 8, 8.1
and RT except Single Language). They do not require a specific base language and
are commonly used for more popular languages such as French or Chinese. These
language packs cannot be downloaded through the Download Center, but are
available as optional updates through the Windows Update service (except in
Windows 8).
Security
While the Windows 9x series offered the option of having profiles for
multiple users, it had no concept of access privileges and did not allow
concurrent access, and so was not a true multi-user operating system. In addition,
it implemented only partial memory protection and was accordingly widely
criticized for lack of security.
Windows Defender
On January 6, 2005, Microsoft released a beta version of Microsoft
AntiSpyware, based upon the previously released Giant AntiSpyware. On February 14,
2006, Microsoft AntiSpyware became Windows Defender with the release of Beta
2. Windows Defender is a freeware program designed to protect against spyware
and other unwanted software. Windows XP and Windows Server 2003 users who
have genuine copies of Microsoft Windows can freely download the program from
Microsoft's web site, and Windows Defender ships as part of Windows Vista and
7. In Windows 8, Windows Defender and Microsoft Security Essentials have been
combined into a single program, named Windows Defender. It is based on
Microsoft Security Essentials, borrowing its features and user interface. Although
it is enabled by default, it can be turned off to use another anti-virus solution.
Windows Malicious Software Removal Tool and the optional Microsoft Safety
Scanner are two other free security products offered by Microsoft.
CHAPTER 6
CONCLUSION
This paper introduced a self-adaptive sleep/wake-up scheduling approach.
This approach does not use the technique of duty cycling. Instead, it divides the
time axis into a number of time slots and lets each node autonomously decide to
sleep, listen or transmit in a time slot. Each node makes a decision based on its
current situation and an approximation of its neighbors’ situations, where such
approximation does not need communication with neighbors. Through these
techniques, the proposed approach outperforms related approaches. Most existing
approaches are based on the duty cycling technique, and researchers have devoted
considerable effort to improving their performance; duty cycling is thus a mature
and efficient technique for sleep/wake-up scheduling. This paper is the first to
forgo duty cycling, proposing instead an alternative approach based on game
theory and reinforcement learning. The performance improvement over existing
approaches may not be large, but the proposed approach offers a new way to study
sleep/wake-up scheduling in WSNs. This paper primarily focuses on theoretical
study, so some assumptions are made. These assumptions simplify the discussion
of our approach; without them, the discussion would become extremely complex,
harming the readability of this paper. The problem addressed in this paper,
however, is not itself simplified by these assumptions.
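The per-slot decision process described above can be sketched as a simple
reinforcement-learning loop. The following is a minimal, hypothetical illustration,
not the paper's actual formulation: the action set, reward values, neighbor model,
and the epsilon/alpha parameters are all assumptions made for this sketch. It shows
how a node can learn a per-slot action (sleep, listen, or transmit) from local
feedback alone, with the neighbor's behavior approximated by a local probability
rather than obtained through message exchange.

```python
import random

# Hypothetical sketch only: actions, rewards, and parameters are assumptions.
ACTIONS = ["sleep", "listen", "transmit"]

class SlotScheduler:
    """Picks an action for each time slot with an epsilon-greedy rule."""

    def __init__(self, epsilon=0.1, alpha=0.2, seed=0):
        self.q = {a: 0.0 for a in ACTIONS}  # estimated value of each action
        self.epsilon = epsilon              # exploration probability
        self.alpha = alpha                  # learning rate
        self.rng = random.Random(seed)

    def choose(self):
        # Mostly exploit the best-known action, occasionally explore.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(ACTIONS)
        return max(self.q, key=self.q.get)

    def update(self, action, reward):
        # Exponential moving average of the observed reward.
        self.q[action] += self.alpha * (reward - self.q[action])

def reward(action, neighbor_listening):
    # Toy reward: transmitting pays off only when a neighbor is listening,
    # sleeping saves energy, and idle listening is mildly penalized.
    if action == "transmit":
        return 1.0 if neighbor_listening else -0.5
    if action == "sleep":
        return 0.2
    return -0.1  # idle listening

sched = SlotScheduler()
for slot in range(500):
    action = sched.choose()
    # The neighbor's behavior is approximated locally (here by a fixed
    # probability), so no message exchange is needed to make the decision.
    neighbor_listening = sched.rng.random() < 0.8
    sched.update(action, reward(action, neighbor_listening))

print(sorted(sched.q, key=sched.q.get, reverse=True))
```

Under these toy rewards, the node gradually shifts toward the action with the
highest expected payoff; in the full approach, the reward would instead reflect
energy cost, latency, and the game-theoretic interaction with neighbors.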
In future work, a cross-layer, energy-efficient security mechanism will be used
to protect the network against denial-of-sleep attacks. The cross-layer interaction
between the network, MAC, and physical layers will be exploited to identify
intruder nodes and prevent sensor nodes from being denied sleep.