Lecture6 Feb11 Dissemination
In Sensor Networks
The need for Data Dissemination and Fusion
– Since a sensor network produces a large amount of data for the end user, methods of combining or aggregating that data into a small set of information are necessary and contribute to energy savings
– Communication between sensor nodes and the base station is expensive, and no high-energy nodes exist to carry out this communication
– By using clusters to transmit data to the BS, only a few nodes need to transmit over large distances to the BS, while the other nodes in each cluster use small transmit distances
– LEACH achieves superior performance compared to classical clustering algorithms by using adaptive clustering and rotating clusterheads, allowing the total energy load of the system to be distributed among all the nodes
– By performing local computation in each cluster, the amount of data to be transmitted to the BS is reduced. This yields a large reduction in energy dissipation, since communication is more expensive than computation
LEACH
Algorithm Overview
– The nodes are grouped into local clusters with one node acting as the local base
station (BS) or clusterhead (CH)
– The CHs are rotated in random fashion among the various sensors
– Local data fusion is performed to compress the data being sent from the clusters to the BS, resulting in reduced energy dissipation and increased network lifetime
– Sensors elect themselves to be local BSs at any given time with a certain probability, and these CHs broadcast their status to the other sensor nodes
– Each node decides which CH to join based on the minimum communication energy
– Upon cluster formation, each CH creates a schedule for the nodes in its cluster such that the radio component of each non-clusterhead node can stay turned OFF at all times except during its transmit time
– The CH aggregates all the data received from the nodes in its cluster before
transmitting the compressed data to BS
LEACH
Algorithm Overview
1. Advertisement Phase:
– Initially, each node needs to decide whether to become a CH for the current round, based on the suggested percentage of CHs for the network (set prior to this phase) and the number of times the node has acted as a CH
– The node (n) decides by choosing a random number between 0 and 1
– If this random number is less than the threshold T(n), the node becomes a CH for this round
– The threshold is set as follows:

      T(n) = P / (1 - P * (r mod 1/P))   if n ∈ G
      T(n) = 0                           otherwise

  where P = desired percentage of CHs, r = current round, and
  G = set of nodes that have not been CHs in the last 1/P rounds
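The threshold rule can be sketched in a few lines of Python (the function names and the `rng` parameter are ours, not part of LEACH):

```python
import random

def leach_threshold(P, r, in_G):
    """T(n) for the threshold above.

    P    -- desired fraction of clusterheads (e.g. 0.05)
    r    -- current round number
    in_G -- True if the node has NOT been a CH in the last 1/P rounds
    """
    if not in_G:
        return 0.0
    return P / (1 - P * (r % int(1 / P)))

def elects_itself(P, r, in_G, rng=random.random):
    # A node becomes CH when its random draw in [0, 1) falls below T(n).
    return rng() < leach_threshold(P, r, in_G)
```

Note that at round r = 1/P - 1 the denominator reaches P, so T(n) = 1 and every node still in G is guaranteed to become a CH before the cycle restarts.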
LEACH
Algorithm Details
1. Advertisement Phase:
– Assumptions are (i) each node starts with the same amount of energy and (ii) being a CH consumes roughly the same amount of energy at every node
– Each node elected as CH broadcasts an advertisement message to the rest
– During this “clusterhead-advertisement” phase, the non-clusterhead nodes
hear the ads of all CHs and decide which CH to join
– A node joins the CH whose advertisement it hears with the highest signal strength
3. Schedule Creation:
– Upon receiving all the join messages from its members, the CH creates a TDMA schedule assigning each member its allowed transmission time, based on the total number of members in the cluster
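As an illustration, a CH could build such a schedule by giving each member one fixed-length slot per frame (the slot length and the dictionary layout are our assumptions, not something LEACH prescribes):

```python
def make_tdma_schedule(member_ids, frame_start_ms=0, slot_len_ms=10):
    """Assign each cluster member one transmission slot per frame.

    Returns {node_id: (slot_start_ms, slot_end_ms)}.
    """
    schedule = {}
    for i, node in enumerate(sorted(member_ids)):
        start = frame_start_ms + i * slot_len_ms
        schedule[node] = (start, start + slot_len_ms)
    return schedule
```

A cluster with members n1, n3, n7 would get slots (0, 10), (10, 20) and (20, 30); the frame length grows with the number of members, which is why the schedule depends on cluster size.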
LEACH
Algorithm Details
4. Data Transmission:
– Each node transmits its data to its CH during its slot in the TDMA schedule
– The radio of each cluster-member node can be turned OFF until its allocated transmission time, minimizing energy dissipation
– The CH node must keep its receiver ON to receive all the data
– Once all the data is received, the CH compresses it and sends it to the BS
Multiple Clusters
– In order to minimize radio interference between nearby clusters, each CH chooses randomly from a list of CDMA spreading codes and informs its cluster members to transmit using this code
– Radio signals from neighboring clusters are then filtered out, avoiding corruption of the transmission
LEACH
Disadvantages:
– How to decide the percentage of cluster heads for a network? The topology,
density and number of nodes of a network could be different from other networks
– No suggestions about when the re-election needs to be invoked
– The clusterheads farther away from the base station use higher transmit power and die more quickly than the nearby ones
LEACH
Suggestions/Improvements/Future Work:
SPIN
Meta Data
– Used to uniquely and completely describe the data being collected by sensors
– If two pieces of actual data are distinguishable, then their meta-data should also
be distinguishable
– Since the format of meta-data is application-specific, each application needs to
interpret and synthesize its own meta-data
SPIN
Meta Data
– SPIN applications must define a meta-data format for representing their data, weighing the costs of storing, retrieving and managing the meta-data
– SPIN nodes use three types of communication messages:
ADV (new data advertisement)
REQ (request for data)
DATA (data message)
– ADV and REQ messages contain only meta-data and are therefore smaller than the corresponding DATA message
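A toy version of the three-message handshake (the class and method names are ours; real SPIN also aggregates advertisements and copes with losses):

```python
class SpinNode:
    """Minimal SPIN-style node: ADV -> REQ -> DATA, keyed by meta-data."""

    def __init__(self, name):
        self.name = name
        self.store = {}        # meta -> data actually held
        self.neighbors = []

    def new_data(self, meta, data):
        self.store[meta] = data
        for nb in self.neighbors:        # advertise only the meta-data
            nb.on_adv(self, meta)

    def on_adv(self, sender, meta):
        if meta not in self.store:       # request only data not yet held
            sender.on_req(self, meta)

    def on_req(self, requester, meta):
        requester.on_data(self, meta, self.store[meta])

    def on_data(self, sender, meta, data):
        if meta not in self.store:
            self.new_data(meta, data)    # store, then re-advertise
```

Because a node never requests data it already holds, the negotiation avoids the implosion and overlap problems of blind flooding.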
Implosion: A node always sends data to its neighbors, without regard to whether the neighbors have already received the same data from other nodes
Overlap: The nodes waste energy and bandwidth by sending the overlapping data
Resource Blindness: Nodes do not make decisions based on the energy available
SPIN
The Solution
– SPIN solves the problems of implosion and overlap by having nodes negotiate with each other before transmitting data, which eliminates the transmission of redundant data
– Nodes poll their resources before transmitting or processing data by probing the resource manager, which keeps track of resource consumption
– Nodes can make efficient decisions based on the available energy level
– The use of meta-data descriptors eliminates the possibility of overlap, since nodes can name exactly the part of the data they are interested in receiving
– Awareness of local resources allows sensors to make meaningful decisions that extend network longevity
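One way to picture the resource check (the energy values, cost constant, and all names here are invented for illustration):

```python
class ResourceManager:
    """Tracks remaining energy for one node."""

    def __init__(self, energy_j):
        self.energy_j = energy_j

    def can_afford(self, cost_j):
        return self.energy_j >= cost_j

    def spend(self, cost_j):
        self.energy_j -= cost_j

TX_COST_J = 0.002   # assumed energy cost of one DATA transmission

def maybe_transmit(rm, send):
    # Poll the resource manager before committing to a transmission.
    if rm.can_afford(TX_COST_J):
        rm.spend(TX_COST_J)
        send()
        return True
    return False
```

A node low on energy simply declines to participate in a round rather than dying mid-transmission.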
SPIN
SPIN Protocols
Disadvantages:
Suggestions/Improvements/Future Work:
Directed Diffusion
– Assumes that sensor networks are task-specific – the task types are known at the
time the sensor network is deployed
– An essential feature of directed diffusion is that interest, data propagation and
data aggregation are determined by local interactions
– Focused on design of dissemination protocols for tasks and events
Naming
– Task descriptions are named (a name specifies an interest for data matching the list of attribute-value pairs) and are also called interests
– Example task: “Every I ms, for the next T seconds, send me a location of any
four-legged animal in subregion R of the sensor field.”
task = four-legged animal // detect animal location
interval = 20 ms // send back events every 20 ms
duration = 10 seconds // … for the next 10 seconds
rect = [-100, 100, 200, 400] // from sensors within rectangle
Directed Diffusion
Naming
– A sensor detecting an animal may generate the following data:
task = four-legged animal // type of animal seen
instance = horse // instance of this type
location = [150, 200] // node location
intensity = 0.5 // signal amplitude measure
confidence = 0.85 // confidence in the match
timestamp = 01:30:45 // event generation time
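Written as attribute-value dictionaries, the interest from the previous slide and the data event above can be compared with a simple rule (the rect layout (x_min, y_min, x_max, y_max) and the `matches` function are our assumptions; the paper's matching rules are more general):

```python
interest = {
    "task": "four-legged animal",
    "interval_ms": 20,
    "duration_s": 10,
    "rect": (-100, 100, 200, 400),   # assumed (x_min, y_min, x_max, y_max)
}

data = {
    "task": "four-legged animal",
    "instance": "horse",
    "location": (150, 200),
    "intensity": 0.5,
    "confidence": 0.85,
    "timestamp": "01:30:45",
}

def matches(data, interest):
    # A data event matches an interest when the task agrees and the
    # detection location falls inside the interest's rectangle.
    x, y = data["location"]
    x_min, y_min, x_max, y_max = interest["rect"]
    return (data["task"] == interest["task"]
            and x_min <= x <= x_max and y_min <= y <= y_max)
```

Here the horse at (150, 200) lies inside the rectangle, so the event matches the interest.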
Data propagation
– Data messages are unicast individually to the relevant neighbors
– A node receiving a data message from a neighbor checks whether a matching interest entry exists in its cache, according to the matching rules described
1. If no match exists, the data message is dropped
2. If match exists, the node checks its data cache associated with the matching
interest entry
If a received data message has a matching data cache entry, the data
message is dropped
Otherwise, the received message is added to the data cache and the
data message is re-sent to the neighbors
– Data cache keeps track of the recently seen data items, preventing loops
– By checking the data cache, a node can determine the data rate of the received
events
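The two-step check above, sketched with a minimal node model (the `Node` class, the cache key, and the task-only matching are simplifications of ours):

```python
class Node:
    def __init__(self, interest_cache, neighbors=()):
        self.interest_cache = interest_cache   # list of interest dicts
        self.data_cache = set()                # recently seen data items
        self.neighbors = list(neighbors)
        self.received = []

    def receive(self, msg):
        self.received.append(msg)

def handle_data(node, msg):
    # Step 1: drop if no interest entry matches (here: task name only).
    if not any(i["task"] == msg["task"] for i in node.interest_cache):
        return "dropped: no matching interest"
    # Step 2: drop duplicates via the data cache; this also prevents loops.
    key = (msg["task"], msg["location"], msg["timestamp"])
    if key in node.data_cache:
        return "dropped: already seen"
    node.data_cache.add(key)
    for nb in node.neighbors:
        nb.receive(msg)                        # unicast to each neighbor
    return "forwarded"
```

Re-sending the same event a second time hits the data cache and is dropped, which is exactly how loops are suppressed.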
Directed Diffusion
Reinforcement
– After the sink starts receiving low data rate events, it reinforces one neighbor in
order to “draw down” higher quality (higher data rate) events
– This is achieved by data driven local rules
– To reinforce a neighbor, the sink may re-send the original interest with a higher data rate
– When the new data rate is higher than before, the node must in turn reinforce at least one of its own neighbors
– Reinforcement can be propagated from neighbor to neighbor along a particular path (i.e., if a path delivers events faster than the others, the sink attempts to use this path to draw down high-quality data)
– In summary, reinforce one path, or part of it, based on observed losses, delay variances, and so on
– Negatively reinforce certain paths whose resource levels are low
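Positive reinforcement can be pictured as re-sending the interest with a smaller interval to the best-performing neighbor (all names and the 10 ms value are illustrative):

```python
class Neighbor:
    def __init__(self):
        self.interests = []

    def receive_interest(self, interest):
        self.interests.append(interest)

def reinforce(best_neighbor, interest, high_rate_interval_ms=10):
    # Re-send the original interest, but ask for a higher data rate
    # (i.e., a smaller interval) from the chosen neighbor only.
    boosted = dict(interest, interval_ms=high_rate_interval_ms)
    best_neighbor.receive_interest(boosted)
    return boosted
```

Negative reinforcement would be the mirror image: re-sending the interest with a larger interval along paths the sink wants to wind down.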
Directed Diffusion
Advantages:
– Data-centric dissemination
– Robust multi-path delivery
– Reinforcement-based adaptation to the empirically best network path
– Energy savings with in-network data aggregation and caching
– Gives designers the freedom to attach different semantics to gradient values
– Reinforcement can be triggered not only by sources but also by intermediate
nodes
Disadvantages:
– It may consume memory, since the whole attribute list is being sent
Suggestions/Improvements/Future Work: