Internet of Things: Crowd Sensing and Human Centric
Evolution
Crowd sensing at its early stages:
The term "mobile crowd sensing" was coined by Raghu Ganti, Fan Ye, and Hui Lei
in 2011. Mobile crowd sensing falls into three main types: environmental (such
as monitoring pollution), infrastructure (such as locating potholes), and social
(such as tracking exercise data within a community).
Current crowd sensing applications operate on the assumption that all users
voluntarily submit sensing data, leading to extensive user participation.
The term can also describe the way mobile device users form microcrowds around a
specific crowd sensing activity.
Today's mobile devices not only serve as communication devices but also provide
many sensors, such as GPS, accelerometers, and gyroscopes.
Implications of Inferences
Resource limitations:
The potential of mobile crowd sensing is limited by constraints on energy, bandwidth, and computational
power. Using GPS, for example, drains batteries, but location can also be tracked using Wi-Fi and
GSM, although these are less accurate. Eliminating redundant data can also reduce energy and
bandwidth costs, as can restricting data sensing when quality is unlikely to be high (e.g., when two
photos are taken in the same location, the second is unlikely to provide new information).
Human Centric:
Recent developments in RFID have seen the technology expand from its role in industrial and
animal tagging applications to being implantable in humans. With a gap identified in the literature
between current technological development and future human centric possibilities, little has
previously been known about the nature of contemporary human centric applications.
EXISTING MOBILE CROWDSENSING STRATEGIES
In this section, we describe mobile crowdsensing strategies that aim to reduce
resource consumption, and thereby cost, while improving QoS. Previous work
demonstrates that there is significant redundancy in the content of the collected
data: in many cases, related sensors are likely to collect very similar data.
Thus, it is important and necessary to
eliminate the redundant data, which on the one hand can reduce the resource
consumption and thus reduce the cost (e.g., bandwidth cost, energy cost, etc.),
and on the other hand can improve the QoS of timely information delivery by
reducing the traffic load. One of the key challenges here however, is detecting
‘what data is similar’. Another key challenge is how to eliminate the similar data
while ensuring high QoS (e.g., without compromising the quality of the data,
timely delivery of valuable data). To handle the problem caused by limited
available resources, many methods have been proposed. Below, we present a
review of previously proposed strategies.
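As an illustration of the first challenge, one simple (and deliberately coarse) way to decide what data is similar is to bucket each reading by a spatial grid cell and a time window, and keep only the first reading per bucket. The record format, cell size, and window below are illustrative assumptions, not taken from any of the surveyed systems:

```python
def dedup_key(reading, cell_deg=0.0001, bucket_s=60):
    """Map a (timestamp_s, lat, lon) reading to a coarse
    (time bucket, grid cell) key; readings sharing a key are
    treated as redundant."""
    t, lat, lon = reading
    return (int(t // bucket_s), round(lat / cell_deg), round(lon / cell_deg))

def deduplicate(readings, cell_deg=0.0001, bucket_s=60):
    """Keep only the earliest reading in each space-time bucket."""
    seen, kept = set(), []
    for r in sorted(readings):
        k = dedup_key(r, cell_deg, bucket_s)
        if k not in seen:
            seen.add(k)
            kept.append(r)
    return kept
```

A grid key of roughly 10 m and one minute means two photos taken from the same spot seconds apart collapse to one upload; real systems would tune these thresholds per sensing task.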
Resource limitations: Sensing devices (e.g., sensors and mobile phones) usually
have limited resources, and the resource limitations arise as a challenge for
crowdsensing. Although more resources (e.g., computing, bandwidth) are
provided for mobile phones compared to mote-class sensors, mobile phones still
face the problem of resource limitations. Different types of sensed data may be
independent of each other because of the multi-modal sensing capabilities of
sensing devices. In practical scenarios, different types of sensed data may be used
for the same purpose. However, the diversity in quality and resource consumption
across the different types of sensed data poses an obstacle to improving data
quality at low resource cost. Therefore, improving the quality of data while
minimizing resource consumption remains a challenge.
Privacy, security, and data integrity: The sensing devices potentially collect
sensitive data of individuals, thus privacy arises as a key problem. For example,
the GPS sensor readings usually record the private information of individuals (e.g.,
the routes they take during their daily commutes, and locations [56]). By sharing
the GPS sensor measurements, individuals’ privacy can be revealed. Hence, it is
important and necessary to preserve the security and privacy of an individual.
At the same time, GPS records of daily commutes, when shared within a larger
community, can be used to learn about traffic congestion in a city. Thus, it is
also necessary to enable crowdsensing applications so that individuals can better
understand their surroundings and ultimately benefit from information sharing. To
preserve the enormous
amounts of private information of individuals, not only methodology efforts but
also systematic studies are needed. The AnonySense architecture, proposed in
prior work, can support the development of privacy-aware applications based on
crowdsensing. Also, it is important to guarantee that an individual’s data is not
revealed to untrustworthy third parties. For example, malicious individuals may
contribute erroneous sensor data, or intentionally pollute the sensing data for
their own benefit. The lack of control mechanisms to guarantee source validity
and data accuracy can result in information credibility issues. Therefore, it is
necessary to develop trust preservation and anomaly detection technologies to
ensure the quality of the obtained data. The problem of data integrity, i.e.,
ensuring the integrity of
individuals’ sensor data, also needs to be well addressed. In the existing literature
[59], [60], although some methods have been proposed, they typically rely on
co-located infrastructure that may not be available to act as a witness, and they
have limited scalability, since they depend on the installation of expensive
infrastructure; this makes such methods prohibitive or unavailable at times.
Another approach for handling the data integrity problem is to sign the sensor
data, typically using trusted hardware installed on mobile phones: a trusted
platform module signs a SHA-1 digest of the sensor data. Even this approach is
potentially problematic, because part of the verification process still has to
be done in software.
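The digest-then-sign idea can be sketched in a few lines. Here an HMAC with a device key stands in for the TPM’s hardware-backed signature (on a real phone the key would never be exposed to software), and the payload format is an invented example:

```python
import hashlib
import hmac

# Hypothetical device key; a real TPM keeps its signing key in hardware.
DEVICE_KEY = b"device-secret-key"

def sign_reading(payload: bytes) -> str:
    """Sign a SHA-1 digest of the raw sensor bytes.  The HMAC here
    is a software stand-in for the TPM's signature operation."""
    digest = hashlib.sha1(payload).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha1).hexdigest()

def verify_reading(payload: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_reading(payload), signature)
```

Any tampering with the payload changes the SHA-1 digest and so invalidates the signature; the weakness noted above is that this verification step itself runs in software.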
Privacy protection: Privacy protection is a principal issue that has not yet been
well addressed, especially in the crowdsensing area. There is a large body of work
focusing on privacy protection. The CAROMM framework, which makes use of the
context of the data from a user’s smartphone, carries a high risk of leaking
users’ private information, since information such as location and time must be
protected. Clearly, the privacy risk must be reduced to an acceptable level
before any crowdsensing activity is conducted; otherwise, the user’s privacy may
be exposed to the public. One line of research addresses automatic data
anonymization by masking particular information in the raw data sensed by the
local smartphone.
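A minimal sketch of such masking-based anonymization might replace the user identifier with a one-way pseudonym and coarsen location and time before upload. The field names and granularities are illustrative assumptions, not the actual CAROMM mechanism:

```python
import hashlib

def anonymize(record, cell_deg=0.01, bucket_s=3600):
    """Mask direct identifiers and coarsen location/time in a raw
    sensor record before it leaves the phone (illustrative fields)."""
    return {
        # replace the user id with a one-way pseudonym
        "user": hashlib.sha256(record["user"].encode()).hexdigest()[:12],
        # truncate coordinates to a coarse grid cell (~1 km)
        "lat": round(record["lat"] / cell_deg) * cell_deg,
        "lon": round(record["lon"] / cell_deg) * cell_deg,
        # bucket timestamps to the hour
        "time": int(record["time"] // bucket_s) * bucket_s,
        # the sensed value itself is kept
        "value": record["value"],
    }
```

Masking alone does not guarantee anonymity (coarse traces can still be re-identified), which is why the text above calls for systematic studies rather than ad hoc fixes.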
Social Internet of Things: Real humans are believed to understand and answer
better than a machine, and they are the most “intelligent machines”. A large
number of individuals tied in a social network can provide better answers to
complicated problems than a single individual (or even a knowledgeable
individual). The collective intelligence emerging in social networks can help users
find information (e.g., answers to their problems), which has attracted much interest.
Social networks have the advantage of efficiently discovering and distributing
services, and many systems, such as Yahoo! Answers and Facebook, use them for
sharing information (e.g., knowledge). There is great potential in integrating
social networking into the Internet of Things, which will be an important
research direction.
Methodologies:
Revisiting Experimental Methodologies
To evaluate any system, including sensor-based systems, we must choose an
experimental research methodology. The chosen methodology has an impact on the
severity of the listed challenges. This
choice can depend on factors such as which methodologies are promoted or
required by the domain or research community, the resources available, and
researchers’ previous experience. Moreover, in some cases, applying several
methodologies either in turn or in parallel is advisable. Here, I examine three
experimental methodologies: simulation, emulation, and deployment. Each
methodology can be applied in controlled settings, such as a laboratory, or in
uncontrolled settings, as in a field study.
Simulation
Simulation tests a construct using input data that’s artificially generated, possibly
according to a number of parameterized models. To evaluate flocking patterns,
for example, a simulation could generate the movement of pedestrians in an
airport based on a movement model. To match realistic behavior, model
parameterization and selection is often based on several assumptions about the
real world, and parameter calibration is conducted using real-world data. So, this
methodology is less prone to many of the challenges I’ve outlined; on the other
hand, it’s limited to artificial data. Simulations allow for abstraction, so problems
can be simplified using abstract models. They also require the least resource
consumption, as long as researchers use a coarse model and parameterization.
They scale to large numbers of simulated entities and avoid privacy constraints, a
concern when working with sensing data collected from human subjects. They
also support easily repeating evaluations of construct variations, including
changes to reflect adjusted behavior definitions. Finally, simulation tools can be
shared and reused. Simulation has two main disadvantages. First, transferring the
conclusions obtained to the real world is difficult — for example, if assumptions
are unrealistic due to a subjective definition of expected behavior, or wrong
compared to a real-world setting, the resulting conclusions might be misleading.
Second, building precise models and achieving realistic parameterizations can be
a laborious process.
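As a concrete example, the pedestrian movement mentioned above could be generated with the classic random-waypoint movement model. The walking speed and area size are illustrative parameters, not calibrated values:

```python
import random

def random_waypoint(n_steps, speed=1.4, area=(100.0, 100.0), seed=0):
    """Generate one pedestrian trace (one (x, y) point per second)
    with the random-waypoint model: walk straight toward a random
    target, then pick a new target on arrival.  Speed ~1.4 m/s and a
    100x100 m area are illustrative defaults."""
    rng = random.Random(seed)
    x, y = rng.uniform(0, area[0]), rng.uniform(0, area[1])
    tx, ty = rng.uniform(0, area[0]), rng.uniform(0, area[1])
    trace = [(x, y)]
    for _ in range(n_steps):
        dx, dy = tx - x, ty - y
        dist = (dx * dx + dy * dy) ** 0.5
        if dist <= speed:  # reached the waypoint: choose the next one
            x, y = tx, ty
            tx, ty = rng.uniform(0, area[0]), rng.uniform(0, area[1])
        else:              # move one second's worth toward the waypoint
            x, y = x + speed * dx / dist, y + speed * dy / dist
        trace.append((x, y))
    return trace
```

Fixing the seed makes runs repeatable, which is exactly the property the text credits to simulation; the flip side is that how faithfully such a model matches real airport pedestrians is an open assumption.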
Emulation
Emulation evaluates a construct using prerecorded input data, thereby
attempting to mimic a real-world situation. This data is collected during a
separate experiment with intended users, devices, and places and is thus prone to
the listed challenges. During collection, additional information about the input
data’s context is recorded to establish a ground truth — for example, in regards
to human behavior or environmental conditions. Emulations are based on real
data. Like simulations, they support easily repeating evaluations of construct
variations, including changes to reflect adjusted behavior definitions. Moreover,
emulation tools and datasets can be shared. However, the data might be
non-representative — for example, it might not contain all relevant situations or have
been collected under all relevant conditions, a challenge when scaling to human
crowds. When replaying data, researchers might make incorrect assumptions
about, for instance, temporal latency in system components or wireless
networking that might affect system behavior. This can make it hard to reproduce
results if the datasets or emulation tools aren’t publicly available. Finally, systems
that contain feedback loops can’t be evaluated, as in a scenario where the
system’s output makes a user change his or her behavior, which then changes the
input data.
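A trace replay harness of this kind can be very small. The sketch below assumes a toy record format, (timestamp, value) pairs with a per-timestamp ground-truth label recorded during collection, and scores a detector against those labels; all names and values are invented for illustration:

```python
def replay(trace, detector, ground_truth):
    """Feed prerecorded readings through a detector and score it
    against the recorded ground truth.  `trace` is a list of
    (timestamp, value) pairs; `ground_truth` maps timestamp -> label."""
    correct = sum(1 for t, value in trace if detector(value) == ground_truth[t])
    return correct / len(trace)

# Toy emulation run: an accelerometer-magnitude trace with labels
# captured alongside the data during the original experiment.
trace = [(0, 2.1), (1, 9.5), (2, 1.0), (3, 8.7)]
truth = {0: "idle", 1: "moving", 2: "idle", 3: "moving"}
accuracy = replay(trace, lambda v: "moving" if v > 5.0 else "idle", truth)
```

Note that the replay simply iterates over the recording: a system whose output would change the user’s behavior (a feedback loop) cannot be evaluated this way, as the text observes.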
Deployment
Deployment tests a construct in situ as it is being used in intended situations with
given input data; it’s thus also prone to the listed challenges. Performance is
evaluated by monitoring the operations and the construct’s output, as well as
relevant data about the deployment’s context. Deployments measure the actual
performance in the intended context on the intended hardware. Researchers can
test feedback loops involving users, and implementations of the construct can be
shared for replication purposes. However, deployments can be resource
consuming. Collecting enough reliable data about the context to establish a
ground truth can be challenging, as highlighted previously. Testing variability
requires repeated experiments under comparable conditions, and generalizing
the results from a single deployment can be difficult — that is, does the method
scale in users, devices, or places and apply to other deployments?
Many accidents happen near us, but we are not aware of them. There are many
good people in our society who are willing to help others, but they do not know
how, when, or where their help is needed. Such people, as well as health workers
and police officers on regular duty, can join this team. The Citizen Safety
system asks for our Aadhaar number, PAN card number, account number, and
passport number (if any) for security purposes. We can also add the mobile
numbers of family and friends to the system, which will track and monitor them
so that we can see their location.
Our idea is to create a smart wrist device that connects through GPS and the
internet and can even monitor our heart rate.
OK, let's dive into a scenario…
For example, suppose I am a health worker who has joined the Citizen Safety
system, and an incident has happened within a 1 km radius of me; the person
involved needs immediate first aid until the ambulance arrives. Someone near the
incident can post about it in the system, and all health workers within a radius
of 1 or 2 km will receive the emergency notification and can reach the scene in
time. Even a minute is important.
As another example, if a woman is in danger, she can send an emergency message
to a family member or friend regardless of distance, and the same emergency
message, along with her live location, will be sent to a police officer who has
joined the Citizen Safety system and is available within a 5 km radius.
People who have saved or helped others will receive monetary rewards through
their account linked to the system, which will also encourage others to join
Citizen Safety.
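The radius-based notification in both scenarios comes down to a great-circle distance filter over registered responders. A minimal sketch, where the responder registry format is an assumption of this example:

```python
import math

def distance_km(a, b):
    """Great-circle (haversine) distance between two (lat, lon)
    points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def responders_in_range(incident, responders, radius_km=2.0):
    """Return the registered responders within radius_km of the
    incident; `responders` maps a name to a (lat, lon) position."""
    return [name for name, pos in responders.items()
            if distance_km(incident, pos) <= radius_km]
```

A production system would use a spatial index rather than a linear scan, but the selection rule, notify everyone whose distance falls inside the radius, is the same.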
CONCLUSIONS:
The IoT has attracted much attention over the past few years. Numerous sensing
devices are emerging in our living environments, creating an IoT that integrates
cyber and physical objects. Mobile crowd sensing plays an important role in the
IoT paradigm. Sensors continuously generate enormous amounts of data, which
consumes substantial resources, such as storage for holding the data and
bandwidth for transferring it. Previous work demonstrates that there is a
significant amount of redundancy in sensor data. Thus, redundancy elimination is
important and worthwhile: it can significantly reduce cost (e.g., bandwidth cost
for data transfer) and facilitate the timely delivery of critical information by
reducing the traffic load, thereby helping to achieve good QoS. In
this paper, we review the mobile crowd sensing techniques and challenges. We
focus on the discussion of the resource limitation and QoS (e.g., data quality)
issues and solutions in mobile crowd sensing. A better understanding of resource
management and QoS estimation in mobile crowd sensing can help us design a
cost-effective crowd sensing system that can reduce the cost by fully utilizing the
resources and improve the QoS for users. At the end of the paper, we discuss
some of the trends in mobile crowd sensing. In the future, we will give an in-depth
study of challenges and techniques, solutions for addressing challenges in mobile
crowd sensing for IoT, and we will also analyze the production systems and
provide case studies.
References:
M.B. Kjærgaard, "Studying Sensing-Based Systems: Scaling to Human Crowds in the Real World," IEEE Pervasive Computing, vol. 17, no. 5, 2013.
R.K. Ganti, F. Ye, and H. Lei, "Mobile Crowdsensing: Current State and Future Challenges," vol. 49, no. 11, 2011.
D. Peng and F. Wu, "Pay As How Well You Do: A Quality Based Incentive Mechanism for Crowdsensing," vol. 17, no. 2, 2018.