
2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing

DIVINE: Building a wearable device for intelligent control of environment using Google Glass
Tamir Sobeih, Nick Whittaker and Liangxiu Han
School of Computing, Mathematics and Digital Technology
Manchester Metropolitan University
Manchester, UK
Email: l.han@mmu.ac.uk

Abstract—Google Glass has introduced a new way for users to interact with their environment. Users wearing Glass interact through a head-mounted display, a touchpad and spoken commands, allowing full immersion in their environment (such as taking pictures, navigating the web, etc.), linked to background software. Coupled with strong public and commercial interest in smart homes and intelligent environments, in this work we have, for the first time, prototyped a wearable system that uses Glass for intelligent control of the environment, which could potentially lead to the use of wearable technology in applications such as home automation, smart cities or smart spaces.

Keywords: Wearable technology; Google Glass; Mobile computing; Ubiquitous computing; Smart/intelligent environment
I. INTRODUCTION
According to Gartner [1], 4.9 billion things were connected to the Internet in 2015, a figure expected to reach 25 billion by 2020. This increasing number of connected smart things forms the foundation of the Internet of Things (IoT). The IoT is rapidly building connections between things or devices such as sensors, processors and controllers, often networked with other sensors and sharing data between devices using Machine-to-Machine (M2M) communication. With the rise of wearable technology, IoT connectivity has been expanded to Machine-to-People (M2P) through wearable devices such as Google Glass and smart watches, which can gather data from the wearer's body or from the environment, provide information to the wearer, or both; this is dramatically reshaping the way we live. In this vision, humans will be completely immersed in technology, leading to the so-called Internet of Everything.
Wearable devices (or wearables) are small electronic devices that couple one or more miniaturised sensors with computational capability. In effect, they are consumer gadgets that attach to the body (any place on the body where they can capture or present information), as shown in Fig. 1 (source: [2]).
Wearables allow users' physiological and bio-mechanical data to be measured and tracked for health and fitness purposes; they can also help present and display information to users. Currently, most applications of wearable technology are in the areas of health and medicine, fitness and sports. For example:

Fig. 1. Examples of wearables and their locations on the user (source: [2])

• Wearables for fitness and sports: the Nike+ FuelBand is one of the best-known fitness wearables on the market [3]; it performs all-day activity tracking of calories burned and steps taken. Jawbone offers a wristband-based fitness tracker [4], and Fitbit offers two wristbands (Force and Flex) and two clip-on trackers [5]. Airo Health provides a wristband that tracks nutrition, exercise, stress and sleep [6].

• Wearables for medicine and healthcare: potential medical and health applications of wearables have gained increasing popularity, since wearables can track or help diagnose users' health conditions by monitoring biometric signals. For example, the wearable BodyGuardian Remote Monitoring System by Preventice [7] can monitor patients' biometric signals, and a doctor can make a diagnosis based on the information delivered through these signals. Note that wearable medical applications, as medical devices, have to be approved before being put into practice.

As more sophisticated wearable devices (e.g. Google Glass) mature, there are many potential areas to be exploited beyond health and fitness applications. For example, wearables can be used in homes to tailor environmental experiences, such as automatically adjusting lighting, temperature, or entertainment options as users move from one space to another. They could also be used in defence and security for crime prediction and intervention, or in firefighting for responding to fires and other emergencies more rapidly, using heads-up displays to obtain instant readouts. IDC [8] estimates that shipments of wearables will grow to 72.1 million units in 2015, made up of 39 million basic wearables and 33.1 million smart wearables.

In this work, for the first time, we have prototyped a wearable system based on Google Glass which allows a user to interact with sensors for intelligent control of their environment. Differently from existing Google Glass-based projects using the Google Cloud, we have developed our own sensor environment, consisting of a set of sensors and a web server, which allows the Google Glass application to access and control sensors and gives users full control of their environment in a smart way.
II. RELATED WORK: GOOGLE GLASS AND ITS CURRENT EXPLORATORY APPLICATIONS

A. Google Glass

Google Glass [9] is a wearable computer with a heads-up display in the form of a pair of glasses. It has built-in sensors (e.g. gyroscope, accelerometer), Bluetooth and WiFi, and it interacts with users via voice controls, a touchpad, and Google Glass applications (also called Glassware), as shown in Fig. 2. Glass runs on the Android operating system. The Google Glass Explorer Edition incorporates a dual-core processor, 2GB of RAM and 16GB of flash storage. As an experimental wearable device, Google released a prototype of Google Glass in 2013 and made it available to the public in 2014. Although Google has stopped producing the prototype (marking the completion of the experimental phase), the company remains committed to the development of an enterprise version [10].

Fig. 2. Components of Google Glass

There are two ways to control Google Glass. 1) One way is to control it through the microphone with spoken commands. To invoke an action or a Google Glass application, a user says "ok, glass", then uses spoken commands such as "take a picture", "record a video", "send a message" (using speech-to-text), "hang out with" a friend, or "get directions to" a destination. After these actions, users can choose additional option commands, for instance to view the picture. The results are displayed in view without obscuring the user's natural vision, and audio is conveyed in a way that leaves the user's hearing unobstructed, so this mode can potentially be used in hands-free control applications. 2) The other way of controlling Glass is via the touchpad, by scrolling through the menu and then tapping to select options from the menu list.

1) Interface: The main interface of Google Glass is called the timeline, consisting of 640 × 360 pixel cards representing the past, present and future. The default Home card is the Glass clock with "ok, glass", located in the center of the timeline. After waking up the Glass, users can scroll to the left of the Glass clock to display information relating to the present and future, or scroll to the right to display information relating to the past.

2) Glassware applications: There are two ways to create Google Glass applications, so-called Glassware. One way is to use the Mirror API provided by Google, based on RESTful web services [11], a software architectural style. Such Glassware is a server-side application running on a web server (controlled by the Google platform). Most existing Glassware is deployed on Google App Engine (a cloud-based platform as a service) for easy development, management and maintenance. Glass itself, as a client, accesses and interacts with the Glassware through the Internet, so Glassware works like software as a service: a user wearing Glass sends a request to a Glassware service, and the service becomes available once the user is authorised. In this case the device is used merely as a means of displaying web app content, since the application runs online; as such, it has no access to the underlying Glass hardware, sensors, storage, etc. It displays the incoming views as part of the timeline and does not allow the application to completely control the user's experience. The advantage of this approach is that it is very easy for a developer to implement Glassware, increasing productivity. However, one of the main disadvantages is the security concern that each piece of Glassware has to run on the Google-controlled cloud platform (not controlled by the end users themselves).
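As an illustration of the Mirror API approach, the sketch below inserts a plain text card into the wearer's timeline with a single REST call. This is a minimal sketch rather than the paper's code; the endpoint is the Mirror API's timeline collection, and accessToken is assumed to be an OAuth 2.0 bearer token obtained beforehand.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class MirrorTimelineInsert {
    // Inserts a simple text card into the wearer's timeline via the Mirror API.
    public static int insertCard(String accessToken, String text) throws Exception {
        URL url = new URL("https://www.googleapis.com/mirror/v1/timeline");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Bearer " + accessToken);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        String body = "{\"text\": \"" + text + "\"}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode(); // 200/201 on success
    }
}
```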
The other way to develop Glassware is to create a native application that runs directly on the Glass using its Android OS, in a similar fashion to creating phone applications. This allows complete control of the Glass, access to all the available hardware, and excellent customisability. Furthermore, it allows the application to run while the device is not connected to the internet.
B. Current exploratory applications of Google Glass

A growing number of Google Glass applications have been explored and developed in the areas of health [12], [13], [14], education [15], [16], [17], and entertainment [18].


For example, a cardiothoracic surgeon from the University of California, San Francisco [12] has experimented with using Google Glass in the operating room. Glassware gives surgeons the ability to view patient x-rays directly, without having to leave the operating table; that is, it allows doctors to access healthcare records without having to break eye contact with the patient. As Glass can be operated by voice command, it could even allow surgeons in the sterile environment of an operating theatre to document findings in a patient's medical notes and share this data with colleagues in real time. Another Google Glass application was developed to predict glucose levels in the food diabetics eat and to help paralytics [13]. Apart from health and medical applications, Google Glass has also been used in education. STEMbite has used Google Glass to record video lessons teaching the maths and science of everyday life, as a new teaching and learning experience [16]. Google Glass has also been used in medical education, where it allows medical students to watch different medical procedures in real time, and allows residents to review their bedside manner from the patient's perspective [15]. Vallurupalli et al. [17] carried out a study of using wearable technology to enhance medical education. The authors used Google Glass to explore different scenarios in cardiovascular practice: a mock trainee wearing Google Glass enacted each scenario, and the live video stream from the Glass was transmitted over a wireless connection to the smartphones of each fellow participating in the experiment. This allowed improvements in education and patient outcomes in a cardiology fellowship programme.
Researchers [18] have developed a game application which connects Google Glass wearers to a virtual ant colony to solve real-world problems using crowdsourcing. The work in [19] developed Glassware that can recognise locations and deliver a tour-guidance experience to wearers of the device.
Development of Google Glass applications is still at an early exploratory stage. In this work, for the first time, we have designed and prototyped a wearable system that enables a user to fully connect to, display and intelligently control sensor devices within an environment, leading to different applications such as home automation, smart cities and/or smart spaces.

"

 
" 

 

"
" 


!"


Fig. 3.

Fig. 4.


 

System overview of DIVINE

A detailed system design of DIVINE

III. DIVINE: THE PROPOSED WEARABLE SYSTEM

To fully control the Glass and the sensor environment, we have developed Glassware running directly on the Glass, which connects via a WiFi connection to the server controlling the sensors, to enhance the user's personal experience in the context of a smart environment. The overview of the proposed wearable system based on Google Glass is shown in Fig. 3: the Glassware runs directly on the Glass; the Raspberry Pi runs server applications controlling the sensors (the sensor environment/board) and creates a wireless hotspot; the Glass then connects over the WiFi connection to the server running on the Raspberry Pi. A detailed system design is shown in Fig. 4. In the following sections, we describe each component of the system.

A. Sensor environment

To access and control environmental sensors using Glass, it is important to create a sensor environment, which is built upon our previous work [20]. This sensor environment is a client-server model, consisting of a server and remote clients connected to a large number and variety of sensors that can be controlled through a web interface, smartphone, or smart wearables, as shown on the left side of Fig. 4.

1) Sensor device board: The sensor device board has been created using the Phidgets starter kit (building-block tools for sensing and control from different computing devices) [21], containing an RFID scanner, a spatial sensor, a motor controller with four motors, and an interface kit through which eight sensors are connected, as well as four LEDs to simulate digital outputs and two switches to simulate digital inputs. The setup was completed using a USB wireless adapter and a powered USB hub that enables all of the Phidgets to be connected at the same time.

2) Raspberry Pi, web server and sensor device application: The central component is the server, running on the Raspberry Pi, which is responsible for controlling the broker, advertising itself to the network, and then managing all the update and control data sent between the devices and clients (such as mobile apps and wearables like Glass or a smart watch). This includes a simple automation system.
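To make the broker discoverable, the server could advertise itself over multicast DNS (DNS-SD). The sketch below, using the jmDNS library, is an assumption about how such advertising might look rather than the project's actual code; the service type, instance name and port are illustrative.

```java
import java.net.InetAddress;
import javax.jmdns.JmDNS;
import javax.jmdns.ServiceInfo;

public class BrokerAdvertiser {
    public static void main(String[] args) throws Exception {
        // Advertise the MQTT broker over multicast DNS so that clients
        // (Glass, phones) can find it without a hard-coded IP address.
        JmDNS jmdns = JmDNS.create(InetAddress.getLocalHost());
        ServiceInfo info = ServiceInfo.create(
                "_mqtt._tcp.local.",   // common DNS-SD service type for MQTT
                "divine-broker",       // hypothetical instance name
                1883,                  // default MQTT port
                "DIVINE sensor environment broker");
        jmdns.registerService(info);
        System.out.println("Broker advertised; press Ctrl-C to stop.");
        Thread.currentThread().join(); // keep the JVM alive while advertising
    }
}
```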


Devices are responsible for maintaining their own state, connecting to the network and the server, and conveying any change in that state while listening for control communication. In this design, the MQTT protocol [22] has been chosen to allow reliable communication with very low network overhead within a resource-constrained environment. MQTT follows a broker-based publish/subscribe model that is lightweight and easy to implement; it is especially optimised for unreliable and low-bandwidth networks made up of embedded devices with limited physical resources. The corresponding sensor device application (the client programme), also running on a Raspberry Pi, manages a number of analogue sensors, such as light, temperature, vibration and magnetic sensors, as well as digital inputs from switches and digital outputs in the form of LEDs. In addition, four motors were attached: two acting as binary switches with on and off positions, while the other two could be set anywhere along their 180 degree range. This variety allows easy demonstration of the display and control of both binary and range-based sensors and actuators.

Fig. 5. Components of Glass and Glassware

B. Glass and Glassware

In this work, we have used the Google Glass Explorer Edition V3. The underlying operating system of the Glass is Android 4.4 KitKat; as a fully functional mobile computing device, this allows applications to be developed in a similar way to those designed for phones and tablets. However, due to the nature of the device, a number of considerations must be made with regard to the user experience, in terms of conveying information to the user and detecting their input.

As described in earlier sections, the Glass user interface is a card-based model where each screen is represented by a card containing information, which can be designed and manipulated as desired by a developer. The basic form of the interface is the timeline, a series of cards in temporal sequence that can be navigated by swiping forward and backward on the touchpad; new cards are added at the beginning as they are created, allowing the user to traverse recently used applications easily. As part of this system, Glassware is based on one of two basic designs: Live Cards or Immersive Activities. The former is designed as single cards capable of displaying live information to the user, updated as necessary, which can be displayed as part of the timeline, enabling the user to interact with that aspect of the Glassware just by navigating to the card. The second is a more traditional design which gives complete navigational control to the Glassware, encapsulating the user within the application until they choose to exit. This allows the Glassware to completely control all aspects of the user experience.

In order to connect to and display data from a variety of sources in the environment, and to convey a large amount of variable data and control in a structured way that facilitates ease of use, we have adopted the second method, using an Immersive Activity. This allows access to and display of all the sensor information in the environment as necessary, and access to all the input and navigation facilities built into Glass to allow the user to control said devices.

In order to connect to the server, the Glassware accesses the Wi-Fi transceiver and searches for the service detailing the MQTT broker. On resolution of this service, the IP address and port of the broker are determined, and the Glassware initiates an MQTT client and logs in to the broker. The Glassware and Glass components are shown in Fig. 5. The application then builds a data model from the received device list and updates it as required on receipt of update messages from the server.
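On the Glass side, the service lookup described above could be done with Android's built-in network service discovery. The following sketch assumes the broker is advertised as an "_mqtt._tcp." DNS-SD service (as in the server sketch earlier); it is illustrative and not the DIVINE implementation.

```java
import android.content.Context;
import android.net.nsd.NsdManager;
import android.net.nsd.NsdServiceInfo;

// Discovers the advertised MQTT broker and resolves its address and port.
public class BrokerFinder {
    public static void find(Context context) {
        final NsdManager nsd = (NsdManager) context.getSystemService(Context.NSD_SERVICE);
        nsd.discoverServices("_mqtt._tcp.", NsdManager.PROTOCOL_DNS_SD,
                new NsdManager.DiscoveryListener() {
                    @Override public void onServiceFound(NsdServiceInfo service) {
                        // Resolve the service to obtain the broker's IP and port.
                        nsd.resolveService(service, new NsdManager.ResolveListener() {
                            @Override public void onServiceResolved(NsdServiceInfo info) {
                                String brokerUri = "tcp://" + info.getHost().getHostAddress()
                                        + ":" + info.getPort();
                                // ...hand brokerUri to the MQTT client here...
                            }
                            @Override public void onResolveFailed(NsdServiceInfo i, int e) { }
                        });
                    }
                    @Override public void onServiceLost(NsdServiceInfo service) { }
                    @Override public void onDiscoveryStarted(String serviceType) { }
                    @Override public void onDiscoveryStopped(String serviceType) { }
                    @Override public void onStartDiscoveryFailed(String t, int e) {
                        nsd.stopServiceDiscovery(this);
                    }
                    @Override public void onStopDiscoveryFailed(String t, int e) { }
                });
    }
}
```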
From this point the Glassware loads into the main navigation activity, which represents the sensor data within the bounds of the Glass card interface. The data model contains three basic object types: Collections, Devices and Sensors, the latter of which is further split into Switch Sensors and Range Sensors. Each of these is represented by a different card layout, arranged in a horizontal row that can be traversed by the user swiping forward and backward on the touchpad, with the general Glass convention of a top-down swipe gesture being used as a back action to return to the previous row. Collections and Devices are represented by a card that displays the name of the object and, when selected with a tap, opens up into another row of similar cards.

A Device, when selected, opens up into a row of sensor cards which display the name of the sensor and its current value. A Switch Sensor is represented by a coloured toggle switch indicating its current position, either on or off, by both the colour and differing on/off text. If a sensor is controllable, tapping on the touchpad causes a sensor update object to be generated for that sensor, which is dispatched to the server by posting it to the relevant topic; when the successful update is received, the card alters to indicate the new value. A Range Sensor depicts the name of the sensor along with its current numerical value. If it is controllable, it also depicts a slider, which can be focused by tapping the touchpad, after which sliding forward and backward will move the slider, altering the displayed value of the sensor accordingly. When the user stops touching the pad, the Glassware generates the update object based upon the selected slider position and transmits it to the device. The user can then swipe down to exit the slider focus and return to the general navigation paradigm. In this way the user can easily traverse all the devices connected to the server and control them as desired. The detailed Glassware flow is shown in Fig. 6.

Fig. 6. Glassware user navigation flow

One additional enhancement to this application was the use of voice commands to control the Glassware. As Glass is worn and designed to be used while the user is undertaking other activities, there is a sophisticated voice recognition component in the general Glass interface which can be exploited by custom applications as well. Voice control is initiated by the user saying "OK Google", after which Glass displays the list of available options and listens for the next phrase spoken. If it recognises the phrase as matching one of the list items, that item is selected and passed through to the application to be dealt with. The design of the Glassware incorporates this voice control, offering a start command that integrates with the Glass application menu, allowing it to be launched from the start card with the phrase "connect to open day server". Once in the application there are a number of contextual menus: the first is available within a Collection or Device and allows the user to traverse the group and select a particular card, go back, or restart the navigation activity. While a Sensor card is shown, an additional option enters the sensor menu, which enables the user to toggle a Switch Sensor or set a Range Sensor value.

This prototype has undergone thorough software testing, including black box and white box testing. The system functions well: it can automatically adjust sensors such as lighting and temperature, and gives the user full control of the experience.

IV. CONCLUSION AND FUTURE WORK

The main goal of this exploratory work was to prototype, for the first time, a wearable system with particular emphasis on interacting with sensor devices in the environment. Testing and demonstration have shown the successful delivery of this prototype. This work could potentially be used in smart homes and smart spaces for full control of the smart environment.

While the Glassware produced achieved the goals set, there are still many avenues of development that could be undertaken. One aspect is to utilise the Bluetooth and Wi-Fi capabilities of Glass, along with any other relevant sensors, to determine precise location data and feed it to an application to create contextual data displays automatically. Another aspect is to utilise the camera, and particularly the fact that it sees a similar view to the user: accessing the camera and using it to identify devices in the user's field of view through some form of photo recognition would be a great leap forward in hands-free ubiquitous computing. One of the most important aspects is security design. Security is a very important part of the Internet of Things (IoT); the amount and type of data gathered will be extremely personal and should be protected. Future work will incorporate security aspects into the system.

ACKNOWLEDGMENT
The work reported in this paper formed part of the DIVINE project, which is funded by Manchester Metropolitan University in the UK.
REFERENCES

[1] Gartner. (2015, 07) The disruptive impact of IoT on business - Gartner Symposium/ITxpo 2014. [Online]. Available: http://www.gartner.com/newsroom/id/2905717
[2] H. Salah, E. MacIntosh, and N. Rajakulendran. (2014) MaRS market insights: Wearable tech: Leveraging Canadian innovation to improve health. [Online]. Available: www.marsdd.com/news-insights/mars-reports/
[3] Nike. (2015, 07) Nike+ FuelBand. [Online]. Available: http://www.nike.com/
[4] Jawbone. (2015, 07) Jawbone. [Online]. Available: https://jawbone.com/
[5] Fitbit. (2015, 07) Fitbit. [Online]. Available: http://www.fitbit.com/uk
[6] Airo Health. (2015, 07) Airo Health. [Online]. Available: http://www.getairo.com/
[7] Preventice. BodyGuardian remote monitoring system. [Online]. Available: http://www.preventice.com/products/bodyguardian/
[8] IDC. (2015, 07) Worldwide quarterly wearable device tracker. [Online]. Available: http://www.idc.com/tracker/
[9] Google. (2015, 07) Google Glass. [Online]. Available: https://developers.google.com/glass/
[10] BBC. (2015, 07) Google Glass sales halted but firm says kit is not dead. [Online]. Available: http://www.bbc.co.uk/news/technology-30831128
[11] L. Richardson, M. Amundsen, and S. Ruby, RESTful Web APIs, 1st ed. O'Reilly Media, 2013.
[12] L. Kathy-Chin. (2013) A surgeon's review of Google Glass in the operating room. [Online]. Available: http://www.fastcompany.com/3022534/internet-of-things/a-surgeons-review-of-google-glass-in-the-operating-room
[13] P. Marks, "A healthy dose of Google Glass," New Scientist, vol. 219, no. 2936, pp. 22-23, 2013.
[14] C. Wiltz, "EPGL Medical Sciences among first device makers to use Google Glass," Medical Device and Diagnostic Industry, vol. 35, no. 4, 2013.
[15] W. Glauser, "Doctors among early adopters of Google Glass," Canadian Medical Association Journal, p. 109, 2013.
[16] STEMbite. (2015, 07) STEMbite. [Online]. Available: www.youtube.com/STEMbite
[17] S. Vallurupalli, H. Paydak, S. Agarwal, M. Agrawal, and C. Assad-Kottner, "Wearable technology to improve education and patient outcomes in a cardiology fellowship program: A feasibility study," Health and Technology, vol. 3, no. 4, pp. 267-270, 2013.
[18] M. Campbell, "Hive-mind solves tasks using Google Glass ant game," New Scientist, vol. 219, no. 2928, p. 20, 2013.
[19] H. Altwaijry, M. Moghimi, and S. Belongie, "Recognizing locations with Google Glass: A case study," in IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 24-26 March 2014, pp. 167-174.
[20] T. Sobeih, "Mimir: Development of a software system to control remote sensor devices using mobile and wearable technologies," Master's thesis, Manchester Metropolitan University, 2014.
[21] Phidgets. (2015, 07) Phidgets. [Online]. Available: http://www.phidgets.com/
[22] MQTT. (2015, 07) MQTT. [Online]. Available: http://mqtt.org/

