
Towards Using Embedded Magnetic Field Sensor for Around Mobile Device 3D Interaction

Hamed Ketabdar
Quality and Usability Lab, TU Berlin / Deutsche Telekom Laboratories
Ernst-Reuter-Platz 7, 10587 Berlin
hamed.ketabdar@telekom.de

Kamer Ali Yuksel
TU Berlin
Ernst-Reuter-Platz 7, 10587 Berlin
kamer.yuksel@telekom.de

Mehran Roshandel
Deutsche Telekom Laboratories
Ernst-Reuter-Platz 7, 10587 Berlin
mehran.roshandel@telekom.de

ABSTRACT
We present a new technique that uses the embedded compass (magnetic field) sensor for efficient use of the 3D space around a mobile device for interaction with the device. Around Device Interaction (ADI) extends the interaction space of small mobile and tangible devices beyond their physical boundary. Our method relies on the compass sensor integrated in recent mobile devices (e.g. iPhone 3GS, G1/2 Android). A properly shaped permanent magnet (e.g. a rod, pen or ring) is used for interaction: the user makes coarse gestures in the 3D space around the device with the magnet, and the movement of the magnet changes the magnetic field sensed by the compass sensor. The temporal pattern of the gesture is then used as the basis for sending different interaction commands to the device. The proposed method does not impose changes in the hardware or physical specifications of the device and, unlike optical methods, is not limited by occlusion. It therefore allows efficient use of the 3D space around the device, including the space behind it. Zooming, turning pages, accepting/rejecting calls, clicking items, controlling a music player, and mobile game interaction are example use cases. An initial evaluation of our algorithm using a prototype application developed for the iPhone shows convincing gesture classification results.

Categories and Subject Descriptors
I.5.4 [Computing Methodologies]: Pattern Recognition, Applications – Signal processing.

General Terms
Algorithms

Keywords
Embedded Compass (Magnetic) Sensor, Mobile Devices, Around Device 3D Interaction, Properly Shaped Magnet, Movement-Based Gestures.

Copyright is held by the author/owner(s).
MobileHCI 2010, September 7-10, 2010, Lisboa, Portugal.
ACM 978-1-60558-835-3.

1. INTRODUCTION: AROUND DEVICE INTERACTION
Around Device Interaction (ADI) is being increasingly investigated as an efficient interaction technique for mobile and tangible devices. ADI extends the interaction space of small mobile devices beyond their physical boundary, allowing effective use of the 3D space around the device for interaction. This is especially useful for small tangible or wearable mobile and controller devices such as mobile phones, wrist watches and headsets, on which small buttons and touch screens are difficult to operate. The space around the device, however, can be used easily no matter how small the device is. ADI is also useful when the device screen is not in the user's line of sight, and ADI techniques can be combined with other interaction methods such as keyboard or touch input to provide more advanced interaction possibilities.

The ADI concept allows coarse movement-based gestures made in the 3D space around the device to be used for sending interaction commands such as turning pages (in an e-book or calendar), controlling a portable music player (changing sound volume or music track), zooming, rotation, etc. On a mobile phone, it can also be used for dealing with incoming calls, for instance accepting, rejecting or diverting a call. ADI techniques are based on different sensory inputs such as cameras [1], infrared distance sensors [2, 3, 4, 5], a touch screen at the back of the device [6], proximity sensors [7], and electric field sensing [8, 9].

Some new mobile phones, such as the Apple iPhone 3GS and Google Android devices, are equipped with a compass (magnetic field) sensor. In this work, we propose ADI based on influencing this integrated compass sensor with a properly shaped magnetic material. The user takes the magnet, which can be shaped as a rod, pen or ring, in hand and draws coarse gestures in the 3D space around the device (Fig. 1). These gestures are interpreted as different interaction commands by the device. This interaction method overcomes some shortcomings of state-of-the-art techniques, such as the occlusion problems of optical methods. In addition, since the sensor is already integrated in mobile phones, the method does not impose changes in the hardware or physical specifications of the device and only requires a properly shaped magnet as an external accessory. Since the magnetic field passes through many different materials, interaction is possible even if the device is, for instance, in a bag or pocket. Such an approach opens up a new and effective way of interacting with mobile devices and mobile games, as well as a basis for user authentication using magnetic gestures.

Figure 1. Interaction with a mobile phone using the space around the phone, based on changes in the magnetic field.
The paper is organized as follows: Section 2 describes the idea behind our approach to ADI in more detail and compares it with some state-of-the-art approaches. Section 3 explains feature extraction and gesture classification. Section 4 presents initial experiments and results. An implementation of the proposed approach on the Apple iPhone is introduced in Section 5, and Section 6 provides conclusions and directions for future work.

2. OUR APPROACH: INTERACTION WITH MOBILE DEVICES BASED ON MAGNETIC FIELD SENSOR
In this work, we present our initial investigations into using the compass (magnetic field) sensor integrated in mobile devices (e.g. iPhone 3GS, G1 Android) for ADI. In our approach, a regular magnetic material in a shape suitable for holding in the hand (e.g. a rod, pen or ring) influences the compass sensor through different movement-based gestures, and these gestures are used to interact with the device.

In the Introduction, we briefly mentioned a few existing methods for ADI. Compared to camera-based techniques, extracting useful information from the magnetic sensor is algorithmically much simpler than implementing computer vision techniques. Our method does not impose major changes in the hardware specifications of mobile devices, nor does it require installing several optical sensors (e.g. at the front, back or edges of the device); it relies only on the magnetic sensor already embedded in some new mobile phones. Optical sensor installation also occupies physical space, which can be critical in small devices, whereas for devices such as the iPhone and G1/2 Android it is only necessary to have a properly shaped magnet as an extra accessory. Our approach additionally does not suffer from illumination variation and occlusion problems. Optical interaction techniques can be limited when the sensor is occluded by an object, including the user's body. Since the back of a mobile device is usually covered by the hand, optical ADI techniques (e.g. camera- and infrared-based) can have difficulty using the space behind the device. In our method the interaction is based on the magnetic field, which passes through the hand, so the space at the back of the device can be used efficiently for interaction (Fig. 2). Additionally, the user can interact with the device even when it is not in the line of sight or is covered (e.g. in a pocket or bag); for instance, the user may accept or reject a call, or change a music track, without taking the phone out of the pocket or bag (Fig. 3).

This interaction can be used in different applications such as turning pages, zooming, reacting to a call alert, and controlling music players. We have built a demonstrator called "MagiTact" based on this concept, which is presented in Section 5. In addition to regular interaction options, our technique can be used for efficient interaction with games, as well as for a new user authentication technique based on signature-shaped gestures.

Figure 2. Back of device interaction based on the magnetic field.

Figure 3. Interaction with a mobile device (for instance dealing with incoming calls) is possible even if the device is in a bag or pocket.

3. GESTURE RECOGNITION BASED ON MAGNETIC FIELD
The gestures are created by moving the magnet (a rod or ring) by hand in the space around the device along different 3D trajectories. The gestures studied in this work are mainly based on moving the magnet at different positions around the device, in different directions and with different periodicities. Fig. 4 shows the gestures used in this study. The rod-shaped magnet is installed in a pen. We have used the iPhone 3GS as the mobile device for our studies.

The embedded compass (magnetic) sensor provides a measure of magnetic field strength along the x, y, and z directions. The values change over a range of -128 to 128.

The first step in processing gestures is detecting their beginning and end. This is achieved by comparing the Euclidean norm of the magnetic field strength against a pre-defined threshold.
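As a rough illustration of this segmentation step, the following Python sketch (our own, not the code running on the phone) marks a gesture interval by thresholding the norm of incoming samples. The threshold value is an assumption chosen only for illustration.

```python
import math

# Sketch: detect the beginning and end of a gesture by thresholding the
# Euclidean norm of 3-axis magnetometer samples (illustrative threshold).
THRESHOLD = 60.0

def field_norm(sample):
    """Euclidean norm of one (x, y, z) magnetometer sample."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def segment_gesture(samples, threshold=THRESHOLD):
    """Return (start, end) indices of the first interval whose norm exceeds the threshold."""
    start, end = None, None
    for i, sample in enumerate(samples):
        if field_norm(sample) > threshold:
            if start is None:
                start = i          # norm rises above the threshold: gesture begins
            end = i + 1            # keep extending the interval while the norm stays high
        elif start is not None:
            break                  # norm dropped back below the threshold: gesture ended
    return start, end

# Example: segment_gesture([(3, -5, 40), (80, -10, 55), (95, 0, 60), (4, -6, 41)]) -> (1, 3)
```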
3.1 Feature Extraction
The next step in gesture processing is feature extraction. Feature extraction preserves the information that discriminates between gestures and removes redundant information. All features are extracted over the samples in the interval marked by the beginning and end of the gesture. This interval is divided into two equal-length windows, and a feature vector is extracted for each window. The two feature vectors are then concatenated to form a new feature vector used for gesture classification. Dividing the gesture interval into multiple windows captures the temporal pattern of the gesture in more detail. The features we use are mainly based on the average or variance of the magnetic field strength in different directions, as well as the piecewise correlation between field strengths in different directions. The features used in this study are:

• Average field strength along the x, y, and z directions (3 features)

• Variance of field strength along the x, y, and z directions (3 features)

• Average of the Euclidean norm of the field strength along x, y, and z (1 feature)

• Variance of the Euclidean norm of the field strength along x, y, and z (1 feature)

• Piecewise correlation between field strength along x and y, x and z, and y and z (3 features)

These features form an 11-element feature vector for each window. The two window feature vectors are then concatenated to form a 22-element feature vector for each gesture.
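The following Python sketch illustrates, under our own assumptions, how such a 22-element vector could be computed from a segmented gesture; in particular, reading the "piecewise correlation" as the Pearson correlation between pairs of axis signals within each window is our interpretation, not a detail given above.

```python
import numpy as np

def extract_features(gesture):
    """Illustrative feature extraction for one gesture.

    `gesture` is an (N, 3) array of x/y/z field samples between the detected
    start and end of the gesture. Returns a 22-element feature vector
    (11 features from each of the two equal-length windows).
    """
    half = len(gesture) // 2
    windows = (gesture[:half], gesture[half:])
    features = []
    for w in windows:
        norms = np.linalg.norm(w, axis=1)          # Euclidean norm of each sample
        features.extend(w.mean(axis=0))            # 3: average field along x, y, z
        features.extend(w.var(axis=0))             # 3: variance along x, y, z
        features.append(norms.mean())              # 1: average of the norm
        features.append(norms.var())               # 1: variance of the norm
        for i, j in ((0, 1), (0, 2), (1, 2)):      # 3: pairwise correlations (assumed Pearson)
            features.append(np.corrcoef(w[:, i], w[:, j])[0, 1])
    return np.asarray(features)                    # 22 features per gesture
```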
3.2 Gesture Classification
The extracted feature vector is used as input to machine learning algorithms for gesture classification. We have studied two different classifiers: a Multi-Layer Perceptron (MLP) [10] and a Binary Decision Tree [11].

A Multi-Layer Perceptron (MLP) is an artificial neural network that can realize an arbitrary set of decision regions in the input feature space. The feature vectors are used to train the MLP. When testing the system, a feature vector is presented at the MLP input, and the MLP estimates the posterior probability of each gesture class at its outputs (each MLP output is associated with one gesture class). The gesture class with the highest posterior probability is selected as the recognition output.

A Binary Decision Tree is a logical model, represented as a binary (two-way split) tree, that shows how the value of a target variable (the gesture class) can be predicted from the values of a set of predictor variables (the features).
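A minimal off-device sketch of this classification stage, using scikit-learn as a stand-in for the two classifiers (the recognizer described in this paper runs on the iPhone itself, and the hidden-layer size here is an assumption):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

def train_classifiers(X, y):
    """X: (n_gestures, 22) matrix of feature vectors from Section 3.1, y: gesture labels."""
    mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
    tree = DecisionTreeClassifier(random_state=0).fit(X, y)
    return mlp, tree

def recognize(mlp, feature_vector):
    """Pick the gesture class with the highest posterior probability at the MLP output."""
    posteriors = mlp.predict_proba(feature_vector.reshape(1, -1))[0]
    return mlp.classes_[np.argmax(posteriors)]
```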

4. EXPERIMENTS AND RESULTS
We have set up gesture recognition experiments in order to obtain an initial evaluation of our method. We used 8 gestures for the experiments; Figure 4 shows them. The gestures are selected so that they vary in movement pattern and in their use of the space around the device.

Figure 4. Different gestures used in this study. Gestures 7 and 8 can be interpreted as quick repetitions (twice) of gestures 1 and 3, respectively (double click vs. click).

We invited 6 test users for the experiments. Each user was asked to repeat each gesture at least 15 times using a rod-shaped magnet. We recorded the signals captured by the magnetic sensor using an application developed for the Apple iPhone 3GS.

Features are extracted from the magnetic signals as described in Section 3.1 and used for classification with the different classifiers. We used a 10-fold cross-validation scheme for managing training and test data.

Table 1 shows the classification results for the Multi-Layer Perceptron (MLP) and the Binary Decision Tree. As can be seen in the table, the MLP outperforms the Binary Decision Tree, reaching a good accuracy of 91.4% for gesture recognition.

Table 1. Gesture classification results for different classifiers.

Multi-Layer Perceptron    Binary Decision Tree
91.4%                     83.7%

Table 2 shows the confusion matrix for the MLP-based results. The highest level of confusion is between gestures 3 and 6, as well as between 1 and 7. Gesture 3 can resemble gesture 6 (the circle) if its right-left trajectory differs from its first left-right trajectory, and gesture 7 can be interpreted as a quick repetition (twice) of gesture 1 (double click vs. click).

Table 2. Confusion matrix for gesture recognition using the MLP classifier. Rows are the actual gestures entered, columns the classification results. The numbers in each row are divided by the total number of entered gestures for that class.

Gesture Index    1     2     3     4     5     6     7     8
1                0.89  0.02  0.01  0     0     0     0.08  0
2                0.01  0.93  0.03  0.01  0.01  0.01  0     0
3                0.01  0.01  0.86  0.01  0.02  0.08  0.01  0
4                0     0     0     0.90  0.06  0.04  0     0
5                0     0.02  0.01  0.03  0.92  0.02  0     0
6                0     0.03  0.01  0.04  0     0.91  0     0.01
7                0.02  0     0.03  0     0     0     0.95  0
8                0     0     0.01  0.01  0     0.02  0     0.96

The MLP-based recognition algorithm has also been implemented on the Apple iPhone 3GS and is able to perform gesture classification in real time.

Our studies have shown that reasonably high classification accuracy can be obtained even with a smaller feature set (e.g. only the correlation-based features) and a very simple classifier (e.g. an MLP with 3 input, 3 hidden and 8 output nodes). This is important when considering the practical aspects of implementing such a system on mobile devices.
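A sketch of the corresponding evaluation, under the same scikit-learn assumptions as above, using 10-fold cross-validation to estimate per-classifier accuracy; the fold assignment and classifier settings are ours, so the numbers it produces will not reproduce Table 1 exactly.

```python
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

def evaluate(X, y):
    """Estimate gesture recognition accuracy with 10-fold cross-validation."""
    classifiers = {
        "Multi-Layer Perceptron": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
        "Binary Decision Tree": DecisionTreeClassifier(),
    }
    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X, y, cv=10)   # accuracy on each of the 10 folds
        print(f"{name}: {scores.mean():.1%} mean accuracy")
```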
5. IMPLEMENTATION: "MagiTact"
We have implemented a demo application called "MagiTact", based on the presented method, for the Apple iPhone 3GS. The interaction is used to turn pages left-right or up-down in a photo viewer and a document viewer, as well as to zoom a map in and out. The zooming functionality can also be performed using the space at the back of the device, so that the screen does not get occluded. The application also demonstrates rejecting or accepting a call using our interaction method. This works even when the phone is in a bag or pocket, which facilitates dealing with unexpected calls in situations where answering is awkward (e.g. in an office or meeting). In addition, the application demonstrates interacting with a music player to change the sound volume or music track.

6. CONCLUSIONS AND EXTENSION OF THE WORK
In this paper, we studied the use of the magnetic sensor embedded in new smart phones (e.g. Apple iPhone 3GS and Google Android) for interacting with the device through 3D movement-based gestures. We showed that such interaction is possible using a properly shaped magnet. Experiments show high gesture classification accuracy with a relatively simple processing system. The proposed method provides a simple yet effective interaction technique that allows efficient use of the 3D space around the device (including the back of the device), and it is not limited by the occlusion problems of optical methods. Additionally, it does not impose any change in the hardware or physical specifications of new mobile devices that are already equipped with a compass (magnetic) sensor.

Apart from gesture commands for interacting with the device's user interface, the technique has good potential for improving mobile games. Gaming applications are being increasingly developed for mobile devices, and more user-friendly interaction techniques are essential for their enhancement. Our method enables an efficient way of using the 3D space around the device for game interaction, and it can be combined with regular game interaction techniques.

In addition to gaming, the proposed method can be used as a new technique for user authentication. The user can draw a 3D signature with the magnet in the space around the device for authentication; we call this a "3D Magnetic Signature". The 3D magnetic signature offers a wider choice for authentication because it can be drawn flexibly in the 3D space around the device, and it can consequently be very difficult to replicate. Additionally, unlike a regular signature, no hardcopy of the magnetic signature can be produced, resulting in higher security.

7. REFERENCES
[1] Starner, T., Auxier, J., Ashbrook, D., and Gandy, M. The gesture pendant: A self-illuminating, wearable, infrared computer vision system for home automation control and medical monitoring. In International Symposium on Wearable Computing, 2000, pp. 87-94.
[2] Kratz, S. and Rohs, M. HoverFlow: Expanding the design space of around-device interaction. In Proc. of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI), Sep. 2009, Bonn, Germany.
[3] Hinckley, K., Pierce, J., Sinclair, M., and Horvitz, E. Sensing techniques for mobile interaction. In Proc. of UIST '00. ACM, pp. 91-100.
[4] Butler, A., Izadi, S., and Hodges, S. 2008. SideSight: Multi-"touch" interaction around small devices. In Proc. of UIST '08. ACM, pp. 201-204.
[5] Howard, B. and Howard, M.G. Ubiquitous Computing Enabled by Optical Reflectance Controller. Whitepaper, Lightglove, Inc., http://lightglove.com/WhitePaper.htm (visited on 25.06.2009).
[6] Baudisch, P. and Chu, G. Back-of-device interaction allows creating very small touch devices. In Proc. of CHI '09.
[7] Metzger, C., Anderson, M., and Starner, T. 2004. FreeDigiter: A contact-free device for gesture control. In Proc. of the Eighth International Symposium on Wearable Computers (ISWC). IEEE Computer Society, pp. 18-21.
[8] Theremin, L.S. The design of a musical instrument based on cathode relays. Reprinted in Leonardo Music J., No. 6, 1996, pp. 49-50.
[9] Smith, J., White, T., Dodge, C., Paradiso, J., Gershenfeld, N., and Allport, D. 1998. Electric field sensing for graphical interfaces. IEEE Comput. Graph. Appl. 18, 3 (May 1998), pp. 54-60.
[10] Minsky, M.L. and Papert, S.A. 1969. Perceptrons. Cambridge, MA: MIT Press.
[11] Akers, S.B. Binary Decision Diagrams. IEEE Transactions on Computers, C-27(6), pp. 509-516, June 1978.
