
 

    Tangible  
User  
Interfaces  
and  
Embodied  
Cognition  

Two case studies


towards grounding
design in theory

Augusto Esteves
CONTENTS

ABSTRACT ........................................................................................................... 8
1. INTRODUCTION ............................................................................................... 9
1.1 THESIS GOALS ........................................................................................ 10
1.1.1 THE DEVELOPMENT OF TUIS FOR NOVEL DOMAINS.................. 10
1.1.2 THE CREATION OF GUIDELINES FOR TUI DESIGN BASED ON THE
THEORIES OF EMBODIED COGNITION .................................................. 11

1.2 THESIS STRUCTURE ............................................................................... 11


2. TANGIBLE USER INTERFACES ....................................................................... 12
2.1 INTRODUCTION ..................................................................................... 12
2.2 HISTORY ................................................................................................. 14
2.2.1 GRASPABLE USER INTERFACE ....................................................... 15
2.2.2 TANGIBLE BITS ............................................................................... 16
2.2.3 EARLY TANGIBLE USER INTERFACES ........................................... 17
2.3 RELATED AND OVERLAPPING AREAS OF RESEARCH ........................ 18
2.3.1 TANGIBLE AUGMENTED REALITY ................................................ 19
2.3.2 TANGIBLE TABLETOP INTERACTION ........................................... 19
2.3.3 AMBIENT DISPLAYS ........................................................................ 19
2.3.4 EMBODIED USER INTERFACES ...................................................... 20
2.3.5 UNIFYING RESEARCH AREAS ........................................................ 20
2.3.6 REALITY-BASED INTERACTION .................................................... 22
2.4 APPLICATION DOMAINS ....................................................................... 23
2.4.1 TANGIBLE INTERFACES FOR LEARNING ...................................... 24
2.4.2 PROBLEM SOLVING AND PLANNING ............................................ 25
2.4.3 INFORMATION VISUALIZATION .................................................... 28
2.4.4 TANGIBLE PROGRAMMING ............................................................ 28
2.4.5 ENTERTAINMENT, PLAY AND EDUTAINMENT ............................ 29
2.4.6 MUSIC AND PERFORMANCE ........................................................... 31
2.4.7 SOCIAL COMMUNICATION ............................................................. 32

2.4.8 TANGIBLE REMINDERS .................................................................. 33
2.5 CURRENT FRAMEWORKS AND CLASSIFICATIONS ............................... 33
2.5.1 PROPERTIES OF GRASPABLE USER INTERFACES ......................... 34
2.5.2 CONCEPT AND THE MCRIT INTERACTION MODEL FOR TUIS .. 34
2.5.3 CLASSIFICATION OF TUIS .............................................................. 35
2.5.4 MAPPINGS BETWEEN THE PHYSICAL AND THE DIGITAL ........... 36
2.5.5 TOKENS AND CONSTRAINTS ......................................................... 37
2.5.6 TANGIBLE INTERACTION .............................................................. 38
2.6 TECHNOLOGIES FOR BUILDING TUIS ................................................. 39
2.6.1 RADIO-FREQUENCY IDENTIFICATION ......................................... 39
2.6.2 COMPUTER VISION ......................................................................... 39
2.6.3 MICROCONTROLLERS, SENSORS AND ACTUATORS ..................... 40
2.6.4 TOOLS FOR TUI DEVELOPMENT ................................................... 40
2.7 BENEFITS AND LIMITATIONS OF TUIS ................................................ 43
2.7.1 STRENGTHS ..................................................................................... 43
2.7.2 LIMITATIONS ................................................................................... 45
3. EMBODIED COGNITION ................................................................................. 48
3.1 INTRODUCTION AND RELEVANCE ....................................................... 48
3.2 FOUR VIEWS OF EMBODIED COGNITION........................................... 50
3.2.1 COGNITION IS SITUATED ............................................................... 50
3.2.2 HUMANS OFF-LOAD COGNITION ONTO THE ENVIRONMENT .... 50
3.2.3 COGNITION IS FOR ACTION ........................................................... 50
3.2.4 OFF-LINE COGNITION IS BODY-BASED ........................................ 51
3.3 GUIDELINES FOR TUI DEVELOPMENT .............................................. 51
3.3.1 THINKING THROUGH TUIS ........................................................... 51
3.3.2 ACTING THROUGH TUIS ............................................................... 52
3.3.3 SCAFFOLDING TO TUIS ................................................................. 54
3.3.4 COLLABORATING THROUGH TUIS ............................................... 55
4. FIRST PROTOTYPE: MEMENTOS ................................................................... 57

4.1 INTRODUCTION ..................................................................................... 57
4.2 CONCEPT ............................................................................................... 58
4.2.1 QUESTIONNAIRE ............................................................................ 59
4.2.2 MIND MAP ....................................................................................... 62
4.2.3 DESCRIPTION .................................................................................. 63
4.2.4 SCENARIO ........................................................................................ 65
4.2.5 STAKEHOLDER'S DIAGRAM ............................................................ 67
4.3 DESIGN .................................................................................................. 68
4.3.1 THE TAC PARADIGM ...................................................................... 68
4.3.3 TOKENS' DESIGN ............................................................ 71
4.3.4 WIREFRAMES ................................................................................... 72
4.4 IMPLEMENTATION ................................................................................ 75
4.4.1 TOKENS (TRAVELLING) ................................................................. 75
4.4.2 PUBLIC KIOSKS ................................................................................ 77
4.4.3 HOME (PERSONAL COMPUTER) ..................................................... 79
4.5 EVALUATION ......................................................................................... 80
4.5.1 RESULTS ........................................................................................... 81
4.6 FINAL REMARKS .................................................................................... 82
5. SECOND PROTOTYPE: ECO PLANNER ........................................................... 83
5.1 INTRODUCTION ..................................................................................... 83
5.2 CONCEPT ............................................................................................... 84
5.2.1 THE TRANS-THEORETICAL MODEL .............................................. 86
5.2.2 MIND MAP ....................................................................................... 86
5.2.3 SCENARIO ........................................................................................ 87
5.3 DESIGN .................................................................................................. 89
5.3.2 THE TAC PARADIGM ...................................................................... 89
5.3.3 TOKENS' DESIGN ............................................................ 91
5.3.1 WIREFRAMES ................................................................................... 92
5.4 IMPLEMENTATION ................................................................................ 93

5.4.1 PROCESSING LIBRARIES.................................................................. 94
5.5 FINAL REMARKS .................................................................................... 95
6. DISCUSSION AND CONCLUSION ..................................................................... 96
6.1 CONTRIBUTIONS ................................................................................... 96
6.2 FUTURE WORK ....................................................................................... 99
6.3 FINAL REMARKS .................................................................................. 100
7. REFERENCES .............................................................................................. 101

PICTURES

FIGURE 1 INTERACTION MODEL OF TUIS (MCRIT MODEL) [UI00]. .............................................. 13


FIGURE 2 CENTER AND PERIPHERY OF THE USER'S ATTENTION WITHIN A PHYSICAL SPACE
[IU97]. .................................................................................................................................................... 16
FIGURE 3 URP, A TUI FOR URBAN PLANNING THAT ALLOWS USERS TO DETERMINE THE
WIND FLOW AND SHADOW PROJECTION OF BUILDINGS [UI99]. ...................................................... 17
FIGURE 4 DURRELL BISHOP'S MARBLE ANSWERING MACHINE [P95]. A USER IS PICKING UP A
MARBLE THAT REPRESENTS A RECORDED CALL, AND PLACING IT IN THE INDENTATION THAT
ALLOWS THE SYSTEM TO PLAY IT. ......................................................................................................... 18
FIGURE 5 VIRTUAL AIRPLANE MAPPED TO A PHYSICAL CARD......................................................... 19
FIGURE 6 TANGIBLE OBJECTS ON A MULTI-TOUCH SURFACE. ........................................................ 19
FIGURE 7 TANGIBLE OBJECT AS AMBIENT DISPLAY FOR THE CURRENT TEMPERATURE. ................ 19
FIGURE 8 PHYSICAL DEVICES ARE INTEGRATED WITH THEIR DIGITAL CONTENT. ...................... 20
FIGURE 9 THE FOUR THEMES OF REALITY-BASED INTERACTION [JGH+08]. .............................. 22
FIGURE 10 FROM LEFT TO RIGHT: A LEGO MINDSTORMS [URL6] ROBOT THAT CAN DETECT
AND GRAB COLORED BALLS; A TOPOBO [RPI04] ASSEMBLY, WITH PHYSICAL PARTS THAT CAN BE
PROGRAMMED SEPARATELY BY SIMPLE DEMONSTRATION. ............................................................. 24
FIGURE 11 THE SANDSCAPE [I08B], A TUI THAT ALLOWS USERS TO INTERACT WITH PHYSICAL
SAND AND SEE THE RESULTS PROJECTED ONTO THE LANDSCAPE IN REAL-TIME. ......................... 26
FIGURE 12 THE IP NETWORK DESIGN WORKBENCH SYSTEM [KHN+03]. .................................. 27
FIGURE 13 TINKERSHEETS [ZJL+09], A SIMULATION ENVIRONMENT FOR WAREHOUSE
LOGISTICS. ............................................................................................................................................. 27
FIGURE 14 THE PHILIPS ENTERTAIBLE PROJECT [LBB+07]. .......................................................... 30
FIGURE 15 THE I/O BRUSH [RMI04], WITH ITS PHYSICAL PAINTBRUSH EMBEDDED WITH A VIDEO
CAMERA. ................................................................................................................................................. 30
FIGURE 16 THE REACTABLE [JGA+07], A MUSICAL TUI. ................................................................ 31
FIGURE 17 FROM LEFT TO RIGHT: THE LUMITOUCH [CRK+01], LOVER'S CUP [CLS06], AND
INTOUCH [BD97]. ................................................................................................................................. 33
FIGURE 18 IN THIS PICTURE, EACH BALL IS BOTH A TOKEN AND A CONSTRAINT. ........................ 36
FIGURE 19 THE TAC PALETTE FOR THE MARBLE ANSWERING MACHINE [P95]. ......................... 38
FIGURE 20 THE REACTIVISION [JGA+07] FRAMEWORK DIAGRAM. ............................................. 43
FIGURE 21 RESULT FROM ONE OF THE BRAINSTORM SESSIONS FOR THE MEMENTOS TUI. ........ 58

FIGURE 22 GRAPHS SHOWING THE AGE GROUPS OF TYPICAL TOURISTS. THE GRAPH ON THE LEFT
REPRESENTS THE ANSWERS TO THE ONLINE QUESTIONNAIRE, WHILE THE GRAPH ON THE RIGHT
REPRESENTS ANSWERS TO THE PHYSICAL QUESTIONNAIRES HANDED OUT IN HOTELS. .................. 59
FIGURE 23 GRAPHS SHOWING THE GROUP SIZE OF TYPICAL TOURISTS. THE GRAPH ON THE LEFT
REPRESENTS THE ANSWERS TO THE ONLINE QUESTIONNAIRE, WHILE THE GRAPH ON THE RIGHT
REPRESENTS ANSWERS TO THE PHYSICAL QUESTIONNAIRES HANDED OUT IN HOTELS. .................. 60
FIGURE 24 GRAPHS SHOWING THE COMPANIONS OF TYPICAL TOURISTS. THE GRAPH ON THE LEFT
REPRESENTS THE ANSWERS TO THE ONLINE QUESTIONNAIRE, WHILE THE GRAPH ON THE RIGHT
REPRESENTS ANSWERS TO THE PHYSICAL QUESTIONNAIRES HANDED OUT IN HOTELS. .................. 60
FIGURE 25 GRAPHS SHOWING THE DURATION OF THE TRIPS OF TYPICAL TOURISTS. THE GRAPH ON
THE LEFT REPRESENTS THE ANSWERS TO THE ONLINE QUESTIONNAIRE, WHILE THE GRAPH ON
THE RIGHT REPRESENTS ANSWERS TO THE PHYSICAL QUESTIONNAIRES HANDED OUT IN HOTELS.
................................................................................................................................................ 60
FIGURE 26 GRAPHS SHOWING HOW OFTEN PEOPLE FEEL LOST ON THEIR TRIPS. THE GRAPH ON
THE LEFT REPRESENTS THE ANSWERS TO THE ONLINE QUESTIONNAIRE, WHILE THE GRAPH ON
THE RIGHT REPRESENTS ANSWERS TO THE PHYSICAL QUESTIONNAIRES HANDED OUT IN
HOTELS. .............................................................................................................................. 61
FIGURE 27 GRAPHS SHOWING WHEN PEOPLE PREFER TO PLAN THEIR TRIPS. THE GRAPH ON
THE LEFT REPRESENTS THE ANSWERS TO THE PHYSICAL QUESTIONNAIRES HANDED OUT IN
HOTELS (WHERE PEOPLE GRADED THEIR PREFERENCES FROM 1 TO 5), WHILE THE GRAPH ON
THE RIGHT REPRESENTS ANSWERS TO THE ONLINE QUESTIONNAIRE. ........................................... 61
FIGURE 28 GRAPHS SHOWING HOW PEOPLE PREFER TO TRAVEL DURING THEIR TRIPS. THE GRAPH
ON THE LEFT REPRESENTS THE ANSWERS TO THE PHYSICAL QUESTIONNAIRES HANDED OUT IN
HOTELS (WHERE PEOPLE GRADED THEIR PREFERENCES FROM 1 TO 5), WHILE THE GRAPH ON
THE RIGHT REPRESENTS ANSWERS TO THE ONLINE QUESTIONNAIRE. ........................................... 61
FIGURE 29 GRAPHS SHOWING HOW PEOPLE SHARE MEDIA FROM THEIR TRIPS. THE GRAPH ON THE
LEFT REPRESENTS THE ANSWERS TO THE PHYSICAL QUESTIONNAIRES HANDED OUT IN HOTELS
(WHERE PEOPLE GRADED THEIR PREFERENCES FROM 1 TO 5), WHILE THE GRAPH ON THE RIGHT
REPRESENTS ANSWERS TO THE ONLINE QUESTIONNAIRE. ............................................................... 62
FIGURE 30 MIND MAP FOR THE TUI PROTOTYPE MEMENTOS. FOR AN IMAGE WITH HIGHER
RESOLUTION, PLEASE REFER TO ANNEX D. ....................................................................................... 62
FIGURE 31 THE MEMENTOS' CONCEPT. ............................................................................................ 63
FIGURE 32 THE STAKEHOLDER'S DIAGRAM FOR THE MEMENTOS TUI. ....................................... 67
FIGURE 33 THE TACS FOR THE MEMENTOS TUI. ........................................................................... 69
FIGURE 34 THE DIALOGUE DIAGRAM FOR THE MEMENTOS TUI. ................................................. 70
FIGURE 35 THE INTERACTION DIAGRAM FOR THE MEMENTOS TUI (AT THE PUBLIC KIOSKS). . 70
FIGURE 36 THE INTERACTION DIAGRAM FOR THE MEMENTOS TUI (WHILE TRAVELLING). ...... 71
FIGURE 37 SOME OF THE CONSIDERED TOKENS. ............................................................................. 71
FIGURE 38 WIREFRAME #1 FOR THE PUBLIC KIOSK OF MEMENTOS: THE USER IS INFORMED OF
WHERE HE IS IN RELATION TO THE CITY HE IS VISITING. FOR AN IMAGE WITH HIGHER
RESOLUTION, PLEASE REFER TO ANNEX E. ....................................................................................... 72
FIGURE 39 WIREFRAME #2 FOR THE PUBLIC KIOSK OF MEMENTOS: THE USER HAS PLACED ONE
OF THE CONCRETE TOKENS HE WAS CARRYING ON THE FIRST SLOT OF THE KIOSK; HE IS
INFORMED OF TRANSPORTATION OPTIONS (AND THE TIME COSTS) CLOSE TO HIS CURRENT
LOCATION. FOR AN IMAGE WITH HIGHER RESOLUTION, PLEASE REFER TO ANNEX E. ................ 73
FIGURE 40 WIREFRAME #3 FOR THE PUBLIC KIOSK OF MEMENTOS: THE USER HAS PLACED
ANOTHER OF THE CONCRETE TOKENS HE WAS CARRYING ON THE SECOND SLOT OF THE KIOSK;
HE IS INFORMED OF TRANSPORTATION OPTIONS (AND THE TIME COST) BETWEEN HIS
LOCATION AND THE TWO DESIRED DESTINATIONS. FOR AN IMAGE WITH HIGHER RESOLUTION,
PLEASE REFER TO ANNEX E. ............................................................................................................... 73
FIGURE 41 WIREFRAME #4 FOR THE PUBLIC KIOSK OF MEMENTOS: THE USER HAS PLACED ONE
OF THE ABSTRACT TOKENS FOUND IN THE KIOSK UNDER THE CONCRETE TOKEN ON THE FIRST
SLOT; HE IS INFORMED OF BUS STOPS CLOSE TO THE FIRST DESTINATION. FOR AN IMAGE WITH
HIGHER RESOLUTION, PLEASE REFER TO ANNEX E. ........................................................................ 74
FIGURE 42 WIREFRAME #5 FOR THE PUBLIC KIOSK OF MEMENTOS: THE USER HAS REMOVED
THE CONCRETE TOKEN FROM THE FIRST SLOT OF THE KIOSK; HE IS INFORMED OF
TRANSPORTATION OPTIONS (AND THE TIME COST) CLOSE TO HIS CURRENT LOCATION; A
HISTORY OF PREVIOUS DESTINATIONS IS KEPT ON-SCREEN. FOR AN IMAGE WITH HIGHER
RESOLUTION, PLEASE REFER TO ANNEX E. ....................................................................................... 74
FIGURE 43 USERS RECEIVE VIBROTACTILE MESSAGES ON THE TOKENS THEY CARRY AS THEY
PASS THROUGH RELEVANT REAL-WORLD SETTINGS. THIS IS ACHIEVED BY SENDING THE
MESSAGES VIA BLUETOOTH, FROM THE RELEVANT SPOTS (WHICH SCAN FOR THE CORRECT
TOKENS) TO THE TOKENS, EMBEDDED WITH A SHAKE [URL8] DEVICE. ..................................... 75
FIGURE 44 AN OPEN SHAKE DEVICE [URL8]. ................................................................................ 76
FIGURE 45 THE SET OF TOKENS USED IN THE KIOSK EVALUATION: FOUR ABSTRACT TOKENS
(REPRESENTING MARKETS, TAXIS, CAFES, AND MUNICIPAL WIFI) AND TWO CONCRETE TOKENS
(REPRESENTING A FAMOUS CHURCH AND A BOTANICAL GARDEN). ................................................... 76
FIGURE 46 A TOUCHATAG RFID READER AND FIVE TAGS [URL12]. ............................................ 77
FIGURE 47 SCREENSHOT FROM THE APPLICATION THAT RUNS ON THE PUBLIC KIOSK, AS PART OF
THE MEMENTOS TUI. FOR AN IMAGE WITH HIGHER RESOLUTION, PLEASE REFER TO ANNEX I.
................................................................................................................................................ 78
FIGURE 48 SCREENSHOT FROM THE APPLICATION THAT RUNS ON THE PERSONAL COMPUTER IN
THE USERS' HOME, AS PART OF THE MEMENTOS TUI. .............................................................. 79
FIGURE 49 ONE OF THE GROUPS CREATING THEIR VERSION OF A PLAN TO VISIT FUNCHAL FOR
A DAY. ...................................................................................................................................... 81
FIGURE 50 MIND MAP FOR THE ECO PLANNER TUI. ...................................................................... 86
FIGURE 51 MIND MAP FOR THE ECO PLANNER TUI. FOR AN IMAGE WITH HIGHER RESOLUTION,
PLEASE REFER TO ANNEX Q. ............................................................................................................... 87
FIGURE 52 THE DIALOGUE DIAGRAM FOR THE ECO PLANNER TUI. ............................................ 90
FIGURE 53 THE INTERACTION DIAGRAM FOR THE ECO PLANNER TUI. ....................................... 91
FIGURE 54 TOKENS USED FOR THE ECO PLANNER TUI. THEY CAN BE VERTICALLY ATTACHED
TO ONE ANOTHER, AND IT'S POSSIBLE TO ADD TIME PIECES IN FRONT. ......................................... 91
FIGURE 55 THE WIREFRAME #1 FOR THE ECO PLANNER TUI. FIVE ACTIVITY TOKENS ARE
BEING USED. FOR AN IMAGE WITH HIGHER RESOLUTION, PLEASE REFER TO ANNEX R. ............. 92
FIGURE 56 THE WIREFRAME #2 FOR THE ECO PLANNER TUI. ONE ACTIVITY TOKEN IS IN THE
OPTIONS AREA. FOR AN IMAGE WITH HIGHER RESOLUTION, PLEASE REFER TO ANNEX R. ........ 92
FIGURE 57 THE WIREFRAME #3 FOR THE ECO PLANNER TUI. THE USER CAN COMMIT TO THE
CHANGES MADE. FOR AN IMAGE WITH HIGHER RESOLUTION, PLEASE REFER TO ANNEX R. ...... 93
FIGURE 58 THE WIREFRAME #4 FOR THE ECO PLANNER TUI. THE USER HAS COMMITTED TO A NEW
ROUTINE. FOR AN IMAGE WITH HIGHER RESOLUTION, PLEASE REFER TO ANNEX R. .................. 93
FIGURE 59 PHOTO TAKEN DURING THE TESTING PHASE OF THE ECO PLANNER PROTOTYPE.
THE TOKENS ARE BEING SCANNED FROM A TOP PERSPECTIVE USING REACTIVISION
TECHNOLOGY [JGA+07]. ..................................................................................................................... 94
FIGURE 60 SCREENSHOT FROM A TEST OF THE ECO PLANNER TUI. THE TOKENS ARE BEING
TRACKED ON A SURFACE, WHILE THE INTERFACE IS BEING SIMULATED ON A MONITOR (WITH
TOKENS SHOWN AS PICTURES OF ACTIVITIES). FOR AN IMAGE WITH HIGHER RESOLUTION, PLEASE REFER
TO ANNEX U. ........................................................................................................................ 95


ABSTRACT

Tangible User Interfaces (TUIs) have matured to a point where they can be
considered an established area of Human-Computer Interaction research.
Despite this, most TUIs are built to address very particular tasks, normally
found in research labs, kindergartens, or specific work environments. This
dissertation contributes the design and implementation of two Tangible
Interfaces that tackle common problems of typical users – Mementos, a TUI
for tourists and travelers, and Eco Planner, a TUI that focuses on
sustainability. Finally, this dissertation also reviews the theories of Embodied
Cognition, opening the discussion on the possibility of using them as the core
of new frameworks or guidelines to aid TUI developers in creating systems
that feel more like real-world tools.

1. INTRODUCTION

Tangible Interaction is one of the post-WIMP (windows, icons, menus, and
pointers) interaction styles that arose in the last two decades. It differs from
the established Graphical User Interfaces (GUIs) by offering tangible
representations of digital information and controls, allowing users to literally
interact with data with their hands and bodies through the use of tangible
objects. These work "simultaneously (as) interface, interaction object and
interaction device" [HB06]. For this reason, Tangible User Interfaces (TUIs)
are appealing to different kinds of people. They let users be active and creative
with their hands [W98], while supporting their real-world knowledge and skills
[DBS09] (e.g. by employing physical constraints such as slots or racks in the
interface [JGH+08]). This dedication of physical actions to interface functions
also affords kinesthetic learning and memorization [KHT06]. TUIs are also
well suited for collaboration [I08], as multiple users can concurrently
manipulate physically represented data and information. As users can see one
another's physical actions, they become more aware, leading to fluid interaction
and coordination [HMD+08] as well as promoting discussion [ZAR05]. In this
way, TUIs make the interaction with computers and digital information
less cognitively taxing, allowing users to focus their attention and work on
what really matters: the job at hand [SGH94].
Since most post-WIMP interfaces were born in the same time frame, they
have been influencing each other ever since. A number of interaction
paradigms result from the combination of TUIs with other research areas,
such as augmented reality, multi-touch surfaces, or ambient displays.
There is also a great variety of domains in which TUIs are becoming
increasingly popular, such as learning, planning and problem
solving, programming and simulation tools, information visualization and
exploration, entertainment, play, performance and music, social
communication [SH10], and most recently work (e.g. [HIW08, EB09]).
In the last decade a considerable number of frameworks for TUI design and
development have been created to provide developers with explanatory and
generative power, allowing them to analyze and compare different TUI
prototypes and providing them with a conceptual structure for thinking
through a problem or application [SH10]. Developers can then implement
their systems through the use of technologies such as radio-frequency
identification (RFID), computer vision (e.g. ARToolKit [KB99, KB00]), and
microcontrollers associated with sensors and actuators (e.g. phidgets [URL7],
littleBits [URL8]).
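To make this concrete, the following minimal sketch shows the kind of glue code behind many RFID-based TUIs: physical tokens, identified by their tags, are bound to interface functions. All tag IDs and handler names here are hypothetical, chosen purely for illustration; a real system would call into reader hardware and application logic instead.

```python
# Hypothetical sketch of RFID-to-interface dispatch in a token-based TUI.
# Tag IDs and handler names are invented for illustration only.

def play_message(token_id):
    # e.g. the Marble Answering Machine: placing a marble plays its call
    return "playing message for token %s" % token_id

def show_destination(token_id):
    # e.g. a Mementos-style kiosk: placing a token shows a destination
    return "showing destination for token %s" % token_id

# Each physical token (identified by its RFID tag) is bound to one function.
TOKEN_ACTIONS = {
    "04:A3:1B:22": play_message,
    "04:A3:1B:23": show_destination,
}

def on_tag_read(tag_id):
    """Called whenever the reader detects a tag; unknown tags are ignored."""
    action = TOKEN_ACTIONS.get(tag_id)
    if action is None:
        return None
    return action(tag_id)
```

The point of the sketch is the mapping itself: the physical act of presenting a token to the reader is the whole input vocabulary, and the software's job reduces to associating tag identities with digital behavior.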

1.1 THESIS GOALS

1.1.1 THE DEVELOPMENT OF TUIS FOR NOVEL DOMAINS


TUIs are still not broadly used by casual users, as they tend to be deployed
under very specific conditions (e.g. location or purpose, see Section 2.7.2.1). The
first goal of this dissertation will be to develop two proof-of-concept TUIs that
maximize their applicability to most users and commonplace, everyday tasks,
while documenting the conceptual, methodological, and technical steps
required for building them. The first, Mementos (see Section 4), is a TUI that
supports tourism by transforming across time and space, exploring how TUIs
can be created for complex real-world scenarios, supporting multiple tasks and
operating in multiple contexts. The second, Eco Planner (see Section 5), will
be the first TUI that tackles the issue of energy sustainability at home, by
allowing the creation, management and analysis of daily routines on a tabletop
interface. Ultimately, the goal will be to take TUIs out of labs and
controlled settings into the users' everyday life and environments, exploring
technical and methodological solutions that can serve other researchers in
the future.

1.1.2 THE CREATION OF GUIDELINES FOR TUI DESIGN BASED ON THE THEORIES OF

EMBODIED COGNITION
This dissertation will present arguments that current frameworks for TUI
design (see Section 2.5) focus mainly on implementation and characterization
issues, and don't take advantage of the core principle of TUIs – their physicality
and existence in the real world. These frameworks say little regarding how a
TUI can be designed in order to maximize usability or to reflect aspects of
human cognition. In order to tackle this research gap, a literature review of
the cognitive theories of Embodied Cognition will be conducted (see Section
3) to analyze their potential as a basis for the development of guidelines or
frameworks for TUI design. From this review, an initial set of guidelines will
be created (see Section 3.3) to guide the conceptualization of both
prototypes to be developed (see Sections 4 and 5). These guidelines will be
iterated over as the development of the prototypes moves forward, so as
to provide an understanding of the utility of such guidelines in TUI design.

1.2 THESIS STRUCTURE


The remainder of this dissertation is structured as follows:

o It offers a comprehensive review of the current state of TUI research,


giving a brief introduction on how Tangible Interaction became a
preeminent research field in the HCI community.

o The cognitive theories of Embodied Cognition are reviewed in order to


analyze the possibility of making them more relevant in the process of
TUI development.

o The design and implementation process of two TUI prototypes –


Mementos, a TUI for tourists, and Eco Planner, a TUI to foster
sustainability – is documented, providing other researchers with a basis on
how TUIs can be constructed to operate in real-world scenarios, tackling
the issues of ordinary users.

o Finally, a discussion of the implications of this work for the research


community is conducted.

2. TANGIBLE USER INTERFACES

"The body is the ultimate instrument of all our external
knowledge, whether intellectual or practical… Experience
[is] always in terms of the world to which we are attending
from our body." [P67]

2.1 INTRODUCTION
Since the 1970s, the dominant form of human-computer interface has been
limited to the desktop computer, using mouse and keyboard to interact with a
series of windows, icons, menus and pointers (WIMP). Named the Graphical User
Interface (GUI), it serves as a general-purpose interface, emulating multiple
different tools and allowing many different interfaces to be represented
digitally on-screen. As it allows both hands to work in synchrony, the GUI
paradigm is competent enough for a task such as writing this paper, but the
same input devices are used in every other application domain, from
productivity tools to games [KHT06].
The technological advancements made over the last twenty years, allied with
research on the psychological and social aspects of Human-Computer
Interaction (HCI), have led to a recent explosion of new post-WIMP
interaction styles in almost every possible domain. These novel input devices
(such as the Wii Remote or multi-touch surfaces) are becoming increasingly
popular, as they draw on the users' real-world skills to interact with digital
content [DBS09]. Not only are input devices changing, but computers as well,
becoming embedded in the users' everyday objects and environments, making
computation more accessible and pro-active [R06].

One of the emerging post-WIMP interfaces is the Tangible User Interface
(TUI), which differs from GUIs by being a special-purpose interface for
specific applications and domains, reliant on well-defined physical
forms [I08]. These forms provide tangible representations of digital
information and controls, allowing users to literally interact with data through
their hands and bodies. TUIs are implemented using a series of materials and
technologies, augmenting physical objects to receive and interpret inputs (e.g.
grabbing, squeezing, tapping, moving), and to provide a series of sensory
stimulating outputs (e.g. altering textures and shapes, sound and visual effects).
These tangible objects then provide users with a parallel feedback loop,
combining physical passive haptic feedback with digital feedback (visual or
auditory) [UI00]. A tangible object or form is then "simultaneously interface,
interaction object and interaction device" [HB06]. Furthermore, by offering
three-dimensional interaction, TUIs don't limit themselves to two-dimensional
images on a screen [SH10].
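The parallel feedback loop can be sketched as follows, under the simplifying assumption that the token's physical state is just a position updated directly by the user's hand (the class and method names are illustrative, not taken from any real TUI toolkit):

```python
# Minimal sketch of a TUI's parallel feedback loop (illustrative names).
# The physical manipulation is felt immediately by the user (passive haptic
# feedback); the system only observes the new physical state and adds a
# second, digital feedback channel in response.

class TangibleToken:
    def __init__(self, name):
        self.name = name
        self.position = (0, 0)  # physical state, changed by the user's hand

    def move_to(self, x, y):
        # Moving the token IS the passive haptic feedback: the user already
        # feels the result before the system reacts. The system then couples
        # digital feedback (e.g. projection, sound) to the same action.
        self.position = (x, y)
        return self.digital_feedback()

    def digital_feedback(self):
        # Stand-in for a projection or sound tied to the token's position.
        return "%s projected at %s" % (self.name, self.position)
```

The design point the sketch captures is that the two feedback channels run in parallel rather than in sequence: the physical channel needs no computation at all, which is one reason TUI interaction can feel immediate even when the digital response lags.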

Figure 1 Interaction model of TUIs (MCRit model) [UI00].

For many reasons, TUIs are very appealing to different kinds of users. They
let users be active and creative with their hands [W98], and the interaction is
firmly rooted in the users' real-world knowledge and skills [DBS09] (e.g. by
employing physical constraints such as slots or racks, indicating how
an object must be handled in the context of the application [JGH+08]). It has
also been shown that consistently dedicating physical actions to
interface functions affords kinesthetic learning and memorization [KHT06].
Since data and information are physically represented, they can also be
concurrently manipulated by multiple users, making TUIs well suited for
collaboration [I08]. As users can see one another's physical actions, they become
more aware, leading to fluid interaction and coordination [HMD+08] as well
as promoting discussion [ZAR05]. It's assumed that these features of
interaction can lead users to feel more engaged and reflective, as well as to
lower the threshold of participation in such systems [M07]. In this way, TUIs
make the interaction with computers less cognitively taxing, allowing
users to focus more of their brain power on the task at hand [SGH94].
TUIs became an important research area mostly due to the work of Hiroshi
Ishii and his Tangible Media Group. Nowadays, the most valuable
contributions are found in the work of Orit Shaer and Eva Hornecker, as they
aim to establish solid methods and frameworks for TUI development. It is
now very common to see the word 'Tangible' in many conference papers and
session titles. The first conference fully devoted to Tangible Interfaces and
Tangible Interaction took place in Baton Rouge, Louisiana, back in 2007.
Named TEI (Tangible, Embedded and Embodied Interaction), it annually
gathers the work of a diverse community, made of HCI researchers,
technologists, product designers, artists, and others. Now in its fifth year, the
conference will be held for the third time in Europe, in Portugal, with the help of
the newly created Madeira Interactive Technologies Institute (M-ITI).
For a more comprehensive introduction and review of TUIs, please refer to
the 2010 monograph from Shaer et al. [SH10].

2.2 HISTORY
The term 'Tangible Interface' derived from the initial motivation for
Augmented Reality and Ubiquitous Computing [SH10]. Back in 1993, the issue
"Back to the Real World" [WMG93] argued that the current desktop
computers and virtual reality were very different from the humans' natural
environment. The issue suggested that users shouldn't be forced to enter a
virtual world, but that the real world should be enriched and augmented with
digital content and functionality. TUIs emerged as part of a trend that
followed a motivation to retain the richness of physical interaction by
embedding computing in the users' tools, environments, and practices,
enabling a real-time parallel between the digital and the real [SH10]. Ideas
from ethnography, situated and embodied cognition, and phenomenology
became central to the birth of TUIs, as "humans are of and in the everyday
world" [W93].
It took a couple of years for these ideas to materialize in an interaction style
of its own. In 1995, Fitzmaurice et al. [FIB95] introduced the concept of a
Graspable Interface, where graspable handles are used to manipulate digital

objects. Two years later, Ishii and his students presented their work on
Tangible Bits [IU97]. This work focused on making the physical world an
interface by mapping objects and surfaces with digital data.
While Ishii and his students worked on their vision, other research groups
focused on developing applications for specific domains, by augmenting
existing tools and artifacts. Some of the resulting work of such groups was
later classified as TUIs. One example is the work of Wendy Mackay, with the use
of flight strips in air traffic control, and with augmented paper in video
storyboarding [MF99]. Other examples include the German Real Reality, which
allowed for the simultaneous construction of real and digital models [B93,
BB96], and the work of Rauterberg and his team, who developed Build-IT
[RFK+98], an augmented reality tabletop planning tool that was based on
Fitzmaurice's graspable interface idea. In the domain of education, Suzuki and
Kato [SK93, SK95] developed AlgoBlocks, which aimed to support groups
of children in learning how to program. Cohen et al. [CWP99] created
the Logjam, which supported video logging and coding.
Since TUIs were proposed as a novel interface style, development has
focused on exploring technical solutions and possibilities. The field is now in a more
mature phase of research, where the goal isn't tied to a proof-of-concept, but
focuses on conceptual design, user studies, field tests, critical reflection,
theory, building design knowledge, connecting with different design
disciplines, and creating toolkits and frameworks to lower the threshold for
developing TUIs [SH10].

2.2.1 GRASPABLE USER INTERFACE


The Graspable Interface was introduced in 1995 by Fitzmaurice et al. [FIB95]. It
was composed of a series of wooden blocks that worked as graspable handles
to manipulate digital objects. The goal of this interface was to "increase the
directness and manipulability of GUIs". Users would interact with such an
interface by placing physical blocks on top of graphical objects displayed on
screen, anchoring them together. As they would move and rotate the blocks,
the graphical objects would replicate the actions. This interface also allowed
for the kind of two-fingered interactions we nowadays know from multi-touch
surfaces, by allowing users to place two blocks on two corners of an object, in
order to zoom in or zoom out by dragging the blocks apart or together.
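The two-block zooming interaction can be sketched with a simple, assumed formula (not given in [FIB95]): the graphical object scales by the ratio of the current to the initial distance between the two anchored blocks.

```python
# A sketch of the two-handle zoom described above, under the assumed
# formula that the object scales by the ratio of the current to the
# initial separation between the two anchored blocks.

import math

def scale_factor(p1_start, p2_start, p1_now, p2_now):
    d0 = math.dist(p1_start, p2_start)   # block separation when anchored
    d1 = math.dist(p1_now, p2_now)       # separation after dragging
    return d1 / d0

# Dragging the blocks apart to double their separation zooms in 2x.
print(scale_factor((0, 0), (2, 0), (-1, 0), (3, 0)))   # -> 2.0
```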

A system that was strongly based on this interface was Rauterberg's Build-
IT [RFK+98]. It utilized the same input mechanisms in conjunction with
Augmented Reality visualizations for architectural and factory planning tasks.

2.2.2 TANGIBLE BITS


"Where the sea of bits meets the land of atoms, we are
now facing the challenge of reconciling our dual
citizenship in the physical and digital worlds." [URL]

A couple of years later, the work of Ishii and his students on Tangible Bits
sparked the idea of the TUI [IU97]. The work aimed at making bits physically
accessible and manipulable, using real world objects and tools as a medium for
manipulation, and the real world as a display. Ambient displays would
represent information and data through sound, light, air or water movement
(e.g. Natalie Jeremijenko's Live Wire [WB95]).
 

 
Figure 2 Center and periphery of the user's attention within a physical space [IU97].

One of the first TUI prototypes developed by Ishii was still strongly
influenced by the GUI paradigm. The Tangible Geospace was an interactive
map of the MIT Campus projected onto a table. As users placed physical icons
on the table (e.g. a model of the Barker Engineering Library), the map
automatically repositioned itself so that the physical icon was positioned over the
respective building on the map. Adding other tangible models made the map
zoom and turn to match the corresponding buildings. Ishii's Urp project
[UI99] deliberately diverted from the GUI metaphor, focusing on tangible
objects that served for manipulating and representing data, more specifically

tangible buildings in the context of an urban planning system (see Figure 3).
Users were able to interact with wind and sunlight simulations by moving
physical building models on the surface of a table. These tangible buildings
cast digital shadows, while the simulated wind flow was projected as lines.
Users could even change the buildings' material properties (glass or stone
walls) and change the time of day, influencing the projection of shadows and
wind patterns.

2.2.3 EARLY TANGIBLE USER INTERFACES


Before the work of Ishii in Tangible Bits, several systems presented ideas that
later would inspire HCI researchers to define TUIs. These systems came
from domains such as architecture, product design, and educational
technology [SH10].

Figure 3 Urp, a TUI for urban planning that allows users to determine the wind flow and shadow
projection of buildings [UI99].

2.2.3.1 THE SLOT MACHINE


The first system to be classified as a TUI was Perlman's Slot Machine [P76].
The Slot Machine used physical cards to represent language constructs that
children used to program. Perlman believed that, when programming, children
faced a bigger problem than their lack of writing skills: the user interface of
programming tools. In her system, each programming language construct was
represented by a plastic card. Children would create programs by building
sequences of cards and inserting them into specific racks on a machine. This
prototype allowed for the implementation of function calls as well as simple
recursion.
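As a rough illustration of how such card racks could be interpreted (a hypothetical sketch, not Perlman's implementation), a rack is a sequence of cards, and a CALL card that names another rack is enough to express both function calls and simple recursion:

```python
# Hypothetical sketch (not Perlman's implementation) of interpreting
# Slot-Machine-style racks: a rack is a sequence of cards, and a CALL
# card naming another rack yields function calls and recursion.

def run(racks, rack_name, turtle, depth=0):
    if depth > 10:                 # a physical machine bounds recursion too
        return
    for kind, arg in racks[rack_name]:
        if kind == "FORWARD":
            turtle["distance"] += arg
        elif kind == "TURN":
            turtle["heading"] = (turtle["heading"] + arg) % 360
        elif kind == "CALL":       # a card that points at another rack
            run(racks, arg, turtle, depth + 1)

# A rack that calls itself: the simplest recursive program.
racks = {"main": [("FORWARD", 10), ("TURN", 90), ("CALL", "main")]}
turtle = {"distance": 0, "heading": 0}
run(racks, "main", turtle)
print(turtle["distance"], turtle["heading"])   # -> 110 270
```

With the depth guard of 10 nested calls, the FORWARD and TURN cards execute 11 times each, which is where the printed totals come from.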

2.2.3.2 THE MARBLE ANSWERING MACHINE
Probably the best-known inspiration for TUIs is Durrell Bishop's Marble
Answering Machine [P95]. In this system, incoming calls are represented by
colored marbles. As users place the marbles in the different indentations of
the machine they can access different functions relating to the call represented
by the marble, such as playing messages or calling the number back (see Figure
4).

Figure 4 Durrell Bishop's Marble Answering Machine [P95]. A user is picking a marble that represents
a recorded call, and placing it in the indentation that allows the system to play it.

By relying on physical affordances and the users' everyday knowledge,
Bishop's design easily communicates the functionality and interface of the
system [A99].

2.2.3.3 3D MODELING
Motivated by the fact that CAD systems in the 1980s were cumbersome and
awkward to use, both Robert Aish [A79, AN] and John Frazer's team [F95,
FF80, FF82] started looking for alternatives using physical models as input
devices. The result was a computer that could create a digital model of the
construction by scanning the assembly of blocks, taking into account the
location, orientation and type of each component. The computer simulation
could even provide suggestions on how to improve the users' design, and the
plans and drawings would be automatically printed once the users were
satisfied.

2.3 RELATED AND OVERLAPPING AREAS OF RESEARCH


Most post-WIMP interfaces have been influencing each other, resulting in
paradigms that combine different approaches and technological solutions for

interaction [SH10]. Some of these interfaces were born from the work of Ishii
on Tangible Bits, such as ambient displays; others can be seen as particular
types of TUIs.

2.3.1 TANGIBLE AUGMENTED REALITY


Tangible Augmented Reality (Tangible AR) interfaces [KBP+01, LNB+04,
ZCC+04] result from the combination of tangible input with an augmented
reality display or output. Computer vision detects visual markers on tangible
objects, which are mapped to virtual objects (e.g. photos, videos). Users can
then control the virtual by acting on the tangible. Examples of this approach
include augmented books [BKP01, ZCC+04] and tangible tiles [LNB+04].

Figure 5 Virtual airplane mapped to a physical card.

2.3.2 TANGIBLE TABLETOP INTERACTION


Tangible Tabletop Interaction combines interactive multi-touch surfaces and
TUIs. Although many Tangible Interfaces use a tabletop surface as the base
for interaction, embedding sensors in the surface, the term tabletop
interaction still refers mainly to multi-touch surfaces for finger- or pen-based
interaction [SH10]. However, research with both technologies is increasing,
studying the differences for the user between pure touch-based interaction
and tangible objects (e.g. [TKR+08]). Toolkits such as reacTIVision [JGA+07]
(see section 2.6.4.5) enable developers to quickly deploy prototypes that can
handle tangible and touch inputs; the most famous example of such a
prototype is the reacTable [JGA+07], a system that allows users to easily
compose music.

Figure 6 Tangible objects on a multi-touch surface.
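The event model that such toolkits expose can be sketched generically. The dispatcher below is illustrative only, not the actual reacTIVision/TUIO API: the tracker reports fiducial markers being placed, moved, and removed, and the application binds marker ids to handler functions.

```python
# Illustrative dispatcher, not the actual reacTIVision/TUIO API: the
# tracker reports fiducial markers being placed, moved, and removed,
# and the application binds marker ids to handler functions.

class MarkerDispatcher:
    def __init__(self):
        self.visible = {}    # marker id -> (x, y, angle)
        self.handlers = {}   # marker id -> callback

    def bind(self, marker_id, handler):
        self.handlers[marker_id] = handler

    def on_update(self, marker_id, x, y, angle):
        appeared = marker_id not in self.visible
        self.visible[marker_id] = (x, y, angle)
        if marker_id in self.handlers:
            self.handlers[marker_id](x, y, angle, appeared)

    def on_remove(self, marker_id):
        self.visible.pop(marker_id, None)

events = []
d = MarkerDispatcher()
d.bind(7, lambda x, y, a, new: events.append(("start" if new else "tweak", a)))
d.on_update(7, 0.4, 0.6, 0.0)    # placing a sound-generator token
d.on_update(7, 0.4, 0.6, 1.57)   # rotating it, e.g. to change a parameter
print(events)                    # -> [('start', 0.0), ('tweak', 1.57)]
```

Rotation as a continuous parameter is the same pattern the reacTable uses for its tangible sound objects, with the dispatcher fed by the camera-based tracker.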

2.3.3 AMBIENT DISPLAYS


Although part of Ishii's Tangible Bits [IU97], Ambient Displays soon became
a research area of their own. Most Ambient Display implementations are
purely graphical representations on big monitors or wall projections. A good
example, and one of the first, is Jeremijenko's Live Wire, with its physical-world
realization [WB95].

Figure 7 Tangible object as ambient display for the actual temperature.

Greenberg and Fitchett [GF01] described a series of projects that used the
Phidgets toolkit to build physical awareness devices, such as a flower that
blooms every time a work colleague is available. More recent projects include
Tangible Interfaces as Ambient Displays, some supporting distributed groups
in maintaining awareness [BWD07] by using tangible objects for input and
output. Edge and Blackwell [EB09] suggested that tangible objects could also
be used in the periphery of the users' attention, resulting in small Ambient
Displays; e.g. tangible objects that represent tasks and documents for an
office secretary.

2.3.4 EMBODIED USER INTERFACES


Embodied User Interfaces [FKY05, FGH+00] are rooted in the fact that
computation is becoming embedded and embodied in everyday tools and
appliances. Manual and bodily interaction can be an integral part of using
physical devices, such as how the iPhone has standardized tilting as an
interaction to change the device's display orientation. While in the case of the
iPhone there's only one tangible object, it could be considered a specialized
type of Tangible Interface, as it allows for a direct embodiment of
computational functionality [SH10].

Figure 8 Physical devices are integrated with their digital content.

2.3.5 UNIFYING RESEARCH AREAS

2.3.5.1 TANGIBLE COMPUTING


The term Tangible Computing arose from multiple concepts Dourish [D01]
discussed as being fundamental for the goal of integrating computation into
our everyday lives. These concepts included TUIs, Ubiquitous Computation,
Augmented Reality, Reactive Rooms, and Context-Aware Devices. The
purpose of Tangible Computing is to distribute computation over many
specialized and networked devices in the environment, augment the real world
environment so it can accurately react to the users' actions, and enable
interaction by using tangible objects. This interaction shouldn't be focused on
one input device, but combine a series of objects and tools. There also
shouldn't be an enforced sequence of actions that users must follow, and the
design of the tangible objects should inform the users of their purpose and
functionality.

Tangible Interfaces differ from the other approaches as they allow for
representations to be artifacts in their own right, which users can directly act
upon, lift up, rearrange, sort, and manipulate [D01]. At one moment in time,
several levels of meaning can even be present at once (e.g. moving a prism
token in Illuminating Light [UI98] can be done simply to make space, to
explore the system's output, to use it as a tool, or to explore the entire system
as a tool). The user can freely change his attention between all these different
nested levels because of the embodiment of computation.

2.3.5.2 TANGIBLE INTERACTION


Hornecker and Buur [HB06] have suggested the term Tangible Interaction to
describe the group of fields that relate to, but are broader than TUIs. They
give the example of the systems developed by artists and designers, which
have very rich physical interactions, and thus, share some of the characteristics
of TUIs. These systems normally enable rich full body interactions [BJD04],
focusing more on the expressiveness and meaning of the body movement, and
less on the tangible objects and the information being manipulated.
Tangible Interaction can be classified in three different categories:

o Data-centered view – the most "traditional" of these views, where
tangible objects represent and manipulate digital information.
o Expressive-movement view – this view focuses on bodily movement,
rich expression, and physical skill, where "meaning is created in the
interaction".
o Space-centered view – more relevant in the arts, this view emphasizes
interactive and reactive spaces, combining tangible objects with digital
displays or sound installations. Full-body interaction and the use of the
body as both an interaction device and display are also characteristics of
this view.

Tangible Interaction is then a terminology preferred by the design
community, as it focuses on the users' experience with novel systems [BG10,
HFA+07]. It encompasses tangibility, physical embodiment of data, full-body
interaction, and systems that are embedded in real spaces, environments and
contexts.

2.3.6 REALITY-‐BASED INTERACTION
Proposed by Jacob et al. [JGH+08], Reality-Based Interaction has the goal of
serving as a unifying framework for a large subset of emerging interaction
styles in the field of HCI. These styles include virtual and augmented reality,
ubiquitous and pervasive computing, and handheld and Tangible Interaction.
The common denominator between all these interaction styles is the fact
that they all aim to take advantage of the users' real-world skills, as they try
to narrow the gap between the differences in engaging with digital and
physical tools.

Figure 9 The four themes of Reality-Based Interaction [JGH+08].

Jacob et al. [JGH+08] identified four themes of interacting in the world


that are usually leveraged (see Figure 9):

o Naïve Physics ² the informal human perception of the basic physical


principles. These include concepts like gravity, friction, velocity, the
persistence of objects, and relative scale.
o Body Awareness and Skills ² refers to the familiarity and understanding
that people have of their own bodies, independent of the environment
(e.g. coordination, range, relative position of limbs).
o Environment Awareness and Skills ² the sense of physical presence
people have in their spatial environment, surrounded by objects and
landscapes. These include the clues embedded in the natural and built
environment that facilitate people's sense of orientation and spatial
understanding.
o Social Awareness and Skills ² the awareness people have that other
people share their environment, and the skills necessary to interact with
them. These include verbal and non-verbal communication, the ability to

exchange physical objects, and the ability to work with others to
collaborate on a task.

Jacob et al. suggest that this trend towards Reality-Based Interaction is a
good one, since basing interaction on real-world skills and knowledge may
reduce the mental effort still required to operate present systems. The aim of
these emerging interaction styles is to reduce the gap between the users' goals
for actions and the means to execute those goals. Jacob et al. believe that
interaction designers should only give up reality explicitly and in return for
other desired qualities, such as: expressive power (e.g., users can perform a
variety of tasks within the application domain), efficiency (users can perform a
task rapidly), versatility (users can perform many tasks from different
application domains), ergonomics (users can perform a task without physical
injury or fatigue), accessibility (users with a variety of abilities can perform a
task), or practicality (the system is practical to develop and produce).
The descriptive nature of the Reality-Based Interaction framework enables
TUI developers to analyze and compare alternative concepts and designs,
bridge gaps between Tangible Interfaces and other research areas, and apply
lessons learned from developing in other interaction styles. To date, most TUI
designs and prototypes rely on the users' understanding of Naïve Physics (e.g.
grasping and manipulating tangible objects) and Social Awareness and Skills
(e.g. sharing of interaction space and of tangible objects). The Reality-Based
Interaction framework allows for TUI development in wider areas, such as
creating richer interactions using body skills and the knowledge of the
environment.

2.4 APPLICATION DOMAINS


Nowadays there are many systems that can undoubtedly be considered
traditional TUIs, while others only possess TUI-like characteristics. The most
popular domains for such systems are in the areas of learning, planning and
problem solving, programming and simulation tools, information visualization
and exploration, entertainment, play, performance and music, and social
communication [SH10]. The domains are not mutually exclusive, as a TUI can
be both a learning and an entertainment tool. Recently, TUIs have also started
to appear in the domain of the work environment, with systems for managing
invoices [HIW08] and to control office work [EB09]. For a very thorough
overview on learning with tangibles, please refer to Futurelab's report on

Tangibles and Learning [OF04]. Jordà [J08], on the other hand, offers an
overview of musical performance with TUIs.

2.4.1 TANGIBLE INTERFACES FOR LEARNING


There are a large number of TUIs in the domain of learning mainly for two
reasons: there was always a huge drive from researchers and designers to
augment toys, increasing their functionality and attractiveness; and also
because physical learning environments support the overall development of
children by engaging all their senses [SH10].
Learning researchers such as Bruner and the late Piaget emphasized that
embodiment, physical movement, and multimodal interaction are of
paramount importance to the development of children [A07, OF04]. Other
studies argue that gesturing can support thinking and learning [G03], and that
TUIs can help children plan and reflect on their activities [M07].
Digital Manipulatives [RMB+98, ZAR05] are TUIs that use educational tools
such as construction kits, building blocks, and Montessori materials to help
children explore the concepts of temporal processes and computation. One
example is the academically and commercially used Lego Mindstorms (see
Figure 10), born from the work conducted at the MIT Media Lab Lifelong
Kindergarten group [R93]. A more recent system in the line of Lego
Mindstorms is PicoCrickets [URL2], which allows children to build their own
scientific experiments, using sensors, actuators, and robotic parts.

Figure 10 From left to right: a Lego Mindstorms [URL6] robot that can detect and grab colored balls;
a Topobo [RPI04] assembly, with physical parts that can be programmed separately by simple
demonstration.

Computationally enhanced construction kits are TUIs that can help children grasp
concepts that are normally considered to be beyond their cognitive and
abstract thinking capabilities [SH10]. Examples include the Smart Blocks,
which allow children to explore the concepts of volume and surface of tangible
objects they assemble [GSH+07]; the Curlybot [FMI00], a robotic ball that is
able to record the movement children imprint on it and then replay it; and
Topobo [RPI04], a TUI that enables children to learn about balance, movement
patterns, and anatomy by allowing the construction of robots from physical
parts that can be programmed individually through demonstration (see Figure
10).
Another compelling area for TUIs in learning is the area of storytelling,
supporting literacy education by augmenting books and toys. An example is
the StoryMat by Ryokai and Cassell [RC99], which is composed of a carpet
that can record and replay children's stories by detecting which toys are
placed upon it. Some projects use various techniques in tandem with TUIs,
such as Augmented Reality (e.g. [ZCC+04]) or multi-touch screens (e.g. "Ely
the Explorer" [ABL+04]).
TUI systems have also started to be used as aids for children with special
needs [VSK08], such as the Topobo [RPI04], the Lego Mindstorms (see Figure
10), and the LinguaBytes project [HHO08, HHO09] by Hengeveld, which used
story-reading as a way of tackling speech impairment in handicapped children.
Operating tangible objects slows down interaction, provides a sensorial
experience, supports collaboration, and allows children to train their
perceptual-motor skills and to be in control of what's happening. It's an
accepted fact that TUIs can provide children with access to a rich learning
environment, with more opportunities for cognitive, linguistic, and social
learning. Some researchers even fear the impact that traditional GUIs can have
on children if they are the only available tools for play and learning from an
early age [URL3].
Some TUIs have also been developed as diagnostic tools, allowing for a
perception of the level of cognitive and spatial abilities a child possesses, or
detecting the effects of brain damage on adults [SIW+02]. This is achieved by
analyzing the steps and mistakes taken during the building of spatial structures
using tangible objects. Other projects include tangible toys that record
interaction, providing valuable data to determine the child's overall
development [WKS08].

2.4.2 PROBLEM SOLVING AND PLANNING


The aspects of TUIs which are effective in aiding users at problem solving are
the affordances for epistemic actions (e.g. rotation, arrangement of tangible

objects), the existence of physical constraints in the interface, and the
tangibility of the problem. Epistemic actions [B07] are the manipulations of
physical or digital artifacts not for the sake of the goal of the activity, but with
the aim of better understanding the activity's context. Such actions should
alleviate mental work [KM94] and facilitate the successful conclusion of the
task at hand [MWC03]. Physical constraints use physical affordances (e.g.
racks, slots) to communicate the system's syntax, decrease the learning time
for how to use the interface, and consequently lower the threshold for using
the system [UIJ05].

Figure 11 The SandScape [I08b], a TUI that allows users to interact with physical sand and see the
results projected onto the landscape in real-time.

Finally, it has been shown that having a tangible representation of a
problem (e.g. urban planning) can support the users' spatial cognition, reduce
their cognitive load, and enable more creative immersion in the problem
[KM08]. It's also argued that TUIs with an abstract representation of the
problem may also benefit the users' performance [JIP02, PI00].
Urp [UI99] (see Figure 3), a TUI system described earlier, falls in the
category of TUIs for Problem Solving and Planning. Other examples include
the MouseHaus Table [HYG03], a system that allows users to collaboratively
interact with a pedestrian simulation by placing and moving everyday objects
upon a surface; and the SandScape and Illuminating Clay [I08b], TUIs for the
design and understanding of landscapes (see Figure 11) that allow users to
interact with physical sand or clay and see the results projected in real-time
onto the landscape created.
TUIs have also been used for collaborative IP network design [KHN+03],
letting users directly manipulate network topologies, control parameters of

nodes and links, and simultaneously see the simulation results projected onto
the table in real-time (see Figure 12).

Figure 12 The IP Network Design Workbench System [KHN+03].

An example of a TUI that has an abstract representation of the problem is


the Physical Intervention in Computational Optimization (Pico) [PI07]. This
system is based on a tabletop surface that can sense and move objects
autonomously. The tangible objects' positions on the surface represent and
control a number of different application variables, and the user interacts with
the system by constraining these objects as they move. When compared with
similar systems that lacked actuation, the Pico TUI allowed its users to be
more effective at solving complex spatial layout problems [SH10]. Another
example is the TinkerSheets [ZJL+09], a simulation environment for
warehouse logistics (see Figure 13). Finally, the Senseboard [JIP+02] is a
famous TUI that affords the organization and grouping of pieces of abstract
data by manipulating tangible objects within a grid. The evaluation of the
Senseboard showed that the system is more effective than a purely physical or
purely graphical representation.
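The actuated-constraint idea behind Pico can be sketched as a simple optimization loop. The cost model and all names below are assumptions for illustration, not from [PI07]: the table nudges each unpinned token toward a better position, while a token held under the user's hand acts as a hard constraint on the solver.

```python
# Illustrative sketch of the Pico idea (cost model and names are
# assumptions, not from [PI07]): the table nudges each token toward its
# target, but a token pinned under the user's hand never moves.

def step(positions, targets, pinned, rate=0.5):
    new = {}
    for name, (x, y) in positions.items():
        if name in pinned:
            new[name] = (x, y)   # hand on the token: the solver must respect it
        else:
            tx, ty = targets[name]
            new[name] = (x + rate * (tx - x), y + rate * (ty - y))
    return new

positions = {"cell_a": (0.0, 0.0), "cell_b": (4.0, 0.0)}
targets   = {"cell_a": (2.0, 2.0), "cell_b": (0.0, 0.0)}
for _ in range(20):
    positions = step(positions, targets, pinned={"cell_b"})

# cell_a converges on its target; the pinned cell_b never moves.
print(round(positions["cell_a"][0], 2), positions["cell_b"])   # -> 2.0 (4.0, 0.0)
```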

Figure 13 TinkerSheets [ZJL+09], a simulation environment for warehouse logistics.

2.4.3 INFORMATION VISUALIZATION
Although most TUIs are composed of physical "immutable" objects, they
offer a very rich multimodal representation and allow for two-handed input,
enhancing the interaction with visualizations [SH10].
The Props-Based Interface for 3D Neurosurgical Visualization [HPG+94]
is a TUI for neurosurgical visualization that supports the physical
manipulation of handheld tools in free space. Surgeons can simulate slices by
simply holding a plastic plate up to a doll's head. It was shown that users of
this system could understand and use the interface within one minute of
touching the physical props. GeoTUI [CRR08] is a TUI that allows
geophysicists to use tangible props to cut planes on digital geographical maps
that are projected upon a surface. The evaluation against a standard GUI
system showed that the users performed significantly better with the TUI
counterpart.
Ullmer et al. [UIJ05] developed two famous tangible query interface
prototypes that use tangible objects as database parameters. These objects are
manipulated in physical constraints such as tracks or slots that represent
database queries, views, or Boolean operations.
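The tangible query idea can be reduced to a small sketch (an illustrative reduction, not Ullmer et al.'s implementation [UIJ05]): each token carries a predicate, and tokens placed on the same rack are composed with a Boolean AND.

```python
# Illustrative reduction of the tangible query idea (not Ullmer et
# al.'s implementation [UIJ05]): each token carries a predicate, and
# tokens placed on the same rack compose with a Boolean AND.

buildings = [
    {"name": "A", "height": 12, "use": "office"},
    {"name": "B", "height": 45, "use": "office"},
    {"name": "C", "height": 30, "use": "housing"},
]

# Tokens are predicates; a rack is simply an ordered list of tokens.
tall = lambda row: row["height"] > 20
office = lambda row: row["use"] == "office"
rack = [tall, office]

result = [row["name"] for row in buildings
          if all(token(row) for token in rack)]
print(result)   # -> ['B']
```

Removing a token from the rack relaxes the query, which is exactly the kind of direct, incremental manipulation the physical constraints afford.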

2.4.4 TANGIBLE PROGRAMMING


The concept of Tangible Programming has been around since Perlman's Slot
Machine [P76], more than thirty years ago. The term was coined in 1993 by Suzuki
and Kato, when they used it to describe their AlgoBlocks system [SK93,
SK95]. In this TUI, children learn how to program by playing a video game.
The game consists of guiding a submarine underwater by connecting physical
blocks together, which represent constructs of the educational programming
language Logo. This system was argued to improve coordination and
awareness in collaborative learning.
Topobo [RPI04], Curlybot [FMI00], and StoryKits [SDM+01] are examples
of TUIs that allow children to teach electronic robots to move by repeating a
set of motions or gestures. This approach for programming is known as
programming by demonstration [C93] or programming by rehearsal [L93]. On
the other hand, TUIs that are classified as constructive assemblies [UIJ05] are
systems in which the users connect modular pieces to create a structure (e.g.
the AlgoBlocks [SK93, SK95], Digital Construction Sets [M04], Electronic
Blocks [WP02], and the Tern [HSJ08]).
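The programming-by-demonstration approach can be reduced to a record-and-replay loop, sketched below. Method names are illustrative only, not from [FMI00] or [RPI04]: the toy samples the motion a child imprints on it, then replays the recorded samples.

```python
# Hedged sketch of programming by demonstration (method names are
# illustrative, not from [FMI00] or [RPI04]): the toy samples the motion
# a child imprints on it, then replays the recorded samples.

class DemonstrationToy:
    def __init__(self):
        self.recording = False
        self.samples = []   # sequence of (left_wheel, right_wheel) speeds

    def start_recording(self):
        self.recording, self.samples = True, []

    def sense_motion(self, left, right):
        """Called at a fixed rate while the child moves the toy."""
        if self.recording:
            self.samples.append((left, right))

    def stop_recording(self):
        self.recording = False

    def replay(self):
        # A real toy would feed these samples back to its motors.
        return list(self.samples)

toy = DemonstrationToy()
toy.start_recording()
for step in [(1.0, 1.0), (1.0, 0.2), (0.0, 0.0)]:   # forward, curve, stop
    toy.sense_motion(*step)
toy.stop_recording()
print(toy.replay())   # -> [(1.0, 1.0), (1.0, 0.2), (0.0, 0.0)]
```

The "program" is nothing but the recorded gesture, which is why no symbolic notation ever confronts the child.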

In most cases, the syntax of the programming language replicated in a TUI
is enforced by physical constraints on the interface (e.g. the physical form of
the Tern's [HSJ08] pieces determines what types of blocks, and how many, can be
connected together).
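How physical form can enforce syntax may be sketched as a shape-compatibility check (an illustrative model, not Tern's actual piece design [HSJ08]): each block type has a plug and a socket shape, and one block can only follow another when the shapes mate.

```python
# Illustrative model (not Tern's actual piece design [HSJ08]) of syntax
# enforced by physical form: each block type has a plug and a socket
# shape, and one block can only follow another when the shapes mate.

BLOCKS = {
    "begin": {"plug": "stmt", "socket": None},   # nothing can come before begin
    "walk":  {"plug": "stmt", "socket": "stmt"},
    "jump":  {"plug": "stmt", "socket": "stmt"},
}

def can_connect(first, second):
    """True when the `second` block physically fits after `first`."""
    socket = BLOCKS[second]["socket"]
    return socket is not None and BLOCKS[first]["plug"] == socket

print(can_connect("begin", "walk"))   # -> True: a statement fits after begin
print(can_connect("walk", "begin"))   # -> False: begin accepts no predecessor
```

In the physical interface this check is performed by the pieces themselves: a mismatched pair simply will not snap together, so ill-formed programs cannot be built.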
Also important to note is that many Tangible Programming systems can be
perceived as entertainment interfaces, since their design allows for free play
and exploration. This might explain why this approach to programming is
so popular and widespread. A study conducted with 260 museum visitors
[HSC+09] showed that children are the ones who respond best to this
approach, especially girls. It clearly points to the fact that carefully designed
Tangible Programming systems can offer concrete educational benefits.
In some rare exceptions, Tangible Programming can be found in
applications not related to learning and play. Researchers at the Mads Clausen
Institute in Denmark put Tangible Interfaces at work in the context of
industrial work, supporting configuration work by service technicians [SCB07].
This work makes an important attempt to bring back some of the advantages
of traditional mechanical interfaces, advantages shared with TUIs: motor
memory, real-world skills and visibility of action.

2.4.5 ENTERTAINMENT, PLAY AND EDUTAINMENT


The TUI fields of entertainment and edutainment are overlapping application
areas [SH10]. While the Nintendo Wii is not a traditional TUI, its phenomenal
success shows the market potential of TUI-related systems. Other
marketed systems, closer to the TUI definition, employ the principles
of physical input, tangible representation, and digital augmentation (e.g.
Neurosmith's MusicBlocks, and SonicTiles, which allows children to play with
the alphabet) [SH10].
Many museums use TUIs with digital displays to engage their visitors. For
example, Tod Machover's Brain Opera installation at the Vienna Haus
der Musik (Museum of Sound) offers a room full of tangible objects that
generate sound in response to visitors' movement, touch, and voice [SH10].
As has been demonstrated in this dissertation work, augmented toys are a
strong focus of TUI research (e.g. storytelling, robots). Another reasonable
approach in the area of entertainment and play is augmenting traditional board
games (e.g. Philips EnterTaible project [LBB+07]). These combine the social
atmosphere of board games with augmented gaming experiences.

Figure 14 The Philips EnterTaible project [LBB+07].

Leitner et al. [LHY+08] presented a mixed reality gaming table that tracks
real objects using a depth camera and transforms them into obstacles or
ramps in a projected virtual car race. Zigelbaum et al. [ZHS+07] introduced the
Tangible Video Editor, a Tangible Interface for physically editing digital video
clips. These video clips are represented by tangible objects which can be easily
sorted and placed into a sequence. The system also allows for tangible objects
representing transitions to be attached between clips. The IOBrush [RMI04]
TUI is a drawing tool for children that allows them to explore color, texture,
and movement via a physical paintbrush with an embedded video camera (see
Figure 15).

Figure 15 The IOBrush [RMI04], with its physical paintbrush embedded with a video camera.

Sturm et al. [SBG+08] identified key design issues that should be addressed in
entertainment and edutainment TUIs: the support of social

interaction, simplicity combined with adequate challenge and goals, and
motivating system feedback.

2.4.6 MUSIC AND PERFORMANCE


Music applications are one of the oldest and probably the most popular areas
for TUI research, most recently because of systems such as the AudioPad
[PRI02], Block Jam [NNG03], or the Squeezables [WG01]. Jordà [J08] has
identified several properties that a TUI must possess to be an ideal platform
for musical performance: support for collaboration and the sharing of control;
uninterrupted, real-time interaction with multidimensional data; and support
for complex, skilled, expressive, and explorative interactions. Normally there
are three different kinds of TUI systems for musical performance: those that
are fully controllable sound generators or synthesizers; sequencer TUIs that
mix and play audio samples (e.g. the reacTable [JGA+07], described as an
"attractive, intuitive and non-intimidating musical instrument for multi-user
electronic music performance"); and those that are simple physical controllers
that remotely control a regular synthesizer [SH10].
Musical TUIs also target two different kinds of users: novices, who want
an intuitive and easily accessible toy; and professionals, who look for
expressiveness, legibility, and visibility when performing publicly for a crowd
[SH10].

Figure 16 The reacTable [JGA+07], a musical TUI.

Another commercially available system is the AudioCubes [SV08], which is
composed of a handful of physical cubes that can detect and communicate
with each other. Each cube is capable of sending and receiving audio through
its faces, and can also act as a speaker or light up in colors according to its spatial

disposition. Similarly, Block Jam [NNG03] is a dynamic sequencer built
from physical cubes that can be attached to each other. mixiTUI [PH09] is a
tangible sequencer for sound clips and music that is able to add loops,
controls, and effects to its sounds, utilizing the interaction mechanisms from
the reacTable [JGA+07].
Kaltenbrunner [K09] has classified the different musical TUIs as follows:
those that have music "contained" within physical artifacts that can be rubbed,
squeezed or moved (e.g. the Squeezables [WG01]); musical building blocks
(e.g. Block Jam [NNG03]), which consist of groups or individual physical
blocks that repeatedly generate or manipulate sound, and can be stacked,
attached, or simply placed close to each other; token-based sequencers, where
the surface of the system is continuously scanned and sound is produced
depending on the spatial position and the physical properties of the tokens
(e.g. color); musical TUIs that consist of interactive, touch-based
surfaces, where the music is produced based on the interactions with the
tangible objects (e.g. AudioPad [PRI02], reacTable [JGA+07]); and finally,
simple commercial alternatives (e.g. Neurosmith's MusicBlocks, Fisher-
Price's play zone music table) that generate music by detecting the presence
and sequence of tangible objects in specific physical constraints, such as slots.

2.4.7 SOCIAL COMMUNICATION


TUIs have been used as communication tools by representing people in the
periphery of attention of their users, so as to support ambient awareness [SH10].
For example, with Somewire [SHS99], an audio-only media space, tangible figures would
be positioned on a rack to determine the audibility and directionality of sound.
This TUI was evaluated as more productive for the task than two different
GUI prototypes.
Recent projects in this area of research focus more on remote awareness
within social networks (e.g. groups of friends in the case of Connectibles
[KB08], or distributed work groups [BWD07]). Edge and Blackwell [EB09]
employed a TUI for task management in office work, where grabbing a
tangible object represents having responsibility over a task or a document.
A particular range of TUI prototypes address remote intimacy. LumiTouch
communicates touches on a picture frame [CRK+01], while Lovers Cup
[CLS06] consists of a pair of glasses that light up when the remote partner
uses his or hers. InTouch [BD97], on the other hand, uses two interconnected
rollers to transmit movement, being one of the earliest promoters of using the haptic

or tactile modality for remote communication and intimacy [SH10]. Another
example is the United Pulse [WWH08], which is used to transmit the partners'
pulse between two wearable rings.

Figure 17 From left to right: the LumiTouch [CRK+01], Lovers Cup [CLS06], and InTouch [BD97].

2.4.8 TANGIBLE REMINDERS


TUIs are often used as holders of memories and important information, which
users can trigger when needed. This use of digital linkages is in some ways
related to the vision of the "internet of things", but does not include
autonomous and "intelligent" objects [SH10]. Instead, and in good TUI
fashion, it requires explicit interactions, such as placing a particular object in
the proximity of a reader or sensor.
Holmquist et al. [HRL99] explored the use of tangible objects to bookmark
and open web pages. Want et al. [WFG+99] explored a series of scenarios
where tangible objects were digitally tagged, such as a business card linked to a
home page or an e-mail address. Van den Hoven and Eggen [HE04] and Mugellini et
al. [MRG+07] worked on prototypes that would open photo collections as
users placed tangible souvenirs from their trips.

2.5 CURRENT FRAMEWORKS AND CLASSIFICATIONS


In the years since the concept of the TUI was created, a considerable number
of frameworks for TUI development have been proposed. These frameworks
provide TUI developers with explanatory power, enabling them to analyze and
compare different TUI prototypes and to apply lessons learned from previous
work in future efforts. Other frameworks may have a generative role,
suggesting new directions to explore in the design process, and uncovering
open opportunities for TUI development. Ultimately, frameworks inform and

guide design and analysis, providing a conceptual structure for thinking
through a problem or application [SH10].

2.5.1 PROPERTIES OF GRASPABLE USER INTERFACES


The term Graspable User Interface was introduced by Fitzmaurice [F96] in 1996,
and was used to classify interfaces that provide a "physical handle to a virtual
function where the physical handle serves as a dedicated functional
manipulator", while users have "concurrent access to multiple specialized
input devices which can serve as dedicated physical interface widgets" that
can afford physical manipulation and spatial arrangement.
Graspable User Interfaces have five basic properties:

o Space-multiplexing – a very powerful concept, enabling the four other
properties. Each function has a physical handle or action, which has a
space and time of its own. Empirical tests [FB97] showed that space-
multiplexing is effective, as it reduces "switching cost" and exploits innate
motor skills and hand-eye coordination.
o Concurrent access and manipulation – although the GUI allows for
two-handed interactions (e.g., keyboard typing), Graspable User Interfaces
allow for countless different two-handed interactions
(and are not constrained to hand interactions).
o Strong-specific devices – physical objects are designed to be more
specialized and tailored for work on a given task, due to the physical
affordances of their shapes and sizes.
o Spatial awareness of the devices – Fitzmaurice [F96] argued that
physical interface elements should be embedded with a central processing
unit, so they could become aware of their surroundings and able to
communicate with each other.
o Spatial reconfigurability – the contextual space must contribute to the
overall functions and use of the physical objects that are part of the
interface. Users should also be able to customize the application space in
order to facilitate task workflows and rapid task switching.
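Space-multiplexing, and the switching cost it avoids, can be illustrated with a short sketch (all names are hypothetical; this is not code from Fitzmaurice's work): dedicated handles map one-to-one onto functions, while a generic pointer must re-acquire each function before every use.

```python
# Illustrative sketch: space-multiplexed input assigns one dedicated physical
# handle per function, while a time-multiplexed pointer must be re-attached to
# each function in turn, paying a "switching cost" on every alternation.

class Handle:
    """A dedicated physical handle bound permanently to one function."""
    def __init__(self, function):
        self.function = function

    def manipulate(self, value):
        return self.function(value)

class Pointer:
    """A generic device (e.g. a mouse) that must acquire a function first."""
    def __init__(self):
        self.function = None
        self.switches = 0

    def acquire(self, function):
        if function is not self.function:
            self.switches += 1      # the switching cost measured in [FB97]
            self.function = function

    def manipulate(self, value):
        return self.function(value)

def scale(v): return ("scale", v)
def rotate(v): return ("rotate", v)

# Space-multiplexed: two handles operated concurrently, no switching.
scale_handle, rotate_handle = Handle(scale), Handle(rotate)
results = [scale_handle.manipulate(2.0), rotate_handle.manipulate(90)]

# Time-multiplexed: one pointer alternating between the same two functions.
pointer = Pointer()
for func, v in [(scale, 2.0), (rotate, 90), (scale, 0.5)]:
    pointer.acquire(func)
    pointer.manipulate(v)
print(results, pointer.switches)   # 3 switches for 3 alternating operations
```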

2.5.2 CONCEPT AND THE MCRIT INTERACTION MODEL FOR TUIS


Back in 2000, Ullmer and Ishii took the first steps to make TUIs a concrete
research field [UI00]. They highlighted the desired characteristics for the

paradigm and developed an interaction model for systems with a TUI – the
MCRit model (see Figure 1), based on the GUI's MVC (Model, View, Control)
model. According to their work, TUIs are systems that give physical form to
digital information, using tangible objects as both input and output for
computational media. While the core of the MVC model is the separation
between the graphical representation and the control (performed by input devices such as
a mouse and a keyboard), the MCRit model essentially eliminates this distinction,
as TUIs integrate physical representations and control, blurring the barrier
between solely input or output devices.
Four properties that arose from Ullmer and Ishii's work were:

o Physical representations are computationally coupled to digital
information.
o Physical representations embody mechanisms for interactive control (e.g.
moving or manipulating the tangible objects).
o Physical representations are perceptually coupled to actively mediated
digital representations (e.g. tangible objects augmented with sound or
graphical information).
o The physical state of the tangible objects embodies key aspects of the system's
digital state. Even when the system is off, the persistence of physical
objects can convey partial functionality.

2.5.3 CLASSIFICATION OF TUIS


Later, in 2005, Ullmer et al. [UIJ05] classified the different approaches or types of
TUIs developed up to that point into three major areas:

o Interactive Surfaces – TUIs that allow users to interact with tangible
objects on an augmented planar surface (e.g. Urp [UI99]).
o Constructive Assemblies – TUIs constructed by connecting
modular "blocks" to each other; each block may have electronic data
associated with it. This approach is often found in systems that express
models of the physical world (e.g. Topobo [RPI04]) or that describe abstract
structural logic such as programming languages (e.g. AlgoBlocks [SK93,
SK95]).
o Token + Constraints – TUIs that are composed of tangible objects
(tokens) and their constraints. These constraints can be other tokens or
the physical properties of the interface (e.g. rack, slot), and the tokens can

be attached or detached from the constraints to conduct digital operations
(e.g. moving a token from one slot to another). The classical example is the
Marble Answering Machine [P95].

Figure 18 In this picture, each ball is both a token and a constraint.

2.5.4 MAPPINGS BETWEEN THE PHYSICAL AND THE DIGITAL


Ullmer and Ishii [UI00] recognized a wide range of digital information that
could be associated with tangible objects. These include static digital media
(e.g. pictures), dynamic digital media (e.g. live video), digital attributes (e.g.
color), computational operations and applications, data structures (e.g. lists of
media objects or combinations of data), and remote people, places, and things.
They also described two methods of binding information onto tangible
objects:

o Static binding – specified by the system's designer and cannot be
changed by the user.
o Dynamic binding – typically specified by the user through the tangible
objects.

Holmquist et al. [HRL99] also worked on classifying physical objects that can
be linked to digital information, suggesting three different sets of tangible
objects:

o Containers – tangible objects that can be temporarily linked to digital
information to simplify the task of moving information around the
interface (e.g. the pick-and-drop approach [R97]). Unlike other forms of

tangible objects, a container's form is generic, not reflecting the nature of
the digital information it is associated with.
o Tokens – tangible objects that physically resemble the information they
represent in some way. Tokens are typically used to access information.
o Tools – tangible objects used as representations of computational
functions (e.g. zoom).

Ullmer and Ishii [UI00] suggest a slightly different approach, where they
consider a tangible object as a token, and then use the concepts of containers
and tools as subtypes of tokens.
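This classification can be sketched as a small data model. The class names follow the terms above; the example objects, the `binding` field (marking static versus dynamic binding, as defined earlier), and the pick/drop methods are illustrative assumptions, not an API from the cited work.

```python
# Illustrative data model of Ullmer and Ishii's reading of Holmquist et al.:
# every physical object is a token; containers and tools are subtypes.

class Token:
    def __init__(self, name, info=None, binding="static"):
        self.name, self.info, self.binding = name, info, binding

class Container(Token):
    """Generic form; temporarily holds information to move it around."""
    def __init__(self, name):
        super().__init__(name, info=None, binding="dynamic")

    def pick(self, info):        # cf. the pick-and-drop approach [R97]
        self.info = info

    def drop(self):
        info, self.info = self.info, None
        return info

class Tool(Token):
    """Represents a computational function rather than data."""
    def __init__(self, name, function):
        super().__init__(name, binding="static")
        self.apply = function

card = Container("blank card")
card.pick("photo_042.jpg")            # the user binds information dynamically
magnifier = Tool("magnifier", lambda x: f"zoomed({x})")
print(magnifier.apply(card.drop()))   # zoomed(photo_042.jpg)
```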

2.5.5 TOKENS AND CONSTRAINTS


The TAC paradigm [SLC+04] was created to identify the structure,
functionality, and core elements of a TUI. By doing so, it aims to allow
TUI developers to specify and compare designs. Drawing upon Ullmer's Tokens
+ Constraints approach [UIJ05], the TAC paradigm describes the structure of a
TUI as a set of relationships between physical objects and digital information
using five core concepts:

o Pyfo – a physical object.
o Constraint – a pyfo that limits the behavior of other pyfo(s).
o Token – a pyfo with which a user has Tangible Interactions, performing a
task while being limited by constraints. The task performed has a direct impact
on the TUI's application.
o TAC – a token and its constraints.
o Variable – digital information associated with TACs.

To specify a TUI using the TAC paradigm, a TUI developer defines the
possible TAC relationships within it. The product of this activity is a
palette of ways in which objects can be combined to form meaningful
expressions for the system (and the user).
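The TAC concepts can be sketched as plain data structures, using the Marble Answering Machine as the example. The concrete palette entry below (a marble token in a playback slot) is an illustrative assumption, not taken from the original TAC specification.

```python
# A sketch of the TAC paradigm's core concepts (pyfo, token, constraint, TAC,
# variable) as plain Python data structures.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Pyfo:                 # any physical object in the interface
    name: str

@dataclass
class Token(Pyfo):          # a pyfo the user manipulates to perform a task
    variable: str           # digital information associated with the TAC

@dataclass
class Constraint(Pyfo):     # a pyfo that limits the behavior of tokens
    pass

@dataclass
class TAC:                  # a token plus the constraint acting on it
    token: Token
    constraint: Constraint
    action: Callable[[str], str]

marble = Token("marble", variable="message_07.wav")
play_slot = Constraint("playback indentation")
palette = [TAC(marble, play_slot, lambda msg: f"playing {msg}")]

# Placing the marble in the playback slot triggers the associated behavior:
for tac in palette:
    print(tac.action(tac.token.variable))     # playing message_07.wav
```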

Figure 19 The TAC palette for the Marble Answering Machine [P95].

2.5.6 TANGIBLE INTERACTION


This framework from Hornecker and Buur [HB06] focuses particularly on
the social interaction with and around TUIs. Tangible Interaction
encompasses research on whole-body interaction, interactive spaces, and
gestural input methods. Four themes were developed for use in systems with a
strong social component:

o Haptic Direct Manipulation – refers to the material qualities and the
manual manipulability of the interface. Can users grab, feel and move
the elements of the interface?
o Spatial Interaction – refers to the spatial qualities of the interface,
including whole-body interaction and the overall gains of interacting in
space.
o Embodied Facilitation – highlights how the configuration of objects
and space affects and directs emerging group behavior.
o Expressive Representation – focuses on the material and digital
representations employed by TUIs, and on their expressiveness and legibility.

2.6 TECHNOLOGIES FOR BUILDING TUIS
To this date there are no standard input or output devices for developing
TUIs. Because of that, TUI developers employ a wide range of pre-existing or
custom-made technologies to detect objects and gestures.

2.6.1 RADIO-‐FREQUENCY IDENTIFICATION


Radio-Frequency Identification (RFID) is a wireless radio-based technology
that is able to detect tagged objects when in range of a tag reader. Normally, an
RFID tag contains an integrated circuit for storing and processing
information, and an antenna for receiving and transmitting signals. The
communication between a tag and a reader only occurs when both are
proximate enough. Most TUIs employ cheap RFID tags with small antennas,
resulting in tangible objects which are constrained to short distance detection,
requiring them to be placed directly on or swiped past the reader [SH10].
When a tag is detected, the reader passes its ID to the computer,
which can then interpret it and, in the application context, determine what to do.
Examples of TUIs that use RFID technology include the MediaBlocks
[UI99b], a TUI that consists of a set of tagged blocks that serve as containers
for digital media, and Senseboard [JIP+02], a TUI that affords the
organization of pieces of abstract data by manipulating tagged objects within a
grid.
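The interaction loop behind such systems can be sketched as follows. The reader class, tag IDs, and bindings are hypothetical stand-ins for real hardware and real bindings; the structure (read an ID, look up its binding, dispatch an action) is the point.

```python
# A minimal event loop for an RFID-driven TUI, in the spirit of systems like
# MediaBlocks: the reader reports tag IDs, and the application maps each ID
# to bound digital media.

class FakeTagReader:
    """Stands in for a short-range RFID reader; yields IDs of swiped tags."""
    def __init__(self, swipes):
        self.swipes = swipes

    def read(self):
        yield from self.swipes

# Static bindings (designer-specified) from tag IDs to digital media.
BINDINGS = {
    "04:A2:19": "slideshow_intro.mp4",
    "04:A2:1A": "budget_chart.png",
}

def handle_tags(reader):
    """Turn a stream of detected tag IDs into application events."""
    events = []
    for tag_id in reader.read():
        media = BINDINGS.get(tag_id)
        if media is None:
            events.append(("unknown_tag", tag_id))
        else:
            events.append(("open", media))
    return events

events = handle_tags(FakeTagReader(["04:A2:19", "FF:00:00"]))
print(events)  # [('open', 'slideshow_intro.mp4'), ('unknown_tag', 'FF:00:00')]
```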

2.6.2 COMPUTER VISION


Computer Vision is often used in TUI development due to its capacity for
detecting the position, orientation, color, size or shape of multiple objects on a
2D surface in real time. The cheapest, most robust and accurate way of
achieving this is by tagging the tangible objects with visual tags (fiducial
markers). TUIs that employ Computer Vision are typically composed of a
high-quality camera, a lightweight LCD projector for providing real-time
graphical output, and a software package that is responsible for tracking the
objects [SH10].
Examples of such systems include the Urp [UI99], a TUI for urban
planning, and the reacTable [JGA+07], a TUI for musical performance. There
are also several examples of libraries that support the development of
Computer Vision-based TUIs, such as the ARToolKit [KB99, KB00] and
reacTIVision [JGA+07], which support the tracking of fiducial markers, and

Papier-Mâché [KLL+04], which is capable of detecting electronic tags and
barcodes.
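The tracking layer such systems rely on can be sketched as a frame-differencing step that turns the markers detected in each camera frame into add, update, and remove events — the kind of event model frameworks like reacTIVision expose to the application. The details below (coordinates, marker IDs) are simplified assumptions.

```python
# Each frame yields the set of detected fiducial markers and their positions;
# the tracker diffs consecutive frames into application-level events.

def diff_frames(previous, current):
    """previous/current: dicts of marker_id -> (x, y) in surface coordinates."""
    events = []
    for mid, pos in current.items():
        if mid not in previous:
            events.append(("add", mid, pos))        # object placed on surface
        elif previous[mid] != pos:
            events.append(("update", mid, pos))     # object moved
    for mid in previous:
        if mid not in current:
            events.append(("remove", mid, previous[mid]))  # object lifted
    return events

frame1 = {7: (0.2, 0.5), 12: (0.8, 0.1)}
frame2 = {7: (0.25, 0.5), 3: (0.4, 0.9)}   # marker 7 moved, 12 lifted, 3 placed
print(diff_frames(frame1, frame2))
# [('update', 7, (0.25, 0.5)), ('add', 3, (0.4, 0.9)), ('remove', 12, (0.8, 0.1))]
```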

2.6.3 MICROCONTROLLERS, SENSORS AND ACTUATORS


Microcontrollers are small and inexpensive computers that can be embedded
in a tangible object or in the physical environment. They receive information
from their surroundings through sensors, and respond to that data through
actuators. Their sensors are capable of detecting light intensity, reflection,
noise level, motion, acceleration, location, proximity, position, touch, altitude,
direction, temperature, gas concentration, and radiation. Actuators, on the
other hand, are capable of producing light (e.g. LEDs), sound, motion (e.g.
motors, electromagnets), or haptic feedback [SH10].
Examples of TUIs that embed microcontrollers in their tangible objects include
the Senspectra [LPI07], a physical modeling toolkit for analyzing
structural strain; Pico [PI07], an interactive surface that uses actuation to move
tangible objects over a surface; and the Navigational Blocks [CDJ+02], a TUI
for navigating and retrieving historical information using haptic feedback.
Examples of development environments include the Arduino [B09], the Handy
Board [URL4], the Handy Cricket [URL5], and Lego Mindstorms NXT [URL6].
Table 1 shows a comparison between TUI implementation technologies
(RFID, Computer Vision, and Microcontrollers).
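A minimal sense-act step of the kind such a microcontroller might run can be sketched as follows; the sensor names, thresholds, and actuator commands are illustrative assumptions, not taken from any of the cited systems.

```python
# A toy sense-act loop: sensor readings are polled each cycle and mapped onto
# actuator commands (LED, vibration motor), mimicking the sensor -> actuator
# pipeline of a microcontroller-based tangible.

def control_step(readings, light_threshold=0.3, tilt_threshold=15.0):
    """Map one round of sensor readings to a list of actuator commands."""
    commands = []
    if readings["light"] < light_threshold:
        commands.append(("led", "on"))              # glow in low light
    else:
        commands.append(("led", "off"))
    if abs(readings["tilt_deg"]) > tilt_threshold:
        commands.append(("vibration_motor", "pulse"))   # haptic warning
    return commands

print(control_step({"light": 0.1, "tilt_deg": 22.0}))
# [('led', 'on'), ('vibration_motor', 'pulse')]
```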

2.6.4 TOOLS FOR TUI DEVELOPMENT

2.6.4.1 PHIDGETS
Phidgets [URL7] are commercially sold "plug and play" devices (e.g. sensors,
boards, actuators, I/O) that support software developers in the
implementation of TUI prototypes that aim at using tangible objects capable
of both physical input and physical output. One of the advantages of using
Phidgets is that they are centrally controlled through a computer rather than
through microprocessors embedded in the tangible objects. Phidgets also
offer an API for a variety of development environments.
iStuff [BRS+03] is very similar to Phidgets in concept, but uses a set of
wireless physical devices controlled in Java. Through intermediary software,
developers can define high-level events and dynamically map them to input
and output events.

Table 1 Comparison of TUI implementation technologies [SH10].

Exemplar [HAM+07] is a toolkit similar to Phidgets in the way it leverages
central control through a computer. With Exemplar, a developer demonstrates
a sensor-based interaction to the system (e.g. shaking an accelerometer), while
the system graphically displays the resulting sensor signals. By iteratively
refining the recognized action, the developer can use the desired sensing pattern in
prototyping or programming applications.

2.6.4.2 ARDUINO
Arduino [B09] is a toolkit consisting of a special board and a programming
environment. The Arduino board differs from Phidgets by interfacing with
standard electronic parts, requiring the developer to physically wire, build
circuits, and solder.

2.6.4.3 LITTLEBITS
Currently under development, littleBits [URL8] offers a toolkit that consists of
electronic components pre-assembled on tiny circuit boards that are able to
snap together through the use of magnets.

2.6.4.4 ARTOOLKIT
As mentioned before, the ARToolKit [KB99, KB00] is a Computer Vision
marker-tracking library that allows developers to create augmented reality and,
consequently, TUI applications. Apart from tracking the position and
orientation of visual markers, ARToolKit also allows computer graphics to be
drawn exactly over the real marker.

2.6.4.5 REACTIVISION
The reacTIVision [JGA+07] is a Computer Vision framework designed for the
development of TUIs that require a multi-touch surface, capable of tracking
tangible objects through the use of fiducial markers (see Figure 20). One of
the differences between reacTIVision and other toolkits is its distributed
architecture, separating the tracker from the actual application.
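This separation can be sketched as two small functions, one per process: the tracker serializes object updates for the wire, and the application decodes them into its local scene state. In reality reacTIVision communicates via the TUIO protocol over OSC; the JSON message format below is a simplified assumption, not the actual wire format.

```python
# Sketch of a distributed tracker/application architecture: the two processes
# share no state and communicate only through serialized messages.
import json

def encode_update(session_id, marker_id, x, y, angle):
    """Tracker side: serialize one object update for the wire."""
    return json.dumps({"sid": session_id, "marker": marker_id,
                       "x": x, "y": y, "angle": angle})

def apply_update(message, scene):
    """Application side: decode a message and update local scene state."""
    msg = json.loads(message)
    scene[msg["sid"]] = (msg["marker"], msg["x"], msg["y"], msg["angle"])
    return scene

scene = {}
wire = encode_update(1, 42, 0.5, 0.25, 90.0)   # produced by the tracker process
apply_update(wire, scene)                      # consumed by the application
print(scene)    # {1: (42, 0.5, 0.25, 90.0)}
```

Because the tracker is decoupled from the application, the same vision process can drive several applications, or be swapped out without touching application code.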

42  
 
Figure 20 The reacTIVision [JGA+07] framework diagram.

2.7 BENEFITS AND LIMITATIONS OF TUIS


As with other interaction styles, TUIs present benefits and limitations
depending on the tasks users are required to complete. A good design is
one that aims to bring out the benefits and to alleviate the drawbacks of using a
particular system.

2.7.1 STRENGTHS

2.7.1.1 COLLABORATION
o TUIs have been shown to lower the threshold for participation
[HSC+09] when compared to traditional GUIs, being particularly
successful with children and females.
o By offering multiple access points through the tangible objects, TUIs
ensure that there is no "bottleneck for interaction", allowing for
simultaneous interaction and easy participation [SH10].
o Physical interaction is visible to others, and tangible objects enhance
the legibility and comprehension of such actions. This ultimately supports
group awareness and coordination [KHT06].
o The TUI design and setup can provide "embodied facilitation", subtly
constraining what users can do and guiding their actions [SH10].
o Tangible objects are easier to share than computer graphics [GSO05],
fostering discussion.

o Tangible objects are normally understood as resources for shared
activity [FT06, FTJ08, FTJ08b].
o By consisting of multiple tangible objects, TUIs allow users to
distribute tasks amongst each other by rearranging the objects around
the interface [SH10].

2.7.1.2 SITUATEDNESS
o The situated nature of TUIs makes them very powerful UbiComp
devices [WMG93].
o Interaction with tangible objects can have different meanings
depending on the context in which they are used. On the other hand, these
objects can also change the meaning of the location they are placed in.
o TUIs offer vast support for "offline" activities, normally directed at the
social and physical setting [FTJ08].

2.7.1.3 TANGIBLE THINKING


One of the strengths of TUIs compared to traditional interfaces is that they
leverage the connection between body and cognition by facilitating thinking
through bodily actions, physical manipulation, and tangible representations
[SH10]. Our understanding of the world is shaped by our physical body and
how it interacts with the world and the objects that inhabit it [KHT06, T07]. It
is locomotive experiences that allow infants to develop their spatial cognitive
skills, and bodily interactions with physical objects that allow children to learn
abstract concepts [RMB+98].

o By allowing users to keep their physical mobility (as their hands are not
confined to the keyboard and mouse), TUIs allow the use of
unconstrained gestures while interacting with a system, lightening the
cognitive load of their users [AKY00]. Furthermore, TUIs that employ
gesture as part of the interaction take advantage of users' kinesthetic
memory [S99].
o Tangible objects have been shown to support cognition by serving as
"thinking props" and allowing the offloading of memory [SH10]. These
qualities arise from the fact that they work as epistemic tools, allowing users
to perform actions that have no functional consequence and aren't

computationally interpreted (e.g. annotating and counting, turning or
occluding) [KM94].
o Physically representing a task can have very positive effects on the
reasoning abilities and performance of users [ZN94]. These tangible
representations are intrinsic components of a number of different cognitive
tasks, as they guide, constrain, and determine cognitive behavior [Z97].
Additionally, as the syntax of the interface is physically represented, users
can perceptually infer what they can do and how, decreasing the need for
explicit rules [ZN94].

2.7.1.4 DEDICATED AND SPECIFIC TANGIBLE OBJECTS


Since in a TUI the tangible objects have persistent mappings (strongly
specific objects), their appearance can directly indicate their meaning or
function, as well as how to interact with them through physical affordances
(e.g. a handle) [F96]. These physical affordances constrain manipulation, inviting
users to focus on actions with meaningful results [N88]. Furthermore, strongly
specific objects improve the users' mapping of actions to effects [B00,
WDO04].

2.7.2 LIMITATIONS
o One of the problems with TUIs is scalability, both in terms of the
physical space required by a large system (with multiple tangible
objects or a complex physical interface) and in terms of the support and
functionality that development tools currently provide [SH10].
o While digital objects are malleable, easy to create, modify, replicate, and
distribute, tangible objects are normally rigid and static [PNO07].
Many problems arise from this fact, such as the difficulty in supporting undo
and history functions, or a replay of actions [KST+09].
o Tangible objects need to be well designed so they are easy to grasp,
lift, or position. Ultimately, they need to address the purpose of the
system. A TUI game might not always require objects that are easy and
effortless to use, as gamers enjoy a challenge and the feeling of getting
skilled as they play [SH10].
o Normally, each time a TUI is prototyped, the developers are required to
learn a new toolkit or software library, as well as rewrite most of the code.
Since research into new interaction techniques and technological solutions

still continues, software tools that can be easily extended are needed.
Ultimately, although toolkit programming substantially reduces the time
and effort required for developers to build functioning TUIs, it falls short
of providing a comprehensive set of abstractions for specifying,
discussing, and programming Tangible Interaction within an
interdisciplinary development team [SH10].
o So far, no evaluation methods specific to TUIs have been developed,
with the methods used being very similar to those used elsewhere within HCI.

2.7.2.1 THESIS GOALS AND HOW THEY ADDRESS TUI LIMITATIONS


This dissertation will tackle two limitations identified during the process of
documenting the current state of the art for TUIs (see Section 1.1). Most
Tangible Interfaces take the form of prototypes that function in specific
physical environments (such as augmented rooms [e.g. BGS+09]) or on
particular surfaces (such as back-projected screens [e.g. JGA+07]). Those that
can be used outside of a fixed location (such as the Siftables [MKM07]) are
typically composed of a number of individual elements that relate only to one
another (Ullmer's Constructive Assemblies [UIJ05]). These essentially
technological limitations have constrained the kinds of tasks and problems
that researchers exploring Tangible Interaction have tackled – for example,
there is a longstanding focus on tabletop interaction in specialized domains as
diverse as architectural planning (e.g. Urp [UI99]), music performance [e.g.
PHB01] and physical simulation [UI98]. The first goal of this dissertation
is to support the investigation of TUIs that work in real-world scenarios,
maximizing their applicability to most users and to commonplace, everyday tasks.
This will be achieved by documenting the conceptual, methodological, and
technical steps required for building Mementos (see Section 4), a tangible
system supporting travel and tourism activities, and Eco Planner (see Section
5), the first TUI dedicated to sustainability. This approach represents a first
step towards transitioning TUIs from a promising lab paradigm into a realistic
approach to the design of interactive systems capable of tackling the everyday
problems of everyday users.
While this dissertation has presented a series of frameworks and methods
for classifying TUIs – and clearly understands their utility in elucidating the
mechanisms by which digital and physical representations can be linked
through interactive systems – it argues that they focus largely on
implementation issues; that they deal with how a TUI can be constructed and

attempt to represent the range and scope of systems that fall under this
banner. They say little regarding how a TUI can be designed in order to
maximize usability, or to reflect aspects of human cognition. This dissertation
argues this is an important omission, and one that currently requires work in
order to address.
Some work has considered how TUIs can aid cognition in particular,
isolated situations. For example, there is considerable work exploring how
TUIs can be deployed in learning scenarios [e.g. MPR03] to support tasks as
diverse as connecting abstract concepts to physical analogies [GSH+07] and
encouraging students to plan and reflect upon their activities [M07]. Raffle et
al. suggest that haptic TUIs may support information retention [RJT03]. Other
systems have focused on taking into account the cognitive load of interaction
and advocated strategies to ensure it is minimized through leveraging attention
theory [BLS05], designing for peripheral display [MDM+04, IU97] or relying
on multi-sensorial information processing [PMR02]. Finally, Sharlin et al.
provided a compelling discussion of mechanisms to leverage human
understanding of spatiality, and of the importance of supporting exploration
strategies such as the distinction between pragmatic and epistemic action
[SWK+04]. However, despite this laudable body of work, there is no unifying
framework or comprehensive set of guidelines for TUIs which aims to support
designers in ensuring that their systems reflect the cognitive strengths and
weaknesses of everyday users [M07]. This dissertation argues that this lack is
hampering the development of TUIs, and suggests that the literature on embodied
cognition [A03] is a suitable starting point from which to develop a more
complete set of design recommendations for TUIs to achieve these aims.
The second goal of this dissertation is to present an initial set of
guidelines directly informed by the literature on embodied cognition. The
usefulness of these guidelines will be explored through subsequent design
iterations over the development of two TUI systems. The ultimate goal is
to produce guidelines which will inform the design of TUIs that take full
advantage of the cognitive capabilities of their users. These guidelines will
enable future designers to more easily produce tangible interfaces which are
simple, effective and pleasant to use.
 

3. E MBODIED C OGNITION

"The primary source [of our greater intelligence]… is our


habit of off-loading as much as possible of our cognitive
tasks into the environment itself – extending our minds
into the surrounding world." [C01]

3.1 INTRODUCTION AND RELEVANCE


Following is a literature review of the cognitive theories of Embodied
Cognition, in order to better understand their potential as a basis for the
creation of guidelines for TUI design and development. Embodied Cognition
is a perspective in cognitive science that grants the body a central role in how
the mind works. Traditionally, researchers have viewed the mind as an abstract
information processor, regarding connections to the outside world as of little
theoretical importance. Perceptual and motor systems were thought to serve
merely as peripheral input and output devices, not relevant to understanding
"central" cognitive processes [W02]. In the 1980s, linguists argued that
abstract concepts were based on metaphors for bodily and physical concepts
[LJ80]. Around the same time, within the field of artificial intelligence,
behavior-based robotics began to work with routines for interacting with the
surrounding environment rather than internal representations used for abstract
thought [B96]. The theories of Embodied Cognition arose from these
approaches, and have recently attained high visibility. They argue that we
evolved from animals whose neural resources were devoted primarily to
perceptual and motoric processing, and whose cognitive activity consisted
largely of interactions with the environment. Human cognition is neither
centralized nor abstract; it is instead rooted in sensorimotor processing
[W02]. Embodied Cognition further suggests that intelligence lies in the
social and cultural worlds, arguing that they are central to human cognition -
cognition that exploits repeated interaction with the environment, creating
structures that advance and simplify cognitive tasks [A03].
Embodied Cognition also focuses on how humans store and access
memory. Glenberg [G97] argued that traditional accounts of memory focus
too much on the passive storage of information. He defends the idea that
patterns stored in memory reflect the nature of bodily actions and their ability
to mesh with situations during goal pursuit. Perception of relevant objects
triggers affordances for action stored in memory.
Conversely, reasoning about future actions relies on remembering
affordances while suppressing perception of the environment. Simulation also
appears central to constructing future events based on memories of past
events [SA07]. When people view a static configuration of gears, for example,
they use simulation to infer the direction in which a particular gear will turn.
Numerous sources of evidence support the use of simulation in these tasks
[B08]. The time to draw an inference is often correlated with the duration of a
physical event, such as how long a gear takes to turn. It has been shown that
carrying out associated actions (e.g. moving your hand like a gear) can improve
inference [S99b].
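The inference that mental simulation eventually converges on can be written down directly. The sketch below is not from the cited studies; it is a minimal illustration of the shortcut, assuming a simple chain of meshed gears, which must alternate turning direction:

```python
def gear_direction(first_direction: str, n: int) -> str:
    """Direction of the n-th gear (1-indexed) in a chain of meshed gears.

    Meshed gears alternate direction, so the answer depends only on the
    parity of the gear's position in the chain -- the shortcut that a
    mental simulation of the gears turning one by one arrives at.
    """
    if first_direction not in ("cw", "ccw"):
        raise ValueError("direction must be 'cw' or 'ccw'")
    if n % 2 == 1:  # odd positions turn with the first gear
        return first_direction
    return "ccw" if first_direction == "cw" else "cw"
```

For example, `gear_direction("cw", 4)` returns `"ccw"`. The contrast between simulating each gear in turn and applying the parity rule in one step is consistent with the cited finding that inference time often tracks the duration of the simulated physical event.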
Clark describes a process of external "scaffolding": the brain offloads some of
its cognitive duties onto the environment, instead of doing all of the
computational work on its own [C01]. An example of this mental shortcut is
how people look for Kodak equipment in a store by focusing on finding a
distinct yellow package instead of a specific textual element [C97]. Landmarks
in cities are another example of external cognition: by offloading mental
effort onto the complex urban environment, the residents and visitors of cities
are able to navigate without mental overload [M06]. A particular kind of
"scaffolding" is known as epistemic actions. These actions aim at altering the
world so as to aid and augment cognitive processes (e.g. rotating a Tetris piece
to see where it fits [KM94]). They contrast with pragmatic actions, which alter
the world because some physical change is desirable for its own sake (e.g.
fitting a Tetris piece in place [KM94]) [CC98]. The properties of epistemic
actions are reducing space complexity (the memory involved in mental computation),
time complexity (the number of steps involved in mental computation), and
unreliability (the probability of error in mental computation) [KM94].
This external "scaffolding" can also be offloaded onto technology, which
can have profound effects on how we think and encode information, on how
we communicate with one another, on our mental states, and on our very
nature. If users can physically interact with technology, it can
increase their cognitive capacities and thus enhance their efficiency, in the
same way they would see further with telescopes or move faster with cars
[DH09].

3.2 FOUR VIEWS OF EMBODIED COGNITION

3.2.1 COGNITION IS SITUATED
Situated cognition is cognitive activity that takes place in the context of a
real-world environment and that inherently involves perception and action.
Examples include driving or holding a conversation: activities in which people
interact with the very things the cognitive activity is about [W02].

3.2.2 HUMANS OFF-LOAD COGNITION ONTO THE ENVIRONMENT

Because of limits on their information-processing abilities, users exploit the
environment to reduce their cognitive workload. Users make the environment
hold or even manipulate information for them, and they harvest that
information only on a need-to-know basis. This can be achieved
through epistemic actions [W02].

3.2.3 COGNITION IS FOR ACTION

The function of the mind is to guide action, and cognitive mechanisms such as
perception and memory must be understood in terms of their ultimate
contribution to situation-appropriate behavior [W02]. Glenberg [G97] argues
that the traditional approach to memory as "for memorizing" needs to be
replaced by a view of memory as "the encoding of patterns of possible
physical interaction with a three-dimensional world". This approach holds
that we conceptualize objects and situations in terms of their functional
relevance to us, rather than neutrally or "as they really are" [W02].
3.2.4 OFF-LINE COGNITION IS BODY-BASED
Even when decoupled from the environment, the activity of the mind is
grounded in mechanisms that evolved for interaction with the environment,
that is, mechanisms of sensory processing and motor control. Mental imagery
(e.g. auditory imagery, kinesthetic imagery) is an obvious example of mentally
simulating external events [PFD+95], while working memory appears to be an
example of a kind of symbolic off-loading, where information is off-loaded
onto perceptual and motor control systems in the brain. In these cases,
rather than the mind operating to serve the body, we find the body (or its
control systems) serving the mind [W02].
Areas of human cognition previously thought to be highly abstract now
appear to be yielding to an embodied cognition approach. It appears that off-
line embodied cognition is a widespread phenomenon in the human mind
[W02].

3.3 GUIDELINES FOR TUI DEVELOPMENT

The following set of guidelines resulted from the review of the literature on
both Embodied Cognition and traditional Human-Computer Interaction (HCI).
These guidelines will serve as a guide during the design process of the two
TUI prototypes developed and documented in this dissertation (see Sections 4
and 5), as a first step in assessing the value of a TUI design process firmly
rooted in the theories of Embodied Cognition.

3.3.1 THINKING THROUGH TUIS

o A system that uses tangible objects must allow trial-and-error activity,
permitting unconstrained exploration and improvisation of digital problems in
a physical space and improving the user's cognitive understanding of the task
[SWK+04].

o It should be possible and make sense to use the tangible objects
outside of the space in which they are sensed by the system [M07].

o Users should be able to create meaning in the interaction as they use
their bodies to interact with the tangible objects. The system will need to
maintain coherency of the conceptual model in both the physical and
digital worlds [IML01].
o A system that uses tangible objects must be, amongst other things, a
reservoir of resources for learning, problem solving, and reasoning.
Even when dealing with highly abstract mental concepts, their resolution
may be rooted, though in an indirect way, in sensory and motoric
knowledge. Well-designed tangible objects become integrated into the way
people think, see, and control activities, part of the distributed system of
cognitive control [HHK00].

o Using tangible objects with more abstract physical representations might
encourage users to plan and reflect more, using sensorimotor
simulation [W02].

o Minimal effort should be required to understand how the system
works, so that more attention can be focused through the interface onto
the underlying domain. The more information a user has on the physical
properties of a tangible object, its uses and the intentions of the model,
the quicker he understands how to use it [M07].

o TUIs should work as meaningful containers, where the container
reflects the content: "The forms and containers of items that we interact
with in the real world provide richer recognition clues than those on our
computer desktops" [GC97].

3.3.2 ACTING THROUGH TUIS
o TUIs should unify the input and output space. This link between
physical action and both physical and digital effect leads to increased
engagement and reflection as it saves the user the work of mapping one
space to the other. Unifying input and output can require a feedback loop
that is able to physically deform and actuate the tangible objects, without
rendering them too cumbersome or hindering their essential utility as
input devices [SWK+04].

Users expect the physical input components to mirror the state of the
corresponding digital information completely. This is a "conversational"
style of interaction, giving constant feedback and allowing users to proceed
in small steps and to express and test their ideas quickly [SWK+04].

o The system should provide exogenous cues to the user in order to
improve their confidence when using the tangible objects (e.g. TUIs with
physical constraints such as a rack or slot in which tangible objects can be
manipulated) [BLS05].

o Users should have good knowledge of state so they can monitor the
progress of their work [SWK+04].

o Intuitiveness of interaction should be balanced, in order to avoid
neglecting the user's skill level and failing to scale to experienced
users [HB06].
o The output of a TUI should be projected where it requires the least
cognitive load (e.g. a tactile display), in order to improve the user's
performance and confidence. This allows the user to be aware of
information "at a glance" or subconsciously, while attending to some
other primary task or activity [BLS05].

o Users should be able to perform realistic tasks realistically, and there
should be analogies for non-real-world functionalities whenever possible.
If a task cannot be performed realistically, learned mappings should be
exploited: the more primitive or hardwired the better. TUIs should be
based on pre-existing real-world knowledge and skills, so that the
perception of an object triggers affordances for action stored in memory
[JGH+08].

o Using concrete rather than abstract tangible objects can often lead to
improved task performance. Limiting the types of interaction that are
suggested by the form of the object can also reduce the entropy of the
system and help the user deduce the inherent functionality of the object
from its physical qualities (e.g. a heavy object will make the user think of
using it to squash or hold something in place). Users see a tangible
object for what they can do with it and not by what the object really
is. This deduction can be based on cultural standards [M07].
o Consistently dedicating physical movement to interface functions
affords kinesthetic learning and memorization over prolonged use.
Physical feedback can further help distinguish commands kinesthetically
[KHT06].

o Individual differences in the users' bodies produce individual differences
in space perception [B08].
 

3.3.3 SCAFFOLDING TO TUIS

o Users should be able to externalize their ideas onto a TUI in order to
facilitate objective reflective thought. Tangible objects can be used as
epistemic tools, aiding and augmenting the users' cognitive processes as
they try to tackle and understand an activity [M07].

o Tangible objects should aid the users' prospective memory. A cue, an
event, or a stimulus should be enough to quickly recall planned actions or
intentions [BG07].

o Personal TUIs should have tangible objects that hold personal
significance to the user. They should be holders of stories, memories
and, ultimately, identity [GC97].

o TUIs should hold and manipulate information for the user, allowing
that information to be quickly harvested and shared on a need-to-know
basis, instead of forcing the user to fully encode it and store it internally.
The effective use of tangible objects for external storage of information
depends on the user's perception of them, though [W02].

In the real world, people offload cognitive processes that are difficult (e.g.
visualizing) to external aids. Children, for example, use their fingers to
perform arithmetic operations (e.g. addition, subtraction). This way, people
extend their performance capacity beyond the limits of their own brain power,
the same way they see further with telescopes or move faster with cars
[DH09].

o Tangible objects can act as proxy reminders for other objects'
functions and purpose [CPP+05].
o Space is a resource that must be managed, much like time, memory,
and energy. Users must always be facing some direction, have only certain
tangible objects in view, be within reach of certain others. When space is
well used, it reduces the time and memory demands of the user's task and
increases the reliability of execution and the number of jobs the user can
handle at once [HHK00].

The functions of space can be categorized as spatial arrangements that
simplify choice, spatial arrangements that simplify perception, and spatial
dynamics that simplify internal computation [HHK00].

o Users should be able to organize the workspace in which they will
use the tangible objects, so that frequent tasks can be completed more
rapidly and reliably. A reliable environment allows users to reduce
inner computational demands [G10].

3.3.4 COLLABORATING THROUGH TUIS

Our bodies allow us to off-load cognitive and memory work onto our
surroundings. By changing our environment we lower the cognitive demands
of problem solving, for us and for whoever might be collaborating with us,
as both our actions and their results are visible and immediate.

o Users should be able to think and talk through their bodies and the
actions performed on the tangible objects, using them as props to act
with; people know more than they can tell. The tangible objects should
give discussions a focus and provide a record of decisions [HB06].

o Users should have equal access and direct interaction with digital
information and the tangible objects, promoting collaborative decision-
making, social cohesion, engagement and encouragement [RHB+04].

o Users should be able to simultaneously use the system and see each
other's physical actions, enhancing awareness, which in time can
support fluid interaction and implicit coordination. This lowers cognitive
effort, as it helps to decode what everyone else is doing or planning to do.
This is a direct result of mirror circuits in the brain, which help
perceivers infer intent rather than only recognize the actions performed
[B08].
 

High levels of awareness prevent users from having conflicts or collisions
while working on small mediums such as interactive tabletops; they also
provide users with a context for their own activities. When there is high
awareness, little verbal communication is needed to coordinate activities,
as it has been shown that people intentionally structure their behavior to
present cues to others, providing a set of observable practices. Another
indicator of high levels of awareness is the division of labor without
previous negotiation or allocation of activities [HMD+08].

o Users should be able to use a second channel of information to enhance
the perceptibility and legibility of their actions, preferably using speech.
The system must afford actions that are easy to explain and inform (image
schemas) [HMD+08].

o If users are to use the tangible objects as epistemic tools in a
collaborative environment, they must be able to quickly individualize
them in order to take advantage of better cognitive
performance [DH09].
4. FIRST PROTOTYPE: MEMENTOS

4.1 INTRODUCTION

The motivation behind this first prototype is to document the conceptual,
methodological, and technical steps required to build a TUI prototype
capable of operating in complex real-world scenarios, supporting multiple
tasks and operating in multiple contexts. By being able to operate in different
contexts and scenarios, this prototype also aims to merge two areas of
TUI research: Ambient Displays [IU97] and Tangible Tabletop Interaction.
The domain chosen for building such a prototype was the domain of
tourism. Tourism and travel represent highly significant economic activities
worth an estimated 5751 billion USD worldwide in 2010 (approx 9.2% of
world GDP) [TC10]. They are also activities fraught with human-centric
challenges and problems (see Brown and Chalmers [BC03] for an informative
ethnographic description of the tourism experience). Independent travelers, in
particular, are typically in unfamiliar surroundings, often grappling with
unknown languages and dealing with unusual climates whilst they perform
complex tasks such as navigation and the collaborative planning of spatial-
temporal itineraries based on significant quantities of complex textual
information from guidebooks and timetables.
Although many of these activities are arguably core aspects of the travel
experience, the challenges they represent have attracted considerable attention
in the HCI research community. One focus has been the support of pre-
planning activities through mechanisms such as making itinerary suggestions
that reflect a user's preferences [e.g. CL05]. Other authors have proposed
systems that provide contextually relevant information during trips. Such
systems can provide customized navigation information, relay important news
(such as delayed flights) or support visits to specific sites (such as museums)
[e.g. GPS+04]. The dominant form factor for such systems is graphical
interfaces on mobile devices. Despite this wealth of work, relatively few of
these systems are currently in widespread use [BC03]. Potential reasons for
this include issues relating to a lack of availability or adoption of the
technology (typically high-end smart phones), distrust (either in terms of the
privacy of tracking systems [BCL+05] or potentially a reluctance to take
recommendations from a digital agent [R06]), a lack of support for
collaboration (most tourists travel in small groups of 2-4 people) [BC03] and
the fact that dealing with digital information presented on a mobile device
disrupts and distracts from the actual experience of travel [BLS05].
The remainder of this section documents the steps taken during the
concept, design, implementation and evaluation of Mementos. These use
traditional HCI methods combined with particular frameworks for TUI
development.

4.2 CONCEPT
The concept for Mementos was developed after initial work on
understanding where computer systems could augment the tourism
experience. This was done by combining a literature review [BC03] with
questionnaires and semi-structured interviews (see Section 4.2.1), so as to
comprehend how people behave as tourists. The result of this work took shape
after a brainstorming session (see Figure 21), followed by the development of
a mind map (see Section 4.2.2) and an initial system description (see Section
4.2.3), which was examined conceptually via a scenario (see Section 4.2.4).

Figure 21 Result from one of the brainstorm sessions for the Mementos TUI.
4.2.1 QUESTIONNAIRE
In order to capture the behavior of travelers and tourists a questionnaire (see
Annex A) was designed and distributed both online and to six hotels situated
in Funchal, a coastal Portuguese city and popular tourist destination. A total of
118 responses were gathered (see Annex B) and qualitatively analyzed. Based
on the results, a semi-structured interview was designed (see Annex C) and
conducted with 12 participants to explore some of the issues raised in further
depth. These investigations were focused on capturing how people plan and
structure their day-to-day activities and exploration during travel, and how
they communicate such activities to others. It also explored how people store
and share the memories from their travels (such as through photos and videos
or objects and souvenirs). A review of an ethnographic study of the
tourism experience [BC03] was then conducted to cement the conclusions
drawn from the questionnaires and semi-structured interviews.
As expected, most answers to the online questionnaire came from
people aged between 16 and 30. In contrast, more than half of the people
who answered the physical questionnaires distributed at hotels were over 50
years old (see Figure 22). In both groups, most people appear to travel in
groups of two or three (see Figure 23), composed of friends (in
the case of younger people) or of husbands and wives (in the case of older
people; see Figure 24). Additionally, most trips take between one and two
weeks (see Figure 25).

Figure 22 Graphs with the age groups of typical tourists. The graph on the left represents the answers
to the online questionnaire, while the graph on the right represents answers to the physical
questionnaires handed out in hotels.

Another difference between young and older people while travelling is that
the former occasionally feel lost or unsure of their directions, while the
latter rarely encounter these problems (see Figure 26). This might be
because young people prefer to plan day by day (e.g. in the hotel) as they
travel, while older people prefer to plan before leaving home (see Figure 27).

Figure 23 Graphs with the group size of typical tourists. The graph on the left represents the answers
to the online questionnaire, while the graph on the right represents answers to the physical
questionnaires handed out in hotels.

Both young and older people prefer to travel on foot, occasionally
opting for the bus (see Figure 28). When the trip is over, young people
prefer to share their experiences as media through online social networks
(e.g. Flickr), while older people share them at home with friends and
family (see Figure 29).

Figure 24 Graphs with the companions of typical tourists. The graph on the left represents the
answers to the online questionnaire, while the graph on the right represents answers to the physical
questionnaires handed out in hotels.

Figure 25 Graphs with the duration of the trips of typical tourists. The graph on the left represents
the answers to the online questionnaire, while the graph on the right represents answers to the
physical questionnaires handed out in hotels.
Figure 26 Graphs with the answers to how often people feel lost on their trips. The graph on the left
represents the answers to the online questionnaire, while the graph on the right represents answers to
the physical questionnaires handed out in hotels.

Figure 27 Graphs with when people prefer to plan their trips. The graph on the left represents the
answers to the physical questionnaires handed out in hotels (where people graded their preferences
from 1 to 5), while the graph on the right represents answers to the online questionnaire.

Figure 28 Graphs with how people prefer to travel during their trips. The graph on the left represents
the answers to the physical questionnaires handed out in hotels (where people graded their preferences
from 1 to 5), while the graph on the right represents answers to the online questionnaire.
Figure 29 Graphs with how people share media from their trips. The graph on the left represents the
answers to the physical questionnaires handed out in hotels (where people graded their preferences
from 1 to 5), while the graph on the right represents answers to the online questionnaire.

4.2.2 MIND MAP

With the information gathered in the previous stage, a series of brainstorming
sessions unfolded (see Figure 21), leading to the creation of a mind map (see
Figure 30) that explored five desired characteristics for Mementos.

Figure 30 Mind map for the TUI prototype Mementos. For a higher-resolution image, please
refer to Annex D.
First, it explored the possibility of tokens acting as ambient displays,
disconnected from any "system": which possible interactions could be
developed, and what kind of feedback could they convey? Second, it focused
on how the tokens could be used to create, revise, and share trip plans
with others. Third, it examined the different contexts and scenarios where the
tokens could be used to support the travelling experience. Fourth, it tackled
the issue of the tokens' physical form, which had to fit every interaction
across the different possible contexts of use. Lastly, it looked at which
embodied aspects could be embedded in the tokens.

4.2.3 DESCRIPTION

Following the mind map work (see Section 4.2.2), the Mementos concept
was fleshed out by envisioning a system composed of three parts: a set of
tokens, a kiosk interface, and a home interface (see Figure 31). These three
interaction spaces should tackle most of the problems and activities
identified during the literature review [BC03] and the questionnaire analysis
(see Section 4.2.1).

Figure 31 The Mementos concept.

4.2.3.1 TOKENS (TRAVELLING)

The tokens are small physical objects intended to be held in the hand or stored
in pockets and on key chains. Two classes are envisioned. One set of concrete
tokens represents and visually resembles specific tourist sites. For example,
such a token might be linked to the Eiffel Tower and take the form of a model
of this monument. The other class of abstract tokens represents more
general tourist infrastructure, such as a set of cafes or transportation points,
and appears as neutral coin-like objects identified with graphical logos. The
goal is for such token sets to be distributed for particular
locations or cities in much the same way as guidebooks are currently.
The concept for the tokens is to provide information related to the object
they represent through non-intrusive feedback. Most significantly, this
feedback should be delivered via vibrotactile cues mediated by location
awareness: the tokens would vibrate when approaching the location (or set of
locations) they represent. Furthermore, the concrete tokens should
also respond to the proximity of transportation links leading to their
location. This feedback could be silenced by touching or picking up the token.
The goal of this interface is three-fold. Firstly, to enable users to engage in a
simple form of collaborative planning based on selecting only relevant tokens
to carry with them. For example, a user wishing to travel by taxi, visit a
museum and stop for lunch would simply select the three tokens representing
these activities in order to receive relevant cues. Secondly, the tokens are
intended to support relatively undirected, exploratory travel experiences. For
example, a tourist strolling through a city with a token that responds to all
restaurants featured in a particular food guide could use the feedback to
opportunistically and discreetly highlight dining choices encountered during
the course of their walk. Finally, if a tourist's goal is to seek out one particular
destination, the vibrotactile cues will highlight appropriate transportation links
(such as where to board or exit a bus) as well as proximity to the actual
location, thereby providing vital navigation information.
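The token behavior described above can be sketched as a small state machine. The code below is a hypothetical illustration rather than the actual Mementos implementation: the 150 m trigger radius, the `Token` class and its re-arming rule (a silenced token becomes active again once the user moves away) are assumptions made for the example.

```python
import math

VIBRATE_RADIUS_M = 150  # assumed trigger distance; not specified in the concept

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in meters."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

class Token:
    """One concrete token: vibrates near its site until tapped."""

    def __init__(self, site_lat, site_lon):
        self.site = (site_lat, site_lon)
        self.silenced = False

    def on_tap(self):
        # Touching or picking up the token silences the current cue.
        self.silenced = True

    def update(self, lat, lon):
        """Return True if the token should vibrate at this position."""
        near = haversine_m(lat, lon, *self.site) <= VIBRATE_RADIUS_M
        if not near:
            self.silenced = False  # re-arm once the user moves away
        return near and not self.silenced
```

With this sketch, `update()` would be called periodically by the token's location-sensing loop; the same structure could drive the gentler "kiosk nearby" vibration pattern with a second radius and site list.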

4.2.3.2 PUBLIC KIOSKS

The public kiosk interface is conceived as a public display showing an
interactive map and capable of recognizing and responding to the tokens.
Public kiosks are intended to be distributed around a city and at key tourist
sites. Interaction is highly constrained and based on three spatially ordered
sensing zones, running from left to right in front of the display. Users can
place one concrete token and one abstract token on each to visualize and
communicate their plans. The goal of this interface is to create simple
queries relating the kiosk location to other areas or resources of interest,
and the transportation options between these sites. For instance, placing a
concrete token on the leftmost sensing zone causes transportation information
between the kiosk and the concrete site to be presented (e.g. estimates for
travelling time and cost by taxi, bus and foot). Adding an abstract café token
to the same sensing zone causes dining options to be displayed in the
proximity of the concrete site. In a similar manner, the three zones can be
used to create multi-leg travel plans.
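The mapping from token placements to kiosk queries can be sketched as follows. This is an illustrative reconstruction, not the deployed kiosk logic: the query strings, the `kiosk_queries` function, and the rule that each leg departs from the previous zone's concrete site are assumptions for the example.

```python
def kiosk_queries(zones, origin="kiosk"):
    """Translate left-to-right token placements into kiosk queries.

    `zones` is a list of up to three (concrete_site, abstract_category)
    pairs, one per sensing zone; either element may be None when no
    token of that class was placed on the zone.
    """
    queries = []
    for concrete, abstract in zones:
        if concrete:
            # Transportation between the previous stop and this site.
            queries.append(f"transport: {origin} -> {concrete}")
            origin = concrete  # the next leg departs from here
        if abstract:
            # Abstract tokens filter resources around the current stop.
            queries.append(f"{abstract} near {origin}")
    return queries
```

For the example in the text, `kiosk_queries([("Se Cathedral", "cafe"), ("Monte", None)])` yields a transport query from the kiosk to the cathedral, dining options near the cathedral, and a second transport leg on to Monte.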

4.2.3.3 HOME (PERSONAL COMPUTER)

The home system allows users to quickly access photos and videos taken
during trips on their home PCs. Placing one of the concrete tokens used while
travelling on a sensing zone attached to a computer shows the media
recorded at the associated site. In this way, the tokens take on a role not only
as souvenirs, but also as true keepsake objects: holders of stories and
memories. Users would also be able to distribute their tokens to friends and
family as a personalized way of sharing mementos of their trips.
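At its core, the home interface is a lookup from a token's identifier to the media tagged with that token's site. A minimal sketch follows; the function and data-structure names are hypothetical, since the concept does not specify the system's data model.

```python
def media_for_token(token_id, token_tags, photos):
    """Return the media files associated with the site a token represents.

    `token_tags` maps a token's hardware ID to the special tag the user
    attached to photos of that site (e.g. when uploading to Flickr);
    `photos` is a list of (filename, tags) pairs exported from the
    photo collection.
    """
    tag = token_tags.get(token_id)
    if tag is None:
        return []  # unknown token: nothing to show
    return [name for name, tags in photos if tag in tags]
```

Placing a token on the sensing zone would then amount to reading its ID and displaying the files this lookup returns.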

4.2.4 SCENARIO
Task: Visiting some specific places around the city of Funchal and sharing
photos of that trip.
Persona: Johnny Bravo.

Johnny Bravo and his wife have just arrived at the Pestana Grand Hotel in the
Lido area of the city of Funchal. They are 28 and 26 years old, respectively,
and are both nurses in the United Kingdom. They use their personal computers
to update their Facebook page, watch movies and send e-mails.
Upon check-in, they are offered the possibility of paying a bit more for a
brand new technology called Mementos. After hearing all the benefits of using
such a system, they decide to give it a try. They go to their hotel room
carrying a set of small objects, each representing a place around the island.
The next morning, John and his wife are ready to start visiting the city of
Funchal. Before leaving the UK they had already chosen some spots they
didn't want to miss out on and, according to their plan, today they want to
visit Monte and the Cathedral of Funchal.
Johnny looks at the objects they were given the previous night and notices
one that is a miniature version of the Cathedral of Funchal, and another that
is a miniature of a wickerwork sledge, which John and his wife have always
wanted to try. As he picks them up, his wife is reminded of what is in store
for them today and, in excitement, asks to keep the wickerwork sledge object.
As they leave the hotel, they ask the clerk to point them to the nearest bus
stop from which they can reach the city center, where they know they can
find the Sé Cathedral. As they walk down the road the clerk pointed out,
unsure if they're going down the right path, John's object starts vibrating in
his pocket, indicating that the bus stop is near. They're now sure they're
close, and with a simple tap John "silences" the object. A few meters ahead
they find the bus stop, and a couple of minutes later they're on the bus to the
city center.
As the clerk explained to them, each object will indicate the best bus stop
at which to leave the bus. Deciding to trust the objects, John and his wife
enjoy the ride and the scenery as they travel into Funchal, not worrying about
discovering where to leave the bus. Minutes into the trip, John's object starts
to vibrate again, alerting him to ask for the bus to stop.
They are dropped off in the city center and, as John planned, they now have
to move north through a central plaza. After a couple of minutes walking,
John's object starts to vibrate again, this time with a more spaced-out and
gentle vibration. He remembers the clerk mentioning that this kind of
vibration shows him that a public kiosk is available nearby. His wife notices
him squeezing his object to stop the vibration, and takes part in the visual
search for the kiosk.
They easily find it, and as they arrive at the kiosk, it seems to be aware of their presence. They are instructed to place the objects they wish to get information about on the surface of the kiosk. Right after following these instructions, the screen shows them a set of information around both the Sé Cathedral and the wickerwork sledge objects they have placed on the kiosk's surface. They find out the best way to reach the Cathedral, and where to go next in order to reach Monte.
Once they remove the objects from the kiosk's surface, they start walking in the right direction. As they approach the Cathedral, John's object starts vibrating again, indicating that they are near. This time the vibration reminds him of a submarine's sonar. After tapping the object, John and his wife are aware of their impending arrival at the destination, and after walking inside they enjoy the architecture and mood of the Sé Cathedral.
Two weeks later, in the UK, John uploads all of the photos taken during the trip to Funchal to his Flickr page. As instructed in the manual, he tags the photos with a special tag according to each of the tokens he wants to map them to. Every time John hosts someone at his place, he shows them the

photos from the trip by simply allowing his visitors to pick up the intriguing objects and connect them to his computer.

4.2.5 STAKEHOLDER'S DIAGRAM


In order for Mementos to have a greater chance of successfully helping its users, it has to have a broad set of stakeholders that can help the system become part of the city's tourist experience. For example, both hotel clerks and tourist guides can hand their clients the Mementos tokens, instructing them on how to use them; bus companies and local governments need to allow the installation of the Mementos tracking system on their premises; while the local community should be able to understand the necessities and problems of tourists just by looking at the tokens they are carrying.

Figure 32 The stakeholder's diagram for the Mementos TUI.

4.3 DESIGN

4.3.1 THE TAC PARADIGM


The TAC paradigm [SJ09] was used to define the relationships between the
tokens and their constraints (such as physical constraints in the interface or
other tokens, see Table 2). Each of these relationships is called a TAC, and the
Mementos design process identified three TACs – tokens on the kiosk's surface, on the PC's reader, and in the users' hands while travelling (see Section 4.3.1.1). The next step was creating a TAC palette (see Section 4.3.1.2), in which all possible interactions within each TAC are represented. In the first TAC, users can add (or remove) tokens to the kiosk's surface and move them around. In the second TAC, users are constrained to adding (or removing) a single token from the PC's reader. Lastly, users can grab (or drop) and squeeze
individual tokens. These interactions are modeled using special state diagrams
as they afford the representation of parallel tasks in each state (Dialogue
diagram, see Section 4.3.1.3). Mementos interaction is represented by three of
these states, each representing one of the TACs (see Figure 33). The state
representing the interaction around the PC is the simplest, as it only represents adding or removing tokens.
Finally, in the Interaction diagram, two actions from the more complex states in the Dialogue diagram are modeled (see Figure 34): moving a token on the kiosk's surface (see Figure 35), and tapping/squeezing the tokens as the users walk around a city (see Figure 36). This final diagram allows for the design of the system's response in both the digital and physical worlds (see Section 4.3.1.4).
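These relationships can be made concrete with a small data model. The sketch below is illustrative only: the class, field, and interaction names are a paraphrase of the three Mementos TACs, not part of the TAC paradigm notation or the actual implementation.

```java
import java.util.EnumSet;
import java.util.Set;

// A minimal model of a TAC: a token class, the constraint it is
// bound to, and the set of manipulations that relationship allows.
public class TacModel {
    enum Interaction { ADD, REMOVE, MOVE, GRAB, DROP, SQUEEZE, TAP }

    record Tac(String token, String constraint, Set<Interaction> allowed) {}

    // The three TACs identified for Mementos (compare with Figure 33).
    static final Tac KIOSK = new Tac("souvenir token", "kiosk surface",
            EnumSet.of(Interaction.ADD, Interaction.REMOVE, Interaction.MOVE));
    static final Tac PC = new Tac("souvenir token", "PC reader",
            EnumSet.of(Interaction.ADD, Interaction.REMOVE));
    static final Tac TRAVEL = new Tac("souvenir token", "user's hand",
            EnumSet.of(Interaction.GRAB, Interaction.DROP,
                       Interaction.SQUEEZE, Interaction.TAP));

    // The TAC palette is then just each TAC's set of allowed interactions.
    static boolean allows(Tac tac, Interaction i) {
        return tac.allowed().contains(i);
    }
}
```

Modeling each TAC as data like this makes the palette of Section 4.3.1.2 a direct lookup rather than something scattered through event-handling code.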

Table 2 Part of the TAC paradigm for the Mementos TUI.

4.3.1.1 TACS

 
Figure 33 The TACs for the Mementos TUI.

4.3.1.2 THE TAC PALETTE

Table 3 The TAC palette for the Mementos TUI.

4.3.1.3 DIALOGUE DIAGRAM

 
Figure 34 The dialogue diagram for the Mementos TUI.

4.3.1.4 INTERACTION DIAGRAM

Figure 35 The interaction diagram for the Mementos TUI (at the public kiosks).

Figure 36 The interaction diagram for the Mementos TUI (while travelling).

4.3.3 TOKENS' DESIGN


As the tokens operate in different scenarios and contexts, their form should invite participation from a diverse range of users in every case. They needed to be small and easy to carry (e.g. on a keychain), representative of their function in all three domains, and easy to grasp and handle. Combined with some hardware limitations (see Section 4.1), a series of different tokens were built (see Figure 37), ranging from the desired form to what was possible to build in order to contain the necessary hardware.

Figure 37 Some of the considered tokens.

4.3.4 WIREFRAMES
With the help of the TAC paradigm, it became clear that the richest interaction space was present at the public kiosk. This concept was further iterated via the HCI method of wireframing. The next group of images serves as an example of a possible interaction with the kiosk. As the user arrives at the kiosk, he is informed of his current location on a map of the city he is visiting (see Figure 38). As he
places one of the tokens he was carrying on the first slot of the kiosk, he is
informed of transportation options (and their time costs) between his location
and the location represented by the token (see Figure 39). The user then places
a second token from the ones he was carrying on the second slot of the kiosk,
being informed of the transportation time costs between the location
represented by the first token and the location represented by the second (see
Figure 40). The user then decides to use the tokens available on the kiosk,
which represent general facilities such as cafes or markets. He picks the one
that represents a bus stop and places it below the token on the first slot of the kiosk. The system then informs him of bus stops in the proximity of the location represented by the first token (see Figure 41). As the user removes the token from the first slot, the system automatically looks for transportation options (and their time costs) between the user's location and the token in the second slot of the kiosk (see Figure 42).
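The slot behavior walked through above amounts to a small decision function. The following sketch is a reconstruction of that logic; the function and label names are hypothetical and not taken from the kiosk's actual Processing code.

```java
// Decides what the kiosk should display given which slots are occupied.
// A "concrete" token names a destination; an "abstract" token names an
// amenity category (e.g. bus stops) and is placed under the first slot.
public class KioskLogic {
    static String query(String slot1, String slot2, String amenityUnderSlot1) {
        if (slot1 != null && amenityUnderSlot1 != null)
            return amenityUnderSlot1 + " near " + slot1;          // cf. Figure 41
        if (slot1 != null && slot2 != null)
            return "transport from " + slot1 + " to " + slot2;    // cf. Figure 40
        if (slot1 != null)
            return "transport from here to " + slot1;             // cf. Figure 39
        if (slot2 != null)
            return "transport from here to " + slot2;             // cf. Figure 42
        return "current location on city map";                    // cf. Figure 38
    }
}
```

Keeping the decision in one function mirrors how the wireframes change as tokens are added and removed: each screen corresponds to one branch.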

Figure 38 Wireframe #1 for the public kiosk of Mementos: the user is informed of where he is in
relation to the city he is visiting. For an image with higher resolution, please refer to Annex E.

Figure 39 Wireframe #2 for the public kiosk of Mementos: the user has placed one of the concrete tokens he was carrying on the first slot of the kiosk; he is informed of transportation options (and the time costs) close to his current location. For an image with higher resolution, please refer to Annex E.

Figure 40 Wireframe #3 for the public kiosk of Mementos: the user has placed another of the concrete tokens he was carrying on the second slot of the kiosk; he is informed of transportation options (and the time cost) between his location and the two desired destinations. For an image with higher resolution, please refer to Annex E.

Figure 41 Wireframe #4 for the public kiosk of Mementos: the user has placed one of the abstract tokens found in the kiosk under the concrete token on the first slot; he is informed of bus stops close to the first destination. For an image with higher resolution, please refer to Annex E.

Figure 42 Wireframe #5 for the public kiosk of Mementos: the user has removed the concrete token from the first slot of the kiosk; he is informed of transportation options (and the time cost) close to his current location; a history of previous destinations is kept on-screen. For an image with higher resolution, please refer to Annex E.

4.4 IMPLEMENTATION

4.4.1 TOKENS (TRAVELLING)


Mementos aims at helping users navigate unfamiliar locations by triggering events in the tokens they carry as they pass through or near relevant real-world locations (such as bus or metro stops) on the way to their goals (as exemplified in Figure 43). These events are composed of vibration patterns programmed in the Processing [URL9] environment, and are sent from infrastructure at the relevant locations to the tokens the users carry via Bluetooth.
Prior to embarking on a trip, as users select objects to carry with them, they
make their daily plans visible to other users, aiding awareness and fostering
collaboration. This allows for maximum IOH[LELOLW\ E\ VXSSRUWLQJ HDFK XVHU·V
plans through low-impact reminders which provide awareness while not
enforcing particular preset plans or courses of action.
These tokens are embedded with a SHAKE [URL8] device, capable of
receiving Bluetooth messages and playing them through a vibrating motor.
Users are able to respond to such events by simply tapping or squeezing the tokens, allowing the tokens to act as ambient displays that don't distract the users from the actual trip. These taps or squeezes are detected on the SHAKE's surface via two capacitive sensors.
The code for this application is available in Annex F.
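As a rough illustration of how such events could be encoded, the sketch below models the three vibration patterns described in the scenario and a simple tap-versus-squeeze classification. All pulse durations and the two-sensor rule are assumptions made for this sketch; the actual patterns were authored in Processing and sent to the SHAKE over Bluetooth (Annex F).

```java
// Illustrative encoding of the vibrotactile events described above.
// Pulse durations (ms) are invented for this sketch and do not reflect
// the real SHAKE protocol.
public class TokenEvents {
    // Each event maps to on/off pulse pairs: {vibrate ms, pause ms}.
    static int[][] pattern(String event) {
        switch (event) {
            case "bus-stop near": return new int[][]{{400, 200}, {400, 200}};
            case "kiosk nearby":  return new int[][]{{200, 800}, {200, 800}}; // spaced-out, gentle
            case "destination":   return new int[][]{{100, 400}, {100, 400}}; // sonar-like pings
            default:              return new int[][]{};
        }
    }

    // Distinguishes a tap from a squeeze using the two capacitive
    // channels: a squeeze activates both sensors, a tap only one
    // (a deliberate simplification for illustration).
    static String classify(boolean sensorA, boolean sensorB) {
        if (sensorA && sensorB) return "squeeze";
        if (sensorA || sensorB) return "tap";
        return "none";
    }
}
```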

Figure 43 Users receive vibrotactile messages on the tokens they carry as they pass through relevant real-world settings. This is achieved by sending the messages via Bluetooth, from the relevant spots (which scan for the correct tokens) to the tokens, embedded with a SHAKE [URL8] device.

4.4.1.1 SHAKE
SHAKE is a device that provides any computing platform with movement sensing and vibrotactile feedback. It incorporates 6 degree-of-freedom inertial sensing, compass heading sensing and electric field sensing in a matchbox-sized enclosure, and is regularly used as a valuable tool by researchers in the area of HCI. SHAKE is capable of sensing linear and rotational movements, absolute orientation/direction and human body proximity, and can transfer this information to any computing device that has Bluetooth wireless connectivity (e.g. mobile phones, laptop computers) [URL10]. This application used the SHAKE SK6 [URL8], with the 2.72 firmware version.

Figure 44 An open SHAKE device [URL8].

Figure 45 The set of tokens used in the kiosk evaluation: four abstract tokens (representing markets,
taxis, cafes, and municipal WiFi) and two concrete tokens (representing a famous church and
botanical garden).

4.4.1.2 PROCESSING LANGUAGE


Processing is an open source programming language and environment based
on Java, for people who want to create images, animations, and interactions.
Initially developed to serve as a software sketchbook and to teach
fundamentals of computer programming within a visual context, Processing has also evolved into a tool for generating finished professional work [URL9].

This application was developed using Processing version 1.0.9, in both
Windows 7 and Mac OS X Snow Leopard.
The following Processing libraries were used:

o bluetoothDesktop – a Bluetooth library for Processing that allows applications to send and receive data via Bluetooth wireless networks. Using this library, a Processing application running on a computer with a JSR-82 implementation can connect to other Bluetooth devices as well as act as a service that other devices can connect to [URL11].

4.4.2 PUBLIC KIOSKS


Mementos also allows users to use the tokens they carry on public kiosks that would be spread around a city. Users interact with these kiosks by placing the tokens on their surface. The objects are detected using radio-frequency identification (RFID), the limited range of which also constrains their use to key areas of the kiosk surface (over the RFID readers). These specific areas are physically identifiable as slots in which the users can place their tokens.
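A minimal sketch of this detection scheme is shown below. The tag UIDs, place names, and method names are invented for illustration; the real kiosk uses the touchatag reader and tags described in Section 4.4.2.2.

```java
import java.util.Map;

// Each RFID reader under the kiosk surface corresponds to one physical
// slot; the short read range means a tag is only detected when its token
// sits directly on that slot. UIDs and place names here are made up.
public class SlotTracker {
    static final Map<String, String> TAG_TO_PLACE = Map.of(
            "04A1B2", "Cathedral",
            "04C3D4", "Monte");

    // readerId identifies the slot; tagUid identifies the token on it.
    // Returns null for unknown tokens, which the kiosk simply ignores.
    static String tokenOnSlot(int readerId, String tagUid) {
        String place = TAG_TO_PLACE.get(tagUid);
        if (place == null) return null;
        return "slot " + readerId + ": " + place;
    }
}
```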
By combining information relating to the physical position of the display and the particular objects placed on its surface, users are presented with information tailored to the place represented by each of the tokens, such as transportation options and close-by amenities, and are also provided with a spatial sense of where they are in relation to the various places they wish to visit, up to and including navigational advice and directions for activities. As with the previous context of using tokens while travelling, the kiosk interface also allows for high levels of collaboration, as multiple users can fluidly interact around a tabletop.
The kiosk interface was developed in Processing, and the RFID hardware chosen was the touchatag reader and tags [URL12]. The application code is available in Annex G and the XML schemas in Annex H.

4.4.2.2 TOUCHATAG
touchatag enables service providers and enterprises to leverage ubiquitous identity – in contactless RFID cards and NFC mobile devices – for wallet 2.0 services such as mobile payment, fidelity and interactive advertising [URL12]. The touchatag reader is based on the ACR122(U) NFC Reader from Advanced Card Systems Limited, and its tags are ISO/IEC 14443 Type A MIFARE Ultralight stickers.

Figure 46 A touchatag RFID reader and five tags [URL12].

Figure 47 Screenshot from application that runs on the public kiosk, as part of the Mementos TUI.
For an image with higher resolution, please refer to Annex I.

4.4.2.3 PROCESSING LIBRARIES


o FullScreen – a Processing library that allows for better full-screen support [URL13].
o touchatag-processing – a Processing library that was developed by the author of this dissertation to address the restrictions and limitations of the official touchatag software client. This library affords the simultaneous connection of multiple touchatag readers, with up to three tags on each one [URL14].

At the moment this dissertation is being written, this library's thread in the official Processing forums already has more than 1300 views [URL16], and counts with reviews such as:

That's pretty nice … I will definitely use it for the next RFID project.

I think it's really great that you made a library so we don't have to rely on the crappy
software provided.

This library was written in C, using the Java Native Interface (JNI) and
libnfc (an open source library for Near Field Communication) [URL17].
As a result of this work, the author of this dissertation wrote a tutorial on how to compile libnfc on Windows, which is the only tutorial for Windows on the libnfc main website [URL15]. The programming code for this library is available in Annex J.

4.4.3 HOME (PERSONAL COMPUTER)


A final conclusion from the user study was that many travelers bring home
tokens and souvenirs to remind them of their vacations. The same RFID tag
infrastructure used in the kiosk was also deployed in the home to add value to
this activity. Users can access the photos and videos taken during their trips by placing one of the tokens on a reader attached to a PC. Users map media to their tokens by tagging it with key phrases on the Flickr image and video hosting website [URL18] (e.g. tagging a photo of the Eiffel Tower as Mementos: Eiffel Tower will display this photo on the computer when the Eiffel Tower token is placed on the reader).
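This tagging convention can be captured in a couple of helper functions. The sketch below uses hypothetical helper names (the real application retrieves tagged media through the flickrj wrapper, see Section 4.4.3.1); it only illustrates how the special tag is built and matched.

```java
// Builds the special Flickr tag for a token and checks whether a photo
// carries it. The "Mementos: <token>" convention is the one described
// above; the helper names themselves are invented for this sketch.
public class MediaTags {
    static String tagFor(String tokenName) {
        return "Mementos: " + tokenName;
    }

    // Case-insensitive match, since Flickr tags are typed by hand.
    static boolean matches(String photoTag, String tokenName) {
        return photoTag.equalsIgnoreCase(tagFor(tokenName));
    }
}
```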

Figure 48 Screenshot from the application that runs on the personal computer in the users' home, as part of the Mementos TUI.

This way, tokens end up serving not only as souvenirs from the trip, but also as links to a user's memories through their ability to connect to captured digital content. As such, they could become cherished keepsakes, kept over time
with their value rooted in their personal history, and literally holding stories
and memories [GC97]. The tangible objects also support users as they share
their media, allowing them to send their objects to friends and family, and
allowing for multiple users to interact with the objects and participate in
discovering the memories and stories each holds.
The home interface was developed in Processing and the RFID hardware chosen was the same as before, the touchatag reader and tags [URL12]. The application code is available in Annex K and the XML schemas in Annex L.

4.4.3.1 PROCESSING LIBRARIES


In addition to the libraries used in the previous domain of public kiosks, the home application also used:

o flickrj – a Java API wrapper for the REST-based Flickr API [URL19].

4.5 EVALUATION
In order to provide a validation of the system design, an observational study
was conducted on two groups of three people (all University students) using
the Mementos public kiosk interface. A total of 23 tokens were used in this test: five represented prominent tourist sites around the city of Funchal, while the remaining 18 represented the generic categories of restaurants, payphones, wireless internet, markets, bus stops and taxi ranks (three tokens for each category). Short paper brochures were provided for each of the tourist sites, giving a textual description and indicating opening and closing times and likely visit durations (see Annex M).
The study lasted 30 minutes for each group. The first 10 minutes were used
as an introduction and participants were encouraged to freely explore the
system features. For the last 20 minutes, they were then handed a nominal 20
Euro for transportation and requested to create a plan that came under
budget, involved a visit to four of the five tourist sites and included lunch, an
afternoon stop at a market, and pauses in locations with wireless access and
payphones.

Figure 49 One of the groups creating their version of a plan to visit Funchal for a day.

The sessions were observed (see Annex N) and videotaped (available at [URL20]), and a semi-structured interview (see Annex O) was conducted on their completion (see Annex P for the answers to the semi-structured interview). To motivate interest in the outcome of the plan, 30€ in music/book vouchers was awarded to the team who developed the plan best addressing the problem criteria.

4.5.1 RESULTS
Both groups were able to create plans that matched the problem requirements
and appeared to have little trouble grasping the fundamentals of the interface.
This was confirmed in the interviews, in which the participants reported that
the tokens were representative, immediately understandable and easy to
manipulate. Both groups suggested that using the system would not require
previous experience. Somewhat in contrast to this reported simplicity,
participants were also highly engaged with the system: both groups took the
entirety of the allotted time.
Furthermore, all participants also appeared to be immersed throughout the
session, suggesting that the system effectively supported collaboration within
small groups. Once again, this assertion was supported by comments during

the interviews. Furthermore, it was borne out by specific behaviors, such as
frequent consensual passing of the tokens or, more rarely, one participant
gently taking a token from another. Body language and physical activity were
also observed to be effective tools for collaboration, with users keeping in
touch with the activity of their peers simply via watching their movements and
expressions.
Participants also used the physical environment to simplify tasks, for example by maintaining the original physical placement of tokens and paper instructions when they were not in use. Users would also hold tokens in order to temporarily store them whilst they were not in use. At other times, tokens would be handed to particular users to perform tasks in order to streamline and facilitate management of the space.
In summary, although the study was short and exploratory, its results are broadly positive. It serves to validate some of the key concepts in Mementos: the suitability of a Tangible User Interface to tourism; the ability of the system to support both planning tasks and collaborative activity; and a richness of embodied actions that requires further in-depth study.

4.6 FINAL REMARKS


Mementos offers a glimpse of what TUIs can become, as it allows users to naturally pick up a physical interface and apply it to different scenarios in their lives. Although there are still some limitations regarding the size of the hardware to be embedded in the tokens, Mementos stands as a proof-of-concept to other researchers trying to either combine two important TUI research areas, such as Ambient Displays [IU97] and Tangible Tabletop Interaction; create a TUI that overcomes many of the issues found with current computer systems that tackle tourism activities; or study how to design Tangible Interaction techniques for complex real-world application domains.
 

5. SECOND PROTOTYPE: ECO PLANNER

5.1 INTRODUCTION
Eco Planner is the first TUI that tackles the issue of energy consumption at home, by allowing its users to create, manage and analyze their daily routines through tangible objects that serve as physical representations of the users' activities. Energy consumption can be characterized as "the routine accomplishment of what people take to be the 'normal' ways of life" [PSP10]. These routines can be changed without necessarily first changing people's attitudes. Human routines relating to energy use in the home are directly responsible for 28% of U.S. energy consumption. Most people, however, are unaware of how their daily activities impact the environment or how often they engage in those activities [PGR10].
People's behaviors are rarely energy-sensitive because most of them don't have a sense of the cost of energy, nor the cost associated with specific appliances (apart from the AC). There's also a common lack of awareness of the usual settings used on specific appliances. While awareness of such costs is perceived as a desirable result of using energy-related displays, awareness of the relatively low costs of appliance use may actually be a disincentive to conserve. Even cumulative savings over time may not provide sufficient incentive for long-term change for many people [PSP10].
Prior research has identified the important role of habit in guiding energy-consuming behavior – and the challenge of altering habitual consumption. Slight alterations in seemingly arbitrarily developed routines could substantially reduce consumption [PSP10]. People are simply not aware of the relationship between their behaviors/routines and their energy consumption [PGR10].
Immediate feedback has been shown to be one of the most effective strategies in reducing electricity usage in the home, by providing a basic mechanism with which to monitor and compare behavior, allowing an individual to better evaluate their performance [PGR10].
The goal of the Eco Planner is to allow users to visualize and compare the impact of their daily routines on the environment and change them in a way that makes their lives no less comfortable but a lot greener. Users should be able to test their beliefs with the Eco Planner, so as to change their attitudes and, ultimately, shape their values.

5.2 CONCEPT
Informed by the literature review, the concept behind Eco Planner is of a TUI that can overcome some of the weaknesses of information systems developed to tackle the issue of home sustainability. Eco Planner is the first TUI designed
to tackle the problem of sustainability, so its design process and
implementation should be a valuable resource for other TUI developers
aiming to build for the same domain.
The concept for the Eco Planner was born of a literature review on motivational theories (see Section 5.2.1) and the state of the art in energy-saving computer systems (e.g. ambient and numeric displays). After fleshing out the initial brainstorm in a mind map (see Section 5.2.2) and scenario (see Section 5.2.3), the idea for the Eco Planner took form as a TUI composed of a set of tokens and a home tabletop interface, combining two distinct areas of TUI research, Tangible Tabletop Interaction and Tangible Augmented Reality [KBP+01, LNB+04, ZCC+04]. In Eco Planner, each token physically represents an activity (e.g. watching TV, doing the laundry), and users can collaboratively create their household's routine by laying the tokens on the tabletop interface. The 2D space of the tabletop represents a day of the week, so that tokens closer to the right represent activities done at night, while tokens closer to the left represent activities done in the morning. Likewise, tokens that are vertically aligned on the tabletop represent concurrent activities. By placing a token on a key area of the interface, users can choose different options for the activity (e.g. committing to always do the laundry with a full tank). Eco Planner should be able to understand the users' routine,
compare it with previous routines, and display tips on how the current routine

could be "greener". Ultimately, users can decide whether they want to be ecologically or financially motivated, changing how the system interprets their routine and what tips it offers.
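The spatial mapping just described can be sketched as a pair of small functions: one converting a token's horizontal position into a time of day, the other testing vertical alignment for concurrency. The table width and alignment tolerance below are invented for this sketch, not taken from the prototype.

```java
// Maps a token's horizontal position on the tabletop to a time of day,
// and treats vertically aligned tokens as concurrent activities.
public class RoutineLayout {
    static final int TABLE_WIDTH = 1000; // px; left edge = 00:00, right edge = 24:00

    // Linear mapping from x position to hour of day: tokens to the left
    // read as morning activities, tokens to the right as night ones.
    static double hourAt(int x) {
        return 24.0 * x / TABLE_WIDTH;
    }

    // Two tokens aligned within a small horizontal tolerance are
    // interpreted as concurrent activities.
    static boolean concurrent(int x1, int x2) {
        return Math.abs(x1 - x2) <= 20;
    }
}
```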
Eco Planner should successfully motivate users to lower costs or conserve
energy by allowing them to:

o Avoid inconvenience and discomfort by tackling the ecological problems of their routines at their own pace.

o Compare themselves to their past, as every new routine is compared to the previous one.

o Set goals for their household as they commit to new routines. Goals serve a directive function – they direct attention and effort toward goal-relevant activities; second, goals have an energizing function and, in particular, high goals often lead to greater effort than low goals; third, goals affect persistence; and finally, goals affect behavior indirectly as individuals use, apply, and/or learn strategies or knowledge to best accomplish the goal at hand [FFL10].

o Be accountable for the routines created. By allowing users to express commitment to their family or roommates, Eco Planner increases the probability that they will pursue that behavior.

o Get nominal rewards (as Green Points), which users can compare with those of other households. Users usually respond to rewards even if they are nominal in nature (e.g. an acknowledgement of positive behavior) [FFL10].

o Agree on the number of appliances they possess or their relative impact on the overall consumption. By offering a coherent baseline, Eco Planner fosters group efforts toward energy conservation [RDM10].

o Be more aware of energy-conserving options for their products. Often users ignore visible options, instead relying on habit and split-second decisions [PSP10]. Eco Planner might inform users of specific greener options on the appliances that are part of their routine, allowing them to change how they use them.
 

5.2.1 THE TRANS-THEORETICAL MODEL
The Trans-theoretical Model (TTM) states that intentional behavior change is
a process occurring in a series of stages, rather than a single event. Motivation
is required for the focus, effort and energy needed to move through the stages
[HGH10].

o Pre-contemplation – Eco Planner should "plant the seed" that unsustainable energy behaviors are problematic, and show the user the consequences of his non-sustainable energy behavior.
o Contemplation – Eco Planner allows users to test whether their behaviors are a problem. Users can plan very small changes to their routines, encouraging them toward larger energy actions in the future [HGH10].
o Preparation – Eco Planner allows users to come up with different plans for their routines, letting them set goals and commit to them.
o Action – As users engage in their new routines, Eco Planner should provide them with positive reinforcement. By allowing for interactive exploration and customization of the system's interface, users will tend to develop intrinsic motivations [HGH10].
o Maintenance, Relapse, Recycling – In order for users to maintain their routines they should be intrinsically motivated first and foremost, so using the Eco Planner should be a comfortable and engaging experience. If users have a strong, hard goal, keeping them informed will also serve as strong motivation. Ultimately, users should be able to keep track of their progress so as to encourage them to self-reinforce and self-reflect on their energy experiences [HGH10].

5.2.2 MIND MAP


The mind map for the Eco Planner branches in three different but important
directions (see Figure 51). First, it focuses on how Eco Planner can motivate
its users by allowing public commitment to routines (e.g. to other users in the
same household) and self-comparison, and by using motivational theories (e.g.
trans-theoretical model [HGH10], rational-economic model [FFL10]). Second,
it focuses on the qualities of the routines created by the users (e.g. aggregate
results, collaborative creation and manipulation). Third and last, it focuses on the qualities specific to TUI systems (e.g. a comfortable and engaging experience, association of physical tokens with activities).

Figure 51 Mind map for the Eco Planner TUI. For an image with higher resolution, please refer to
Annex Q.

5.2.3 SCENARIO
Task: Getting to know and use the Eco Planner
Persona: John and his wife Susan.

It's a Saturday morning and John is browsing through his friends' posts on Facebook. He notices that a couple of his close friends have some iconic updates related to an application called Eco Planner. His friend Jonathan recently shared that he'll start doing his laundry only with cold water. Another of his friends, Quentin, committed to taking the bus to work. All these updates are accompanied by how many Green Points both his friends have earned since they started using Eco Planner.
Being aware that both he and his wife don't worry about saving energy, and knowing about the ever-growing problem of sustainability, John reads into this Eco Planner application to see if there are any disadvantages in using it. Eco Planner is composed of an interactive table and a series of tokens. John is informed that he can request the Eco Planner for 3 months without charge, and then decide if he wants to keep it. John calls Susan upstairs to show her what he found. After explaining to her what Eco Planner is, they decide to request one from the Facebook page.
Two weeks later, Eco Planner arrives at John and Susan's house. Susan is at work in the city center, working as a high-school English teacher. John is working from home today; being a college professor gives him some flexibility in his schedule. Curious about the new technology, John takes a break from his work to set everything up. He's surprised Eco Planner has only one wire, the power cord. The table being slick and good looking, he plugs it in in the living room without even reading the manual first – he feels confident around computers and gizmos. After turning it on for the first time, he's asked to configure the wireless settings and enter his Facebook account details.
John then begins to remove the tokens from the plastic bag they came wrapped in. He notices an area of the Eco Planner screen that represents the different areas of the house, each with a different color. John places the tokens that represent the different activities that happen in his house in the different areas of the screen (e.g. the token that represents "watching TV" in the green area, which represents the living room).
John decides to read the manual and continue working until his wife arrives. A couple of hours later, Susan arrives from work. She immediately notices the slick Eco Planner table in her living room, calling for John while walking towards it. John explains to his wife that he has everything set up, showing her the tokens he arranged on the table, and explains the meaning of the tokens that came with the Eco Planner. They start by defining their current routine. Susan gets ahead, placing the laundry token on the right side of the screen. John understands that she means that their routine includes washing their clothes later in the day, and checks to see if the time displayed is correct.
When they finish adjusting the token, they place the laundry token on the "Change activity options" area of the screen, choosing which options they use when they do their laundry. John and Susan start debating these options for the first time, becoming aware of how they usually wash their clothes. Susan picks the options "Hot water" and "Half tank". As they place the tokens on the main part of the screen they notice tips have appeared, indicating greener options for doing the laundry. On the right side of the tokens there are also iconic representations of the options John and Susan chose. John and Susan continue to set every activity for a while, and after they are finished, John taps on the commit icon. They are then notified that a Facebook update was posted on their Walls, saying they have committed to their first routine.
Since both John and Susan have been exposed to tips as they built their
routine, there are some small changes they can commit to without causing
too much hassle in their lives. John starts by committing to turning the TV
off instead of setting it to stand-by. Susan commits to changing the fridge

settings to a less cool temperature. As they incrementally change these options on
the Eco Planner, a plus icon appears at the end of the routine, indicating that
they are taking greener steps. They are again notified that a Facebook update
was posted on their Walls with this information. John and Susan go to fix
dinner after turning the Eco Planner off.

5.3 DESIGN

5.3.2 THE TAC PARADIGM


The TAC paradigm [SJ09] offers a descriptive framework in which developers
can explicitly declare the relationships between tokens and one or more
constraints (such as physical constraints in the interface, or other tokens). Each
of these relationships is called a TAC. In the Eco Planner prototype there is
only one TAC, as each activity token is constrained solely by the tabletop surface
(see Table 4). The next step is creating a TAC palette, which represents all the
interactions possible within each TAC. For the Eco Planner, users can add (or
remove) a series of tokens to the tabletop surface, move them around, and
attach (or detach) tokens to one another (see Table 5). These interactions can be
modeled using a form of state diagram that allows parallel tasks to be
represented in each state (the dialogue diagram, see Figure 52). The Eco Planner
interaction is represented by only one state, which manages all of the actions
represented in the TAC palette (see Table 5).

Table 4 Part of the TAC paradigm for the Eco Planner TUI.

Finally, the interaction diagram allows for the design of the system's
response in both the physical and digital worlds. For the Eco Planner, as users
add/remove tokens to the tabletop surface, or attach/detach tokens to one another,
the system augments the tokens (or groups of tokens) in real-time by updating
the screen with digital information (see Figure 53).
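To make this declaration concrete, the single TAC and its palette of interactions can be sketched as plain code. This is an illustrative model only, assuming hypothetical class and method names (the prototype itself is a Processing sketch):

```java
import java.util.HashSet;
import java.util.Set;

// A minimal sketch of the Eco Planner's single TAC (activity tokens
// constrained by the tabletop surface) and its palette of interactions.
// Class and method names are illustrative, not taken from the prototype.
class ActivityToken {
    final String activity;
    ActivityToken attached;   // token stacked on top of this one, if any
    float x, y;               // position on the tabletop surface

    ActivityToken(String activity) { this.activity = activity; }
}

class TabletopTAC {
    private final Set<ActivityToken> onSurface = new HashSet<>();

    // TAC palette: add, remove, move, attach, detach
    void add(ActivityToken t)    { onSurface.add(t); }
    void remove(ActivityToken t) { onSurface.remove(t); }
    void move(ActivityToken t, float x, float y) { t.x = x; t.y = y; }

    boolean attach(ActivityToken top, ActivityToken bottom) {
        // Constraint of the single TAC: the bottom token must be on the surface.
        if (!onSurface.contains(bottom)) return false;
        bottom.attached = top;
        return true;
    }

    void detach(ActivityToken bottom) { bottom.attached = null; }
    int tokensOnSurface() { return onSurface.size(); }
}
```

Note how the constraint lives inside `attach`: an interaction is only valid while the token-constraint relationship declared by the TAC holds.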

5.3.2.1 THE TAC PALETTE

Table 5 The TAC palette for the Eco Planner TUI.

5.3.2.2 DIALOGUE DIAGRAM

Figure 52 The dialogue diagram for the Eco Planner TUI.

5.3.2.3 INTERACTION DIAGRAM

Figure 53 The interaction diagram for the Eco Planner TUI.

5.3.3 TOKENS' DESIGN


Due to the hardware restrictions in place when this application was being developed
(see Section 4), the tokens that were built to debug the system don't match the
initial concept. The tokens in Figure 54 were built using Lego pieces in
combination with a fiducial marker, and can be vertically attached to
one another. Time pieces (each representing 30 minutes) can be added at the front
of the tokens, indicating the duration of the activity.

Figure 54 Tokens used for the Eco Planner TUI. They can be vertically attached to one another, and
it's possible to add time pieces in the front.

5.3.1 WIREFRAMES
The design process for the Eco Planner started with iterations over a series of
wireframes that complied with the ideas exposed in the concept phase. The
next set of images represents an example of interaction where the user has five
tokens on the tabletop surface, each representing activities such as commuting
to work or watching TV (see Figure 55). As the user places the token that
represents "doing the laundry" on the options area, he can then choose
between different washing options (through multi-touch controls, see Figure
56). Depending on the options chosen, the system provides feedback
informing the user whether the new routine is better or worse than the previous
one, and allows him to commit to the changes made (see Figure 57).
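The "better or worse" feedback step can be illustrated with a small sketch that assigns each washing option a hypothetical energy cost and compares routine totals. The option names match the scenario above; the costs are invented placeholders, not real consumption figures:

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch of the routine-comparison feedback: each option carries
// a made-up energy cost, and two routines are compared by total cost.
class RoutineFeedback {
    static final Map<String, Double> OPTION_COST = Map.of(
            "Hot water", 2.0,
            "Cold water", 0.5,
            "Full tank", 1.5,
            "Half tank", 1.0);

    static double score(List<String> options) {
        return options.stream()
                .mapToDouble(o -> OPTION_COST.getOrDefault(o, 0.0))
                .sum();
    }

    // Verdict shown to the user when committing a revised routine.
    static String compare(List<String> previous, List<String> current) {
        double diff = score(current) - score(previous);
        if (diff < 0) return "greener";
        if (diff > 0) return "worse";
        return "unchanged";
    }
}
```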

Figure 55 The wireframe #1 for the Eco Planner TUI. Five activity tokens are being used. For an
image with higher resolution, please refer to Annex R.

Figure 56 The wireframe #2 for the Eco Planner TUI. One activity token is in the options area. For
an image with higher resolution, please refer to Annex R.

Figure 57 The wireframe #3 for the Eco Planner TUI. User can commit to the changes made. For an
image with higher resolution, please refer to Annex R.

Figure 58 The wireframe #4 for the Eco Planner TUI. User has committed to a new routine. For an
image with higher resolution, please refer to Annex R.

5.4 IMPLEMENTATION
Eco Planner was implemented according to the initial concept, but due to
hardware limitations the application was tested under a small different set of
conditions. While the application is perfectly capable of augmenting tokens in
the surface of a tabletop, since no multi-touch surface was available during this
stage of development, tokens where tracked from a top perspective (using
fiducial markers, see Figure 59) 7KLV GLGQ·W allow the tokens to physically
represent activities VLQFH WKH\ FRXOGQ·W KDYH D PHDQLQJIXO VKDSH RU LPDJH
attached to the top of their surface (see Figure 54).

Regardless of the testing conditions, the application is capable of detecting
attached tokens (representing a common time for all of them) and supports
routines from 7:00 to 23:00 (see Figure 60). It was built using the Processing
environment and reacTIVision technology [JGA+07]. The application code is
available in Annex S, and the XML schemas in Annex T.
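The attached-token behavior and the supported routine range can be sketched minimally as follows; the `ScheduledActivity` class and its values are hypothetical, but the rule mirrors the prototype: attached tokens share one start time, each time piece adds 30 minutes, and the whole activity must fit between 7:00 and 23:00:

```java
// Sketch of how attached tokens share a common start time while time pieces
// (30 minutes each) set an activity's duration, within the 7:00-23:00 routine
// range the prototype supports. Names and values are illustrative.
class ScheduledActivity {
    final String name;
    final int startMinutes;   // minutes since midnight, shared by attached tokens
    final int timePieces;     // each piece adds 30 minutes to the duration

    ScheduledActivity(String name, int startMinutes, int timePieces) {
        if (startMinutes < 7 * 60 || startMinutes + timePieces * 30 > 23 * 60)
            throw new IllegalArgumentException("routine must fit between 7:00 and 23:00");
        this.name = name;
        this.startMinutes = startMinutes;
        this.timePieces = timePieces;
    }

    int endMinutes() { return startMinutes + timePieces * 30; }
}
```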

Figure 59 Photo taken during the testing phase of the Eco Planner prototype. The tokens are being
scanned from a top perspective using reacTIVision technology [JGA+07].

5.4.1 PROCESSING LIBRARIES


The Eco Planner application used the following libraries:

o FullScreen – a Processing library that allows for better full screen support
[URL13].
o TUIO – a client API for tangible multi-touch surfaces. The TUIO
protocol allows the transmission of an abstract description of interactive
surfaces, including touch events and tangible object states. This protocol
encodes control data from a tracker application (e.g. based on computer
vision) and sends it to any client application that is capable of decoding
the protocol [URL21].
o g4p – a Processing library that provides a set of 2D GUI components to
enable simple user input at runtime, usable in all 2D and 3D
graphics modes in the Processing sketching environment [URL22].
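As a rough illustration of the kind of dispatch that sits behind the TUIO client callbacks, the sketch below maps fiducial IDs reported by the tracker to activity tokens. The dispatcher class, IDs, and activity names are hypothetical; the actual annexed code receives these events through the TUIO library:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical dispatcher: the tracker reports fiducial IDs appearing on and
// leaving the surface, and the application maps each ID to an activity token.
class FiducialDispatcher {
    private final Map<Integer, String> idToActivity = new HashMap<>();
    private final Set<Integer> visible = new HashSet<>();

    void register(int fiducialId, String activity) {
        idToActivity.put(fiducialId, activity);
    }

    // Invoked when the tracker reports a marker added to the surface.
    String onAdd(int fiducialId) {
        visible.add(fiducialId);
        return idToActivity.getOrDefault(fiducialId, "unknown");
    }

    // Invoked when the tracker reports a marker removed from the surface.
    void onRemove(int fiducialId) {
        visible.remove(fiducialId);
    }

    int visibleCount() { return visible.size(); }
}
```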

Figure 60 Screenshot from a test of the Eco Planner TUI. The tokens are being tracked on a surface,
while the interface is simulated on a monitor (with tokens shown as pictures of activities). For an image
with higher resolution, please refer to Annex U.

5.5 FINAL REMARKS


Despite the limitations caused by not having a multi-touch surface available
during the development of Eco Planner, this dissertation argues that this early
prototype shows that, with fairly accessible technology, it's possible to create
TUIs that tackle common issues of typical users. The documentation
presented in this chapter can inform future researchers on how to develop
systems that combine the areas of Tangible Tabletop Interaction and Tangible
Augmented Reality [KBP+01, LNB+04, ZCC+04] to motivate users to save
energy by offering novel and engaging interactions.

6. DISCUSSION AND CONCLUSION

"Our intention is to take advantage of natural physical
affordances to achieve a heightened legibility and
seamlessness of interaction between people and
information" [IU97]

In the last ten years TUIs have become an established area of HCI research, as
attested by the growing body of work and the number of venues dedicated to
TUI research. Seeking to provide seamless interfaces between people, digital
information, and physical environments, TUIs have shown enormous
potential to enhance the way in which people interact with and
leverage digital information. However, TUI research is still in its infancy. The
next step is for TUI research to better understand the
implications of interlinking the physical and digital worlds, to design Tangible
Interaction techniques for complex real-world application domains, and to
develop technologies that bridge the digital and physical worlds [SH10].

6.1 CONTRIBUTIONS
Most Tangible User Interface prototypes are either constrained to operate in
specific physical environments (such as augmented rooms [e.g. BGS+09]) or
surfaces (such as multi-touch tables [e.g. JGA+07]) – resulting in TUIs confined to
labs or specific work environments – or are composed of tangible objects that
are only aware of one another (such as the Siftables [MKM07]) – which focus
on play applications. The first contribution of this dissertation is tackling this

absence of TUIs that operate in real-world scenarios, addressing the common
problems of everyday users.
It starts by presenting Mementos, a TUI for tourism and travel. By
supporting the three major activities of a trip – travelling (by offering real-time
information for navigation aid), in-situ planning (by offering collaborative plan
creation and revision), and the handling of photos and videos from a trip (by
allowing media sharing and revisiting) – Mementos offers an interface capable
of adapting to different contexts and overcoming most of the problems of
current information systems for tourist support, while still maintaining the
benefits expected of a TUI – physicality, seamless integration with the
environment, and support for memory, epistemic actions, and
collaboration. Additionally, Mementos combines two important TUI research
areas: Ambient Displays [IU97] and Tangible Tabletop Interaction. Mementos
is presented as a proof-of-concept prototype, showing other researchers a
successful example of a TUI that can be useful to the typical user in a typical
task. This dissertation documents the development process of such a TUI:
how it was designed with both HCI and TUI-specific methods, how it combined
existing technologies and made them work together across different scenarios,
and what results were obtained from a user study conducted on part of the
interface. Ultimately, it presents a TUI that has shifted its perspective from
information-centric to action-centric.
Similarly, this dissertation also documents the development of the first TUI
dedicated to sustainability. Eco Planner allows users to create and revise
household routines while being aware of their impact on the environment.
Motivated by this dissertation's initial goal of developing and documenting
TUIs that could fit the needs of a typical user in his day-to-day activities, the
design and development of the Eco Planner application combined the areas of
Tangible Tabletop Interaction and Tangible Augmented Reality [KBP+01,
LNB+04, ZCC+04] to develop an interface that could be useful and
interaction-rich, leveraging all the benefits of TUIs. By using the
Transtheoretical Model (TTM) as a guide while brainstorming and designing the
system's functionality, Eco Planner aims to motivate users through the various
stages of behavior change. Ultimately, this prototype's documentation aims at
informing other TUI researchers of the possibilities of expanding TUIs to the
home environment, while tackling the difficult subject of creating interfaces to
motivate the sustainable behavior of their users.
In both the cases of Mementos and Eco Planner, tangible objects were
designed so that users could use them in their daily lives independently from the

system (e.g. tokens acting as souvenirs in the case of Mementos; the token that
represents "doing the laundry" being placed on the washing machine to
remind the user of his commitments in the case of the Eco Planner).
The second contribution of this dissertation is opening the discussion on
embedding the theory of Embodied Cognition in a new concrete framework
or set of guidelines for TUI design. While the frameworks and methods
presented in this work are of great value to TUI developers, they mainly
focus on how a TUI can be constructed, modeled, or categorized. They fall
short of addressing the biggest difference between TUIs and GUIs: their
physicality. This results in TUIs that aren't specifically built to maximize
usability or to reflect aspects of human cognition.
Embodied Cognition is a perspective in cognitive science that gives the body
a central role in how the mind works. It understands human cognition
as being neither centralized nor abstract; it is instead rooted in sensorimotor
processing [W02]. It further suggests that cognition exploits repeated
interaction with the environment, creating structures which advance and
simplify cognitive tasks [A03]. Embodied Cognition theory also argues that
traditional accounts of memory focus too much on the passive storage of
information, defending the idea that patterns stored in memory reflect the
nature of bodily actions and their ability to mesh with situations during a task
[G97]. The way the brain offloads some of its cognitive duties onto the
environment, instead of doing all of the computational work on its own, is
often called "scaffolding" [C97]. A particular kind of "scaffolding" is
known as epistemic actions – actions aimed at altering the world so
as to aid and augment cognitive processes, instead of acting directly on
the goal of a task [CC98].
It is clear that most of the foundations for TUIs are rooted in the theories
of Embodied Cognition. An example of work that inspired the beginning of
TUI research is that of Donald Norman [N88], which introduced the
notion of affordances: properties of an object that invite and allow
specific actions. Surely the power of TUIs lies in providing both real and
perceived affordances. Even so, there is no way of guiding the design of
tangible objects (e.g. size, shape, material) in order to create the correct
affordances for their applicability. In a different example, while the sense of
touch is our primal and only non-distal sense [SH10], there isn't a clear guide
on how to employ specific vibrotactile feedback on tangible objects for
specific purposes. Likewise, it's accepted that using two hands can
impact performance at the cognitive level, as it changes how users think about

a task [SH10]. Still, there's no formal way of defining appropriate two-handed
Tangible Interactions.

6.2 FUTURE WORK


While the theories of embodiment are often utilized in conceptual discussions
of TUIs [A07, D01, FTJ08b, HB06, HI07, KHT06], they are rarely treated
explicitly. Thus, there isn't a formal method for applying them to TUI design.
How can we design interaction with tangible objects that carry several
levels of meaning? How can we ease the way in which they move from ready-
to-hand to present-at-hand objects, and vice-versa? Although it is known that
TUIs allow for more epistemic actions, the ultimate goal is to allow
TUI developers to methodically think about and explicitly design interfaces
that make specific epistemic actions easier, thus supporting cognition.
The guidelines proposed in this dissertation served as no more than a
starting point for the conceptualization of both prototypes documented here.
They were a backdrop from which the "critique" of the actual state of TUI
frameworks was fleshed out. This "critique" was made public in the early
stages of this thesis work, by submitting a short paper to the Tangible,
Embedded and Embodied Interaction conference of 2010. While it was not
accepted, due to naturally being in an embryonic state, the reviews were
promising and motivational, classifying it as work with "a great deal of
potential". This potential was seen first hand during the user tests with
Mementos, with users engaging in clear epistemic actions while tackling the
task at hand. If there had been a clear and concise way of designing Mementos to
foster those and other epistemic actions more naturally and easily, the author
argues that the task would have been completed more quickly and effortlessly.
This work will be continued in a PhD starting in October, and it will have
four major priorities:

o Tackle the limitations mentioned in this dissertation by implementing
hardware that fits different sized and shaped tokens and that allows for
stand-alone interaction with such tokens (e.g. by embedding them with
technology that can change their physical form and texture). Another
limitation to tackle is the lack of an interactive surface that can sense
and actuate on different tokens.

o Develop a series of prototypes in various domains, such as problem
solving and spatial planning, information visualization, tangible
reminders, and entertainment and play (see Section 2.4).
 

o Conduct long-term user studies (which TUIs in general lack [SH10])
on both the prototypes that will be developed and on Mementos and Eco
Planner.

o Produce a set of guidelines or themes for TUI design based on the
theories of Embodied Cognition.
 

6.3 FINAL REMARKS


The work presented in this dissertation is a solid first step towards using TUIs as a
means to build computer systems that are: (1) easier to use and approach due to
their physicality; (2) available to the casual user who lives outside of research
labs or needs something more than intelligent toys; and (3) less cognitively taxing
and more natural to use, allowing users to focus their attention on the work at
hand as systems embody the users' real-world skills.

7. REFERENCES

[A03] Anderson, M. L. "Embodied cognition: a field guide". Artif. Intell.
149, 1 (Sep. 2003), 91-130.

[A07] A. N. Antle, "The CTI framework: Informing the design of tangible
systems for children", in Proceedings of TEI '07, pp. 195–202, NY: ACM,
2007.

[A79] R. Aish, "3D input for CAAD systems", Computer-Aided Design, vol.
11, no. 2, pp. 66–70, 1979.

[A99] R. Abrams, "Adventures in tangible computing: The work of interaction
designer 'Durrell Bishop' in context", Master's thesis, Royal College of Art,
London, 1999.

[ABL+04] D. Africano, S. Berg, K. Lindbergh, P. Lundholm, F. Nilbrink, and
A. Persson, "Designing tangible interfaces for children's collaboration", in
Proceedings of CHI04 Extended Abstracts, pp. 853–886, ACM, 2004.

[AKY00] M. W. Alibali, S. Kita, and A. Young, "Gesture and the process of
speech production: We think, therefore we gesture", Language & Cognitive
Processes, vol. 15, pp. 593–613, 2000.

[AN84] R. Aish and P. Noakes, "Architecture without numbers", Computer-
Aided Design, vol. 16, no. 6, pp. 321–328, 1984.

[B00] M. Beaudouin-Lafon, "Instrumental interaction: An interaction model
for designing post-WIMP user interfaces", in Proceedings of CHI '00, pp. 446–
453, NY: ACM, 2000.

[B07] B. Buxton, "Sketching User Experiences: Getting the Design Right and
the Right Design", Morgan Kaufmann Publishers Inc., 2007.

[B08] Barsalou, L. W. "Grounded cognition". Annu. Rev. Psychol. 59, in
press.

[B09] M. Banzi, "Getting Started with Arduino", O'Reilly.

[B86] Brooks, R. A. "A robust layered control system for a mobile
robot". IEEE Journal of Robotics and Automation 2(1):14-23, 1986.

[BB96] W. Bruns and V. Brauer, "Bridging the gap between real and virtual
modeling – A new approach to human-computer interaction", in Proceedings
of the IFIP 5.10 Workshop on Virtual Prototyping, Providence, September,
1994, IFIP, 1996.

[BC03] Brown, B. and Chalmers, M. "Tourism and mobile technology".
In Eighth European Conference on Computer Supported Cooperative Work,
335-354, 2003.

[BCL+05] Borriello, G., Chalmers, M., LaMarca, A., and Nixon, P.
"Delivering real-world ubiquitous location systems". Commun. ACM 48,
36-41, 2005.

[BD97] S. Brave and A. Dahley, "inTouch: A Medium for Haptic Interpersonal
Communication", in Extended Abstracts of CHI '97, pp. 363–364, NY: ACM,
1997.

[BG07] Boussemart, B. and Giroux, S. "Tangible User Interfaces for
Cognitive Assistance". In Proceedings of the 21st International Conference on
Advanced Information Networking and Applications Workshops - Volume 02
(May 21 - 23, 2007).

[BG10] M. Baskinger and M. Gross, "Tangible Interaction = Form +
Computing", Interactions, vol. xvii, no. 1, pp. 6–11, 2010.

[BGS+09] Billinghurst, M., Grasset, R., Seichter, H., and Dünser, A.
"Towards Ambient Augmented Reality with Tangible Interfaces". In 13th
International Conference on Human-Computer Interaction, vol. 5612, 387-396,
2009.

[BLS05] Bonanni, L., Lee, C., and Selker, T. "Attention-based design of
augmented reality interfaces". In CHI '05 Extended Abstracts. ACM, New
York, 1228-1231, 2005.

[BJD04] J. Buur, M. V. Jensen, and T. Djajadiningrat, "Hands-only scenarios
and video action walls: Novel methods for tangible user interaction design", in
Proceedings of DIS04, pp. 185–192, NY: ACM, 2004.

[BKP01] M. Billinghurst, H. Kato, and I. Poupyrev, "The MagicBook –
Moving seamlessly between reality and virtuality", IEEE Computer Graphics
and Applications, pp. 1–4, May/June 2001.

[BRS03] R. Ballagas, M. Ringel, M. Stone, and J. Borchers, "iStuff: A physical
user interface toolkit for ubiquitous computing environments", in Proceedings
of CHI '03, pp. 537–544, NY: ACM, 2003.

[BWD07] J. Brewer, A. Williams, and P. Dourish, "A handle on what's going
on: Combining tangible interfaces and ambient displays for collaborative
groups", in Proceedings of TEI07, pp. 3–10, NY: ACM, 2007.

[C01] Clark, Andy. "Reasons, Robots and the Extended Mind". Mind &
Language 16(2):121-145, 2001.
[C93] A. Cypher, ed., "Watch What I Do: Programming by Demonstration",
The MIT Press, 1993.

[C97] Clark, Andy. "Being There". Cambridge, MA: MIT Press, 1997.

[CC98] Clark, A. & Chalmers, D. "The extended mind". Analysis
58(1):7–19, 1998.

[CDJ+02] K. Camarata, E. Y. Do, B. R. Johnson, and M. D. Gross,
"Navigational blocks: Navigating information space with tangible media", in
Proceedings of the 7th International Conference on Intelligent User Interfaces
(IUI '02), pp. 31–38, NY: ACM, 2002.

[CL05] Chiu, D. K. and Leung, H. "Towards ubiquitous tourist service
coordination and integration". In Proc. of the 7th International Conference on
Electronic Commerce (ICEC '05), vol. 113, 574-581, 2005.

[CLS06] H. Chung, C. J. Lee, and T. Selker, "Lover's cups: Drinking interfaces
as new communication channels", in Proceedings of CHI '06, NY: ACM, 2006.

[CRK+01] A. Chang, B. Resner, B. Koerner, X. Wang, and H. Ishii,
"LumiTouch: An emotional communication device", in Proceedings of CHI '01
Extended Abstracts, pp. 313–314, NY: ACM, 2001.

[CPP+05] Chipchase, J., Persson, P., Piippo, P., Aarras, M., and Yamamoto, T.
"Mobile essentials: field study and concepting". In Proceedings of the
2005 Conference on Designing for User Experience (San Francisco, California,
November 03 - 05, 2005).

[CRR08] N. Couture, G. Rivière, and P. Reuter, "GeoTUI: A tangible user
interface for geoscience", in Proceedings of TEI08, pp. 89–96, NY: ACM,
2008.

[CWP99] J. Cohen, M. Withgott, and P. Piernot, "Logjam: A tangible multi-
person interface for video logging", in Proceedings of CHI99, pp. 128–135,
NY: ACM, 1999.

[D01] P. Dourish, "Where the Action Is: The Foundations of Embodied
Interaction", MIT Press, 2001.

[DBS09] Doering, T., Beckhaus, S., and Schmidt, A. "Towards a sensible
integration of paper-based tangible user interfaces into creative work
processes". In CHI '09.

[DH09] Dror, I. E. and Harnad, S. "Offloading cognition onto cognitive
technology". In Cognition Distributed: How Cognitive Technology Extends Our
Minds. John Benjamins, Amsterdam, 2009.

[EB09] D. Edge and A. Blackwell, "Peripheral tangible interaction by analytic
design", in Proceedings of TEI09, pp. 69–76, NY: ACM, 2009.

[F95] J. Frazer, "An Evolutionary Architecture", Themes VII, London:
Architectural Association, 1995.
[F96] G. W. Fitzmaurice, "Graspable User Interfaces", Dissertation, Computer
Science, University of Toronto, Canada, 1996.

[FB97] G. W. Fitzmaurice and W. Buxton, "An empirical evaluation of
graspable user interfaces: Towards specialized, space-multiplexed input", in
Proceedings of CHI97, pp. 43–50, NY: ACM, 1997.

[FF80] J. Frazer and P. Frazer, "Intelligent physical three-dimensional
modelling systems", in Proceedings of Computer Graphics '80, pp. –370,
Online Publications, 1980.

[FF82] J. Frazer and P. Frazer, "Three-dimensional data input devices", in
Proceedings of Computer Graphics in the Building Process, Washington:
National Academy of Sciences, 1982.

[FFL10] Froehlich, J., Findlater, L., and Landay, J. "The design of eco-
feedback technology". In Proceedings of the 28th International Conference on
Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 - 15,
2010). CHI '10. ACM, New York, NY, 1999-2008.

[FGH+00] K. P. Fishkin, A. Gujar, B. L. Harrison, T. P. Moran, and R. Want,
"Embodied user interfaces for really direct manipulation", Communications of
the ACM, vol. 43, no. 9, pp. 75–80, 2000.

[FIB95] G. W. Fitzmaurice, H. Ishii, and W. Buxton, "Bricks: Laying the
foundations for graspable user interfaces", in Proceedings of CHI 95, pp. 442–
449, NY: ACM, 1995.

[FKY05] L. Feijs, S. Kyffin, and B. Young, "Proceedings of Design and
Semantics of Form and Movement – DesForM 2005". Foreword. Koninklijke
Philips Electronics N.V., Eindhoven, p. 3, 2005.

[FMI00] P. Frei, V. Su, B. Mikhak, and H. Ishii, "curlybot: Designing a New
Class of Computational Toys", in Proceedings of CHI 00, pp. 129–136, NY:
ACM, 2000.

[FT06] Y. Fernaeus and J. Tholander, "Finding design qualities in a tangible
programming space", in Proceedings of CHI06, pp. 447–456, NY: ACM, 2006.

[FTJ08] Y. Fernaeus, J. Tholander, and M. Jonsson, "Beyond representations:
Towards an action-centric perspective on tangible interaction", International
Journal of Arts and Technology, vol. 1, no. 3/4, pp. 249–267, 2008.

[FTJ08b] Y. Fernaeus, J. Tholander, and M. Jonsson, "Towards a new set of
ideals: Consequences of the practice turn in tangible interaction", in
Proceedings of TEI '08, pp. 223–230, NY: ACM, 2008.

[G03] S. Goldin-Meadow, "Hearing Gesture: How Our Hands Help Us Think",
Harvard University Press, 2003.

[G10] Garzón, F. "The reactive brain and the extended mind: A fourth
position". Unpublished manuscript, available online (July 2010).
https://ftp.um.es/logica/paco_calvo/ReactiveBrainExtendedMind.pdf
[G97] Glenberg, A. "What memory is for". Behavioral and Brain
Sciences 20:1, 1997.

[GC97] Glos, J. W. and Cassell, J. "Rosebud: a place for interaction
between memory, story, and self". In Proceedings of the 2nd International
Conference on Cognitive Technology (CT '97) (August 25 - 28, 1997).

[GF01] S. Greenberg and C. Fitchett, "Phidgets: Easy development of physical
interfaces through physical widgets", in Proceedings of UIST '01, pp. 209–218,
NY: ACM, 2001.

[GPS+04] Garzotto, F., Paolini, P., Speroni, M., Proll, B., Retschitzegger, W.,
and Schwinger, W. "Ubiquitous Access to Cultural Tourism Portals". In
Proceedings of Database and Expert Systems Applications. IEEE, 2004.

[GSH+07] A. Girouard, E. T. Solovey, L. M. Hirshfield, S. Ecott, O. Shaer,
and R. J. K. Jacob, "Smart blocks: A tangible mathematical manipulative", in
Proceedings of TEI07, pp. 183–186, NY: ACM, 2007.

[GSO05] A. Gillet, M. Sanner, D. Stoffler, and A. Olson, "Tangible augmented
interfaces for structural molecular biology", IEEE Computer Graphics &
Applications, vol. 25, no. 2, pp. 13–17, 2005.

[HAM+07] B. Hartmann, L. Abdulla, M. Mittal, and S. R. Klemmer,
"Authoring sensor-based interactions by demonstration with direct
manipulation and pattern recognition", in Proceedings of CHI '07, pp. 145–
154, NY: ACM, 2007.

[HB06] Hornecker, E. and Buur, J. "Getting a grip on tangible
interaction: a framework on physical space and social interaction". In Proc. of
CHI '06, 2006.

[HE04] E. van den Hoven and B. Eggen, "Tangible computing in everyday life:
Extending current frameworks for tangible user interfaces with personal
objects", in Proceedings of EUSAI 2004, pp. 230–242, Springer, LNCS 3295,
2004.

[HFA+07] E. van den Hoven, J. Frens, D. Aliakseyeu, J. B. Martens, K.
Overbeeke, and P. Peters, "Design Research and Tangible Interaction", in
Proceedings of TEI '07, pp. 109–115, 2007.

[HGH10] He, H. A., Greenberg, S., and Huang, E. M. "One size does
not fit all: applying the transtheoretical model to energy feedback technology
design". In Proceedings of the 28th International Conference on Human
Factors in Computing Systems (Atlanta, Georgia, USA, April 10 - 15, 2010).

[HHK00] Hollan, J., Hutchins, E., and Kirsh, D. "Distributed cognition:
toward a new foundation for human-computer interaction research". ACM
Trans. Comput.-Hum. Interact. 7, 2 (Jun. 2000), 174-196.

[HHO08] B. Hengeveld, C. Hummels, K. Overbeeke, R. Voort, H. van Balkom,
and J. de Moor, "Let me actuate you", in Proceedings of TEI08, pp. 159–166,
NY: ACM, 2008.
[HHO09] B. Hengeveld, C. Hummels, and K. Overbeeke, "Tangibles for
toddlers learning language", in Proceedings of TEI09, pp. 161–168, NY: ACM,
2009.

[HI07] J. Hurtienne and J. H. Israel, "Image schemas and their metaphorical
extensions: Intuitive patterns for tangible interaction", in Proceedings of
TEI07, pp. 127–134, NY: ACM, 2007.

[HIW08] J. Hurtienne, J. H. Israel, and K. Weber, "Cooking up real world
business applications combining physicality, digitality, and image schemas", in
Proceedings of TEI '08, pp. 239–246, NY: ACM, 2008.

[HMD+08] Hornecker, E., Marshall, P., Dalton, N. S., and Rogers, Y.
"Collaboration and interference: awareness with mice or touch input". In Proc.
of the ACM 2008 Conference on Computer Supported Cooperative Work
(CSCW '08), 2008.

[HPG+94] K. Hinckley, R. Pausch, J. Goble, and N. Kassel, "Passive real-
world interface props for neurosurgical visualization", in Proceedings of
CHI94, pp. 452–458, NY: ACM, 1994.

[HRL99] L. E. Holmquist, J. Redström, and P. Ljungstrand, "Token-based
access to digital information", in Proceedings of the 1st International
Symposium on Handheld and Ubiquitous Computing (H. Gellersen, ed.), pp.
234–245, Lecture Notes in Computer Science, vol. 1707, London: Springer-
Verlag, 1999.

[HSC+09] M. S. Horn, E. T. Solovey, R. J. Crouser, and R. J. K. Jacob,
"Comparing the use of tangible and graphical programming interfaces for
informal science education", in Proceedings of CHI '09, pp. 975–984, NY:
ACM, 2009.

[HSJ08] M. S. Horn, E. T. Solovey, and R. J. K. Jacob, "Tangible programming
for informal science learning: Making TUIs work for museums", in
Proceedings of the 7th International Conference on Interaction Design and
Children (IDC '08), pp. 194–201, NY: ACM, 2008.

[HYG03] C. J. Huang, E. Yi-Luen Do, and M. D. Gross, "MouseHaus table",
in Proceedings of CAAD Futures, 2003.

[I08] Ishii, H. "Tangible bits: beyond pixels". In Proc. of the 2nd
International Conference on Tangible and Embedded Interaction (TEI '08),
2008.

[I08b] H. Ishii, "The tangible user interface and its evolution",
Communications of the ACM, vol. 51, no. 6, pp. 32–36, 2008.

[IML01] Ishii, H., Mazalek, A., and Lee, J. "Bottles as a minimal
interface to access digital information". In CHI '01 Extended Abstracts on
Human Factors in Computing Systems (Seattle, Washington, March 31 - April
05, 2001). ACM, New York, NY, 187-188.

[IU97] H. Ishii and B. Ullmer, "Tangible bits: Towards seamless interfaces
between people, bits, and atoms", in Proceedings of CHI97, pp. 234–241, NY:
ACM, 1997.
>-@ 6 -RUGj ´2Q VWDJH 7KH UHDFWDEOH DQG RWKHU PXVLFDO WDQ JLEOHV JR UHDOµ
International Journal of Arts and Technology (IJART), vol. 1, no. 3/4, pp.
268²287, Special Issue on Tangible and Embedded Interaction 2008.

>-*$@ 6 -RUGj * *HLJHU 0 $ORQVR DQG 0 .DOWHQEUXQQHU ´7KH
reacTable: Exploring the syner gy between live music performance and tabletop
WDQJLEOHLQWHUIDFHVµLQ3URFHHGLQJVRI7(,·SS ²146, NY: ACM, 2007.

[JGH+08] Jacob, R. J., Girouard, A., Hirshfield, L. M., Horn, M. S., Shaer, O.,
6RORYH\(7DQG=LJHOEDXP-´5HDOLW\ -based interaction: a framework
for post-:,03 LQWHUIDFHVµ ,Q 3URF RI WKH 7ZHQW\ -Sixth Annual SIGCHI
Conference on Human Factors in Computing Systems. CHI '08.

>-,3@5-.-DFRE+,VKLL*3DQJDURDQG-3DWWHQ´$WDQJLEOHLQWHUIDFH
for organizing iQIRUPDWLRQ XVLQJ D JULGµ LQ 3URFHHGLQJV RI &+, · SS  ²
346, NY: ACM, 2002.

>.@ 0 .DOWHQEUXQQHU ´:HEVLWH RQ 7DQJLEOH 0XVLFµ 5HDG $SULO 
http://modin.yuri.at/tangibles/ .

[KB00] H. Kato, M. Billinghurst, et al., "Virtual object manipulation on a table-top AR environment," in Proceedings of the International Symposium on Augmented Reality (ISAR 2000), pp. 111–119, 2000.

[KB08] J. J. Kalanithi and V. M. Bove, "Connectibles: Tangible social networks," in Proceedings of TEI '08, pp. 199–206, NY: ACM, 2008.

[KB99] H. Kato and M. Billinghurst, "Marker tracking and HMD calibration for a video-based augmented reality conferencing system," in Proceedings of the 2nd International Workshop on Augmented Reality (IWAR 99), 1999.

[KBP+01] H. Kato, M. Billinghurst, I. Poupyrev, N. Tetsutani, and K. Tachibana, "Tangible augmented reality for human computer interaction," in Proceedings of Nicograph 2001, Nagoya, Japan, 2001.

[KHN+03] K. Kobayashi, M. Hirano, A. Narita, and H. Ishii, "A tangible interface for IP network simulation," in Proceedings of CHI '03 Extended Abstracts, pp. 800–801, NY: ACM, 2003.

[KHT06] Klemmer, S. R., Hartmann, B., and Takayama, L. 2006. "How bodies matter: five themes for interaction design". In Proc. of the 6th Conference on Designing Interactive Systems. DIS '06.

[KLL+04] S. R. Klemmer, J. Li, J. Lin, and J. A. Landay, "Papier-Mâché: Toolkit support for tangible input," in Proceedings of CHI '04, pp. 399–406, NY: ACM, 2004.

[KM08] M. J. Kim and M. L. Maher, "The impact of tangible user interfaces on designers' spatial cognition," Human-Computer Interaction, vol. 23, no. 2, 2008.

[KM94] D. Kirsh and P. Maglio, "On distinguishing epistemic from pragmatic actions," Cognitive Science, vol. 18, no. 4, pp. 513–549, 1994.

[KST+09] D. S. Kirk, A. Sellen, S. Taylor, N. Villar, and S. Izadi, "Putting the physical into the digital: Issues in designing hybrid interactive surfaces," in Proceedings of HCI 2009, 2009.

[L93] B. Laurel, "Computers as Theatre," Addison-Wesley Professional, 1993.

[LBB+07] E. van Loenen, T. Bergman, V. Buil, K. van Gelder, M. Groten, G. Hollemans, J. Hoonhout, T. Lashina, and S. van de Wijdeven, "EnterTaible: A solution for social gaming experiences," in Tangible Play: Research and Design for Tangible and Tabletop Games, Workshop at the 2007 Intelligent User Interfaces Conference, pp. 16–19, Honolulu, Hawaii, USA, 2007.

[LHY+08] J. Leitner, M. Haller, K. Yun, W. Woo, M. Sugimoto, and M. Inami, "IncreTable, a mixed reality tabletop game experience," in Proceedings of the 2008 International Conference on Advances in Computer Entertainment Technology, pp. 9–16, NY: ACM, 2008.

[LJ80] Lakoff, G. and Johnson, M. 1980. "The metaphorical structure of the human conceptual system". Cognitive Science, 4, pp. 195–208.

[LNB+04] G. A. Lee, C. Nelles, M. Billinghurst, and G. J. Kim, "Immersive authoring of tangible augmented reality applications," in Proceedings of the IEEE/ACM International Symposium on Mixed and Augmented Reality, pp. 172–181, IEEE Computer Society, 2004.

[LPI07] V. LeClerc, A. Parkes, and H. Ishii, "Senspectra: A computationally augmented physical modeling toolkit for sensing and visualization of structural strain," in Proceedings of CHI '07, pp. 801–804, NY: ACM, 2007.

[M04] T. S. McNerney, "From turtles to tangible programming bricks: Explorations in physical language design," Personal and Ubiquitous Computing, vol. 8, pp. 326–337, 2004.

[M] B. J. Miller, "Cognition and the City".

[M07] Marshall, P. 2007. "Do tangible interfaces enhance learning?". In Proc. of the 1st International Conference on Tangible and Embedded Interaction. TEI '07.

[MDM+04] Matthews, T., Dey, A. K., Mankoff, J., Carter, S., and Rattenbury, T. 2004. "A toolkit for managing user attention in peripheral displays". In UIST '04. ACM, New York, NY, 247–256.

[MF99] W. E. Mackay and A.-L. Fayard, "Designing interactive paper: Lessons from three augmented reality projects," in Proceedings of IWAR '99, International Workshop on Augmented Reality, Natick, MA, 1999.

[MKM07] Merrill, D., Kalanithi, J., and Maes, P. 2007. "Siftables: towards sensor network user interfaces". In TEI '07.

[MPR03] Marshall, P., Price, S., and Rogers, Y. 2003. "Conceptualizing tangibles to support learning". In IDC '03. ACM, New York, NY, 101–109.

[MRG+07] E. Mugellini, E. Rubegni, S. Gerardi, and O. A. Khaled, "Using personal objects as tangible interfaces for memory recollection and sharing," in Proceedings of TEI '07, pp. 231–238, NY: ACM, 2007.

[MWC03] Maglio, P. P., Wenger, M. J., and Copeland, A. M. 2003. "The benefits of epistemic action outweigh the costs". In Proceedings of the Twenty-Fifth Annual Conference of the Cognitive Science Society.

[N88] D. Norman, "The Psychology of Everyday Things," New York: Basic Books, 1988.

[NNG03] H. Newton-Dunn, H. Nakano, and J. Gibson, "Block jam: A tangible interface for interactive music," in Proceedings of the 2003 Conference on New Interfaces for Musical Expression (NIME-03), pp. 170–177, 2003.

[OF04] C. O'Malley and D. Stanton Fraser, "Literature review in learning with tangible technologies," NESTA Futurelab report 12, Bristol, 2004.

[P67] Polanyi, M. "The Tacit Dimension". London: Routledge & Kegan Paul Ltd., 108 pp., 1967.

[P76] R. Perlman, "Using Computer Technology to Provide a Creative Learning Environment for Preschool Children," MIT Logo Memo.

[P95] R. Poynor, "The hand that rocks the cradle," I.D. Magazine, pp. 60–65, May/June 1995.

[PFD+95] Parsons, L. M., Fox, P. T., Downs, J. H., Glass, T., Hirsch, T. B., Martin, C. C., Jerabek, P. A., and Lancaster, J. L. 1995. "Use of implicit motor imagery for visual shape discrimination as revealed by PET". Nature, 375, 54–58.

[PGR10] Patel, S. N., Gupta, S., and Reynolds, M. S. 2010. "The design and evaluation of an end-user-deployable, whole house, contactless power consumption sensor". In Proceedings of the 28th International Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10–15, 2010). CHI '10. ACM, New York, NY, 2471–2480.

[PH09] E. W. Pedersen and K. Hornbæk, "mixiTUI: A tangible sequencer for electronic live performances," in Proceedings of TEI '09, pp. 223–230, NY: ACM, 2009.

[PHB01] Paradiso, J. A., Hsiao, K., and Benbasat, A. 2001. "Tangible music interfaces using passive magnetic tags". In Proc. of NIME '01.

[PI00] J. Patten and H. Ishii, "A comparison of spatial organization strategies in graphical and tangible user interfaces," in Proceedings of Designing Augmented Reality Environments, DARE '00, pp. 41–50, NY: ACM, 2000.

[PI07] J. Patten and H. Ishii, "Mechanical constraints as computational constraints in tabletop tangible interfaces," in Proceedings of CHI '07, pp. 809–818, NY: ACM, 2007.

[PMR02] Poupyrev, I., Maruyama, S., and Rekimoto, J. 2002. "Ambient touch". In UIST '02. ACM, NY, 51–60.

[PNO07] I. Poupyrev, T. Nashida, and M. Okabe, "Actuation and tangible user interfaces: The Vaucanson duck, robots, and shape displays," in Proceedings of Tangible and Embedded Interaction, TEI '07, pp. 205–212, NY: ACM, 2007.

[PRI02] J. Patten, B. Recht, and H. Ishii, "Audiopad: A tag-based interface for musical performance," in Proceedings of the International Conference on New Interfaces for Musical Expression (NIME02), pp. 24–26, 2002.

[PSP10] Pierce, J., Schiano, D., and Paulos, E. 2010. "Home, Habits, and Energy: Examining Domestic Interactions and Energy Consumption".

[R06] Rogers, Y. "Moving on from Weiser's Vision of Calm Computing: Engaging UbiComp Experiences". In Proc. Ubicomp 2006, pp. 404–421. Springer, Heidelberg, 2006.

[R93] M. Resnick, "Behavior construction kits," Communications of the ACM, vol. 36, no. 7, pp. 64–71, July 1993.

[R97] Rekimoto, J. "Pick-and-Drop: A Direct Manipulation Technique for Multiple Computer Environments". In Proceedings of UIST '97, ACM Press, 1997.

[RC99] K. Ryokai and J. Cassell, "StoryMat: A play space for collaborative storytelling," in Proceedings of CHI '99, pp. 272–273, NY: ACM, 1999.

[RDM10] Riche, Y., Dodge, J., and Metoyer, R. A. 2010. "Studying always-on electricity feedback in the home". In Proceedings of the 28th International Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10–15, 2010). CHI '10. ACM, New York, NY, 1995–1998.

[RFK+98] Rauterberg, M., Fjeld, M., Krueger, H., Bichsel, M., Leonhardt, U., and Meier, M. 1998. "BUILD-IT: A planning tool for construction and design". In ACM CHI Conference Extended Abstracts, pp. 177–178.

[RHB+04] Rogers, Y., Hazlewood, W., Blevis, E., and Lim, Y. 2004. "Finger talk: collaborative decision-making using talk and fingertip interaction around a tabletop display". In CHI '04 Extended Abstracts on Human Factors in Computing Systems (Vienna, Austria, April 24–29, 2004).

[RJT04] Raffle, H., Joachim, M. W., and Tichenor, J. 2004. "Super cilia skin: an interactive membrane". In CHI '04. ACM, New York, NY, 808–809.

[RMB+98] M. Resnick, F. Martin, R. Berg, R. Borovoy, V. Colella, K. Kramer, and B. Silverman, "Digital manipulatives: New toys to think with," in Proceedings of CHI '98, pp. 281–287, NY: ACM, 1998.

[RMI04] K. Ryokai, S. Marti, and H. Ishii, "I/O brush: Drawing with everyday objects as ink," in Proceedings of CHI 2004, pp. 303–310, NY: ACM, 2004.

[RPI04] H. S. Raffle, A. J. Parkes, and H. Ishii, "Topobo: A constructive assembly system with kinetic memory," in Proceedings of CHI '04, pp. 647–654, NY: ACM, 2004.

[S99] J. A. Seitz, "The bodily basis of thought," in Proceedings of the Annual Symposium of the Jean Piaget Society, Mexico City, Mexico, 1999.

[S99b] Schwartz, D. L. 1999. "Physical imagery: kinematic vs. dynamic models". Cognitive Psychology, 38, 433–464.

[SA07] Schacter, D. L. and Addis, D. R. 2007. "The cognitive neuroscience of constructive memory: remembering the past and imagining the future". Philosophical Transactions of the Royal Society of London B: Biological Sciences, 362, 773–786.

[SBG+08] J. Sturm, T. Bekker, B. Groenendaal, R. Wesselink, and B. Eggen, "Key issues for the successful design of an intelligent, interactive playground," in Proceedings of IDC 2008, pp. 258–265, NY: ACM, 2008.

[SCB07] L. Sitorus, S. S. Cao, and J. Buur, "Tangible user interfaces for configuration practices," in Proceedings of TEI '07, pp. 223–230, NY: ACM, 2007.

[SDM+01] L. Sherman, A. Druin, J. Montemayor, A. Farber, M. Platner, S. Simms, J. Porteous, H. Alborzi, J. Best, J. Hammer, A. Kruskal, J. Matthews, E. Rhodes, C. Cosans, and L. Lal, "StoryKit: Tools for children to build room-sized interactive experiences," in Extended Abstracts (Interactive Video Poster), CHI 2001, NY: ACM, 2001.

[SGH94] Soloway, E., Guzdial, M., and Hay, K. E. 1994. "Learner-centered design: the challenge for HCI in the 21st century". interactions 1, 2 (Apr. 1994), 36–48.

[SH10] Shaer, O. and Hornecker, E. 2010. "Tangible User Interfaces: Past, Present, and Future Directions". Foundations and Trends in Human-Computer Interaction 3, 1–2 (Jan. 2010), 1–137.

[SHS+99] A. Singer, D. Hindus, L. Stifelman, and S. White, "Tangible progress: Less is more in somewire audio spaces," in Proceedings of CHI '99, pp. 104–111, NY: ACM, 1999.

[SLC+04] O. Shaer, N. Leland, E. H. Calvillo-Gamez, and R. J. K. Jacob, "The TAC paradigm: Specifying tangible user interfaces," Personal and Ubiquitous Computing, vol. 8, pp. 359–369, 2004.

[SIW+02] E. Sharlin, Y. Itoh, B. Watson, Y. Kitamura, S. Sutphen, and L. Liu, "Cognitive cubes: A tangible user interface for cognitive assessment," in Proceedings of CHI '02, pp. 347–354, NY: ACM, 2002.

[SK93] H. Suzuki and H. Kato, "AlgoBlock: A tangible programming language, a tool for collaborative learning," in Proceedings of the 4th European Logo Conference (Eurologo93), pp. 297–303, Athens, Greece, 1993.

[SK95] H. Suzuki and H. Kato, "Interaction-level support for collaborative learning: AlgoBlock, an open programming language," in Proceedings of CSCL, 1995.

[SV08] B. Schiettecatte and J. Vanderdonckt, "AudioCubes: A distributed cube tangible interface based on interaction range for sound design," in Proceedings of TEI '08, pp. 3–10, NY: ACM, 2008.

[SWK+04] Sharlin, E., Watson, B., Kitamura, Y., Kishino, F., and Itoh, Y. 2004. "On tangible user interfaces, humans and spatiality". Personal and Ubiquitous Computing 8, 5, 338–346.

[T07] S. Turkle, "Evocative Objects: Things We Think With," Cambridge, MA: MIT Press, 2007.

[TC] World Travel & Tourism Council, "Travel & Tourism Economic Impact," April, retrieved from http://www.wttc.org/.

[TKR+08] L. Terrenghi, D. Kirk, H. Richter, S. Krämer, O. Hilliges, and A. Butz, "Physical handles at the interactive surface: Exploring tangibility and its benefits," in Proceedings of AVI 2008, pp. 138–145, NY: ACM, 2008.

[UI00] B. Ullmer and H. Ishii, "Emerging frameworks for tangible user interfaces," IBM Systems Journal, vol. 39, no. 3–4, pp. 915–931, July 2000.

[UI98] J. Underkoffler and H. Ishii, "Illuminating light: An optical design tool with a luminous-tangible interface," in Proceedings of CHI '98, pp. 542–549, NY: ACM, 1998.

[UI99] J. Underkoffler and H. Ishii, "Urp: A luminous-tangible workbench for urban planning and design," in Proceedings of CHI '99, pp. 386–393, NY: ACM, 1999.

[UI99b] B. Ullmer and H. Ishii, "mediaBlocks: Tangible interfaces for online media," in Proceedings of CHI '99, pp. 31–32, NY: ACM, 1999.

[UIJ05] B. Ullmer, H. Ishii, and R. Jacob, "Token+constraint systems for tangible interaction with digital information," ACM Transactions on Computer-Human Interaction, vol. 12, no. 1, pp. 81–118, 2005.

[URL1] http://tangible.media.mit.edu/

[URL2] http://www.picocricket.com/

[URL3] http://tek.sapo.pt/noticias/computadores/computadores_desaconselhados_a_criancas_com_m_1071229.html

[URL4] http://www.handyboard.com/

[URL5] http://www.handyboard.com/cricket/

[URL6] http://mindstorms.lego.com/

[URL7] http://www.phidgets.com/

[URL7] http://www.littlebits.cc

[URL8] http://code.google.com/p/shake-drivers/

[URL9] http://www.processing.org

[URL10] SHAKE's User Manual, http://code.google.com/p/shake-drivers/downloads/detail?name=SHAKE%20SK6%20User%20Manual%20Rev%20J.pdf&can=2&q=

[URL11] http://www.extrapixel.ch/processing/bluetoothDesktop/

[URL12] http://www.touchatag.com/

[URL13] http://www.superduper.org/processing/fullscreen_api/

[URL14] http://code.google.com/p/touchatag -processing/

[URL15] http://www.libnfc.org/documentation/installation

[URL16] http://processing.org/discourse/yabb2/YaBB.pl?num=1269777690

[URL17] http://www.libnfc.org/

[URL18] http://www.flickr.com/

[URL19] http://flickrj.sourceforge.net/

[URL20] http://www.mysecondplace.org/?p=398

[URL21] http://www.tuio.org/

[URL22] http://www.lagers.org.uk/processing.html

[VSK08] M. Virnes, E. Sutinen, and E. Kärnä-Lin, "How children's individual needs challenge the design of educational robotics," in Proceedings of IDC 2008, pp. 274–281, NY: ACM, 2008.

[W02] Wilson, M. 2002. "Six views of embodied cognition". Psychonomic Bulletin & Review, 9, 625–636.

[W93] M. Weiser, "Some computer science issues in ubiquitous computing," Communications of the ACM, vol. 36, no. 7, pp. 74–84, 1993.

[W98] F. R. Wilson, "The Hand: How Its Use Shapes the Brain, Language, and Human Culture," Vintage Books, Random House, 1998.

[WB95] Weiser, M. and Brown, J. S. "Designing Calm Technology". http://www.ubiq.com/hypertext/weiser/calmtech/calmtech.htm, December 1995.

[WDO04] S. A. Wensveen, J. P. Djajadiningrat, and C. J. Overbeeke, "Interaction frogger: A design framework to couple action and function through feedback and feedforward," in Proceedings of DIS '04, pp. 177–184, NY: ACM, 2004.

[WG01] G. Weinberg and S. Gan, "The squeezables: Toward an expressive and interdependent multi-player musical instrument," Computer Music Journal, vol. 25, no. 2, pp. 37–45, 2001.

[WKS+08] T. Westeyn, J. Kientz, T. Starner, and G. Abowd, "Designing toys with automatic play characterization for supporting the assessment of a child's development," in Proceedings of IDC 2008, pp. 89–92, NY: ACM, 2008.

[WMG93] P. Wellner, W. Mackay, and R. Gold, "Computer-augmented environments: Back to the real world," Communications of the ACM, vol. 36, no. 7, pp. 24–26, 1993.

[WP02] P. Wyeth and H. C. Purchase, "Tangible programming elements for young children," in Proceedings of CHI '02 Extended Abstracts, pp. 774–775, NY: ACM, 2002.

[WWH08] J. Werner, R. Wettach, and E. Hornecker, "United-pulse: Feeling your partner's pulse," in Proceedings of MobileHCI 2008, pp. 552–553, NY: ACM, 2008.

[Z97] J. Zhang, "The nature of external representations in problem solving," Cognitive Science, vol. 21, pp. 179–217, 1997.

[ZAR05] Zuckerman, O., Arida, S., and Resnick, M. 2005. "Extending tangible interfaces for education: digital Montessori-inspired manipulatives". In Proc. of the SIGCHI Conference on Human Factors in Computing Systems. CHI '05.

[ZCC+04] Z. Zhou, A. D. Cheok, T. Chan, J. H. Pan, and Y. Li, "Interactive entertainment systems using tangible cubes," in Proceedings of IE 2004, Australian Workshop on Interactive Entertainment, 2004.

[ZHS+07] J. Zigelbaum, M. Horn, O. Shaer, and R. Jacob, "The tangible video editor: Collaborative video editing with active tokens," in Proceedings of TEI '07, pp. 43–46, NY: ACM, 2007.

[ZJL+09] G. Zufferey, P. Jermann, A. Lucchi, and P. Dillenbourg, "TinkerSheets: Using paper forms to control and visualize tangible simulations," in Proceedings of TEI '09, pp. 377–384, NY: ACM, 2009.

[ZN94] J. Zhang and D. A. Norman, "Representations in distributed cognitive tasks," Cognitive Science, vol. 18, no. 1, pp. 87–122, 1994.
