Tangible User Interfaces and Embodied Cognition
Augusto Esteves
CONTENTS
ABSTRACT ........................................................................................................... 8
1. INTRODUCTION ............................................................................................... 9
1.1 THESIS GOALS ........................................................................................ 10
1.1.1 THE DEVELOPMENT OF TUIS FOR NOVEL DOMAINS.................. 10
1.1.2 THE CREATION OF GUIDELINES FOR TUI DESIGN BASED ON THE
THEORIES OF EMBODIED COGNITION .................................................. 11
2.4.8 TANGIBLE REMINDERS .................................................................. 33
2.5 CURRENT FRAMEWORKS AND CLASSIFICATIONS ............................... 33
2.5.1 PROPERTIES OF GRASPABLE USER INTERFACES ......................... 34
2.5.2 CONCEPT AND THE MCRIT INTERACTION MODEL FOR TUIS .. 34
2.5.3 CLASSIFICATION OF TUIS .............................................................. 35
2.5.4 MAPPINGS BETWEEN THE PHYSICAL AND THE DIGITAL ........... 36
2.5.5 TOKENS AND CONSTRAINTS ......................................................... 37
2.5.6 TANGIBLE INTERACTION .............................................................. 38
2.6 TECHNOLOGIES FOR BUILDING TUIS ................................................. 39
2.6.1 RADIO-FREQUENCY IDENTIFICATION ......................................... 39
2.6.2 COMPUTER VISION ......................................................................... 39
2.6.3 MICROCONTROLLERS, SENSORS AND ACTUATORS ..................... 40
2.6.4 TOOLS FOR TUI DEVELOPMENT ................................................... 40
2.7 BENEFITS AND LIMITATIONS OF TUIS ................................................ 43
2.7.1 STRENGTHS ..................................................................................... 43
2.7.2 LIMITATIONS ................................................................................... 45
3. EMBODIED COGNITION ................................................................................. 48
3.1 INTRODUCTION AND RELEVANCE ....................................................... 48
3.2 FOUR VIEWS OF EMBODIED COGNITION........................................... 50
3.2.1 COGNITION IS SITUATED ............................................................... 50
3.2.2 HUMANS OFF-LOAD COGNITION ONTO THE ENVIRONMENT .... 50
3.2.3 COGNITION IS FOR ACTION ........................................................... 50
3.2.4 OFF-LINE COGNITION IS BODY-BASED ........................................ 51
3.3 GUIDELINES FOR TUI DEVELOPMENT .............................................. 51
3.3.1 THINKING THROUGH TUIS ........................................................... 51
3.3.2 ACTING THROUGH TUIS ............................................................... 52
3.3.3 SCAFFOLDING TO TUIS ................................................................. 54
3.3.4 COLLABORATING THROUGH TUIS ............................................... 55
4. FIRST PROTOTYPE: MEMENTOS ................................................................... 57
4.1 INTRODUCTION ..................................................................................... 57
4.2 CONCEPT ............................................................................................... 58
4.2.1 QUESTIONNAIRE ............................................................................ 59
4.2.2 MIND MAP ....................................................................................... 62
4.2.3 DESCRIPTION .................................................................................. 63
4.2.4 SCENARIO ........................................................................................ 65
4.2.5 STAKEHOLDER'S DIAGRAM ............................................................ 67
4.3 DESIGN .................................................................................................. 68
4.3.1 THE TAC PARADIGM ...................................................................... 68
4.3.3 TOKENS' DESIGN ............................................................................ 71
4.3.4 WIREFRAMES ................................................................................... 72
4.4 IMPLEMENTATION ................................................................................ 75
4.4.1 TOKENS (TRAVELLING) ................................................................. 75
4.4.2 PUBLIC KIOSKS ................................................................................ 77
4.4.3 HOME (PERSONAL COMPUTER) ..................................................... 79
4.5 EVALUATION ......................................................................................... 80
4.5.1 RESULTS ........................................................................................... 81
4.6 FINAL REMARKS .................................................................................... 82
5. SECOND PROTOTYPE: ECO PLANNER ........................................................... 83
5.1 INTRODUCTION ..................................................................................... 83
5.2 CONCEPT ............................................................................................... 84
5.2.1 THE TRANS-THEORETICAL MODEL .............................................. 86
5.2.2 MIND MAP ....................................................................................... 86
5.2.3 SCENARIO ........................................................................................ 87
5.3 DESIGN .................................................................................................. 89
5.3.2 THE TAC PARADIGM ...................................................................... 89
5.3.3 TOKENS' DESIGN ............................................................................ 91
5.3.1 WIREFRAMES ................................................................................... 92
5.4 IMPLEMENTATION ................................................................................ 93
5.4.1 PROCESSING LIBRARIES.................................................................. 94
5.5 FINAL REMARKS .................................................................................... 95
6. DISCUSSION AND CONCLUSION ..................................................................... 96
6.1 CONTRIBUTIONS ................................................................................... 96
6.2 FUTURE WORK ....................................................................................... 99
6.3 FINAL REMARKS .................................................................................. 100
7. REFERENCES .............................................................................................. 101
PICTURES
FIGURE 22 GRAPHS WITH THE AGE GROUPS OF TYPICAL TOURISTS. THE GRAPH ON THE LEFT REPRESENTS THE ANSWERS TO THE ONLINE QUESTIONNAIRE, WHILE THE GRAPH ON THE RIGHT REPRESENTS ANSWERS TO THE PHYSICAL QUESTIONNAIRES HANDED OUT IN HOTELS. .................. 59
FIGURE 23 GRAPHS WITH THE GROUP SIZE OF TYPICAL TOURISTS. THE GRAPH ON THE LEFT REPRESENTS THE ANSWERS TO THE ONLINE QUESTIONNAIRE, WHILE THE GRAPH ON THE RIGHT REPRESENTS ANSWERS TO THE PHYSICAL QUESTIONNAIRES HANDED OUT IN HOTELS. .................. 60
FIGURE 24 GRAPHS WITH THE COMPANIONS OF TYPICAL TOURISTS. THE GRAPH ON THE LEFT REPRESENTS THE ANSWERS TO THE ONLINE QUESTIONNAIRE, WHILE THE GRAPH ON THE RIGHT REPRESENTS ANSWERS TO THE PHYSICAL QUESTIONNAIRES HANDED OUT IN HOTELS. .................. 60
FIGURE 25 GRAPHS WITH THE DURATION OF THE TRIPS OF TYPICAL TOURISTS. THE GRAPH ON THE LEFT REPRESENTS THE ANSWERS TO THE ONLINE QUESTIONNAIRE, WHILE THE GRAPH ON THE RIGHT REPRESENTS ANSWERS TO THE PHYSICAL QUESTIONNAIRES HANDED OUT IN HOTELS. .................. 60
FIGURE 26 GRAPHS WITH THE ANSWERS TO HOW OFTEN PEOPLE FEEL LOST ON THEIR TRIPS. THE GRAPH ON THE LEFT REPRESENTS THE ANSWERS TO THE ONLINE QUESTIONNAIRE, WHILE THE GRAPH ON THE RIGHT REPRESENTS ANSWERS TO THE PHYSICAL QUESTIONNAIRES HANDED OUT IN HOTELS. .................. 61
FIGURE 27 GRAPHS WITH WHEN PEOPLE PREFER TO PLAN FOR THEIR TRIPS. THE GRAPH ON THE LEFT REPRESENTS THE ANSWERS TO THE PHYSICAL QUESTIONNAIRES HANDED OUT IN HOTELS (WHERE PEOPLE GRADED THEIR PREFERENCES FROM 1 TO 5), WHILE THE GRAPH ON THE RIGHT REPRESENTS ANSWERS TO THE ONLINE QUESTIONNAIRE. .................. 61
FIGURE 28 GRAPHS WITH HOW PEOPLE PREFER TO TRAVEL DURING THEIR TRIPS. THE GRAPH ON THE LEFT REPRESENTS THE ANSWERS TO THE PHYSICAL QUESTIONNAIRES HANDED OUT IN HOTELS (WHERE PEOPLE GRADED THEIR PREFERENCES FROM 1 TO 5), WHILE THE GRAPH ON THE RIGHT REPRESENTS ANSWERS TO THE ONLINE QUESTIONNAIRE. .................. 61
FIGURE 29 GRAPHS WITH HOW PEOPLE SHARE MEDIA FROM THEIR TRIPS. THE GRAPH ON THE LEFT REPRESENTS THE ANSWERS TO THE PHYSICAL QUESTIONNAIRES HANDED OUT IN HOTELS (WHERE PEOPLE GRADED THEIR PREFERENCES FROM 1 TO 5), WHILE THE GRAPH ON THE RIGHT REPRESENTS ANSWERS TO THE ONLINE QUESTIONNAIRE. .................. 62
FIGURE 30 MIND MAP FOR THE TUI PROTOTYPE MEMENTOS. FOR AN IMAGE WITH HIGHER RESOLUTION, PLEASE REFER TO ANNEX D. ....................................................... 62
FIGURE 31 THE MEMENTOS CONCEPT. ............................................................ 63
FIGURE 32 THE STAKEHOLDER'S DIAGRAM FOR THE MEMENTOS TUI. ....................................... 67
FIGURE 33 THE TACS FOR THE MEMENTOS TUI. ........................................................... 69
FIGURE 34 THE DIALOGUE DIAGRAM FOR THE MEMENTOS TUI. ................................................. 70
FIGURE 35 THE INTERACTION DIAGRAM FOR THE MEMENTOS TUI (AT THE PUBLIC KIOSKS). . 70
FIGURE 36 THE INTERACTION DIAGRAM FOR THE MEMENTOS TUI (WHILE TRAVELLING). ...... 71
FIGURE 37 SOME OF THE CONSIDERED TOKENS. ............................................................................. 71
FIGURE 38 WIREFRAME #1 FOR THE PUBLIC KIOSK OF MEMENTOS: THE USER IS INFORMED OF WHERE HE IS IN RELATION TO THE CITY HE IS VISITING. FOR AN IMAGE WITH HIGHER RESOLUTION, PLEASE REFER TO ANNEX E. ....................................................... 72
FIGURE 39 WIREFRAME #2 FOR THE PUBLIC KIOSK OF MEMENTOS: THE USER HAS PLACED ONE OF THE CONCRETE TOKENS HE WAS CARRYING ON THE FIRST SLOT OF THE KIOSK; HE IS INFORMED OF TRANSPORTATION OPTIONS (AND THE TIME COSTS) CLOSE TO HIS CURRENT LOCATION. FOR AN IMAGE WITH HIGHER RESOLUTION, PLEASE REFER TO ANNEX E. ................. 73
FIGURE 40 WIREFRAME #3 FOR THE PUBLIC KIOSK OF MEMENTOS: THE USER HAS PLACED ANOTHER OF THE CONCRETE TOKENS HE WAS CARRYING ON THE SECOND SLOT OF THE KIOSK; HE IS INFORMED OF TRANSPORTATION OPTIONS (AND THE TIME COST) BETWEEN HIS LOCATION AND THE TWO DESIRED DESTINATIONS. FOR AN IMAGE WITH HIGHER RESOLUTION, PLEASE REFER TO ANNEX E. ............................................................... 73
FIGURE 41 WIREFRAME #4 FOR THE PUBLIC KIOSK OF MEMENTOS: THE USER HAS PLACED ONE OF THE ABSTRACT TOKENS FOUND IN THE KIOSK UNDER THE CONCRETE TOKEN ON THE FIRST SLOT; HE IS INFORMED OF BUS STOPS CLOSE TO THE FIRST DESTINATION. FOR AN IMAGE WITH HIGHER RESOLUTION, PLEASE REFER TO ANNEX E. ........................................ 74
FIGURE 42 WIREFRAME #5 FOR THE PUBLIC KIOSK OF MEMENTOS: THE USER HAS REMOVED THE CONCRETE TOKEN FROM THE FIRST SLOT OF THE KIOSK; HE IS INFORMED OF TRANSPORTATION OPTIONS (AND THE TIME COST) CLOSE TO HIS CURRENT LOCATION; A HISTORY OF PREVIOUS DESTINATIONS IS KEPT ON-SCREEN. FOR AN IMAGE WITH HIGHER RESOLUTION, PLEASE REFER TO ANNEX E. ....................................................... 74
FIGURE 43 USERS RECEIVE VIBROTACTILE MESSAGES ON THE TOKENS THEY CARRY AS THEY PASS THROUGH RELEVANT REAL-WORLD SETTINGS. THIS IS ACHIEVED BY SENDING THE MESSAGES VIA BLUETOOTH, FROM THE RELEVANT SPOTS (WHICH SCAN FOR THE CORRECT TOKENS) TO THE TOKENS, EACH EMBEDDED WITH A SHAKE [URL8] DEVICE. ..................................... 75
FIGURE 44 AN OPEN SHAKE DEVICE [URL8]. ................................................................................ 76
FIGURE 45 THE SET OF TOKENS USED IN THE KIOSK EVALUATION: FOUR ABSTRACT TOKENS (REPRESENTING MARKETS, TAXIS, CAFES, AND MUNICIPAL WIFI) AND TWO CONCRETE TOKENS (REPRESENTING A FAMOUS CHURCH AND A BOTANICAL GARDEN). ................................................... 76
FIGURE 46 A TOUCHATAG RFID READER AND FIVE TAGS [URL12]. ............................................ 77
FIGURE 47 SCREENSHOT OF THE APPLICATION THAT RUNS ON THE PUBLIC KIOSK, AS PART OF THE MEMENTOS TUI. FOR AN IMAGE WITH HIGHER RESOLUTION, PLEASE REFER TO ANNEX I.
................................................................................................................................................................ 78
FIGURE 48 SCREENSHOT OF THE APPLICATION THAT RUNS ON THE PERSONAL COMPUTER IN THE USER'S HOME, AS PART OF THE MEMENTOS TUI. .............................................................. 79
FIGURE 49 ONE OF THE GROUPS CREATING THEIR VERSION OF A PLAN TO VISIT FUNCHAL FOR
A DAY ...................................................................................................................................................... 81
FIGURE 50 MIND MAP FOR THE ECO PLANNER TUI. ...................................................................... 86
FIGURE 51 MIND MAP FOR THE ECO PLANNER TUI. FOR AN IMAGE WITH HIGHER RESOLUTION, PLEASE REFER TO ANNEX Q. ............................................................... 87
FIGURE 52 THE DIALOGUE DIAGRAM FOR THE ECO PLANNER TUI. ............................................ 90
FIGURE 53 THE INTERACTION DIAGRAM FOR THE ECO PLANNER TUI. ....................................... 91
FIGURE 54 TOKENS USED FOR THE ECO PLANNER TUI. THEY CAN BE VERTICALLY ATTACHED TO ONE ANOTHER, AND IT'S POSSIBLE TO ADD TIME PIECES IN THE FRONT. ................................ 91
FIGURE 55 WIREFRAME #1 FOR THE ECO PLANNER TUI. FIVE ACTIVITY TOKENS ARE BEING USED. FOR AN IMAGE WITH HIGHER RESOLUTION, PLEASE REFER TO ANNEX R. ............. 92
FIGURE 56 WIREFRAME #2 FOR THE ECO PLANNER TUI. ONE ACTIVITY TOKEN IS IN THE OPTIONS AREA. FOR AN IMAGE WITH HIGHER RESOLUTION, PLEASE REFER TO ANNEX R. ........ 92
FIGURE 57 WIREFRAME #3 FOR THE ECO PLANNER TUI. THE USER CAN COMMIT TO THE CHANGES MADE. FOR AN IMAGE WITH HIGHER RESOLUTION, PLEASE REFER TO ANNEX R. ...... 93
FIGURE 58 WIREFRAME #4 FOR THE ECO PLANNER TUI. THE USER HAS COMMITTED TO A NEW ROUTINE. FOR AN IMAGE WITH HIGHER RESOLUTION, PLEASE REFER TO ANNEX R. .................. 93
FIGURE 59 PHOTO TAKEN DURING THE TESTING PHASE OF THE ECO PLANNER PROTOTYPE. THE TOKENS ARE BEING SCANNED FROM A TOP-DOWN PERSPECTIVE USING REACTIVISION TECHNOLOGY [JGA+07]. ..................................................................... 94
FIGURE 60 SCREENSHOT FROM A TEST OF THE ECO PLANNER TUI. THE TOKENS ARE BEING TRACKED ON A SURFACE, WHILE THE INTERFACE IS BEING SIMULATED ON A MONITOR (WITH TOKENS SHOWN AS PICTURES OF ACTIVITIES). FOR AN IMAGE WITH HIGHER RESOLUTION, PLEASE REFER TO ANNEX U. ........................................................................ 95
ABSTRACT
Tangible User Interfaces (TUIs) have matured to the point where they can be
considered an established area of Human-Computer Interaction research.
Despite this, most TUIs are built to address very particular tasks, normally
found in research labs, kindergartens, or specific work environments. This
dissertation contributes the design and implementation of two Tangible
Interfaces that tackle common problems of typical users: Mementos, a TUI
for tourists and travelers, and Eco Planner, a TUI that focuses on
sustainability. Finally, this dissertation also reviews the theories of Embodied
Cognition, opening the discussion on the possibility of using them as the core
of new frameworks or guidelines that help TUI developers create systems that
feel more like real-world tools.
1. INTRODUCTION
increasingly popular in areas such as learning, planning and problem
solving, programming and simulation tools, information visualization and
exploration, entertainment, play, performance and music, social
communication [SH10], and most recently work (e.g. [HIW08, EB09]).
In the last decade, a considerable number of frameworks for TUI design and
development have been created to provide developers with explanatory and
generative power, allowing them to analyze and compare different TUI
prototypes and providing them with a conceptual structure for thinking
through a problem or application [SH10]. Developers can then implement
their systems through the use of technologies such as radio-frequency
identification (RFID), computer vision (e.g. ARToolKit [KB99, KB00]), and
microcontrollers associated with sensors and actuators (e.g. phidgets [URL7],
littleBits [URL8]).
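As a concrete illustration of the computer-vision route, trackers such as reacTIVision [JGA+07] report tagged tokens over the TUIO protocol, whose 1.1 specification defines a `/tuio/2Dobj` profile with "set" messages carrying, in order, session ID, fiducial class ID, position, angle, and motion parameters. The sketch below (illustrative only; the `TangibleObject` class and `parse_2dobj_set` helper are not part of any toolkit) decodes such a message:

```python
from dataclasses import dataclass

@dataclass
class TangibleObject:
    session_id: int   # unique ID for this appearance on the surface
    class_id: int     # fiducial marker ID printed on the token
    x: float          # normalized position, 0.0..1.0
    y: float
    angle: float      # rotation in radians

def parse_2dobj_set(args):
    """Decode a TUIO 1.1 '/tuio/2Dobj' 'set' message.

    Argument order per the TUIO spec: s i x y a X Y A m r."""
    if args[0] != "set":
        return None  # 'alive' and 'fseq' messages carry no position data
    s, i, x, y, a = args[1:6]
    return TangibleObject(int(s), int(i), float(x), float(y), float(a))

# Example: the tracker reports a token with fiducial ID 4 near the
# center of the table, rotated roughly 90 degrees.
msg = ("set", 12, 4, 0.5, 0.48, 1.57, 0.0, 0.0, 0.0, 0.0, 0.0)
obj = parse_2dobj_set(msg)
```

An application built this way only reasons about `TangibleObject` values, so the same logic works whether tokens are tracked by reacTIVision, RFID, or another sensing layer.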
1.1.2 THE CREATION OF GUIDELINES FOR TUI DESIGN BASED ON THE THEORIES OF
EMBODIED COGNITION
This dissertation will present arguments that current frameworks for TUI
design (see Section 2.5) focus mainly on implementation and characterization
issues, and don't take advantage of the core principle of TUIs: their physicality
and existence in the real world. These frameworks say little regarding how a
TUI can be designed in order to maximize usability or to reflect aspects of
human cognition. In order to tackle this research gap, a literature review on
the cognitive theories of Embodied Cognition will be conducted (see Section
3) to analyze their potential as a basis for the development of guidelines or
frameworks for TUI design. From this review, an initial set of guidelines will
be created (see Section 3.3) to guide the conceptualization of both prototypes
to be developed (see Sections 4 and 5). These guidelines will be iterated on as
the development of the prototypes moves forward, so as to provide an
understanding of the utility of such guidelines in TUI design.
2. TANGIBLE USER INTERFACES
2.1 INTRODUCTION
Since the 1970s, the dominant form of human-computer interface has been
limited to the desktop computer, using mouse and keyboard to interact with a
series of windows, icons, menus and pointers (WIMP). Named the Graphical
User Interface (GUI), it serves as a general-purpose interface, emulating multiple
different tools and allowing for many different interfaces to be represented
digitally on-screen. As it allows both hands to work in synchrony, the GUI
paradigm is competent enough for a task such as writing this paper, but the
same input devices are used in every other application domain, from
productivity tools to games [KHT06].
The technological advancements made over the last twenty years, allied with
research on the psychological and social aspects of Human-Computer
Interaction (HCI), have led to a recent explosion of new post-WIMP
interaction styles in almost every possible domain. These novel input devices
(such as the Wii Remote or multi-touch surfaces) are becoming increasingly
popular, as they draw on the users' real-world skills to interact with digital
content [DBS09]. Not only are input devices changing, but computers as well,
becoming embedded in the users' everyday objects and environments, making
computation more accessible and pro-active [R06].
One of the emerging post-WIMP interfaces is the Tangible User Interface
(TUI), which differs from GUIs by being a special-purpose interface for
specific applications and domains, reliant on well-defined physical
forms [I08]. These forms provide tangible representations of digital
information and controls, allowing users to literally interact with data through
their hands and bodies. TUIs are implemented using a series of materials and
technologies, augmenting physical objects to receive and interpret inputs (e.g.
grabbing, squeezing, tapping, moving), and to provide a series of sensory-
stimulating outputs (e.g. altering textures and shapes, sound and visual effects).
These tangible objects then provide users with a parallel feedback loop,
combining physical passive haptic feedback with digital feedback (visual or
auditory) [UI00]. A tangible object or form is then "simultaneously interface,
interaction object and interaction device" [HB06]. Furthermore, by offering
three-dimensional interaction, TUIs don't limit themselves to two-dimensional
images on a screen [SH10].
For many reasons, TUIs are very appealing to different kinds of users. They
let users be active and creative with their hands [W98], and the interaction is
firmly rooted in the users' real-world knowledge and skills [DBS09] (e.g. by
adding physical constraints such as slots or racks to the interface, indicating how
an object must be handled in the context of the application [JGH+08]). It has
also been shown that consistently dedicating physical actions to
interface functions affords kinesthetic learning and memorization [KHT06].
Since data and information are physically represented, they can also be
concurrently manipulated by multiple users, making TUIs well suited for
collaboration [I08]. As users can see one another's physical actions, they become
more aware of each other, leading to fluid interaction and coordination [HMD+08] as well
as promoting discussion [ZAR05]. It's assumed that these features of Tangible
Interaction can lead users to feel more engaged and reflective, as well as
lowering the threshold of participation in such systems [M07]. In this way, TUIs
make interaction with computers less cognitively taxing, allowing
users to focus more of their brain power on the task at hand [SGH94].
TUIs became an important research area mostly due to the work of Hiroshi
Ishii and his Tangible Media Group. Nowadays, some of the most valuable
contributions are found in the work of Orit Shaer and Eva Hornecker, as they
aim to establish solid methods and frameworks for TUI development. It is
now very common to see the word 'tangible' in many conference papers and
session titles. The first conference fully devoted to Tangible Interfaces and
Tangible Interaction took place in Baton Rouge, Louisiana, back in 2007.
Named TEI (Tangible, Embedded and Embodied Interaction), it annually
gathers the work of a diverse community made of HCI researchers,
technologists, product designers, artists, and others. Now in its fifth year, the
conference will be held for the third time in Europe, in Portugal, with the help of
the newly created Madeira Interactive Technologies Institute (M-ITI).
For a more comprehensive introduction and review of TUIs, please refer to
the monograph by Shaer et al. [SH10].
2.2 HISTORY
The term 'Tangible Interface' derived from the initial motivation for
Augmented Reality and Ubiquitous Computing [SH10]. Back in 1993, the special
issue "Back to the Real World" [WMG93] argued that the desktop
computers and virtual reality of the time were very different from humans' natural
environment. The issue suggested that users shouldn't be forced to enter a
virtual world, but that the real world should be enriched and augmented with
digital content and functionality. TUIs emerged as part of a trend that
followed, motivated by a desire to retain the richness of physical interaction by
embedding computing in the users' tools, environments and practices,
enabling a real-time parallel between the digital and the real [SH10]. Ideas
from ethnography, situated and embodied cognition, and phenomenology
became central to the birth of TUIs, as "humans are of and in the everyday
world" [W93].
It took a couple of years for these ideas to materialize into an interaction style
of its own. In 1995, Fitzmaurice et al. [FIB95] introduced the concept of a
Graspable Interface, where graspable handles are used to manipulate digital
objects. Two years later, Ishii and his students presented their work on
Tangible Bits [IU97]. This work focused on making the physical world an
interface by mapping objects and surfaces to digital data.
While Ishii and his students worked on their vision, other research groups
focused on developing applications for specific domains by augmenting
existing tools and artifacts. Some of the resulting work of such groups was
later classified as TUIs. Examples include the work of Wendy Mackay, with the use
of flight strips in air traffic control and augmented paper in video
storyboarding [MF99]. Other examples include the German Real Reality project, which
allowed for the simultaneous construction of real and digital models [B93,
BB96], and the work of Rauterberg and his team, who developed Build-IT
[RFK+98], an augmented reality tabletop planning tool based on
Fitzmaurice's graspable interface idea. In the domain of education, Suzuki and
Kato [SK93, SK95] developed AlgoBlocks, which aimed to support
groups of children in learning how to program. Cohen et al. [CWP99] created
Logjam, which supported video logging and coding.
Since TUIs were proposed as a novel interface style, development has
focused on exploring technical solutions and possibilities. The field is now in a more
mature phase of research, where the goal isn't tied to proofs-of-concept, but
focuses on conceptual design, user studies, field tests, critical reflection,
theory, building design knowledge, connecting with different design
disciplines, and creating toolkits and frameworks to lower the threshold for
developing TUIs [SH10].
A system that was strongly based on this interface was Rauterberg's Build-
IT [RFK+98]. It utilized the same input mechanisms in conjunction with
Augmented Reality visualizations for architectural and factory planning tasks.
A couple of years later, the work of Ishii and his students on Tangible Bits
sparked the idea of the TUI [IU97]. The work aimed at making bits physically
accessible and manipulable, using real-world objects and tools as a medium for
manipulation, and the real world as a display. Ambient displays would
represent information and data through sound, light, air or water movement
(e.g. Natalie Jeremijenko's Live Wire [WB95]).
Figure 2 Center and periphery of the user's attention within a physical space [IU97].
One of the first TUI prototypes developed by Ishii was still strongly
influenced by the GUI paradigm. The Tangible Geospace was an interactive
map of the MIT campus projected onto a table. As users placed physical icons
on the table (e.g. a model of the Barker Engineering Library), the map
automatically repositioned itself so that the physical icon sat over the
respective building on the map. Adding other tangible models made the map
zoom and rotate to match the corresponding buildings. Ishii's Urp project
[UI99] deliberately diverted from the GUI metaphor, focusing on tangible
objects that served for manipulating and representing data, more specifically
tangible buildings in the context of an urban planning system (see Figure 3).
Users were able to interact with wind and sunlight simulations by moving
physical building models on the surface of a table. These tangible buildings
cast digital shadows, while the simulated wind flow was projected as lines.
Users could even change the buildings' material properties (glass or stone
walls) and change the time of day, influencing the projection of shadows and
wind patterns.
Figure 3 Urp, a TUI for urban planning that allows users to determine the wind flow and shadow
projection of buildings [UI99].
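The geometry behind a shadow simulation of this kind is simple to state: a building of height h casts a shadow of length h / tan(elevation), pointing away from the sun. The sketch below is purely illustrative (Urp's actual implementation is not described here), with the azimuth measured from the +x axis:

```python
import math

def shadow_tip(base_x, base_y, height, sun_elevation, sun_azimuth):
    """Return the ground position of a building's shadow tip.

    Angles are in radians. The shadow points away from the sun and
    has length = height / tan(sun_elevation)."""
    length = height / math.tan(sun_elevation)
    away = sun_azimuth + math.pi  # direction opposite to the sun
    return (base_x + length * math.cos(away),
            base_y + length * math.sin(away))

# A 20 m building model with the simulated sun 45 degrees above the
# horizon, lying along the +x axis: the shadow extends 20 m in -x.
tip = shadow_tip(0.0, 0.0, 20.0, math.radians(45), 0.0)
```

In a tabletop system like Urp, the same computation would be re-run every time the tracker reports a new token position or the user adjusts the time-of-day control.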
2.2.3.2 THE MARBLE ANSWERING MACHINE
Probably the best-known inspiration for TUIs is Durrell Bishop's Marble
Answering Machine [P95]. In this system, incoming calls are represented by
colored marbles. As users place the marbles in the different indentations of
the machine, they can access different functions relating to the call represented
by the marble, such as playing the message or calling the number back (see Figure
4).
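The mapping at the heart of such a system can be sketched as a dispatch from token identity and placement to function. The marble colours, phone numbers, and file names below are invented purely for illustration:

```python
# Each marble stands for one recorded call; each indentation of the
# machine triggers a different function on that call.
messages = {
    "red":  {"caller": "+351 291 000 001", "audio": "msg-red.wav"},
    "blue": {"caller": "+351 291 000 002", "audio": "msg-blue.wav"},
}

def place_marble(marble, indentation):
    """Dispatch on where a marble is placed, Bishop-machine style."""
    call = messages[marble]
    if indentation == "play":
        return f"playing {call['audio']}"
    if indentation == "dial":
        return f"dialling {call['caller']}"
    raise ValueError(f"unknown indentation: {indentation}")

result = place_marble("red", "play")
```

The physical marble thus acts simultaneously as a container for the data (the recorded call) and as the handle through which it is manipulated, which is exactly the "interface, interaction object and interaction device" collapse described above.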
2.2.3.3 3D MODELING
Motivated by the fact that CAD systems in the 1980s were cumbersome and
awkward to use, both Robert Aish [A79, AN] and John Frazer's team [F95,
FF80, FF82] started looking for alternatives using physical models as input
devices. The result was a computer that could create a digital model of a
construction by scanning an assembly of blocks, taking into account the
location, orientation and type of each component. The computer simulation
could even provide suggestions on how to improve the users' design, and the
plans and drawings would be automatically printed once the users were
satisfied.
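The scanning step these systems rely on can be illustrated as assembling a digital model keyed by grid position. This is a hypothetical sketch, not the original implementations; the `ScannedBlock` record and grid representation are assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScannedBlock:
    kind: str          # component type reported by the scanner
    position: tuple    # (x, y, z) grid cell the block occupies
    orientation: int   # rotation in multiples of 90 degrees

def build_model(scan):
    """Assemble a digital model from scanned blocks, flagging any
    two blocks reported in the same cell as assembly errors."""
    model, errors = {}, []
    for block in scan:
        if block.position in model:
            errors.append(block)
        else:
            model[block.position] = block
    return model, errors

scan = [ScannedBlock("wall", (0, 0, 0), 0),
        ScannedBlock("wall", (1, 0, 0), 0),
        ScannedBlock("roof", (0, 0, 1), 90)]
model, errors = build_model(scan)
```

Once location, orientation, and type are in a structure like `model`, downstream steps such as design suggestions or printing plans become ordinary computations over that data.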
interaction [SH10]. Some of these interfaces were born from the work of Ishii
on Tangible Bits, such as ambient displays; others can be seen as particular
types of TUIs.
Greenberg and Fitchett [GF01] described a series of projects that used the
Phidgets toolkit to build physical awareness devices, such as a flower that
blooms every time a work colleague is available. More recent projects include
Tangible Interfaces as Ambient Displays, some supporting distributed groups
in maintaining awareness [BWD07] by using tangible objects for input and
output. Edge and Blackwell [EB09] suggested that tangible objects could also
be used in the periphery of the users' attention, resulting in small Ambient
Displays (e.g. tangible objects that represent the tasks and documents of an
office secretary).
Tangible Interfaces differ from the other approaches as they allow
representations to be artifacts in their own right, which users can directly act
upon: lift up, rearrange, sort and manipulate [D01]. Several levels of meaning
can even be present at once (e.g. moving a prism token in Illuminating Light
[UI98] can be done simply to make space, to explore the system's output, to
use it as a tool, or to explore the entire system as a tool). The user can freely
shift his attention between all these different nested levels because of the
embodiment of computation.
2.3.6 REALITY-BASED INTERACTION
Proposed by Jacob et al. [JGH+08], Reality-Based Interaction has the goal of
serving as a unifying framework for a large subset of emerging interaction
styles in the field of HCI. These styles include virtual and augmented reality,
ubiquitous and pervasive computing, and handheld and Tangible Interaction.
The common denominator among all these interaction styles is that they
all aim to take advantage of the users' real-world skills, as they try
to narrow the gap between engaging with digital and with
physical tools.
exchange physical objects, and the ability to work with others to
collaborate on a task.
Tangibles and Learning [OF04]. Jordà [J08], on the other hand, offers an
overview of music performances with TUIs.
Figure 10 From left to right: a Lego Mindstorms [URL6] robot that can detect and grab colored balls;
a Topobo [RPI04] assembly, with physical parts that can be programmed separately by simple
demonstration.
Computationally enhanced construction kits are TUIs that can help children grasp
concepts that are normally considered to be beyond their cognitive and
abstract thinking capabilities [SH10]. Examples include Smart Blocks, which
allows children to explore the concepts of volume and surface of the tangible
objects they assemble [GSH+07]; Curlybot [FMI00], a robotic toy that is
able to record the movement children imprint on it and then replay it; and Topobo
[RPI04], a TUI that enables children to learn about balance, movement
patterns, and anatomy by allowing the construction of robots from physical
parts that can be programmed individually through demonstration (see Figure
10).
Another compelling area for TUIs in learning is storytelling, supporting
literacy education by augmenting books and toys. An example is the StoryMat
by Ryokai and Cassell [RC99], composed of a carpet that can record and
replay children's stories by detecting which toys are placed upon it. Some
projects use other techniques in tandem with TUIs, such as Augmented
Reality (e.g. [ZCC+04]) or multi-touch screens (e.g. "Ely the Explorer"
[ABL+04]).
TUI systems have also started to be used as aids for children with special
needs [VSK08], such as Topobo [RPI04], the Lego Mindstorms (see Figure
10), and Hengeveld's LinguaBytes project [HHO08, HHO09], which used
story-reading as a way of tackling speech impairment in handicapped children.
Operating tangible objects slows down interaction, provides a sensorial
experience, supports collaboration, and allows children to train their
perceptual-motor skills and to be in control of what is happening. It is
widely accepted that TUIs can provide children with access to a rich learning
environment, with more opportunities for cognitive, linguistic, and social
learning. Some researchers even fear the impact that traditional GUIs can have
on children if they are the only available tool for play and learning from an
early age [URL3].
Some TUIs have also been developed as diagnostic tools, providing insight
into the level of cognitive and spatial abilities a child possesses, or
detecting the effects of brain damage in adults [SIW+02]. This is achieved by
analyzing the steps taken and mistakes made while building spatial structures
from tangible objects. Other projects include tangible toys that record
interaction, providing valuable data to determine the child's overall
development [WKS08].
objects), the existence of physical constraints in the interface, and the
tangibility of the problem. Epistemic actions [B07] are manipulations of
physical or digital artifacts performed not for the sake of the activity's goal,
but with the aim of better understanding the activity's context. Such actions
should alleviate mental work [KM94] and facilitate the successful conclusion
of the task at hand [MWC03]. Physical constraints use physical affordances
(e.g. racks, slots) to communicate the system's syntax, decrease the time
needed to learn the interface, and consequently lower the threshold for using
the system [UIJ05].
Figure 11 The SandScape [I08b], a TUI that allows users to interact with physical sand and see the
results projected onto the landscape in real-time.
nodes and links, and simultaneously see the simulation results projected onto
the table in real-time (see Figure 12).
2.4.3 INFORMATION VISUALIZATION
Although most TUIs are composed of physical "immutable" objects, they
offer a very rich multimodal representation and allow for two-handed input,
enhancing the interaction with visualizations [SH10].
The Props-Based Interface for 3D Neurosurgical Visualization [HPG+94]
is a TUI for neurosurgical visualization that supports the physical
manipulation of handheld tools in free space. Surgeons can simulate slices by
simply holding a plastic plate up to a doll's head. It was shown that users of
this system could understand and use the interface within one minute of
touching the physical props. GeoTUI [CRR08] is a TUI that allows
geophysicists to use tangible props to cut planes on digital geographical maps
that are projected upon a surface. The evaluation against a standard GUI
system showed that the users performed significantly better with the TUI
counterpart.
Ullmer et al. [UIJ05] developed two famous tangible query interface
prototypes that use tangible objects as database parameters. These objects are
manipulated in physical constraints such as tracks or slots that represent
database queries, views, or Boolean operations.
In most cases, the syntax of the programming language replicated in a TUI
is enforced by physical constraints on the interface (e.g. the physical form of
Tern's [HSJ08] pieces determines what type of blocks, and how many, can be
connected together).
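The way physical form can encode syntax can be sketched as follows; the block names and connector shapes are invented for illustration and only loosely inspired by Tern's jigsaw-style pieces:

```python
# Sketch of how physical form can enforce syntax in tangible programming.
# Each block declares the connector shapes it exposes; two blocks snap
# together only if the outgoing shape of one matches the incoming shape of
# the other. Block names and shapes are hypothetical.

BLOCKS = {
    # block name: (incoming connector shape, outgoing connector shape)
    "begin":  (None, "flow"),
    "walk":   ("flow", "flow"),
    "repeat": ("flow", "flow"),
    "end":    ("flow", None),
}

def can_connect(left, right):
    """A physical 'snap' succeeds only when connector shapes agree."""
    out_shape = BLOCKS[left][1]
    in_shape = BLOCKS[right][0]
    return out_shape is not None and out_shape == in_shape

def valid_program(sequence):
    """A chain of blocks is well-formed iff every adjacent pair snaps."""
    return all(can_connect(a, b) for a, b in zip(sequence, sequence[1:]))

valid_program(["begin", "walk", "repeat", "end"])  # True
valid_program(["walk", "begin"])                   # False: 'begin' has no inlet
```

Because ill-typed connections simply cannot be assembled physically, syntax errors are prevented before they occur rather than reported afterwards.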
Also important to note is that many Tangible Programming systems can be
perceived as entertainment interfaces, since their design allows for free play
and exploration. This might explain why this approach to programming is
so popular and widespread. A study conducted with 260 museum visitors
[HSC+09] showed that children, especially girls, respond best to this
approach. It clearly points to the fact that carefully designed Tangible
Programming systems can offer concrete educational benefits.
In some rare exceptions, Tangible Programming can be found in
applications not related to learning and play. Researchers at the Mads Clausen
Institute in Denmark put Tangible Interfaces to work in the context of
industrial work, supporting configuration work by service technicians [SCB07].
This work makes an important attempt to bring back some of the advantages
of traditional mechanical interfaces, advantages shared with TUIs: motor
memory, real-world skills and visibility of action.
Figure 14 The Philips EnterTaible project [LBB+07].
Leitner et al. [LHY+08] presented a mixed reality gaming table that tracks
real objects using a depth camera and transforms them into obstacles or
ramps in a projected virtual car race. Zigelbaum et al. [ZHS+07] introduced the
Tangible Video Editor, a Tangible Interface for physically editing digital video
clips. These video clips are represented by tangible objects, which can be easily
sorted and placed into a sequence. The system also allows tangible objects
representing transitions to be attached between clips. The IOBrush [RMI04]
TUI is a drawing tool for children that allows them to explore color, texture,
and movement via a physical paintbrush with an embedded video camera (see
Figure 15).
Figure 15 The IOBrush [RMI04], with its physical paintbrush embedded with a video camera.
Sturm et al. [SBG+08] identified key design issues that should be present in
entertainment and edutainment TUIs: support for social interaction,
simplicity combined with adequate challenges and goals, and motivating
system feedback.
disposition. Similarly, Block Jam [NNG03] is a dynamic sequencer built
from physical cubes that can be attached to each other. mixiTUI [PH09] is a
tangible sequencer for sound clips and music that adds loops, controls, and
effects to its sounds, utilizing the interaction mechanisms of the ReacTable
[JGA+07].
Kaltenbrunner [K09] has classified the different musical TUIs: those that
have music "contained" within physical artifacts that can be rubbed,
squeezed, or moved (e.g. the Squeezables [WG01]); musical building blocks
(e.g. Block Jam [NNG03]), which consist of individual physical blocks or
groups of blocks that repeatedly generate or manipulate sound, and can be
stacked, attached, or simply placed close to each other; token-based
sequencers, where the surface of the system is continuously scanned and
sound is produced depending on the spatial position and physical properties
of the tokens (e.g. color); musical TUIs that consist of interactive, touch-based
surfaces, where the music is produced based on the interactions with the
tangible objects (e.g. AudioPad [PRI02], reacTable [JGA+07]); and, finally,
simple commercial alternatives (e.g. Neurosmith's MusicBlocks, Fisher-
Price's play zone music table) that generate music by detecting the presence
and sequence of tangible objects in specific physical constraints, such as slots.
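The token-based sequencer category can be sketched in a few lines; the scale, token colors, and scanning rule below are illustrative assumptions, not drawn from any specific system:

```python
# Sketch of a token-based sequencer: the surface is scanned column by column,
# and each detected token triggers a sound determined by its row (pitch) and a
# physical property such as color (instrument). All mappings are hypothetical.

SCALE = ["C", "D", "E", "G", "A"]             # row index -> pitch
INSTRUMENT = {"red": "drum", "blue": "bell"}  # token color -> sound

def scan(tokens, columns=8):
    """Return the events produced by one left-to-right scan of the surface.

    `tokens` is a list of (column, row, color) tuples for detected objects.
    """
    events = []
    for col in range(columns):
        for c, row, color in tokens:
            if c == col:
                events.append((col, SCALE[row], INSTRUMENT[color]))
    return events

tokens = [(0, 0, "red"), (2, 3, "blue"), (2, 1, "red")]
scan(tokens)
# [(0, 'C', 'drum'), (2, 'G', 'bell'), (2, 'D', 'drum')]
```

Moving a token to a different column changes when it sounds; swapping its color changes what it sounds like, so the spatial layout is the score.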
or tactile modality for remote communication and intimacy [SH10]. Another
example is United Pulse [WWH08], which transmits the partners'
pulses between two wearable rings.
Figure 17 From left to right: the LumiTouch [CRK+01], Lovers Cup [CLS06], and InTouch [BD97].
guide design and analysis, providing a conceptual structure for thinking
through a problem or application [SH10].
paradigm and developed an interaction model for systems with a TUI ² the
MCRit model (see Figure 1), based on the GUI MVC (Model, View, Control)
model. According to their work, TUIs are systems that give physical form to
digital information, using tangible objects as both input and output for
computational media. While the core of the MVC model is the separation
between the graphical representation and the control (by input devices such as
a mouse and a keyboard), the MCRit model basically eliminates this distinction
as TUIs integrate physical representations and control, blurring the barrier
between solely input or output devices.
Four properties that arose from Ullmer and Ishii's work were:
be attached or detached from the constraints to conduct digital operations
(e.g. moving a token from one slot to another). The classical example is the
Marble Answering Machine [P95].
Holmquist et al. [HRL99] also worked on classifying physical objects that can
be linked to digital information, suggesting three different sets of tangible
objects:
tangible objects, a container's form is generic, not reflecting the nature of
the digital information it is associated with.
o Tokens – tangible objects that physically resemble the information they
represent in some way. Tokens are typically used to access information.
o Tools – tangible objects used as representations of computational
functions (e.g. zoom).
Ullmer and Ishii [UI00] suggest a slightly different approach, where they
consider a tangible object as a token, and then use the concept of containers
and tools as subtypes of tokens.
To specify a TUI using the TAC paradigm, a TUI developer defines the
possible TAC relationships within a TUI. The product of this activity is a
palette of ways in which objects can be combined together to form meaningful
expressions for the system (and the user).
Figure 19 The TAC palette for the Marble Answering Machine [P95].
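Such a palette can be sketched as a simple lookup structure; the constraint and operation names below are illustrative guesses at the Marble Answering Machine's behavior, not its actual specification:

```python
# Sketch of a TAC (Token And Constraints) palette for the Marble Answering
# Machine: each entry pairs a token type with the constraint it may occupy
# and the digital operation that association triggers. All names are
# illustrative, not taken from the original system description.

TAC_PALETTE = [
    # (token, constraint, operation triggered by the association)
    ("marble", "message rack", "store message"),
    ("marble", "play indentation", "play message"),
    ("marble", "dial indentation", "call back sender"),
]

def operations_for(token, constraint):
    """Which digital operations does placing `token` in `constraint` trigger?"""
    return [op for t, c, op in TAC_PALETTE if t == token and c == constraint]

operations_for("marble", "play indentation")  # ['play message']
operations_for("marble", "shelf")             # []: not a meaningful expression
```

The palette thus enumerates every physically expressible, system-meaningful combination, exactly as the TAC paradigm intends.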
2.6 TECHNOLOGIES FOR BUILDING TUIS
To date there are no standard input or output devices for developing
TUIs. Because of this, TUI developers employ a wide range of pre-existing or
custom-made technologies that detect objects and gestures.
Papier-Mâché [KLL+04], which is capable of detecting electronic tags and
barcodes.
2.6.4.1 PHIDGETS
Phidgets [URL7] are commercially sold "plug and play" devices (e.g. sensors,
boards, actuators, I/O) that support software developers in the
implementation of TUI prototypes that aim at using tangible objects capable
of both physical input and physical output. One of the advantages of using
Phidgets is that they are centrally controlled through a computer rather than
through microprocessors embedded in the tangible objects. Phidgets also
offer an API for a variety of development environments.
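This centralized style of control can be sketched as follows. The Sensor and Actuator classes are hypothetical stand-ins, not the Phidgets API; the point is only that the mapping from input to output lives in a single host-side loop rather than in firmware inside each object:

```python
# Sketch of centralized control: all sensing and actuation logic runs in one
# program on the host computer, which polls sensor values and drives
# actuators. These classes are hypothetical stand-ins, not the Phidgets API.

class Sensor:
    def __init__(self):
        self.value = 0
    def read(self):
        return self.value

class Actuator:
    def __init__(self):
        self.state = "off"
    def set(self, state):
        self.state = state

def control_step(sensor, actuator, threshold=500):
    """One iteration of the host-side loop: map sensor input to output."""
    actuator.set("on" if sensor.read() > threshold else "off")

light = Sensor()
led = Actuator()
light.value = 700       # simulate a bright reading
control_step(light, led)
led.state               # 'on'
```

Keeping this loop on the host means behavior can be changed by editing one program, without reflashing any embedded hardware.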
iStuff [BRS+03] is very similar to Phidgets in concept, but uses a set of
wireless physical devices controlled in Java. Through intermediary software,
developers can define high-level events and dynamically map them to input
and output events.
Table 1 Comparison of TUI implementation technologies [SH10].
Exemplar [HAM+07] is a toolkit similar to Phidgets in the way it leverages
central control through a computer. With Exemplar, a developer demonstrates
a sensor-based interaction to the system (e.g. shaking an accelerometer), while
the system graphically displays the resulting sensor signals. By iteratively
refining the recognized action, the developer can use the desired sensing
pattern in prototyping or programming applications.
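The demonstrate-then-recognize workflow can be approximated with a sliding-window comparison; real Exemplar offers richer pattern matching, so the thresholded Euclidean distance below is only an illustrative stand-in:

```python
# Sketch of demonstration-based sensing: record one sensor trace as the
# "demonstration", then match live input against it with a simple distance
# threshold. This is a stand-in, not Exemplar's actual matching algorithm.
import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def make_recognizer(demonstration, tolerance):
    """Return a function that detects the demonstrated pattern in a signal."""
    n = len(demonstration)
    def recognize(signal):
        # Slide a window over the live signal and compare it to the demo trace.
        return any(distance(signal[i:i + n], demonstration) <= tolerance
                   for i in range(len(signal) - n + 1))
    return recognize

shake = [0, 9, -9, 9, -9, 0]             # demonstrated accelerometer trace
is_shake = make_recognizer(shake, tolerance=3.0)
is_shake([0, 0, 0, 8, -9, 9, -8, 0, 0])  # True: close match mid-signal
is_shake([0, 1, 0, 1, 0, 1, 0, 1, 0])    # False
```

Iterative refinement in this sketch would amount to re-recording the demonstration or adjusting the tolerance until the recognizer fires only on the intended gesture.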
2.6.4.2 ARDUINO
Arduino [B09] is a toolkit consisting of a special board and a programming
environment. The Arduino board differs from Phidgets by interfacing with
standard electronics parts, requiring the developer to physically wire, build
circuits, and solder.
2.6.4.3 LITTLEBITS
Currently under development, littleBits [URL8] offers a toolkit that consists of
electronic components pre-assembled on tiny circuit boards that are able to
snap together through the use of magnets.
2.6.4.4 ARTOOLKIT
As mentioned before, the ARToolKit [KB99, KB00] is a Computer Vision
marker tracking library that allows developers to create augmented reality
and, consequently, TUI applications. Apart from tracking the position and
orientation of visual markers, ARToolKit also allows computer graphics to be
drawn exactly over the real marker.
2.6.4.5 REACTIVISION
reacTIVision [JGA+07] is a Computer Vision framework designed for the
development of TUIs that require a multi-touch surface, capable of tracking
tangible objects through the use of fiducial markers (see Figure 20). One of
the differences between reacTIVision and other toolkits is its distributed
architecture, which separates the tracker from the actual application.
Figure 20 The reacTIVision [JGA+07] framework diagram.
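The tracker/application split can be sketched with a toy message format. Real reacTIVision streams TUIO messages over OSC/UDP; the plain-text encoding below is a simplified stand-in that makes the separation visible without a network stack:

```python
# Sketch of reacTIVision's distributed architecture: the vision tracker and
# the application are separate processes, with the tracker streaming object
# state (fiducial id, position, angle) to the application. The real system
# uses the TUIO protocol over OSC/UDP; this text format is a stand-in.

def encode_update(session_id, fiducial_id, x, y, angle):
    """Tracker side: serialize one detected object (TUIO-like 'set' event)."""
    return f"set {session_id} {fiducial_id} {x:.3f} {y:.3f} {angle:.3f}"

def decode_update(message):
    """Application side: parse the tracker's message back into object state."""
    kind, sid, fid, x, y, angle = message.split()
    assert kind == "set"
    return {"session": int(sid), "fiducial": int(fid),
            "x": float(x), "y": float(y), "angle": float(angle)}

msg = encode_update(1, 42, 0.25, 0.75, 1.571)
decode_update(msg)["fiducial"]  # 42
```

Because the tracker only ever emits such messages, the same vision front end can drive any number of applications, which is precisely the benefit of the distributed design.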
2.7.1 STRENGTHS
2.7.1.1 COLLABORATION
o TUIs have been shown to lower the threshold for participation
[HSC+09] when compared to traditional GUIs, being particularly
successful in regard to children and females.
o By offering multiple access points through the tangible objects, TUIs
ensure that there is no "bottleneck for interaction", allowing for
simultaneous interaction and easy participation [SH10].
o Physical interaction is visible to others, and tangible objects enhance
the legibility and comprehension of such actions. This ultimately supports
group awareness and coordination [KHT06].
o The TUI design and setup can provide "embodied facilitation", subtly
constraining what the users can do, and guiding their actions [SH10].
o Tangible objects are easier to share than computer graphics [GSO05],
fostering discussion.
o Tangible objects are normally understood as resources for shared
activity [FT06, FTJ08, FTJ08b].
o By consisting of multiple tangible objects, TUIs allow users to
distribute tasks amongst each other by rearranging the objects around
the interface [SH10].
2.7.1.2 SITUATEDNESS
o The situated nature of TUIs makes them very powerful UbiComp
devices [WMG93].
o Interaction with tangible objects can have different meanings
depending on the context in which they are used. Conversely, these
objects can also change the meaning of the location they are placed onto.
o TUIs offer vast support for "offline" activities, normally directed at the
social and physical setting [FTJ08].
o By allowing users to keep their physical mobility (as their hands are not
confined to the keyboard and mouse), TUIs allow the use of
unconstrained gestures while interacting with a system, lightening the
cognitive load of their users [AKY00]. Furthermore, TUIs that employ
gesture as part of the interaction take advantage of users' kinesthetic
memory [S99].
o Tangible objects have been shown to support cognition by serving as
"thinking props" and allowing the offloading of memory [SH10]. These
qualities arise from the fact that they work as epistemic tools, allowing
users to perform actions that have no functional consequence and are not
computationally interpreted (e.g. annotating and counting, turning or
occluding) [KM94].
o Physically representing a task can have very positive effects on the
reasoning abilities and performance of the users [ZN94]. Tangible
representations are intrinsic components of a number of different cognitive
tasks, as they guide, constrain, and determine cognitive behavior [Z97].
Additionally, as the syntax of the interface is physically represented, users
can perceptually infer what they can do and how, decreasing the need for
explicit rules [ZN94].
2.7.2 LIMITATIONS
o One of the problems with TUIs is scalability, both in terms of the
physical space required for a big system (with multiple tangible objects
or a complex physical interface) and the support and functionality that
current development tools provide [SH10].
o While digital objects are malleable, easy to create, modify, replicate, and
distribute, tangible objects are normally rigid and static [PNO07].
Many problems arise from this fact, such as the difficulty of supporting
undo and history functions, or a replay of actions [KST+09].
o Tangible objects need to be well designed so they are easy to grasp,
lift, or position. Ultimately, they need to address the purpose of the
system. A TUI game might not always require objects that are easy and
effortless to use, as gamers enjoy a challenge and the feeling of getting
skilled as they play [SH10].
o Normally, each time a TUI is prototyped the developers are required to
learn a new toolkit or software library, as well as rewrite most of the code.
Since research into new interaction techniques and technological solutions
still continues, software tools that can be easily extended are needed.
Ultimately, although toolkit programming substantially reduces the time
and effort required for developers to build functioning TUIs, it falls short
of providing a comprehensive set of abstractions for specifying,
discussing, and programming Tangible Interaction within an
interdisciplinary development team [SH10].
o So far no evaluation methods specific to TUIs have been developed,
with the methods used being very similar to those used within HCI.
attempt to represent the range and scope of systems that fall under this
banner. They say little regarding how a TUI can be designed in order to
maximize usability, or to reflect aspects of human cognition. This dissertation
argues that this is an important omission, and one that requires work to
address.
Some work has considered how TUIs can aid cognition in particular,
isolated, situations. For example, there is considerable work exploring how
TUIs can be deployed in learning scenarios [e.g. MPR03] to support tasks as
diverse as connecting abstract concepts to physical analogies [GSH+07] and
encouraging students to plan and reflect upon their activities [M07]. Raffle et
al. suggest that haptic TUIs may support information retention [RJT03]. Other
systems have focused on taking into account the cognitive load of interaction
and advocated strategies to ensure it is minimized through leveraging attention
theory [BLS05], designing for peripheral display [MDM+04, IU97] or relying
on multi-sensorial information processing [PMR02]. Finally, Sharlin et al.
provided a compelling discussion of mechanisms to leverage human
understanding of spatiality and the importance of supporting exploration
strategies such as the distinction between pragmatic and epistemic action
[SWK+04]. However, despite this laudable body of work, there is no unifying
framework or comprehensive set of guidelines for TUIs which aims to support
designers in ensuring that their systems reflect the cognitive strengths and
weaknesses of everyday users [M07]. This dissertation argues that this lack is
hampering the development of TUIs, and suggests that the literature on embodied
cognition [A03] is a suitable starting point from which to develop a more
complete set of design recommendations for TUIs to achieve these aims.
The second goal of this dissertation will be to present an initial set of
guidelines directly informed by the literature on embodied cognition. The
usefulness of these guidelines will be explored through subsequent design
iterations over the development of two TUI systems. The ultimate goal is to
produce guidelines that will inform the design of TUIs which take full
advantage of the cognitive capabilities of their users. These guidelines will
enable future designers to more easily produce tangible interfaces which are
simple, effective, and pleasant to use.
3. EMBODIED COGNITION
abstract; it is instead rooted in sensorimotor processing [W02]. Embodied
Cognition further suggests that intelligence lies with the social and cultural
worlds, arguing that they are central to human cognition, cognition that
exploits repeated interaction with the environment, creating structures that
advance and simplify cognitive tasks [A03].
Embodied Cognition also focuses on how humans store and access
memory. Glenberg [G97] argued that traditional accounts of memory focus
too much on the passive storage of information. He defends the idea that
patterns stored in memory reflect the nature of bodily actions and their ability
to mesh with situations during goal pursuit. Perception of relevant objects
triggers affordances for action stored in memory.
Conversely, reasoning about future actions relies on remembering
affordances while suppressing perception of the environment. Simulation also
appears central to constructing future events based on memories of past
events [SA07]. When people view a static configuration of gears, for example,
they use simulation to infer the direction in which a particular gear will turn.
Numerous sources of evidence support the use of simulation in these tasks
[B08]. The time to draw an inference is often correlated with the duration of a
physical event, such as how long a gear takes to turn. It has been shown that
carrying out associated actions (e.g. moving your hand like a gear) can improve
inference [S99b].
Clark described external "scaffolding": how the brain offloads some of its
cognitive duties onto the environment, instead of doing all of the
computational work on its own [C01]. An example of this mental shortcut is
how people look for Kodak equipment in a store, focusing on finding a
distinct yellow package instead of a specific textual element [C97]. Landmarks
in cities are another example of external cognition. By offloading mental
effort onto the complex urban environment, the residents and visitors of cities
are able to navigate without mental overload [M06]. A particular kind of
"scaffolding" is known as epistemic actions. These actions aim at altering the
world so as to aid and augment cognitive processes (e.g. rotating a Tetris piece
to see where to fit it [KM94]). They contrast with pragmatic actions, which alter
the world because some physical change is desirable for its own sake (e.g.
fitting a Tetris piece in place [KM94]) [CC98]. The properties of epistemic
actions are reducing space complexity (the memory involved in mental computation),
time complexity (the number of steps involved in mental computation), and
unreliability (the probability of error of mental computation) [KM94].
This external "scaffolding" can also be extended to technology, which
can have profound effects on how we think and encode information, on how
we communicate with one another, on our mental states, and on our very
nature. It is clear that if users can physically interact with technology, it will
increase their cognitive capacities and thus enhance their efficiency, in the
same way they would see further with telescopes or move faster with cars
[DH09].
3.2.4 OFF-LINE COGNITION IS BODY-BASED
Even when decoupled from the environment, the activity of the mind is
grounded in mechanisms that evolved for interaction with the environment,
that is, mechanisms of sensory processing and motor control. Mental imagery
(e.g. auditory imagery, kinesthetic imagery) is an obvious example of mentally
simulating external events [PFD+95], while working memory appears to
be an example of a kind of symbolic off-loading, where information is off-
loaded onto perceptual and motor control systems in the brain. In these cases,
rather than the mind operating to serve the body, we find the body (or its
control systems) serving the mind [W02].
Areas of human cognition previously thought to be highly abstract now
appear to be yielding to an embodied cognition approach. It appears that off-
line embodied cognition is a widespread phenomenon in the human mind
[W02].
o A system that uses tangible objects must be, amongst other things, a
reservoir of resources for learning, problem solving, and reasoning.
Even when dealing with highly abstract mental concepts their resolution
may be rooted, though in an indirect way, in sensory and motoric
knowledge. Well-designed tangible objects become integrated into the way
people think, see, and control activities, part of the distributed system of
cognitive control [HHK00].
Users expect the physical input components to mirror the state of the
corresponding digital information completely. This is a 'conversational'
style of interaction, giving constant feedback and allowing
users to proceed in small steps and to express and test their ideas quickly
[SWK+04].
o Users should have good knowledge of state so they can monitor the
progress of their work [SWK+04].
o Using concrete rather than abstract tangible objects can often lead to
improved task performance. Limiting the types of interaction that are
suggested by the form of the object can also reduce the entropy of the
system and help the user deduce the inherent functionality of the object
from its physical qualities (e.g. a heavy object will make the user think of
using it to squash or hold something in place). Users see a tangible
object for what they can do with it and not for what the object really
is. This deduction can be based on cultural standards [M07].
o Consistently dedicating physical movement to interface functions
affords kinesthetic learning and memorization over prolonged use.
Physical feedback can further help distinguish commands kinesthetically
[KHT06].
o TUIs should hold and manipulate information for the user, allowing
that information to be quickly harvested and shared on a need-to-know
basis, instead of forcing the user to fully encode it and store it internally.
The effective use of tangible objects for external storage of information
depends on the user's perception of them, though [W02].
In the real world people offload cognitive processes that are difficult (e.g.
visualizing) to external aids. Children, for example, use their fingers to
perform arithmetic operations (e.g. add, subtract). This way, people extend
their performance capacity beyond the limits of their own brain power,
the same way they see further with telescopes or move faster with cars
[DH09].
o Space is a resource that must be managed, much like time, memory,
and energy. Users must always be facing some direction, have only certain
tangible objects in view, be within reach of certain others. When space is
well used, it reduces the time and memory demands of the user's task and
increases the reliability of execution and the number of jobs the user can
handle at once [HHK00].
o Users should be able to think and talk through their bodies and
through the actions performed on the tangible objects, using them as props to
act with – people know more than they can tell. The tangible objects
should give discussions a focus and provide a record of decisions
[HB06].
o Users should have equal access and direct interaction with digital
information and the tangible objects, promoting collaborative decision-
making, social cohesion, engagement and encouragement [RHB+04].
o Users should be able to simultaneously use the system and see each
other's physical actions, enhancing awareness, which in turn can
support fluid interaction and implicit coordination. This lowers cognitive
effort, as it helps to decode what everyone else is doing or planning to do.
This is a direct result of mirror circuits in the brain, which help
perceivers infer intent, and not only recognize the actions performed
[B08].
4. FIRST PROTOTYPE: MEMENTOS
4.1 INTRODUCTION
The motivation behind this first prototype is to document the conceptual,
methodological, and technical steps required for building a TUI prototype
capable of operating in complex real-world scenarios, supporting multiple
tasks and operating in multiple contexts. By being able to operate in different
contexts and scenarios, this prototype will also aim to merge two areas of
TUI research: Ambient Displays [IU97] and Tangible Tabletop Interaction.
The domain chosen for building such a prototype was the domain of
tourism. Tourism and travel represent highly significant economic activities
worth an estimated 5751 billion USD worldwide in 2010 (approx 9.2% of
world GDP) [TC10]. They are also activities fraught with human-centric
challenges and problems (see Brown and Chalmers [BC03] for an informative
ethnographic description of the tourism experience). Independent travelers, in
particular, are typically in unfamiliar surroundings, often grappling with
unknown languages and dealing with unusual climates whilst they perform
complex tasks such as navigation and the collaborative planning of spatial-
temporal itineraries based on significant quantities of complex textual
information from guidebooks and timetables.
Although many of these activities are arguably core aspects of the travel
experience, the challenges they represent have attracted considerable attention
in the HCI research community. One focus has been the support of pre-
planning activities through mechanisms such as making itinerary suggestions
that reflect a user's preferences [e.g. CL05]. Other authors have proposed
systems that provide contextually relevant information during trips. Such
systems can provide customized navigation information, relay important news
(such as delayed flights) or support visits to specific sites (such as museums)
[e.g. GPS+04]. The dominant form factor for such systems is graphical
interfaces on mobile devices. Despite this wealth of work, relatively few of
these systems are currently in widespread use [BC03]. Potential reasons for
this include issues relating to a lack of availability or adoption of the
technology (typically high-end smart phones), distrust (either in terms of the
privacy of tracking systems [BCL+05] or potentially a reluctance to take
recommendations from a digital agent [R06]), a lack of support for
collaboration (most tourists travel in small groups of 2-4 people) [BC03] and
the fact that dealing with digital information presented on a mobile device
disrupts and distracts from the actual experience of travel [BLS05].
The remainder of this section documents the steps taken during the
concept, design, implementation and evaluation of Mementos. These use
traditional HCI methods combined with particular frameworks for TUI
development.
4.2 CONCEPT
The concept for Mementos was developed after an initial work of
understanding where computer systems could augment the tourism
experience. This was possible by combining literature review [BC03] with
questionnaires and semi-structured interviews (see Section 4.2.1), so as to
comprehend how people behave as tourists. The result of this work took form
after a brainstorming session (see Figure 21), followed by the development of
a mind map (see Section 4.2.2) and an initial system description (see Section
4.2.3), which was examined conceptually via a scenario (see Section 4.2.4).
Figure 21 Result from one of the brainstorm sessions for the Mementos TUI.
4.2.1 QUESTIONNAIRE
In order to capture the behavior of travelers and tourists a questionnaire (see
Annex A) was designed and distributed both online and to six hotels situated
in Funchal, a coastal Portuguese city and popular tourist destination. A total of
118 responses were gathered (see Annex B) and qualitatively analyzed. Based
on the results, a semi-structured interview was designed (see Annex C) and
conducted with 12 participants to explore some of the issues raised in further
depth. These investigations were focused on capturing how people plan and
structure their day-to-day activities and exploration during travel, and how
they communicate such activities to others. They also explored how people store
and share the memories from their travels (such as through photos and videos
or objects and souvenirs). A literature review on ethnographic study of the
tourism experience [BC03] was then conducted to cement the conclusions
taken from the questionnaires and semi-structured interviews.
As expected, most answers to the online questionnaire were given by
people aged between 16 and 30. Conversely, more than half of the people
who answered the physical questionnaires distributed at hotels were over 50
years old (see Figure 22). Regardless of age, most people appear to travel in
groups of two or three (see Figure 23), composed of friends (in
the case of younger people) or husbands and wives (in the case of older
people; see Figure 24). Additionally, most trips last between one and two
weeks (see Figure 25).
Figure 22 Graphs with the age groups of typical tourists. The graph on the left represents the answers
to the online questionnaire, while the graph on the right represents answers to the physical
questionnaires handed out in hotels.
Another difference between young and older people while travelling is that
the former occasionally feel lost or unsure of their directions, while the
latter rarely encounter these problems (see Figure 26). This might be
because young people prefer to plan day by day (e.g. in the hotel) as they
travel, while older people would rather plan before leaving home (see Figure 27).
Figure 23 Graphs with the group size of typical tourists. The graph on the left represents the answers
to the online questionnaire, while the graph on the right represents answers to the physical
questionnaires handed out in hotels.
In both cases both young and older people prefer to travel by foot,
occasionally opting for the bus (see Figure 28). When the trip is actually over,
young people prefer to share their experiences as media through online social
networks (e.g. Flickr), while older people share them at home, to friends and
family (see Figure 29).
Figure 24 Graphs with the companions of typical tourists. The graph on the left represents the
answers to the online questionnaire, while the graph on the right represents answers to the physical
questionnaires handed out in hotels.
Figure 25 Graphs with the duration of the trips of typical tourists. The graph on the left represents
the answers to the online questionnaire, while the graph on the right represents answers to the
physical questionnaires handed out in hotels.
Figure 26 Graphs with the answers to how often people feel lost on their trips. The graph on the left
represents the answers to the online questionnaire, while the graph on the right represents answers to
the physical questionnaires handed out in hotels.
Figure 27 Graphs with when people prefer to plan their trips. The graph on the left represents the
answers to the physical questionnaires handed out in hotels (where people graded their preferences from
1 to 5), while the graph on the right represents answers to the online questionnaire.
Figure 28 Graphs with how people prefer to travel during their trips. The graph on the left represents
the answers to the physical questionnaires handed out in hotels (where people graded their preferences
from 1 to 5), while the graph on the right represents answers to the online questionnaire.
Figure 29 Graphs with how people share media from their trips. The graph on the left represents the
answers to the physical questionnaires handed out in hotels (where people graded their preferences from
1 to 5), while the graph on the right represents answers to the online questionnaire.
Figure 30 Mind map for the TUI prototype Mementos. For an image with higher resolution, please
refer to Annex D.
First, it explored the possibility of tokens acting as ambient displays,
disconnected from any "system". Which possible interactions could be
developed and what kind of feedback could they convey? Second, it focused
on how the tokens could be used to create, revise and share the trips' plans
with others. Third, it examined the different contexts and scenarios where the
tokens could be used to support the travelling experience. Fourth, it tackled the
issue of the tokens' physical form, which had to fit every interaction across
different possible contexts of use. Lastly, it looked at which
embodied aspects could be embedded in the tokens.
4.2.3 DESCRIPTION
Following on from the mind map work (see Section 4.2.2), the Mementos
concept was fleshed out by envisioning a system composed of three parts: a set of
tokens, a kiosk interface and a home interface (see Figure 31). These three
different interaction spaces should tackle most of the problems and activities
that were identified during the literature review process [BC03] and
questionnaire analysis (see Section 4.2.1).
such a token might be linked to the Eiffel Tower and take the form of a model
of this monument. The other class, of abstract tokens, represented more
general tourist infrastructure, such as a set of cafes and transportation points,
and appeared as neutral coin-like objects identified with graphical logos. The
goal is for such sets of tokens to be distributed for particular
locations or cities in much the same way as guidebooks are currently.
The concept for the tokens is to provide information related to the object
they represent through non-intrusive feedback. Most significantly, this
feedback should be delivered via vibrotactile cues mediated by location
awareness: the tokens would vibrate when approaching the location (or set of
locations) they represent. Furthermore, the concrete tokens should
also respond to the proximity of transportation links leading to their
location. This feedback could be silenced by touching or picking up the token.
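The proximity-cue rule described above can be sketched as a simple check: vibrate while the distance to the represented site is under some threshold and the token has not been silenced. The class name, threshold and coordinates below are illustrative assumptions, not code from the Mementos prototype.

```java
// A sketch of the proximity-cue behaviour: vibrate while near the site
// and not yet silenced. Names and threshold are illustrative assumptions.
public class ProximityCue {
    static final double ALERT_RADIUS_M = 150.0;  // assumed trigger distance
    private boolean silenced = false;

    // Haversine distance between two lat/lon points, in meters.
    static double distanceM(double lat1, double lon1, double lat2, double lon2) {
        double r = 6371000.0;
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * r * Math.asin(Math.sqrt(a));
    }

    // A token vibrates when near its target site and not yet silenced.
    boolean shouldVibrate(double userLat, double userLon,
                          double siteLat, double siteLon) {
        return !silenced
            && distanceM(userLat, userLon, siteLat, siteLon) < ALERT_RADIUS_M;
    }

    // Touching or picking up the token silences it, as described above.
    void silence() { silenced = true; }
}
```

In practice the thresholds would need tuning per token class (a transport link warrants a much smaller radius than a city-wide attraction).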
The goal of this interface is three-fold. Firstly, to enable users to engage in a
simple form of collaborative planning based on selecting only relevant tokens
to carry with them. For example, a user wishing to travel by taxi, visit a
museum and stop for lunch would simply select the three tokens representing
these activities in order to receive relevant cues. Secondly, the tokens are
intended to support relatively undirected, exploratory travel experiences. For
example, a tourist strolling through a city with a token which responds to all
restaurants featured in a particular food guide could use the feedback to
opportunistically and discreetly highlight dining choices encountered during
the course of their walk. Finally, if a tourist's goal is to seek out one particular
destination, the vibrotactile cues will highlight appropriate transportation links
(such as where to board or exit a bus) as well as proximity to the actual
location, thereby providing vital navigation information.
concrete token on the leftmost sensing zone causes transportation information
between the kiosk and concrete site to be presented (e.g. estimates for
travelling time and cost by taxi, bus and foot). Adding an abstract café token
to the same sensing zone causes dining options to be displayed in the
proximity of the concrete site. In a similar manner, the three zones can be
used to create multi-leg travel plans.
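A minimal sketch of this zone logic, with invented names and message strings, might resolve each sensing zone as follows: the concrete token fixes the destination, and any abstract tokens in the same zone act as filters for nearby services.

```java
import java.util.List;

// An illustrative sketch (names and strings invented) of the zone logic
// described above: a concrete token fixes the destination for a sensing
// zone, while abstract tokens in the same zone filter for nearby services.
public class KioskZone {
    enum Kind { CONCRETE, ABSTRACT }

    static class Token {
        final String name;
        final Kind kind;
        Token(String name, Kind kind) { this.name = name; this.kind = kind; }
    }

    // Resolve one sensing zone into a description of what the kiosk shows.
    static String describe(List<Token> zone, String kioskLocation) {
        StringBuilder out = new StringBuilder();
        for (Token t : zone)
            if (t.kind == Kind.CONCRETE)
                out.append("transport from ").append(kioskLocation)
                   .append(" to ").append(t.name);
        if (out.length() == 0) return "place a destination token";
        for (Token t : zone)
            if (t.kind == Kind.ABSTRACT)
                out.append("; nearby ").append(t.name);
        return out.toString();
    }
}
```

A multi-leg plan would then chain zones, using each zone's destination as the start point of the next.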
4.2.4 SCENARIO
Task: Visiting some specific places around the city of Funchal and sharing
photos of that trip.
Persona: Johnny Bravo.
Johnny Bravo and his wife have just arrived at the Pestana Grand Hotel in the
Lido area of the city of Funchal. They are 28 and 26 years old respectively, and
are both nurses in the United Kingdom. They use their personal computers to
update their Facebook page, watch some movies and send e-mails.
Upon check-in, they are offered the possibility of paying a bit more for a
brand new technology called Mementos. After hearing all the benefits of using
such a system, they decide to give it a try. They go to their hotel room carrying
a set of small objects, each representing a place around the Island.
The next morning, John and his wife are ready to start visiting the city of
Funchal. Before leaving the UK they had already chosen some spots they
didn't want to miss out on, and according to their plan, today they want to visit
Monte and the Cathedral of Funchal.
Johnny looks at the objects they were given the previous night, and notices
one that is a miniature version of the Cathedral of Funchal, and another
that is a miniature of a wickerwork sledge, which John and his wife always
wanted to try. As he picks them up, his wife is reminded of what is in store
for them today and, in excitement, asks to keep the wickerwork sledge object.
As they leave the hotel, they ask the clerk to point them to the nearest bus-
stop which can take them to the city center, where they know they can find
the Sé Cathedral. As they walk down the road the clerk pointed out, unsure
of whether they're going in the right direction, John's object starts vibrating in his pocket,
indicating that the bus-stop is near. They're now sure they're close, and with a
simple tap, John "silences" the object. A few meters ahead they find the bus-
stop, and a couple of minutes later they're on the bus to the city center.
As the clerk explained to them, each object will indicate the best bus-stop at
which to leave the bus. Deciding to trust the objects, John and his wife enjoy
the ride and the scenery as they travel into Funchal, not worrying about
discovering where to leave the bus. Minutes into the trip, John's object starts
to vibrate again, alerting him to ask for the bus to stop.
They are dropped off in the city center, and as John planned, they now have to
move north through a center plaza. After a couple of minutes walking, John's
object starts to vibrate again, this time with a more spaced-out and gentle vibration.
He remembers the clerk mentioning that this kind of vibration tells him that
a public kiosk is available nearby. His wife notices him squeezing his object to
stop the vibration, and takes part in the visual search for the kiosk.
They easily find it, and after arriving at the kiosk, it seems to be aware of
their presence. They are instructed to place the objects they wish to get
information about on the surface of the kiosk. Right after following these
instructions, the screen shows them a set of information around both the Sé
Cathedral and the wickerwork sledge objects they have placed on the kiosk's
screen. They find out the best way to reach the Cathedral, and where to go
next in order to reach Monte.
Once they remove the objects from the kiosk's surface, they start walking in
the right direction. As they approach the Cathedral, John's object starts
vibrating again, indicating that they are near. This time the vibration reminds
him of a submarine's sonar. After tapping the object, John and his wife are
aware of the impending arrival at their destination, and after walking inside
they enjoy the architecture and mood of the Sé Cathedral.
Two weeks later, in the UK, Johnny uploads all of the photos taken during
the trip to Funchal to his Flickr page. As instructed in the manual, he tags the
photos with a special tag according to each of the tokens he wants to map
them to. Every time John hosts someone at his place, he shows them the
photos from the trip simply by allowing his visitors to pick up the intriguing
objects and connect them to his computer.
Figure 32 The stakeholders' diagram for the Mementos TUI.
4.3 DESIGN
4.3.1.1 TACS
Figure 33 The TACs for the Mementos TUI.
4.3.1.3 DIALOGUE DIAGRAM
Figure 34 The dialogue diagram for the Mementos TUI.
Figure 35 The interaction diagram for the Mementos TUI (at the public kiosks).
Figure 36 The interaction diagram for the Mementos TUI (while travelling).
4.3.4 WIREFRAMES
With the help of the TAC paradigm, it became clear that the richest interaction
space was the public kiosk. This concept was further iterated on via the HCI
method of wireframing. The next group of images serves as an example of a
possible interaction with the kiosk. As the user arrives at the kiosk, he is informed of
his actual location on a map of the city he is visiting (see Figure 38). As he
places one of the tokens he was carrying on the first slot of the kiosk, he is
informed of transportation options (and their time costs) between his location
and the location represented by the token (see Figure 39). The user then places
a second token from the ones he was carrying on the second slot of the kiosk,
being informed of the transportation time costs between the location
represented by the first token and the location represented by the second (see
Figure 40). The user then decides to use the tokens available on the kiosk,
which represent general facilities such as cafes or markets. He picks the one
that represents a bus stop and places it below the token on the first slot of the
kiosk. The system then informs him of bus stops in the proximity of the
location represented by the first token (see Figure 41). As the user removes
the token from the first slot, the system automatically looks for transportation
options (and their time costs) between the user's location and the token in the
second slot of the kiosk (see Figure 42).
Figure 38 Wireframe #1 for the public kiosk of Mementos: the user is informed of where he is in
relation to the city he is visiting. For an image with higher resolution, please refer to Annex E.
Figure 39 Wireframe #2 for the public kiosk of Mementos: the user has placed one of the concrete
tokens he was carrying on the first slot of the kiosk; he is informed of transportation options (and
the time costs) close to his current location. For an image with higher resolution, please refer to
Annex E.
Figure 40 Wireframe #3 for the public kiosk of Mementos: the user has placed another of the
concrete tokens he was carrying on the second slot of the kiosk; he is informed of transportation
options (and the time cost) between his location and the two desired destinations. For an image with
higher resolution, please refer to Annex E.
Figure 41 Wireframe #4 for the public kiosk of Mementos: the user has placed one of the abstract
tokens found in the kiosk under the concrete token on the first slot; he is informed of bus stops
close to the first destination. For an image with higher resolution, please refer to Annex E.
Figure 42 Wireframe #5 for the public kiosk of Mementos: the user has removed the concrete token
from the first slot of the kiosk; he is informed of transportation options (and the time cost) close to
his current location; a history of previous destinations is kept on-screen. For an image with higher
resolution, please refer to Annex E.
4.4 IMPLEMENTATION
Figure 43 Users receive vibrotactile messages on the tokens they carry as they pass through relevant
real-world settings. This is achieved by sending the messages via Bluetooth, from the relevant spots
(which scan for the correct tokens) to the tokens, each embedded with a SHAKE [URL8] device.
4.4.1.1 SHAKE
SHAKE is a device that provides any computing platform with movement sensing
and vibrotactile feedback. It incorporates 6-degree-of-freedom inertial sensing,
compass heading sensing and electric field sensing in a matchbox-sized
enclosure, and is regularly used as a valuable tool by researchers in the area
of HCI. SHAKE is capable of sensing linear and rotational movements, absolute
orientation/direction and human body proximity, and can transfer this
information to any computing device that has Bluetooth wireless connectivity
(e.g. mobile phones, laptop computers) [URL10]. This application used the
SHAKE SK6 [URL8], with firmware version 2.72.
Figure 44 An open SHAKE device [URL8].
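As an illustration of the three kinds of vibrotactile cues that appear in the scenario (transport links, nearby kiosks, and final destinations), the vibration rhythms could be encoded as pulse/pause pairs before being sent to a token over Bluetooth. The timings and names below are invented; the actual SHAKE SK6 command protocol is not reproduced here.

```java
// Hypothetical encoding of the three vibrotactile cues from the scenario.
// Timings are invented; the real SHAKE command protocol is not shown.
public class VibrationPatterns {
    enum Cue { TRANSPORT_LINK, KIOSK_NEARBY, DESTINATION }

    // Returns a {pulseMs, pauseMs} pair describing the rhythm of each cue.
    static int[] rhythm(Cue cue) {
        switch (cue) {
            case TRANSPORT_LINK: return new int[] {400, 400};   // steady buzz
            case KIOSK_NEARBY:   return new int[] {200, 1500};  // gentle, spaced out
            case DESTINATION:    return new int[] {100, 800};   // sonar-like ping
            default: throw new IllegalArgumentException("unknown cue");
        }
    }
}
```

Distinct rhythms matter here because the scenario relies on users telling cues apart eyes-free, by touch alone.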
Figure 45 The set of tokens used in the kiosk evaluation: four abstract tokens (representing markets,
taxis, cafes, and municipal WiFi) and two concrete tokens (representing a famous church and a
botanical garden).
This application was developed using Processing version 1.0.9, in both
Windows 7 and Mac OS X Snow Leopard.
The following Processing libraries were used:
4.4.2.2 TOUCHATAG
touchatag enables service providers and enterprises to leverage ubiquitous
identity, in contactless RFID cards and NFC mobile devices, for wallet 2.0
services such as mobile payment, fidelity and interactive advertising
[URL12]. The touchatag reader is based on the ACR122(U) NFC Reader from
Advanced Card Systems Limited, and its tags are ISO/IEC 14443 Type A
MIFARE Ultralight stickers.
Figure 46 A touchatag RFID reader and five tags [URL12].
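One plausible way to identify tokens read by such a reader, given the 7-byte UIDs of MIFARE Ultralight tags, is to normalize the raw UID bytes into a hex string key; the helper below is illustrative and not taken from the dissertation code.

```java
// Illustrative helper (not from the dissertation code): normalizing the
// 7-byte UID of a MIFARE Ultralight tag, as read through the touchatag /
// ACR122 reader, into a lowercase hex string that identifies a token.
public class TokenId {
    static String fromUid(byte[] uid) {
        StringBuilder sb = new StringBuilder();
        for (byte b : uid)
            sb.append(String.format("%02x", b & 0xff));  // unsigned byte to hex
        return sb.toString();
    }
}
```

The resulting string can then serve as a stable lookup key, e.g. mapping each physical token to the site or category it represents.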
Figure 47 Screenshot from the application that runs on the public kiosk, as part of the Mementos TUI.
For an image with higher resolution, please refer to Annex I.
"That's pretty nice… I will definitely use it for the next RFID project."
"I think it's really great that you made a library so we don't have to rely on the crappy
software provided."
This library was written in C, using the Java Native Interface (JNI) and
libnfc (an open source library for Near Field Communication) [URL17].
As a result of this work, the author of this dissertation wrote a tutorial on
how to compile libnfc on Windows, which is the only Windows tutorial
on the libnfc main website [URL15]. The code for
this library is available in Annex J.
Figure 48 Screenshot from the application that runs on the personal computer in the users' home, as part
of the Mementos TUI.
Thus, tokens end up serving not only as souvenirs from the trip, but
also as links to a user's memories, through their ability to link to captured digital
content. In this way, they could become cherished keepsakes, kept over time
with their value rooted in their personal history, and literally holding stories
and memories [GC97]. The tangible objects also support users as they share
their media, allowing them to send their objects to friends and family, and
allowing for multiple users to interact with the objects and participate in
discovering the memories and stories each holds.
The kiosk interface was developed in Processing and the RFID hardware
chosen was the same as before, the touchatag reader and tags [URL12]. The
application code is available in Annex K and the XML schemas in Annex L.
o flickrj – a Java API wrapper for the REST-based Flickr API [URL19].
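The token-to-photos mapping could then amount to a tag search against the Flickr REST API that flickrj wraps; the API key and tag values below are placeholders, and the URL-building helper is an illustrative sketch rather than the actual application code.

```java
// A minimal sketch of the token-to-photos lookup: each token maps to a
// Flickr tag, and tagged photos are found through the REST API that the
// flickrj wrapper exposes. The API key and tag below are placeholders.
public class FlickrLookup {
    static String searchUrl(String apiKey, String tokenTag) {
        return "https://api.flickr.com/services/rest/"
             + "?method=flickr.photos.search"
             + "&api_key=" + apiKey
             + "&tags=" + tokenTag
             + "&format=json";
    }
}
```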
4.5 EVALUATION
In order to provide a validation of the system design, an observational study
was conducted with two groups of three people (all university students) using
the Mementos public kiosk interface. A total of 23 tokens was used in this test: five
represented prominent tourist sites around the city of Funchal, while the
remaining 18 represented the generic categories of restaurants, payphones,
wireless internet, markets, bus stops and taxi ranks (three tokens for each
category). Short paper brochures were provided for each of the tourist sites
providing a textual description and indicating opening and closing times and
likely visit durations (in Annex M).
The study lasted 30 minutes for each group. The first 10 minutes were used
as an introduction, in which participants were encouraged to freely explore the
system features. For the last 20 minutes, they were handed a nominal 20
Euros for transportation and asked to create a plan that came under
budget, involved a visit to four of the five tourist sites and included lunch, an
afternoon stop at a market, and pauses in locations with wireless access and
payphones.
Figure 49 One of the groups creating their version of a plan to visit Funchal for a day.
4.5.1 RESULTS
Both groups were able to create plans that matched the problem requirements
and appeared to have little trouble grasping the fundamentals of the interface.
This was confirmed in the interviews, in which the participants reported that
the tokens were representative, immediately understandable and easy to
manipulate. Both groups suggested that using the system would not require
previous experience. Somewhat in contrast to this reported simplicity,
participants were also highly engaged with the system: both groups took the
entirety of the allotted time.
Furthermore, all participants also appeared to be immersed throughout the
session, suggesting that the system effectively supported collaboration within
small groups. Once again, this assertion was supported by comments during
the interviews. Furthermore, it was borne out by specific behaviors, such as
frequent consensual passing of the tokens or, more rarely, one participant
gently taking a token from another. Body language and physical activity were
also observed to be effective tools for collaboration, with users keeping in
touch with the activity of their peers simply via watching their movements and
expressions.
Users also used the physical environment to simplify tasks, for example by
maintaining the original physical placement of tokens and paper instructions
when these were not in use. Users would also hold tokens in order to
temporarily store them, and at other times tokens
would be handed to particular users to perform tasks in order to streamline
and facilitate management of the space.
In summary, although the study was short and exploratory, its results are
broadly positive. It serves to validate some of the key concepts in Mementos:
the suitability of a tangible interface to tourism; the ability of the system to
support both planning tasks and collaborative activity; and a richness of
embodied actions that requires further in-depth study.
5. S ECOND PROTOTYPE : E CO P LANNER
5.1 INTRODUCTION
Eco Planner is the first TUI that tackles the issue of energy consumption at
home, by allowing its users to create, manage and analyze their daily routines
through tangible objects that serve as physical representations of the users'
activities. Energy consumption can be characterized as "the routine
accomplishment of what people take to be the 'normal' ways of life" [PSP10].
These routines can be changed without necessarily first changing people's
attitudes. Human routines relating to energy use in the home are directly
responsible for 28% of U.S. energy consumption. Most people, however, are
unaware of how their daily activities impact the environment or how often
they engage in those activities [PGR10].
People's behaviors are rarely energy-sensitive because most of them don't
have a sense of the cost of energy, nor of the cost associated with specific
appliances (apart from the AC). There's also a common lack of awareness of
the usual settings used on specific appliances. While awareness of such costs is
perceived as a desirable result of using energy-related displays, awareness of the
relatively low costs of appliance use may actually be a disincentive to
conserve. Even cumulative savings over time may not provide sufficient
incentive for long-term change for many people [PSP10].
Prior research has identified the important role of habit in guiding energy-
consuming behavior, and the challenge of altering habitual consumption.
Slight alterations in seemingly arbitrarily developed routines could substantially
reduce consumption [PSP10]. People are simply not aware of the
relationship between their behaviors/routines and their energy consumption
[PGR10].
Immediate feedback has been shown to be one of the most effective
strategies for reducing electricity usage in the home, by providing a basic
mechanism with which to monitor and compare behavior, allowing an
individual to better evaluate their performance [PGR10].
The goal of the Eco Planner is to allow users to visualize and compare the
impact of their daily routines on the environment and change them in a way
that makes their lives no less comfortable but a lot greener. Users should be
able to test their beliefs with the Eco Planner, so as to change their attitudes
and, ultimately, shape their values.
5.2 CONCEPT
Fueled by a literature review, the concept behind Eco Planner is that of a TUI that
can overcome some of the weaknesses of information systems developed to
tackle the issue of home sustainability. Eco Planner is the first TUI designed
to tackle the problem of sustainability, so its design process and
implementation should be a valuable resource for other TUI developers
aiming to build for the same domain.
The concept for the Eco Planner was born of a literature review on
motivational theories (see Section 5.2.1) and the state of the art in energy-
saving computer systems (e.g. ambient and numeric displays). After fleshing
out the initial brainstorm in a mind map (see Section 5.2.2) and scenario (see
Section 5.2.3), the idea for the Eco Planner took form as a TUI composed of a
set of tokens and a home tabletop interface, combining two distinct areas of
TUI research, Tangible Tabletop Interaction and Tangible Augmented Reality
[KBP+01, LNB+04, ZCC+04]. In Eco Planner, each token physically
represents an activity (e.g. watching TV, doing the laundry), and users can
collaboratively create their household's routine by laying the tokens on the
tabletop interface. The 2D space of the tabletop represents a day of the week,
so that tokens closer to the right represent activities done at night, while
tokens closer to the left represent activities done in the morning. Likewise,
tokens that are vertically aligned on the tabletop represent concurrent
activities. By placing a token on a key area of the interface, users can choose
different options for the activity (e.g. committing to always do the laundry
with a full tank). Eco Planner should be able to understand the users' routine,
compare it with previous routines, and display tips on how the actual routine
could be "greener". Ultimately, users can decide if they want to be
ecologically or financially motivated, changing how the system interprets their
routine and what tips it offers.
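The spatial mapping just described can be sketched as a pair of pure functions, assuming token positions normalized to [0, 1] and the 7:00-23:00 day range used by the implementation (Section 5.4); the names and tolerance below are illustrative.

```java
// Sketch of the tabletop's spatial semantics, under two assumptions:
// positions are normalized to [0, 1], and a day spans 7:00-23:00.
public class RoutineLayout {
    static final double DAY_START = 7.0;   // leftmost edge = 7:00 (morning)
    static final double DAY_END = 23.0;    // rightmost edge = 23:00 (night)

    // A token's horizontal position encodes its time of day.
    static double xToHour(double x) {
        return DAY_START + x * (DAY_END - DAY_START);
    }

    // Vertically aligned tokens (similar x) represent concurrent activities.
    static boolean concurrent(double xA, double xB, double tolerance) {
        return Math.abs(xA - xB) <= tolerance;
    }
}
```

This makes the mapping continuous: a token at the middle of the table, for instance, lands at 15:00.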
Eco Planner should successfully motivate users to lower costs or conserve
energy by allowing them to:
o Set goals for their household as they commit to new routines. Goals serve,
first, a directive function: they direct attention and effort toward goal-relevant
activities; second, goals have an energizing function and, in particular, high
goals often lead to greater effort than low goals; third, goals affect
persistence; and finally, goals affect behavior indirectly as individuals use,
apply, and/or learn strategies or knowledge to best accomplish the goal at
hand [FFL10].
o Get nominal rewards (as green points), which the users can compare with
other households. Users usually respond to rewards even if they are
nominal in nature (e.g. an acknowledgement of positive behavior)
[FFL10].
5.2.1 THE TRANS-‐THEORETICAL MODEL
The Trans-theoretical Model (TTM) states that intentional behavior change is
a process occurring in a series of stages, rather than a single event. Motivation
is required for the focus, effort and energy needed to move through the stages
[HGH10].
Figure 51 Mind map for the Eco Planner TUI. For an image with higher resolution, please refer to
Annex Q.
5.2.3 SCENARIO
Task: Getting to know and use the Eco Planner
Persona: John and his wife Susan.
It's a Saturday morning and John is browsing through his friends' posts on
Facebook. He notices that a couple of his close friends have some iconic
updates related to an application called Eco Planner. His friend Jonathan
recently shared that he'll start doing his laundry only with cold water.
Another of his friends, Quentin, committed to using the bus to get to work. All
these updates are accompanied by how many Green Points his friends
have earned since they started using Eco Planner.
Being aware that both he and his wife don't worry about saving energy, and
knowing about the ever-growing problem of sustainability, John reads up on the
Eco Planner application to see if there are any disadvantages to using it. Eco
Planner is composed of an interactive table and a series of tokens. John is
informed that he can request the Eco Planner for 3 months without charge,
and then decide if he wants to keep it. John calls Susan upstairs to show her
what he found. After explaining to her what Eco Planner is, they decide to
request one from the Facebook page.
Two weeks later, Eco Planner arrives at John and Susan's house. Susan is at
work in the city center, where she is a high-school English teacher. John is
working from home today; being a college professor gives him some flexibility
in his schedule. Curious about the new technology, John takes a break from
his work to set everything up. He's surprised Eco Planner has only one wire,
the power cord. Since it is slick and good-looking, he sets it up in the living room
without even reading the manual first; he feels confident around computers
and gizmos. After turning it on for the first time, he's asked to configure the
wireless settings and enter his Facebook account details.
John then begins to remove the tokens from the plastic bag they came
wrapped in. He notices an area of the Eco Planner screen that represents the
different areas of the house, each with a different color. John places the tokens
that represent the different activities that happen in his house in the different
areas of the screen (e.g. the token that represents "watching TV" in the green
area, which represents the living room).
John decides to read the manual and continue working until his wife
arrives. A couple of hours later, Susan arrives from work. She
immediately notices the slick Eco Planner table in her living room, calling to
John as she walks towards it. John tells his wife that he has everything set
up, while showing her the tokens he placed on the table, and explains to her
the meaning of the tokens that came with the Eco Planner. They start by
defining their actual routine. Susan gets ahead, placing the laundry token on the
right side of the screen. John understands that she means that their routine
includes washing their clothes later in the day, and checks to see if the time
displayed is correct.
When they finish adjusting the token, they place the laundry token on the
"Change activity options" area of the screen, choosing which options they use
when they do their laundry. John and Susan debate these options
for the first time, becoming aware of how they usually wash their clothes. Susan
picks the options "Hot water" and "Half tank". As they place the tokens on
the main part of the screen, they notice tips have appeared indicating greener
options for doing the laundry. On the right side of the tokens there are also
iconic representations of the options John and Susan chose. John and Susan
continue to set every activity for a while, and after they are finished, John taps
on the commit icon. They are then notified that a Facebook update was
posted on their Walls, saying they have committed to their first routine.
Since both John and Susan have been exposed to tips as they built their
routine, they have some small changes they can commit to without causing
them too much hassle in their lives. John starts by committing to turn the TV
off instead of setting it to stand-by. Susan commits to changing the fridge
settings to a less cool temperature. By incrementally changing these options on
the Eco Planner, a plus icon appears at the end of the routine, indicating that
they are taking greener steps. They are again notified that a Facebook update
was posted on their Walls with this information. John and Susan go to fix
dinner after turning the Eco Planner off.
5.3 DESIGN
Table 4 Part of the TAC paradigm for the Eco Planner TUI.
Finally, the interaction diagram allows for the design of the system's
response in both the physical and digital worlds. For the Eco Planner, as users
add/remove tokens to/from the tabletop surface, or attach/detach tokens together,
the system augments the tokens (or groups of tokens) in real time by updating
the screen with digital information (see Figure 57).
5.3.2.1 The TAC palette
5.3.2.3 INTERACTION DIAGRAM
Figure 54 Tokens used for the Eco Planner TUI. They can be vertically attached to one another, and
it's possible to add time pieces in the front.
5.3.1 WIREFRAMES
The design process for the Eco Planner started with iterations over a series of
wireframes that followed the ideas set out in the concept phase. The
next set of images represents an example of interaction in which the user has five
tokens on the tabletop surface, each representing activities such as commuting
to work or watching TV (see Figure 55). As the user places the token that
represents "doing the laundry" on the options area, he can then choose
between different washing options (through multi-touch controls; see Figure
56). Depending on the options chosen, the system provides feedback
informing the user whether the new routine is better or worse than the previous one,
and allowing him to commit to the changes made (see Figure 57).
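The better-or-worse feedback step reduces to comparing an estimate of the edited routine's consumption against the currently committed one. A minimal sketch, assuming a single kWh estimate per routine; the names and the unit are hypothetical, and the real evaluation logic is in the Annex S code.

```java
// Minimal sketch of the routine feedback step: compare the edited routine's
// estimated consumption against the committed baseline and report whether the
// change is an improvement. The kWh figures are illustrative assumptions.
class RoutineFeedback {
    static String compare(double committedKwh, double editedKwh) {
        if (editedKwh < committedKwh) return "better";    // greener routine
        if (editedKwh > committedKwh) return "worse";
        return "unchanged";
    }
}
```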
Figure 55 The wireframe #1 for the Eco Planner TUI. Five activity tokens are being used. For an
image with higher resolution, please refer to Annex R.
Figure 56 The wireframe #2 for the Eco Planner TUI. One activity token is in the options area. For
an image with higher resolution, please refer to Annex R.
Figure 57 The wireframe #3 for the Eco Planner TUI. The user can commit to the changes made. For an
image with higher resolution, please refer to Annex R.
Figure 58 The wireframe #4 for the Eco Planner TUI. The user has committed to a new routine. For an
image with higher resolution, please refer to Annex R.
5.4 IMPLEMENTATION
Eco Planner was implemented according to the initial concept, but due to
hardware limitations the application was tested under a slightly different set of
conditions. While the application is perfectly capable of augmenting tokens on
the surface of a tabletop, since no multi-touch surface was available during this
stage of development, tokens were tracked from a top perspective (using
fiducial markers; see Figure 59). This did not allow the tokens to physically
represent activities, since they could not have a meaningful shape or image
attached to the top of their surface (see Figure 54).
Regardless of the testing conditions, the application is capable of detecting
attached tokens (representing a common time for all of them) and supports
routines from 7 a.m. to 11 p.m. (see Figure 60). It was built using the Processing
environment and reacTIVision technology [JGA+07]. The application code is
available in Annex S, and the XML schemas in Annex T.
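The two behaviours mentioned above, a stack of attached tokens sharing one common time, and routines bounded to the supported 7:00–23:00 range, can be sketched as follows. All class and method names are hypothetical; the actual implementation is in Annex S.

```java
import java.util.*;

// Sketch of two prototype behaviours: a stack of attached tokens shares a
// single time, and routine hours are clamped to the supported 07:00-23:00
// range. Names are illustrative, not taken from the Annex S code.
class TokenStack {
    static final int FIRST_HOUR = 7, LAST_HOUR = 23;
    private final List<Integer> tokenIds = new ArrayList<>();
    private int hour = FIRST_HOUR;

    void attach(int tokenId) { tokenIds.add(tokenId); }

    void setHour(int h) {                // clamp into the supported range
        hour = Math.max(FIRST_HOUR, Math.min(LAST_HOUR, h));
    }

    int hourOf(int tokenId) {            // every attached token reports the shared time
        return tokenIds.contains(tokenId) ? hour : -1;
    }
}
```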
Figure 59 Photo taken during the testing phase of the Eco Planner prototype. The tokens are being
scanned from a top perspective using reacTIVision technology [JGA+07].
o FullScreen – a Processing library that allows for better full screen support
[URL13].
o TUIO – a client API for tangible multi-touch surfaces. The TUIO
protocol transmits an abstract description of interactive
surfaces, including touch events and tangible object states. It
encodes control data from a tracker application (e.g. one based on computer
vision) and sends it to any client application capable of decoding
the protocol [URL21].
o g4p – a Processing library that provides a set of 2D GUI components for
simple user input at runtime, usable in all 2D and 3D
graphics modes of the Processing sketching environment [URL22].
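As a rough illustration of the object state the TUIO protocol carries, the sketch below decodes the gist of a TUIO 1.1 "/tuio/2Dobj" set message: session id, fiducial symbol id, normalised position, and angle. Real TUIO clients receive this data as OSC packets through the client API rather than as plain strings; the string form here is a simplification for illustration only.

```java
// Simplified decoder for the content of a TUIO 1.1 "/tuio/2Dobj set" message:
// session id, fiducial symbol id, normalised x/y in [0,1], angle in radians.
// Real clients decode OSC packets via the TUIO client API; this plain-string
// form only shows which fields describe a tangible object's state.
class TuioObjectState {
    final long sessionId;
    final int symbolId;
    final float x, y, angle;

    TuioObjectState(long s, int i, float x, float y, float a) {
        this.sessionId = s; this.symbolId = i;
        this.x = x; this.y = y; this.angle = a;
    }

    static TuioObjectState decode(String msg) {
        String[] p = msg.trim().split("\\s+");
        if (!p[0].equals("/tuio/2Dobj") || !p[1].equals("set"))
            throw new IllegalArgumentException("not a 2Dobj set message");
        return new TuioObjectState(
                Long.parseLong(p[2]), Integer.parseInt(p[3]),
                Float.parseFloat(p[4]), Float.parseFloat(p[5]),
                Float.parseFloat(p[6]));
    }
}
```

In the Eco Planner, each decoded symbol id identifies one fiducial-marked token, and the x/y pair drives where its digital augmentation is drawn.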
Figure 60 Screenshot from a test of the Eco Planner TUI. The tokens are being tracked on a surface,
while the interface is simulated on a monitor (with tokens shown as pictures of activities). For an image
with higher resolution, please refer to Annex U.
6. DISCUSSION AND CONCLUSION
In the last ten years TUIs have become an established area of HCI research, as
attested by the growing body of work and the number of venues dedicated to
TUI research. Seeking to provide seamless interfaces between people, digital
information, and the physical environment, TUIs have been shown to have an
enormous potential to enhance the way in which people interact with and
leverage digital information. However, TUI research is still in its infancy. The
next step is for TUI research to better understand the
implications of interlinking the physical and digital worlds, to design Tangible
Interaction techniques for complex real-world application domains, and to
develop technologies that bridge the digital and physical worlds [SH10].
6.1 CONTRIBUTIONS
Most Tangible Interface prototypes are either constrained to operate in
specific physical environments (such as augmented rooms [e.g. BGS+09]) or
surfaces (such as multi-touch tabletops [e.g. JGA+07]), resulting in TUIs confined to
labs or specific work environments, or are composed of tangible objects that
are only aware of one another (such as the Siftables [MKM07]), which focus
on play applications. The first contribution of this dissertation is addressing this
absence of TUIs that operate in real-world scenarios, tackling the common
problems of everyday users.
It starts by presenting Mementos, a TUI for tourism and travel. By
supporting the three major activities of a trip, travelling (by offering real-time
information for navigation aid), in-situ planning (by offering collaborative plan
creation and revision), and the handling of photos and videos from a trip (by
allowing media sharing and revisiting), Mementos offers an interface capable
of adapting to different contexts and overcoming most of the problems of
current information systems for tourist support, while still maintaining the
benefits expected of a TUI: physicality, seamless integration with the
environment, and support for memory, epistemic actions and
collaboration. Additionally, Mementos combines two important TUI research
areas: Ambient Displays [IU97] and Tangible Tabletop Interaction. Mementos
is presented as a proof-of-concept prototype, showing other researchers a
successful example of a TUI that can be useful to the typical user in a typical
task. This dissertation documents the development process of such a TUI: how
it was designed with both HCI and TUI-specific methods, how it combined
existing technologies and made them work together across different scenarios,
and what results were obtained from a user study conducted on part of the
interface. Ultimately, it presents a TUI that has shifted from an
information-centric to a more action-centric perspective.
Similarly, this dissertation also documents the development of the first TUI
dedicated to sustainability. Eco Planner allows users to create and revise
household routines while being aware of their impact on the environment.
Motivated by this dissertation's initial goal of developing and documenting
TUIs that could fit the needs of a typical user in his day-to-day activities, the
design and development of the Eco Planner application combined the areas of
Tangible Tabletop Interaction and Tangible Augmented Reality [KBP+01,
LNB+04, ZCC+04] to develop an interface that could be useful and
interaction-rich, leveraging all the benefits of TUIs. By using the Trans-
theoretical Model (TTM) as a guide while brainstorming and designing the
system's functionality, Eco Planner aims to motivate users through the various
stages of behavior change. Ultimately, this prototype's documentation aims at
informing other TUI researchers of the possibilities of expanding TUIs to the
home environment, while tackling the difficult subject of creating interfaces
that motivate the sustainable behavior of their users.
In both the cases of Mementos and Eco Planner, tangible objects were
designed so that users could use them in their daily lives independently of the
system (e.g. tokens acting as souvenirs in the case of Mementos, or the token that
represents "doing the laundry" being placed on the washing machine to
remind the user of his commitments in the case of the Eco Planner).
The second contribution of this dissertation is opening the discussion of
embedding the theory of Embodied Cognition in a new concrete framework
or set of guidelines for TUI design. While the frameworks and methods
presented in this work are of great value to TUI developers, they mainly
focus on how a TUI can be constructed, modeled or categorized. They fall
short of addressing the biggest difference between TUIs and GUIs: their
physicality. This results in TUIs that are not specifically built to maximize
usability or to reflect aspects of human cognition.
Embodied Cognition is a perspective in cognitive science that assigns the body
a central role in how the mind works. It understands human cognition
as being neither centralized nor abstract; it is instead rooted in sensorimotor
processing [W02]. It further suggests that cognition exploits repeated
interaction with the environment, creating structures which advance and
simplify cognitive tasks [A03]. Embodied Cognition theory also argues that
traditional accounts of memory focus too much on the passive storage of
information, defending the idea that patterns stored in memory reflect the
nature of bodily actions and their ability to mesh with situations during a task
[G97]. The way the brain offloads some of its cognitive duties onto the
environment, instead of doing all of the computational work on its own, is
often called "scaffolding" [C97]. A particular kind of "scaffolding" is
known as epistemic actions, which are actions aimed at altering the world so
as to aid and augment cognitive processes instead of acting directly toward
the goal of a task [CC98].
It is clear that most of the foundations for TUIs are rooted in the theories
of Embodied Cognition. An example of work that inspired the beginning of
TUI research is that of Donald Norman [N88], which introduced the
notion of affordances: properties of an object that invite and allow
specific actions. Surely, the power of TUIs lies in providing both real and
perceived affordances. Even so, there is no way of guiding the design of
tangible objects (e.g. size, shape, material) in order to create the correct
affordances for their application. In a different example, while the sense of
touch is our primal and only non-distal sense [SH10], there is no clear guide
on how to employ specific vibrotactile feedback on tangible objects for
specific purposes. Likewise, while it is accepted that using two hands can
impact performance at the cognitive level, since it changes how users think about
a task [SH10], there is still no formal way of defining appropriate two-handed
Tangible Interactions.
6.2 FUTURE WORK
o Develop a series of prototypes in various domains, such as
problem solving and spatial planning, information visualization,
tangible reminders, and entertainment and play (see Section 2.4).
o Conduct long-term user studies (which are lacking for TUIs in general [SH10])
on both the prototypes to be developed and on Mementos and Eco
Planner.
7. REFERENCES
[A03] Anderson, M. L. "Embodied cognition: a field guide". Artif. Intell.
149, 1 (Sep. 2003), 91-130.
[A07] A. N. Antle, "The CTI framework: Informing the design of tangible
systems for children", in Proceedings of TEI '07, pp. –202, NY: ACM, 2007.
[A79] R. Aish, "3D input for CAAD systems", Computer Aided Design, vol.
11, no. 2, pp. 66–70, 1979.
[A99] R. Abrams, "Adventures in tangible computing: The work of interaction
designer 'Durrell Bishop' in context", Master's thesis, Royal College of Art,
London, 1999.
[AKY00] M. W. Alibali, S. Kita, and A. Young, "Gesture and the process of
speech production: We think, therefore we gesture", Language and Cognitive
Processes, vol. 15, pp. 593–613, 2000.
[AN84] R. Aish and P. Noakes, "Architecture without numbers", Computer
Aided Design, vol. 16, no. 6, pp. 321–328, 1984.
[B07] B. Buxton, "Sketching User Experiences: Getting the Design Right and
the Right Design", Morgan Kaufmann Publishers Inc., 2007.
[B08] Barsalou, L. W. "Grounded cognition". Annu. Rev. Psychol. 59. In
press.
[B] M. Banzi, "Getting Started with Arduino", O'Reilly.
[B86] Brooks, R. A. "A robust layered control system for a mobile
robot". IEEE Journal of Robotics and Automation, vol. 2, no. 1, pp. 14-23, 1986.
[BB96] W. Bruns and V. Brauer, "Bridging the gap between real and virtual
modeling – A new approach to human-computer interaction", in Proceedings
of the IFIP 5.10 Workshop on Virtual Prototyping, Providence, September
1994, IFIP, 1996.
[BCL+05] Borriello, G., Chalmers, M., LaMarca, A., and Nixon, P. 2005.
"Delivering real-world ubiquitous location systems". Commun. ACM 48,
36-41.
[BG07] Boussemart, B. and Giroux, S. "Tangible User Interfaces for
Cognitive Assistance". In Proceedings of the 21st International Conference on
Advanced Information Networking and Applications Workshops - Volume 02
(May 21-23, 2007).
[BGS+09] Billinghurst, M., Grasset, R., Seichter, H., and Dünser, A. 2009.
"Towards Ambient Augmented Reality with Tangible Interfaces". In 13th
International Conference on Human-Computer Interaction, vol. 5612, 387-396.
[BLS05] Bonanni, L., Lee, C., and Selker, T. "Attention-based design of
augmented reality interfaces". In CHI '05 Extended Abstracts on Human
Factors in Computing Systems. ACM, New York, –1231.
[BJD04] J. Buur, M. V. Jensen, and T. Djajadiningrat, "Hands-only scenarios
and video action walls: Novel methods for tangible user interaction design", in
Proceedings of DIS04, pp. 185–192, NY: ACM, 2004.
[BKP01] M. Billinghurst, H. Kato, and I. Poupyrev, "The MagicBook –
Moving seamlessly between reality and virtuality", IEEE Computer Graphics
and Applications, pp. 1–4, May/June 2001.
[BRS03] R. Ballagas, M. Ringel, M. Stone, and J. Borchers, "iStuff: A physical
user interface toolkit for ubiquitous computing environments", in Proceedings
of CHI '03, pp. –544, NY: ACM, 2003.
[BWD07] J. Brewer, A. Williams, and P. Dourish, "A handle on what's going
on: Combining tangible interfaces and ambient displays for collaborative
groups", in Proceedings of TEI 07, pp. –10, NY: ACM.
[C01] Clark, Andy. "Reasons, Robots and the Extended Mind". Mind &
Language 16(2): 121-145, 2001.
[C93] A. Cypher, ed., "Watch What I Do: Programming by Demonstration",
The MIT Press, 1993.
[C97] Clark, Andy. "Being There". Cambridge, MA: MIT Press, 1997.
[CC98] Clark, A., Chalmers, D. "The extended mind". Analysis
58(1): 7–19, 1998.
[CL05] Chiu, D. K. and Leung, H. "Towards ubiquitous tourist service
coordination and integration". In Proc. of the 7th International Conference on
Electronic Commerce, ICEC '05, vol. 113, 574-581.
[CLS06] H. Chung, C. J. Lee, and T. Selker, "Lover's cups: Drinking interfaces
as new communication channels", in Proceedings of CHI '06, NY: ACM, 2006.
[CPP+05] Chipchase, J., Persson, P., Piippo, P., Aarras, M., and Yamamoto, T.
2005. "Mobile essentials: field study and concepting". In Proceedings of the
2005 Conference on Designing for User Experience (San Francisco, California,
November 3-5, 2005).
[CRR08] N. Couture, G. Rivière, and P. Reuter, "GeoTUI: A tangible user
interface for geoscience", in Proceedings of TEI 08, pp. –96, NY: ACM,
2008.
[CWP99] J. Cohen, M. Withgott, and P. Piernot, "Logjam: A tangible multi-
person interface for video logging", in Proceedings of CHI 99, pp. –135,
NY: ACM, 1999.
[D01] P. Dourish, "Where the Action Is: The Foundations of Embodied
Interaction", MIT Press, 2001.
[DH09] Dror, I. E., Harnad, S. (in press) "Offloading cognition onto cognitive
technology". In Cognition Distributed: How Cognitive Technology Extends Our
Minds. John Benjamins, Amsterdam, 2009.
[EB09] D. Edge and A. Blackwell, "Peripheral tangible interaction by analytic
design", in Proceedings of TEI '09, pp. –76, NY: ACM.
[F96] G. W. Fitzmaurice, "Graspable User Interfaces", Dissertation, Computer
Science, University of Toronto, Canada, 1996.
[FB97] G. W. Fitzmaurice and W. Buxton, "An empirical evaluation of
graspable user interfaces: Towards specialized, space-multiplexed input", in
Proceedings of CHI 97, pp. 43–50, NY: ACM, 1997.
[FF80] J. Frazer and P. Frazer, "Intelligent physical three-dimensional
modelling systems", in Proceedings of Computer Graphics '80, pp. –370,
Online Publications, 1980.
[FFL10] Froehlich, J., Findlater, L., and Landay, J. "The design of eco-
feedback technology". In Proceedings of the 28th International Conference on
Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10-15,
2010), CHI '10. ACM, New York, NY, 1999-2008.
[FIB95] G. W. Fitzmaurice, H. Ishii, and W. Buxton, "Bricks: Laying the
foundations for graspable user interfaces", in Proceedings of CHI 95, pp. 442–
449, NY: ACM, 1995.
[FKY05] L. Feijs, S. Kyffin, and B. Young, "Proceedings of Design and
Semantics of Form and Movement – DesForM 2005". Foreword. Koninklijke
Philips Electronics N.V., Eindhoven, 3, 2005.
[FMI00] P. Frei, V. Su, B. Mikhak, and H. Ishii, "Curlybot: Designing a New
Class of Computational Toys", in Proceedings of CHI 2000, pp. –136, NY:
ACM, 2000.
[FT06] Y. Fernaeus and J. Tholander, "Finding design qualities in a tangible
programming space", in Proceedings of CHI 06, pp. –456, NY: ACM, 2006.
[FTJ08] Y. Fernaeus, J. Tholander, and M. Jonsson, "Beyond representations:
Towards an action-centric perspective on tangible interaction", International
Journal of Arts and Technology, vol. 1, no. 3/4, pp. 249–267, 2008.
[FTJ08b] Y. Fernaeus, J. Tholander, and M. Jonsson, "Towards a new set of
ideals: Consequences of the practice turn in tangible interaction", in
Proceedings of TEI '08, pp. –230, NY: ACM, 2008.
[G03] S. Goldin-Meadow, "Hearing Gesture: How Our Hands Help Us Think",
Harvard University Press, 2003.
[G10] Garzón, F. "The reactive brain and the extended mind: A fourth
position". Unpublished manuscript, available online (July 2010).
https://ftp.um.es/logica/paco_calvo/ReactiveBrainExtendedMind.pdf
[G97] Glenberg, A. "What memory is for". Behavioral and Brain
Sciences (1997) 20:1.
[GC97] Glos, J. W. and Cassell, J. "Rosebud: a place for interaction
between memory, story, and self". In Proceedings of the 2nd International
Conference on Cognitive Technology (CT '97) (August 25-28, 1997).
[GF01] S. Greenberg and C. Fitchett, "Phidgets: Easy development of physical
interfaces through physical widgets", in Proceedings of UIST '01, pp. –218,
NY: ACM, 2001.
[GPS+04] Garzotto, F., Paolini, P., Speroni, M., Proll, B., Retschitzegger, W.,
and Schwinger, W. "Ubiquitous Access to Cultural Tourism Portals". In
Proc. of Database and Expert Systems Applications. IEEE.
[HB06] Hornecker, E. and Buur, J. "Getting a grip on tangible
interaction: a framework on physical space and social interaction". In Proc. of
CHI '06.
[HE04] E. van den Hoven and B. Eggen, "Tangible computing in everyday life:
Extending current frameworks for tangible user interfaces with personal
objects", in Proceedings of EUSAI 2004, pp. –242, Springer, LNCS 3295,
2004.
[HGH10] He, H. A., Greenberg, S., and Huang, E. M. "One size does
not fit all: applying the transtheoretical model to energy feedback technology
design". In Proceedings of the 28th International Conference on Human
Factors in Computing Systems (Atlanta, Georgia, USA, April 10-15, 2010).
[HHK00] Hollan, J., Hutchins, E., and Kirsh, D. "Distributed cognition:
toward a new foundation for human-computer interaction research". ACM
Trans. Comput.-Hum. Interact. 7, 2 (Jun. 2000), 174-196.
[HHO09] B. Hengeveld, C. Hummels, and K. Overbeeke, "Tangibles for
toddlers learning language", in Proceedings of TEI 09, pp. –168, NY: ACM,
2009.
[HI07] J. Hurtienne and J. H. Israel, "Image schemas and their metaphorical
extensions: Intuitive patterns for tangible interaction", in Proceedings of
TEI07, pp. 127–134, NY: ACM, 2007.
[HIW08] J. Hurtienne, J. H. Israel, and K. Weber, "Cooking up real world
business applications: combining physicality, digitality, and image schemas", in
Proceedings of TEI '08, pp. –246, NY: ACM, 2008.
[HMD+08] Hornecker, E., Marshall, P., Dalton, N. S., and Rogers, Y. 2008.
"Collaboration and interference: awareness with mice or touch input". In Proc.
of the ACM 2008 Conference on Computer Supported Cooperative Work,
CSCW '08.
[HRL99] L. E. Holmquist, J. Redström, and P. Ljungstrand, "Token-based
access to digital information", in Proceedings of the 1st International
Symposium on Handheld and Ubiquitous Computing (H. Gellersen, ed.), pp.
234–245, Lecture Notes in Computer Science, vol. 1707, London: Springer-
Verlag, 1999.
[HSJ08] M. S. Horn, E. T. Solovey, and R. J. K. Jacob, "Tangible programming
for informal science learning: Making TUIs work for museums", in
Proceedings of the 7th International Conference on Interaction Design and
Children, IDC '08, pp. –201, NY: ACM, 2008.
[HYG03] C. J. Huang, E. Yi-Luen Do, and M. D. Gross, "MouseHaus table",
in Proceedings of CAAD Futures, 2003.
[I08] Ishii, H. "Tangible bits: beyond pixels". In Proc. of the 2nd
International Conference on Tangible and Embedded Interaction, TEI '08.
[I08b] H. Ishii, "The tangible user interface and its evolution",
Communications of the ACM, vol. 51, no. 6, pp. 32–36, 2008.
[IML01] Ishii, H., Mazalek, A., and Lee, J. "Bottles as a minimal
interface to access digital information". In CHI '01 Extended Abstracts on
Human Factors in Computing Systems (Seattle, Washington, March 31 - April
5, 2001). ACM, New York, NY, 187-188.
[IU97] H. Ishii and B. Ullmer, "Tangible bits: Towards seamless interfaces
between people, bits and atoms", in Proceedings of CHI 97, pp. 234–241, NY:
ACM, 1997.
[J08] S. Jordà, "On stage: The reacTable and other musical tangibles go real",
International Journal of Arts and Technology (IJART), vol. 1, no. 3/4, pp.
268–287, Special Issue on Tangible and Embedded Interaction, 2008.
[JGA+07] S. Jordà, G. Geiger, M. Alonso, and M. Kaltenbrunner, "The
reacTable: Exploring the synergy between live music performance and tabletop
tangible interfaces", in Proceedings of TEI '07, pp. 139–146, NY: ACM, 2007.
[JGH+08] Jacob, R. J., Girouard, A., Hirshfield, L. M., Horn, M. S., Shaer, O.,
Solovey, E. T., and Zigelbaum, J. "Reality-based interaction: a framework
for post-WIMP interfaces". In Proc. of the Twenty-Sixth Annual SIGCHI
Conference on Human Factors in Computing Systems, CHI '08.
[JIP02] R. J. K. Jacob, H. Ishii, G. Pangaro, and J. Patten, "A tangible interface
for organizing information using a grid", in Proceedings of CHI '02, pp. –
346, NY: ACM, 2002.
[K10] M. Kaltenbrunner, "Website on Tangible Music". Read April 2010.
http://modin.yuri.at/tangibles/
[KB00] H. Kato, M. Billinghurst, et al., "Virtual object manipulation on a
table-top AR environment", in Proceedings of the International Symposium on
Augmented Reality, ISAR 2000, pp. 111–119, 2000.
[KB08] J. J. Kalanithi and V. M. Bove, "Connectibles: Tangible social
networks", in Proceedings of TEI 08, pp. –206, NY: ACM, 2008.
[KB99] H. Kato and M. Billinghurst, "Marker tracking and HMD calibration
for a video-based augmented reality conferencing system", in Proceedings of
the 2nd International Workshop on Augmented Reality (IWAR 99), 1999.
[KHN+03] K. Kobayashi, M. Hirano, A. Narita, and H. Ishii, "A tangible
interface for IP network simulation", in Proceedings of CHI '03 Extended
Abstracts, pp. 800–801, NY: ACM, 2003.
[KHT06] Klemmer, S. R., Hartmann, B., and Takayama, L. "How bodies
matter: five themes for interaction design". In Proc. of the 6th Conference on
Designing Interactive Systems, DIS '06.
[KLL+04] S. R. Klemmer, J. Li, J. Lin, and J. A. Landay, "Papier-Mâché:
Toolkit support for tangible input", in Proceedings of CHI 04, pp. –406,
NY: ACM, 2004.
[KM08] M. J. Kim and M. L. Maher, "The impact of tangible user interfaces on
designers' spatial cognition", Human-Computer Interaction, vol. 23, no. 2,
2008.
[KM94] D. Kirsh and P. Maglio, "On distinguishing epistemic from pragmatic
actions", Cognitive Science, vol. 18, pp. 513–549, 1994.
[KST+09] D. S. Kirk, A. Sellen, S. Taylor, N. Villar, and S. Izadi, "Putting the
physical into the digital: Issues in designing hybrid interactive surfaces", in
Proceedings of HCI 2009, 2009.
[LJ80] Lakoff, G., Johnson, M. "The metaphorical structure of the human
conceptual system". Cognitive Science, 4, pp. 195-208, 1980.
[LNB+04] G. A. Lee, C. Nelles, M. Billinghurst, and G. J. Kim, "Immersive
authoring of tangible augmented reality applications", in Proceedings of the
IEEE/ACM International Symposium on Mixed and Augmented Reality, pp.
172–181, IEEE Computer Society, 2004.
[M04] T. S. McNerney, "From turtles to tangible programming bricks:
Explorations in physical language design", Pers. Ubiquit. Comput., vol. 8, pp.
326–337, 2004.
[M] B. J. Miller, "Cognition and the City".
[M07] Marshall, P. "Do tangible interfaces enhance learning?" In Proc.
of the 1st International Conference on Tangible and Embedded Interaction,
TEI '07.
[MDM+04] Matthews, T., Dey, A. K., Mankoff, J., Carter, S., and Rattenbury,
T. "A toolkit for managing user attention in peripheral displays". In
UIST '04. ACM, New York, NY, 247-256.
[MKM07] Merrill, D., Kalanithi, J., and Maes, P. "Siftables: towards
sensor network user interfaces". In TEI '07.
[MPR03] Marshall, P., Price, S., and Rogers, Y. "Conceptualizing
tangibles to support learning". In IDC '03. ACM, New York, NY, 101-109.
[MRG+07] E. Mugellini, E. Rubegni, S. Gerardi, and O. A. Khaled, "Using
personal objects as tangible interfaces for memory recollection and sharing", in
Proceedings of TEI '07, pp. –238, NY: ACM, 2007.
[N88] D. Norman, "The Psychology of Everyday Things", New York: Basic
Books, 1988.
[OF04] C. O'Malley and D. Stanton Fraser, "Literature review in learning with
tangible technologies", NESTA Futurelab report 12, Bristol, 2004.
[P67] Polanyi, M. "The Tacit Dimension". London: Routledge & Kegan Paul
Ltd., 108 pp., 1967.
[P76] R. Perlman, "Using Computer Technology to Provide a Creative Learning
Environment for Preschool Children", MIT Logo Memo, 1976.
[P95] R. Poynor, "The hand that rocks the cradle", I.D. Magazine, pp. –65,
May/June 1995.
[PFD+95] Parsons, L. M., Fox, P. T., Downs, J. H., Glass, T., Hirsch, T. B.,
Martin, C. C., Jerabek, P. A., Lancaster, J. L. (1995). "Use of implicit motor
imagery for visual shape discrimination as revealed by PET". Nature 375, 54-
58.
[PGR10] Patel, S. N., Gupta, S., and Reynolds, M. S. "The design and
evaluation of an end-user-deployable, whole house, contactless power
consumption sensor". In Proceedings of the 28th International Conference on
Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10-15,
2010), CHI '10. ACM, New York, NY, 2471-2480.
[PH09] E. W. Pedersen and K. Hornbæk, "mixiTUI: A tangible sequencer for
electronic live performances", in Proceedings of TEI '09, pp. –230, NY:
ACM, 2009.
[PHB01] Paradiso, J. A., Hsiao, K., and Benbasat, A. "Tangible music
interfaces using passive magnetic tags". In Proc. of NIME '01.
[PI00] J. Patten and H. Ishii, "A comparison of spatial organization strategies
in graphical and tangible user interfaces", in Proceedings of Designing
Augmented Reality Environments, DARE '00, pp. –50, NY: ACM, 2000.
[PI07] J. Patten and H. Ishii, "Mechanical constraints as computational
constraints in tabletop tangible interfaces", in Proceedings of CHI '07, pp. –
818, NY: ACM, 2007.
[PMR02] Poupyrev, I., Maruyama, S., and Rekimoto, J. "Ambient touch".
In UIST '02. ACM, NY, 51-60.
[PNO07] I. Poupyrev, T. Nashida, and M. Okabe, "Actuation and tangible user
interfaces: The Vaucanson duck, robots, and shape displays", in Proceedings of
Tangible and Embedded Interaction, TEI '07, pp. –212, NY: ACM, 2007.
[PRI02] J. Patten, B. Recht, and H. Ishii, "Audiopad: A tag-based interface for
musical performance", in Proceedings of the International Conference on New
Interfaces for Musical Expression, NIME 02, pp. 24–26, 2002.
[PSP10] Pierce, J., Schiano, D., Paulos, E. "Home, Habits, and Energy:
Examining Domestic Interactions and Energy Consumption".
[R06] Rogers, Y. "Moving on from Weiser's Vision of Calm Computing:
Engaging UbiComp Experiences". In Proc. Ubicomp 2006, pp. –421.
Springer, Heidelberg (2006).
[R93] M. Resnick, "Behavior construction kits", Communications of the ACM,
vol. 36, no. 7, pp. 64–71, July 1993.
[RC99] K. Ryokai and J. Cassell, "StoryMat: A play space for collaborative
storytelling", in Proceedings of CHI '99, pp. –273, NY: ACM, 1999.
[RDM10] Riche, Y., Dodge, J., and Metoyer, R. A. 2010. "Studying always-on
electricity feedback in the home". In Proceedings of the 28th International
Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA,
April 10-15, 2010), CHI '10. ACM, New York, NY, 1995-1998.
[RFK+98] Rauterberg, M., Fjeld, M., Krueger, H., Bichsel, M., Leonhardt, U.,
Meier, M. "BUILD-IT: A planning tool for construction and design".
ACM CHI Conference, Extended Abstracts, pp. 177-178, 1998.
[RHB+04] Rogers, Y., Hazlewood, W., Blevis, E., and Lim, Y. "Finger
talk: collaborative decision-making using talk and fingertip interaction around
a tabletop display". In CHI '04 Extended Abstracts on Human Factors in
Computing Systems (Vienna, Austria, April 24-29, 2004).
[RJT] Raffle, H., Joachim, M. W., and Tichenor, J. "Super cilia skin: an
interactive membrane". In CHI Extended Abstracts. ACM, New York, NY, 808-809.
[RMI04] K. Ryokai, S. Marti, and H. Ishii, "I/O brush: Drawing with everyday
objects as ink", in Proceedings of CHI 2004, pp. 303–310, NY: ACM, 2004.
[RPI04] H. S. Raffle, A. J. Parkes, and H. Ishii, "Topobo: A constructive
assembly system with kinetic memory", in Proceedings of ACM CHI '04,
pp. 647–654, NY: ACM, 2004.
[S99] J. A. Seitz, "The bodily basis of thought", in Proceedings of the Annual
Symposium of the Jean Piaget Society, Mexico City, Mexico, 1999.
[S99b] Schwartz, D. L. "Physical imagery: kinematic vs. dynamic models".
Cogn. Psychol. 38: 433–64, 1999.
[SCB07] L. Sitorus, S. S. Cao, and J. Buur, "Tangible user interfaces for
configuration practices", in Proceedings of TEI 07, pp. –230, NY: ACM,
2007.
[SGH94] Soloway, E., Guzdial, M., and Hay, K. E. "Learner-centered
design: the challenge for HCI in the 21st century". Interactions (Apr.
1994), 36-48.
[SH10] Shaer, O. and Hornecker, E. "Tangible User Interfaces: Past,
Present, and Future Directions". Found. Trends Hum.-Comput. Interact. 3, 1–
2 (Jan. 2010), 1-137.
[SHS+99] A. Singer, D. Hindus, L. Stifelman, and S. White, "Tangible progress:
Less is more in Somewire audio spaces", in Proceedings of CHI '99, pp. –
111, NY: ACM, 1999.
[SLC+04] O. Shaer, N. Leland, E. H. Calvillo-Gamez, and R. J. K. Jacob, "The
TAC paradigm: Specifying tangible user interfaces", Personal and Ubiquitous
Computing, vol. 8, pp. 359–369, 2004.
[SK93] H. Suzuki and H. Kato, "AlgoBlock: A tangible programming language,
a tool for collaborative learning", in Proceedings of the 4th European Logo
Conference (Eurologo 93), pp. 297–303, Athens, Greece, 1993.
[SK95] H. Suzuki and H. Kato, "Interaction-level support for collaborative
learning: AlgoBlock – An open programming language", in Proceedings of
CSCL, 1995.
[SWK+04] Sharlin, E., Watson, B., Kitamura, Y., Kishino, F., and Itoh, Y.
"On tangible user interfaces, humans and spatiality". Personal
Ubiquitous Comput. 8, 5, 338-346.
[T07] S. Turkle, "Evocative Objects: Things We Think With", Cambridge, MA:
MIT Press, 2007.
[UI00] B. Ullmer and H. Ishii, "Emerging frameworks for tangible user
interfaces", IBM Systems Journal, vol. 39, no. 3–4, pp. 915–931, July 2000.
[UI98] J. Underkoffler and H. Ishii, "Illuminating light: An optical design tool
with a luminous-tangible interface", in Proceedings of CHI 98, pp. 542–549,
NY: ACM, 1998.
[UI99] J. Underkoffler and H. Ishii, "Urp: A luminous-tangible workbench for
urban planning and design", in Proceedings of CHI '99, pp. 386–393, NY:
ACM, 1999.
[UI99b] B. Ullmer and H. Ishii, "mediaBlocks: Tangible interfaces for online
media", in Proceedings of CHI '99, pp. –32, NY: ACM, 1999.
[UIJ05] B. Ullmer, H. Ishii, and R. Jacob, "Token+constraint systems for
tangible interaction with digital information", ACM Transactions on
Computer-Human Interaction, vol. 12, no. 1, pp. 81–118, 2005.
[URL1] http://tangible.media.mit.edu/
[URL2] http://www.picocricket.com/
[URL3] http://tek.sapo.pt/noticias/computadores/computadores_desaconselhados_a_criancas_com_m_1071229.html
[URL4] http://www.handyboard.com/
[URL5] http://www.handyboard.com/cricket/
[URL6] http://mindstorms.lego.com/
[URL7] http://www.phidgets.com/
[URL8] http://www.littlebits.cc
[URL9] http://www.processing.org
[URL11] http://www.extrapixel.ch/processing/bluetoothDesktop/
[URL12] http://www.touchatag.com/
[URL13] http://www.superduper.org/processing/fullscreen_api/
[URL15] http://www.libnfc.org/documentation/installation
[URL17] http://www.libnfc.org/
[URL18] http://www.flickr.com/
[URL19] http://flickrj.sourceforge.net/
[URL20] http://www.mysecondplace.org/?p=398
[URL21] http://www.tuio.org/
[URL22] http://www.lagers.org.uk/processing.html
[W02] Wilson, M. "Six views of embodied cognition". Psychonomic
Bulletin & Review, 9, 625–636, 2002.
[W93] M. Weiser, "Some computer science issues in ubiquitous computing",
Communications of the ACM, vol. 36, no. 7, pp. 74–84, 1993.
[W98] F. R. Wilson, "The Hand: How Its Use Shapes the Brain, Language,
and Human Culture", Vintage Books, Random House, 1998.
[WB95] Weiser, M. and Brown, J. S. "Designing Calm Technology".
http://www.ubiq.com/hypertext/weiser/calmtech/calmtech.htm, December
1995.
[WKS+08] T. Westeyn, J. Kientz, T. Starner, and G. Abowd, "Designing toys
with automatic play characterization for supporting the assessment of a child's
development", in Proceedings of IDC 08, pp. –92, NY: ACM, 2008.
[WMG93] P. Wellner, W. Mackay, and R. Gold, "Computer-augmented
environments: Back to the real world", Communications of the ACM, vol. 36,
no. 7, pp. 24–26, 1993.
[WP02] P. Wyeth and H. C. Purchase, "Tangible programming elements for
young children", in Proceedings of CHI '02 Extended Abstracts, pp. –775,
NY: ACM, 2002.
[WWH08] J. Werner, R. Wettach, and E. Hornecker, "United-pulse: Feeling
your partner's pulse", in Proceedings of MobileHCI 08, pp. –553, NY:
ACM, 2008.
[Z97] J. Zhang, "The nature of external representations in problem solving",
Cognitive Science, vol. 21, pp. 179–217, 1997.
[ZAR05] Zuckerman, O., Arida, S., and Resnick, M. "Extending tangible
interfaces for education: digital Montessori-inspired manipulatives". In Proc.
of the SIGCHI Conference on Human Factors in Computing Systems, CHI '05.
[ZCC+04] Z. Zhou, A. D. Cheok, T. Chan, J. H. Pan, and Y. Li, "Interactive
entertainment systems using tangible cubes", in Proceedings of IE 2004,
Australian Workshop on Interactive Entertainment, 2004.
[ZHS+07] J. Zigelbaum, M. Horn, O. Shaer, and R. Jacob, "The tangible video
editor: Collaborative video editing with active tokens", in Proceedings of TEI
'07, pp. 43–46, NY: ACM, 2007.
[ZN94] J. Zhang and D. A. Norman, "Representations in distributed cognitive
tasks", Cognitive Science, vol. 18, no. 1, pp. 87–122, 1994.