TEI'07, 15-17 Feb 2007, Baton Rouge, LA, USA Chapter 5 - CONTEXT DEPENDENCY AND PHYSICAL ADAPTABILITY
Memodules Approach
According to the concept of associative memory [18], objects can act as physical reminders of memories. Using those physical reminders it is then possible to define scenarios which materialize the user memories evoked by personal objects. Accordingly, the approach followed in the Memodules project to support the process of memory recall is based on three main phases (Figure 1): Phase 1 - Tangible object initialization, Phase 2 - Usage scenario description, and Phase 3 - Action materialization.

Figure 2 Memodules usage scenario
In order to support this type of interaction, and to enable people² to use Memodules for organizing, manipulating and consuming information and memories, it is necessary to provide users with an easy way to create, assemble and manage them. This means providing a user-oriented framework which allows configuring Memodules objects and describing their usage scenarios in a simple and intuitive way. Moreover, the framework has to support the easy reconfiguration of Memodules usage scenarios, since personal memories associated to objects and events are not "absolute" but rather change over time [2].

² In the rest of the paper the terms "user", "end user", "non-expert programmer" and "normal user" are used interchangeably to refer to any type of user, including people who do not have any PC experience.

PERSONAL OBJECTS AS TANGIBLE CUES TO MEMORIES

The innovative works of Wellner's DigitalDesk [14] and Ishii and Ullmer's Tangible Bits [7] pioneered the exploration of post-WIMP (Windows, Icons, Menus, Pointers) tangible interfaces. Ubiquitous computing applications have motivated these novel physical interfaces even more. However, tangible interfaces are difficult and time-consuming to build, requiring a specific technological expertise that limits their widespread adoption. Several research efforts have been made in recent years to develop tools for prototyping tangible user interfaces and designing ubiquitous computing architectures. Some examples of those projects are Smart-Its [10], Phidgets [11], iStuff [7] and iCAP [6]. These projects were mainly focused on providing rich prototyping toolkits or programming languages for developing tangible user interfaces and ubiquitous devices from a developer's point of view, instead of focusing on end users. Other examples of interesting projects, focusing more on supporting users who are not expert in programming to manage personal ubiquitous environments, are the Accord [3,4], Oxygen [8] and GAS [14] projects. All three projects have focused on giving end users the ability to construct or modify ubiquitous computing environments.

Our work has been inspired by the results achieved within the framework of those projects. However, some fundamental conceptual differences between those projects and our work exist: (1) our focus on personal objects and souvenirs as tangible interfaces instead of generic domestic objects and devices, and (2) our focus on memory recollection and sharing instead of the management of generic everyday activities in the home environment. While those projects refer to the concept of smart objects and their combination to facilitate everyday activities, we concentrate more on the concept of reminiscence and on the association between personal objects and multimedia resources in order to enhance the experience of memory recall.

Our work is intended to support any type of user in using and combining augmented personal objects as a means for memory recollection and experience sharing, and to support the dynamic re-combination of those objects, since personal memories associated to objects and events can change over time [2]. For this reason users should be provided with a mechanism for easily associating and re-associating multi-sensorial information to their personal objects. To achieve this, we developed a generally applicable model describing the objects and the devices within the environment and the interaction occurring between them. This model, presented in the following, provides a common, technology-independent description of the whole system which can be implemented using different technologies.

Finally, it is worth noting that even if this work was derived by investigating the travelling activity application domain, the framework we are going to present in the next paragraphs is flexible enough to be applied in different contexts.

MEMOML MODEL

MemoML (Memodules Markup Language) has been developed to respond to the need of providing a universally applicable model which is technology-independent and which, as a consequence, can be implemented over different platforms. MemoML is an XML-based modeling language which formalizes the previously presented Memodules approach.

MemoML is based on the Document Engineering approach [15], which is emerging as a new discipline for specifying, designing, and implementing systems belonging to very different business domains by using XML technologies. The choice of a declarative modeling approach³ allows us to describe what the framework is like, rather than how to create it, enabling system flexibility, reusability and portability over different implementation technologies.

³ Declarative programming is an approach to computer programming that involves the creation of a set of conditions that describe a solution space, but leaves the interpretation of the specific steps needed to arrive at that solution up to an unspecified interpreter. For more details on declarative programming see http://www.csc.liv.ac.uk/~frans/OldLectures/2CS24/declarative.html#detail

More specifically, MemoML describes the components and the structure of a Memodules environment. The components are the building blocks of the environment (e.g. Memodules objects, devices, communication protocols, actions, etc.) which can be combined together in order to define Memodules usage scenarios. The structure defines what a usage scenario is, and the rules it has to obey, such as the constraints associated to actions which restrain the types of information an action can
operate upon (e.g. the action "play music" could not operate upon some photos).

The two most important elements in the model are the concepts of Memodules environment (MemoEnvironment element in Figure 3) and Memodules usage scenario (MemoScenario element in Figure 3). The first comprises the whole collection of available Memodule objects, devices and resources which can be assembled in order to define Memodules usage scenarios. The second refers to Memodules usage scenarios and describes which information is associated to a Memodule and how this information is rendered to the user.

Since Memodules are physical objects that activate actions, a usage scenario describes how a Memodule (or a combination of Memodules) interacts with the surrounding environment. A MemoScenario is composed of three main parts: the Memodule, which is in turn associated to the information the object reminds the user of according to the user's mental model (e.g. the seashell souvenir reminds the user of the pictures of some Greek holidays); the input device, which is responsible for identifying the Memodule and starting the scenario (InputDevice element in Figure 3); and the Operation to perform once the Memodule is identified by the input device.

… shown on the business card. When Sabrina then adds the business card of her best friend to the system, this Memodule is automatically associated to the pre-defined send_email scenario, and Sabrina is able to send e-mails to her best friend without any further configuration.

Figure 4 Memodules element

MemoClusters are virtual folders defined by the user containing collections of information (a set of pictures, a collection of videos, a compilation of songs, etc.). Information in a cluster is constrained to be of the same type (e.g. the "picture" type will include ".jpg", ".gif" and ".bmp" files; the "music" type will include ".mp3" and ".wav" files) in order to guarantee the consistency of the actions operated upon a MemoCluster. Alternatively, it is possible to associate multiple MemoClusters to a single Memodule and perform a different action upon each of them.
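To make the structure concrete, the following is a hypothetical sketch of how a MemoScenario might be written in MemoML. The element names (MemoScenario, Memodule, InputDevice, Operation, MemoCluster) follow the text above, but the nesting and all attribute names are illustrative assumptions: the actual schema is defined in Figure 3 and is not reproduced here.

```xml
<!-- Hypothetical MemoML fragment: a usage scenario linking a seashell
     souvenir to the pictures of a Greek holiday. Nesting and attribute
     names are illustrative assumptions, not the actual MemoML schema. -->
<MemoScenario name="greek_holidays">
  <!-- The physical object, identified by its RFID tag -->
  <Memodule id="seashell" tagId="04A2B9">
    <MemoCluster ref="greek_photos"/>
  </Memodule>
  <!-- The device responsible for identifying the Memodule
       and starting the scenario -->
  <InputDevice id="rfid_reader_1"/>
  <!-- What is performed once the Memodule is identified -->
  <Operation>
    <Action name="show_pictures"/>
    <OutputDevice id="living_room_tv"/>
  </Operation>
</MemoScenario>
```

The three child elements mirror the three parts of a MemoScenario described above: the Memodule with its associated information, the input device, and the Operation.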
• seashell Memodule + business card Memodule + Input device1 = scenario1

• seashell Memodule + book Memodule + Input device1 = scenario2

The Operation element describes the action the user wants to perform when interacting with the Memodule object in order to render the associated memories and events. Each Operation is identified by (Figure 6):

• the Action to perform,

• the Application used to materialize the action (since the same action could be carried out using different applications, e.g. the "play music" action could be carried out using the "windows media player" or the "java music player" application),

• the OutputDevice where the action is to be executed,

• the MemoCluster of information the action operates upon,

• the Mode of action execution (parallel or sequential) if multiple operations are defined within the same scenario.

Figure 6 Operation element

The MemoEnvironment, as shown in Figure 3, is composed of:

• the collection of Memodule objects,

• the collection of all the MemoClusters, called MemoResources (see Figure 5),

• the set of available devices and applications which can be used within a scenario to materialize actions, described within the MemoConfig element (see Figure 7).

Figure 7 MemoConfig element

As presented in the next paragraph, MemoML is the base upon which the framework is built. More specifically, the Action Builder visual editor, presented in the following section, implements the concept of MemoScenario, supporting user creation of Memodules usage scenarios.

MEMODULES FRAMEWORK

The Memodules framework is a user-oriented framework enabling end users to create their own Memodules environment, and it is built upon the previously described MemoML model. Configuring Memodules means linking them to multimedia information and describing how they interact with the surrounding environment (i.e. how and when the information associated to them is rendered). The framework provides a one-stop platform⁵: a single platform that allows users to add new Memodules objects to the existing Memodules environment, to describe their usage scenarios, and to manage the execution of those scenarios. The Memodules framework allows people to create and manage their own Memodules environment by dynamically arranging heterogeneous Memodules and device components with digital information. Moreover, it provides users with the ability to easily configure and reconfigure Memodules usage scenarios. The Memodules framework, which supports the 3-phase approach described earlier, is composed of the following components:

• Memodules Lay&Play, which handles phase 1, corresponding to tangible object initialization

• Memodules Action Builder, which supports phase 2, corresponding to usage scenario description

• Memodules Console, which handles phase 3, corresponding to action materialization

⁵ The term one-stop platform originates from the e-government research domain, where it indicates a single access point to all public services which provides users with integrated services.

Phase 1 - Tangible Object Initialization

This phase is handled by Memodules Lay&Play. As stated before, Memodules are augmented personal objects used to access information and to help control daily life devices. Memodules Lay&Play makes it easy to "create" Memodules and to add them to the Memodules environment. Memodules Lay&Play is composed of a webcam and an RFID reader, which make it possible to simultaneously take a picture of the object laying over the Lay&Play system and read the ID of the RFID tag attached to the object. The digital counterpart of the physical object is created (Memodule creation) and added to the list of available Memodules objects displayed in the Memodules Action Builder editor.
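The five parts identifying an Operation could be written roughly as follows. This is a hedged sketch: the element names follow the list above (Action, Application, OutputDevice, MemoCluster, Mode), but the attribute names and the choice of representing the Mode as an attribute are illustrative assumptions.

```xml
<!-- Hypothetical sketch of an Operation element with the five
     identifying parts listed above. Attribute names and the
     mode-as-attribute choice are illustrative assumptions. -->
<Operation mode="sequential">
  <Action name="play_music"/>
  <!-- The same action could be materialized by different applications -->
  <Application name="windows_media_player"/>
  <OutputDevice id="hifi_stereo"/>
  <!-- The cluster the action operates upon; its type must be
       compatible with the action (here, music files) -->
  <MemoCluster ref="greek_songs"/>
</Operation>
```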
Figure 8 Memodules Lay&Play (webcam, RFID reader)

Phase 2 - Usage Scenario Description

This phase is supported by Memodules Action Builder. The Memodules Action Builder is a visual editor based on the puzzle metaphor which handles the creation of usage scenarios for every Memodule. The choice of the puzzle metaphor was inspired by the results achieved within the framework of the Accord project [17]. The project designed a visual editor based on the puzzle metaphor to enable end users to easily configure their home ubiquitous environment. The evaluation activity they carried out with real users extensively confirmed the suitability of their approach [5]. However, although the puzzle metaphor was easily grasped, poor design choices for the icon graphics made it difficult to couple the icons with their physical counterparts [5]. Memodules Action Builder uses the puzzle metaphor proposed by Accord and extends it in order to support more complex puzzle configurations combining parallel and sequential actions. Moreover, thanks to the Memodules Lay&Play component, the icons representing the digital counterparts of personal objects in our visual editor are created using the picture of the object itself, thus facilitating the coupling between physical objects and their digital representations.

Memodules Action Builder allows users to create interaction scenarios which describe how Memodules interact with the surrounding environment and which devices are involved in the scenario. Scenarios are created using the puzzle metaphor: the user connects components through a series of left-to-right couplings of puzzle pieces, providing an easy-to-understand mechanism for connecting the different pieces. Moreover, constraining puzzle connections to be left to right also gives the illusion of a pipeline of information flow, which creates a cause-effect relationship between the different components.

The puzzle editor is composed of a number of different panels (Figure 9). The control panel (Figure 9, on the top) contains the list of available puzzle pieces grouped in six different categories (see below for more details about the categories). Puzzle pieces can be dragged and dropped in the editing panel (or workspace) and assembled in order to create the scenarios. When a puzzle piece is dragged onto the workspace it clones itself and becomes a symbolic link. Useless pieces in the workspace can be thrown in the trash (on the bottom-right of the workspace). In order to connect puzzle pieces together it is necessary to drag a particular piece in the vicinity of a fitting target piece. If the two pieces match, visual feedback is provided to the user (the borders of the puzzle pieces become red-colored); on the contrary, if the user tries to drag a non-compatible piece, the system will not allow their assembly. When a Memodule puzzle piece is selected, the MemoCluster panel (on the right) displays the information clusters associated to that Memodule which have been created and used in previous scenarios. Scenarios can be edited starting from puzzle pieces belonging to any category.

Figure 9 Memodules Action Builder (control panel, puzzle pieces categories, available components, working space for creating scenarios, MemoCluster panel)

Figure 10 Puzzle pieces categories (Memodules, Connectors, Input Devices, Output Devices, Actions, MemoClusters)

Puzzle pieces are grouped into 6 different categories (see Figure 10) and displayed with different colors: the blue ones represent Memodules objects; the green ones regroup Input Devices (i.e. devices that can identify Memodules objects and start the corresponding actions); the pink ones represent Actions to carry out (e.g. play music, show pictures, send an e-mail, etc.); the yellow ones stand for information that can be associated to Memodules objects (e.g. some photos, music, etc.). As described in the MemoML model, information is grouped in "logical" clusters called MemoClusters. This means that no physical clusters are created, but information is browsed from everywhere in the PC and grouped into logical collections.
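Such a logical cluster could be sketched in MemoML as follows. The MemoCluster element name and the type constraint come from the text; the id, type and Resource attributes, as well as the file paths, are illustrative assumptions.

```xml
<!-- Hypothetical sketch of a MemoCluster: a logical collection whose
     entries all share one type, so any action applied to the cluster
     can operate on every entry. Attribute names, the Resource child
     element and the paths are illustrative assumptions. -->
<MemoCluster id="greek_photos" type="picture">
  <!-- Files stay where they are on the PC;
       the cluster only references them -->
  <Resource path="C:/Photos/Greece/acropolis.jpg"/>
  <Resource path="C:/Photos/Greece/santorini.jpg"/>
</MemoCluster>
```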
Black-colored pieces identify Output Devices, i.e. devices where the action is to be carried out (a TV for showing some photos, a Hi-Fi stereo for playing some music, etc.). Finally, red-colored pieces refer to Connectors, which allow creating complex scenarios. To date, two types of connectors have been implemented: the "AND" connector and the "TIMER" connector. The first is used to combine (1) multiple Memodules (for instance, when Sabrina approaches the seashell Memodule and the business card Memodule, a specific action has to be performed), and/or (2) multiple actions (e.g. when Sabrina approaches the seashell, both the photos and the music have to be played). The second, the "TIMER" connector, adds a time constraint to the execution of the scenario (e.g. when Sabrina approaches the seashell Memodule to the system, the system has to show the pictures and, 1 minute after the slideshow has finished, it has to play the music).

Finally, the newly created scenario needs to be named and saved. Once this has been done, the user can "play" the scenario and test it to ensure that everything works exactly as intended.

• When Sabrina approaches the seashell (souvenir of some Greek holidays) and the business card of her friend to the Memodules Console, the photos of that vacation are sent to the e-mail address of her friend via the PC.

Phase 3 - Action Materialization

This phase is handled by Memodules Console. The Memodules Console is based on the Phidgets toolkit [4] and allows the user to start a scenario and to manage the subsequent interactions with digital contents. When a Memodule is brought close to the Console, the system identifies the Memodule object and retrieves the usage scenario associated to it. Since the Console integrates three different Input Devices (corresponding to the three RFID readers in Figure 12), for each Memodule it is possible to create three different scenarios, as described in the MemoML model. In order to provide visual feedback to the user, once the Memodule has been identified, the green LED associated to the RFID reader (see Figure 12) lights up.

Figure 12 Memodules Console (LCD screen, RFID readers, LEDs)
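The Sabrina examples could be expressed with connector elements roughly as follows. The AND and TIMER names follow the connector names in the text, but the nesting, the delay attribute, and the combination of both connectors in one scenario are illustrative assumptions.

```xml
<!-- Hypothetical sketch: an AND connector combining two Memodules,
     and a TIMER connector delaying a second action. Nesting and
     attribute names are illustrative assumptions. -->
<MemoScenario name="send_vacation_photos">
  <AND>
    <Memodule id="seashell"/>
    <Memodule id="business_card"/>
  </AND>
  <InputDevice id="console_reader_1"/>
  <Operation>
    <Action name="send_email"/>
    <MemoCluster ref="greek_photos"/>
    <OutputDevice id="pc"/>
  </Operation>
  <!-- Start the music 1 minute after the slideshow has finished -->
  <TIMER delay="60s">
    <Operation>
      <Action name="play_music"/>
      <MemoCluster ref="greek_songs"/>
      <OutputDevice id="hifi_stereo"/>
    </Operation>
  </TIMER>
</MemoScenario>
```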