LEARNING MODULE

INTRODUCTION TO HUMAN
COMPUTER INTERACTION

Intro. To Human Computer Interaction
TABLE OF CONTENTS

LESSON 1 – Introduction & History of HCI
LESSON 2 – User Centered Design
LESSON 3 – Usability Principles
LESSON 4 – Human Abilities
LESSON 5 – Predictive Evaluation
LESSON 6 – Understanding Users, Requirements Gathering
LESSON 7 – Task Analysis
LESSON 8 – DOET
LESSON 9 – Design
LESSON 10 – Graphic Design
LESSON 11 – Handling Errors & Help, Data Types
LESSON 12 – Prototyping & UI
LESSONS 13–14 – Interaction Styles 1 & 2
LESSON 15 – User Models
LESSON 16 – Predictive Models
LESSON 17 – Universal Design
LESSON 18 – Information Visualization
LESSON 19 – Web
LESSON 20 – Embodied Agents
LESSON 21 – Ubicomp
References

2
Intro. To Human Computer Interaction
LESSON 1

Introduction & History of HCI

Human Computer Interaction - brief introduction

Human-computer interaction (HCI) is an area of research and practice that emerged in the early 1980s, initially as a specialty area in computer science embracing cognitive science and human factors engineering. HCI has expanded rapidly and steadily for three decades, attracting professionals from many other disciplines and incorporating diverse concepts and approaches. To a considerable extent, HCI now aggregates a collection of semi-autonomous fields of research and practice in human-centered informatics. However, the continuing synthesis of disparate conceptions and approaches to science and practice in HCI has produced a dramatic example of how different epistemologies and paradigms can be reconciled and integrated in a vibrant and productive intellectual project.

Where HCI came from

Until the late 1970s, the only humans who interacted with computers were information technology professionals and dedicated hobbyists. This changed disruptively with the emergence of personal computing in the later 1970s. Personal computing, including both personal software (productivity applications, such as text editors and spreadsheets, and interactive computer games) and personal computer platforms (operating systems, programming languages, and hardware), made everyone in the world a potential computer user, and vividly highlighted the deficiencies of computers with respect to usability for those who wanted to use computers as tools.

Figure 2.1 A-B: Personal computing rapidly pushed computer use into the general population, starting in the later 1970s. However, the non-professional computer user was often subjected to arcane commands and system dialogs.

The challenge of personal computing became manifest at an opportune time. The broad project of cognitive science, which incorporated cognitive psychology, artificial intelligence, linguistics, cognitive anthropology, and the philosophy of mind, had formed at the end of the 1970s. Part of the programme of cognitive science was to articulate systematic and scientifically informed applications to be known as "cognitive engineering". Thus, at just the point when personal computing presented the practical need for HCI, cognitive science presented people, concepts, skills, and a vision for addressing such needs through an ambitious synthesis of science and engineering. HCI was one of the first examples of cognitive engineering.

Figure 2.2: The Model Human Processor was an early cognitive engineering model
intended to help developers apply principles from cognitive psychology.

This was facilitated by analogous developments in engineering and design areas
adjacent to HCI, and in fact often overlapping HCI, notably human factors engineering
and documentation development. Human factors had developed empirical and task-
analytic techniques for evaluating human-system interactions in domains such as
aviation and manufacturing, and was moving to address interactive system contexts in
which human operators regularly exerted greater problem-solving discretion.
Documentation development was moving beyond its traditional role of producing
systematic technical descriptions toward a cognitive approach incorporating theories
of writing, reading, and media, with empirical user testing. Documents and other
information needed to be usable also.

Figure 2.3: Minimalist information emphasized supporting goal-directed activity in a domain. Instead of topic hierarchies and structured practice, it emphasized succinct support for self-directed action and for recognizing and recovering from error.

Other historically fortuitous developments contributed to the establishment of HCI. Software engineering, mired in unmanageable software complexity in the 1970s (the “software crisis”), was starting to focus on nonfunctional requirements, including usability and maintainability, and on empirical software development processes that relied heavily on iterative prototyping and empirical testing. Computer graphics and information retrieval had emerged in the 1970s, and rapidly came to recognize that interactive systems were the key to progressing beyond early achievements. All these threads of development in computer science pointed to the same conclusion: the way forward for computing entailed understanding and better empowering users. These diverse forces of need and opportunity converged around 1980, focusing a huge burst of human energy, and creating a highly visible interdisciplinary project.

From cabal to community


The original and abiding technical focus of HCI was and is the concept of usability.
This concept was originally articulated somewhat naively in the slogan "easy to learn,
easy to use". The blunt simplicity of this conceptualization gave HCI an edgy and
prominent identity in computing. It served to hold the field together, and to help it
influence computer science and technology development more broadly and effectively.
However, inside HCI the concept of usability has been re-articulated and
reconstructed almost continually, and has become increasingly rich and intriguingly
problematic. Usability now often subsumes qualities like fun, well-being, collective
efficacy, aesthetic tension, enhanced creativity, flow, support for human development,
and others. A more dynamic view of usability is one of a programmatic objective that
should and will continue to develop as our ability to reach further toward it improves.

Figure 2.4: Usability is an emergent quality that reflects the grasp and the reach of HCI. Contemporary users want more from a system than merely “ease of use”.

Although the original academic home for HCI was computer science, and its original
focus was on personal productivity applications, mainly text editing and spreadsheets,
the field has constantly diversified and outgrown all boundaries. It quickly expanded
to encompass visualization, information systems, collaborative systems, the system
development process, and many areas of design. HCI is taught now in many departments/faculties that address information technology, including psychology,
design, communication studies, cognitive science, information science, science and
technology studies, geographical sciences, management information systems, and
industrial, manufacturing, and systems engineering. HCI research and practice draws
upon and integrates all of these perspectives.

A result of this growth is that HCI is now less singularly focused with respect to core
concepts and methods, problem areas and assumptions about infrastructures,
applications, and types of users. Indeed, it no longer makes sense to regard HCI as a
specialty of computer science; HCI has grown to be broader, larger and much more
diverse than computer science itself. HCI expanded from its initial focus on individual
and generic user behavior to include social and organizational computing, accessibility
for the elderly, the cognitively and physically impaired, and for all people, and for the
widest possible spectrum of human experiences and activities. It expanded from
desktop office applications to include games, learning and education, commerce,
health and medical applications, emergency planning and response, and systems to
support collaboration and community. It expanded from early graphical user
interfaces to include myriad interaction techniques and devices, multi-modal
interactions, tool support for model-based user interface specification, and a host of
emerging ubiquitous, handheld and context-aware interactions.

There is no unified concept of an HCI professional. In the 1980s, the cognitive science
side of HCI was sometimes contrasted with the software tools and user interface side
of HCI. The landscape of core HCI concepts and skills is far more differentiated and
complex now. HCI academic programs train many different types of professionals: user
experience designers, interaction designers, user interface designers, application
designers, usability engineers, user interface developers, application developers,
technical communicators/online information designers, and more. And indeed, many
of the sub-communities of HCI are themselves quite diverse. For example, ubiquitous
computing (aka ubicomp) is a subarea of HCI, but it is also a superordinate area
integrating several distinguishable subareas, for example mobile computing, geo-
spatial information systems, in-vehicle systems, community informatics, distributed
systems, handhelds, wearable devices, ambient intelligence, sensor networks, and
specialized views of usability evaluation, programming tools and techniques, and
application infrastructures. The relationship between ubiquitous computing and HCI
is paradigmatic: HCI is the name for a community of communities.

Figure 2.5 A-B: Two visualizations of the variety of disciplinary knowledge and skills
involved in contemporary design of human-computer interactions

Indeed, the principle that HCI is a community of communities is now a point of
definition codified, for example, in the organization of major HCI conferences and
journals. The integrating element across HCI communities continues to be a close
linkage of critical analysis of usability, broadly understood, with development of novel
technology and applications. This is the defining identity commitment of the HCI
community. It has allowed HCI to successfully cultivate respect for the diversity of
skills and concepts that underlie innovative technology development, and to regularly
transcend disciplinary obstacles. In the early 1980s, HCI was a small and focused
specialty area. It was a cabal trying to establish what was then a heretical view of
computing. Today, HCI is a vast and multifaceted community, bound by the evolving
concept of usability, and the integrating commitment to value human activity and
experience as the primary driver in technology.

Beyond the desktop


Given the contemporary shape of HCI, it is important to remember that its origins are personal productivity interactions bound to the desktop, such as word processing and spreadsheets. Indeed, one of the biggest design ideas of the early 1980s was the so-called messy desk metaphor, popularized by the Apple Macintosh: files and folders were displayed as icons that could be, and were, scattered around the display surface. The messy desktop was a perfect incubator for the developing paradigm of graphical user interfaces. Perhaps it wasn’t quite as easy to learn and easy to use as claimed, but people everywhere were soon double-clicking, dragging windows and icons around their displays, and losing track of things on their desktop interfaces just as they did on their physical desktops. It was surely a stark contrast to the immediately prior teletype metaphor of Unix, in which all interactions were accomplished by typing commands.

Figure 2.6: The early Macintosh desktop metaphor: Icons scattered on the desktop
depict documents and functions, which can be selected and accessed (as System Disk
in the example)

Even though it can be argued that the desktop metaphor was superficial, or perhaps under-exploited as a design paradigm, it captured the imaginations of designers and the public. These were new possibilities for many people in 1980, and pundits speculated about how they might change office work. Indeed, the tsunami of desktop designs challenged, and sometimes threatened, the expertise and work practices of office workers. Today they are in the cultural background. Children learn these concepts and skills routinely.

As HCI developed, it moved beyond the desktop in three distinct senses. First, the desktop metaphor proved to be more limited than it first seemed. It’s fine to directly represent a couple dozen digital objects as icons, but this approach quickly leads to clutter, and is not very useful for people with thousands of personal files and folders. Through the mid-1990s, HCI professionals and everyone else realized that search is a more fundamental paradigm than browsing for finding things in a user interface. Ironically, though, when early World Wide Web pages emerged in the mid-1990s, they not only dropped the messy desktop metaphor, but for the most part dropped graphical interactions entirely. And still they were seen as a breakthrough in usability (of course, the direct contrast was to Unix-style tools like ftp and telnet). The design approach of displaying and directly interacting with data objects as icons has not disappeared, but it is no longer a hegemonic design concept.

Figure 2.7: The early popularity of messy desktops for personal information spaces
does not scale.

The second sense in which HCI moved beyond the desktop was through the growing
influence of the Internet on computing and on society. Starting in the mid-1980s,
email emerged as one of the most important HCI applications, but ironically, email
made computers and networks into communication channels; people were not
interacting with computers, they were interacting with other people through
computers. Tools and applications to support collaborative activity now include
instant messaging, wikis, blogs, online forums, social networking, social bookmarking
and tagging services, media spaces and other collaborative workspaces, recommender
and collaborative filtering systems, and a wide variety of online groups and
communities. New paradigms and mechanisms for collective activity have emerged, including online auctions, reputation systems, soft sensors, and crowdsourcing. This
area of HCI, now often called social computing, is one of the most rapidly developing.

Figure 2.8 A-B-C: A huge and expanding variety of social network services are part of
everyday computing experiences for many people. Online communities, such as Linux
communities and GitHub, employ social computing to produce high-quality knowledge
work.

The third way that HCI moved beyond the desktop was through the continual, and
occasionally explosive diversification in the ecology of computing devices. Before
desktop applications were consolidated, new kinds of device contexts emerged, notably
laptops, which began to appear in the early 1980s, and handhelds, which began to
appear in the mid-1980s. One frontier today is ubiquitous computing: The pervasive
incorporation of computing into human habitats — cars, home appliances, furniture,
clothing, and so forth. Desktop computing is still very important, though the desktop
habitat has been transformed by the wide use of laptops. To a considerable extent, the
desktop itself has moved off the desktop.

Figure 2.9 A-B-C: Computing moved off the desktop to
be everywhere all the time. Computers are in phones,
cars, meeting rooms, and coffee shops.

The focus of HCI has moved beyond the desktop, and its focus will continue to move.
HCI is a technology area, and it is ineluctably driven to frontiers of technology and
application possibility. The special value and contribution of HCI is that it will
investigate, develop, and harness those new areas of possibility not merely as
technologies or designs, but as means for enhancing human activity and experience.

LESSON 2

User Centered Design

What is User Centered Design?


User-centered design (UCD) is an iterative design process in which designers focus on
the users and their needs in each phase of the design process. In UCD, design teams
involve users throughout the design process via a variety of research and design
techniques, to create highly usable and accessible products for them.

UCD is an Iterative Process


In user-centered design, designers use a mixture of investigative methods and tools
(e.g., surveys and interviews) and generative ones (e.g., brainstorming) to develop an
understanding of user needs.
Generally, each iteration of the UCD approach involves four distinct phases. First, as
designers working in teams, we try to understand the context in which users may use
a system. Then, we identify and specify the users’ requirements. A design phase
follows, in which the design team develops solutions. The team then proceeds to an
evaluation phase. Here, you assess the outcomes of the evaluation against the users’
context and requirements, to check how well a design is performing. More specifically,
you see how close it is to a level that matches the users’ specific context and satisfies
all of their relevant needs. From here, your team makes further iterations of these four
phases, and you continue until the evaluation results are satisfactory.
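The four-phase loop described above can be sketched in code. This is purely an illustrative outline: every function, value, and stopping condition below is a made-up placeholder, not a prescribed UCD process or API.

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    satisfactory: bool
    notes: str

def understand_context(iteration):
    # Phase 1: understand the context in which users may use the system
    return {"environment": "mobile", "iteration": iteration}

def specify_requirements(context):
    # Phase 2: identify and specify the users' requirements
    return ["complete the task in few steps", "clear error recovery"]

def develop_design(requirements, previous):
    # Phase 3: develop a design solution, refining the previous version
    return (previous or 0) + 1

def evaluate(design):
    # Phase 4: assess the outcome against context and requirements;
    # here we simply pretend the third version finally satisfies users
    return Evaluation(satisfactory=design >= 3, notes=f"version {design}")

design, result, iteration = None, None, 0
while result is None or not result.satisfactory:
    iteration += 1
    context = understand_context(iteration)
    requirements = specify_requirements(context)
    design = develop_design(requirements, design)
    result = evaluate(design)

print(f"Accepted design version {design} after {iteration} iterations")
```

The point of the sketch is the shape of the control flow: design is carried forward between iterations, and the loop exits only when evaluation against the users' requirements is satisfactory.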

User-centered design is an iterative process that focuses on an understanding of the users and their context in all stages of design and development.

UCD Considers the Whole User Experience
In UCD, you base your projects upon an explicit understanding of the users, tasks
and environments. The aim of the process is to capture and address the whole user
experience. Therefore, your design team should include professionals from across
multiple disciplines (e.g., ethnographers, psychologists, software and hardware
engineers), as well as domain experts, stakeholders and the users themselves. Experts
may carry out evaluations of the produced designs, using design guidelines and
criteria. However, you should bear two crucial points in mind. First, to span the entire
user experience, you must involve the users for evaluation. Second, you'll need to
ensure long-term monitoring of use.

Self-Check 1.1
I. Twitter
By default, the Android Twitter app shows users notifications about all sorts of things
like if they were mentioned in a tweet, if someone likes their tweet, if they get a new
follower, etc. Some users like these notifications, but others find them annoying. Figure
1 shows the sequence of screens necessary to disable notifications for “Mentions,
replies, and photo tags”. Answer the following questions about this sequence of
screenshots:

Question: Describe a pro and a con of using each of the following methodologies to
determine if the Twitter app is usable in terms of this task.

LESSON 3

Usability Principles

An Introduction to Usability

Usability and user experience (UX) are not the same thing: the usability of a product is
a crucial part that shapes its UX, and hence falls under the umbrella of UX. While
many might think that usability is solely about the “ease of use” of a product, it
actually involves a great deal more than that. So, let’s find out more about usability
here and be absolutely certain about the nature of this fundamental building block of
our craft!
The ISO 9241-11 standard on usability describes it as:
“The extent to which a product can be used by specified users to achieve specified goals, with effectiveness, efficiency and satisfaction in a specified context of use.”
– ISO 9241-11, Ergonomics of Human-System Interaction
Usability is hence more than just about whether users can perform tasks easily (ease-
of-use). It also deals with user satisfaction—for a website to be usable, it has to be
engaging and aesthetically pleasing, too.

Why Does Usability Matter?


Before we delve deeper into what usability entails, it’s vital to address the importance
of usability. Usability matters because if users cannot achieve their goals efficiently, effectively and in a satisfactory manner, they are likely to seek an alternative solution
to reach their goals. And for websites and apps, alternative solutions are abundant.
Quite simply: if your product is not usable, its UX will be bad, and users will leave you
for your competitors. Given that, designers looking to develop products with longevity
need to ensure that those products are usable or risk losing users to their
competitors.
In fact, a 2015 joint research study by Huff Industrial Marketing, KoMarketing and BuyerZone on B2B web users showed that 46% of users leave a website because they can’t tell what the company does (i.e., a lack of
effective messaging), 44% of users leave due to lack of contact information, and 37% of
users leave due to poor design or navigation. This goes to show the potential harm bad
usability can bring to your website.
Usability is the outcome of a user-centered design process, that is, a process which examines how and why a user will adopt a product and seeks to evaluate that use. That process is an iterative one and seeks to improve continuously following each evaluation cycle.
“Usability, fundamentally, is a matter of bringing a bit of human rights into the world
of computer-human interaction. It's a way to let our ideals shine through in our
software, no matter how mundane the software is. You may think that you're stuck in
a boring, drab IT department making mind-numbing inventory software that only five
lonely people will ever use. But you have daily opportunities to show respect for
humanity even with the most mundane software.”
― Joel Spolsky, software engineer, writer and creator of project management software
Trello
The 5 Characteristics of Usable Products
In 2001, Whitney Quesenbery, the UX and usability expert and former president of the Usability Professionals’ Association (now UXPA), offered five criteria that a product must meet to be usable:

• Effectiveness
• Efficiency
• Engagement
• Error Tolerance
• Ease of Learning

1. Effectiveness
Effectiveness is about whether users can complete their goals with a high degree of
accuracy. Much of the effectiveness of a product comes from the support provided to
users when they work with the product; for example, fixing a credit card field so that it only accepts a valid credit card number entry can reduce data entry errors and help
users perform their tasks correctly. There are many different ways to provide support—the key is to be as informative as possible in a meaningful way to the user.

In the IDF payment form, we restrict the number of digits you can enter into the credit card number field to reduce data entry errors.
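The credit-card example hints at a second layer of support beyond restricting digit count: most card numbers carry a built-in checksum (the Luhn algorithm) that a form can verify before submission. The sketch below is illustrative only; the text does not specify how the IDF form actually validates input.

```python
def luhn_valid(card_number):
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(c) for c in card_number if c.isdigit()]
    if len(digits) < 12:  # reject obviously incomplete entries
        return False
    total = 0
    # Double every second digit from the right; subtract 9 if it exceeds 9
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4539 1488 0343 6467"))  # a well-known valid test number → True
print(luhn_valid("4539 1488 0343 6468"))  # last digit corrupted → False
```

Rejecting a mistyped number at the field level, before the form is submitted, is exactly the kind of support for correct task completion that effectiveness is about.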
You might also want to examine the language used in your product—the clearer and
simpler that language is (ideally 6th-grade level), the more likely that your information
will have the right impact on the user. Using the right level of technicality—for
example, reducing the number of technical coding terms for a design-focused
website—also helps make your messages clearer and meaningful to users.
Redundancy in navigation can sometimes be beneficial; if users have multiple paths to
their objective, they are more likely to get there. This may reduce the overall efficiency
of the process, however, so a balance has to be struck.
2. Efficiency
Effectiveness and efficiency are often blurred together in people’s minds. They are, however,
quite different from a usability perspective. Efficiency is all about speed. How fast can
the user get the job done?
You’ll want to examine the number of steps (or indeed clicks/keystrokes) to achieving
the objective; can they be reduced? This will help develop efficient processes. Clearly
labeled navigation buttons with obvious uses will also help, as will the development of
meaningful shortcuts (think about the number of hours you’ve saved using Ctrl+C and
Ctrl+V to copy and paste text).
To maximize efficiency, you need to examine how your users prefer to work—are
they interacting via a smartphone or a desktop computer with a large keyboard and
mouse? The two require very different approaches to navigation.
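Counting steps and keystrokes can be made concrete with a back-of-the-envelope Keystroke-Level Model estimate (predictive models of this kind are covered in a later lesson). The operator times below are the commonly cited KLM averages; the two task sequences are invented for illustration, not measured data.

```python
# Commonly cited Keystroke-Level Model (KLM) operator times, in seconds
KLM = {
    "K": 0.2,   # press a key or button
    "P": 1.1,   # point at a target with the mouse
    "H": 0.4,   # move hands between keyboard and mouse
    "M": 1.35,  # mentally prepare for the next step
}

def estimate_seconds(operators):
    """Sum the operator times for a sequence such as 'MPKPK'."""
    return round(sum(KLM[op] for op in operators), 2)

# Hypothetical comparison: copying text via the Edit menu vs. Ctrl+C
menu_copy = "MPKPK"    # think, point at Edit, click, point at Copy, click
shortcut_copy = "MKK"  # think, press Ctrl, press C
print(estimate_seconds(menu_copy))      # → 3.95
print(estimate_seconds(shortcut_copy))  # → 1.75
```

Even a rough estimate like this shows why meaningful shortcuts pay off: the keyboard path is less than half the cost of the menu path per invocation.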
3. Engagement
Engagement has become something of a buzzword, but if you cut through the fluff, you’ll find that engagement occurs when the user finds the product pleasant and gratifying to use.
Aesthetics matter here, and it’s why many companies invest a small fortune in graphic
design elements—but they’re not the only factors involved in how engaging a design is.
Engagement is not only about looking nice; it’s also about looking right. Proper
layouts, readable typography and ease of navigation all come together to deliver the right interaction for the user and make it engaging. Looking nice isn’t everything, as
Wikipedia (famous for its ultra-basic design) proves.

Wikipedia has an ultra-basic layout, but this didn’t prevent its success! The value it offers to people worldwide is so immense that aesthetics are a secondary concern. The focus here is on the ability to find information and on structuring the presentation of information to facilitate reading. Thus, curious readers can discover facts and access related pages quickly (or efficiently!).

4. Error Tolerance
It seems unlikely that—for any degree of complexity—you can completely eliminate
errors in products; in particular, digital products may be error-prone because of the
ecosystem in which they dwell – an ecosystem which is beyond the designer’s control.
However, the next best thing is to minimize the chance of errors occurring and to ensure that a user can easily recover from an error and get back to what he or she was doing. This is error tolerance.
Promoting error tolerance, according to Whitney Quesenbery, requires:

• Restricting opportunities to do the wrong thing. Make links/buttons clear and distinct, keep language clear and simple, don’t use jargon unless absolutely necessary, and keep dependencies in forms or actions together. Limit options to correct choices if you can, and give examples and support when asking people to provide data.
• Offering the opportunity to “redo”. Give users a way to reset what they’ve just done and go back and start again.
• Assuming everyone is going to do things you don’t expect them to do. Then, either facilitate that or offer advice/support to get back on the right path.

Dropbox has an undo function, in case users accidentally delete items in their folders.
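The “redo/undo” guideline can be sketched as a stack of reversible actions: whenever the system performs an action, it records how to reverse it. This is an illustrative toy, not how Dropbox (or any particular product) implements undo.

```python
class UndoStack:
    """Minimal sketch of undo support: each action records its reverse."""

    def __init__(self):
        self._history = []

    def do(self, action, undo_action):
        action()                           # perform the action now
        self._history.append(undo_action)  # remember how to reverse it

    def undo(self):
        if self._history:                  # ignore undo on an empty history
            self._history.pop()()

# Hypothetical example: deleting a file from a folder, then undoing it
folder = {"report.txt", "photo.png"}
stack = UndoStack()
stack.do(lambda: folder.discard("report.txt"),
         lambda: folder.add("report.txt"))
assert "report.txt" not in folder
stack.undo()
assert "report.txt" in folder  # the deleted item is restored
```

The design choice worth noting is that every user action pairs with its inverse at the moment it is performed, so recovery is always one step away regardless of what the user just did.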

5. Ease of Learning
If you want a product to be used regularly, then you want the users to be able to learn
the product easily so that when they use it again, it comes as second nature.
You also need to accommodate ease of learning when releasing new functionality and
features; otherwise, a familiar and happy user may quickly become frustrated with
your latest release. This is something that tends to happen a lot on social networks;
whenever a new set of features is released, they tend to be greeted with howls of
outrage from comfortable users, even if they’re better than the old features. And this is
true even when the new features are also easy to learn. Facebook’s switch to the
‘Timeline’ format in 2012 (after giving users months to shift from the previous format)
offered a prime example of this curious reaction from some users.
The best way to support ease of learning is to design systems that match a user’s
existing mental models. A mental model is simply a representation of something in the
real world and how it is done from the user’s perspective. It’s why virtual buttons look
a lot like real buttons – we know that we push buttons. The form elicits the
appropriate action in the user, hence making it easy to learn.

LESSON 4

Human Abilities

Human factors
Human factors, as a discipline, derives from the problems of designing equipment
operable by humans during World War II (Sanders and McCormick, 1987) and thus
originally had strong military ties. The focal point of the discipline was sensory-motor
aspects of man-machine interaction (for example the design of flight controls and
other military hardware). The interaction between man and machine was originally viewed not in terms of communicative and cognitive aspects but at the very tangible, muscle-operated level. This was, among other things, what became a problem
for the discipline as new advances in monitor/screen technologies changed the
interaction style. In the early 80s, the discipline of HCI spawned from the Human
Factors community (as well as from other disciplines such as Cognitive
Psychology/Science) and focused on the cognitive or epistemological coupling between
user and system.

The (sometimes problematic) assumptions of Human Factors


It is very telling that the word 'factors' in 'human factors' occasionally is equated with
'limitations'. From the Human Factors literature, one gets the impression that the 'factors' of computer systems were regarded as at least fairly predictable, whereas the 'human factors' were not, and hence attempts were made to 'rule them out'.
An assumption of the Human Factors community was that insight into e.g. cognitive,
sensory-motor, and perceptual aspects of human behaviour could instigate the design
of an optimum performance of 'man in concert with his machine', viewed as one
united functional system/unit. With knowledge of 'generic' human factors or 'basic'
human behaviour and cognition, it was thought that an optimal coupling between
man and machine could be found. However, as the discipline evolved, the hunt for 'human constants' proved problematic, and it also proved highly difficult to establish the design implications of such constants.
The stance of the Human Factors community toward this kind of 'singular ontology',
i.e. that truth or meaning is 'out there' as in a natural science, came under heavy
criticism by many, e.g. Suchman (1987) and Winograd and Flores (1987). In
consequence, the HCI community, which had spawned from the Human Factors
community, took a "contextual turn" toward more humanistic and sociological
accounts of meaning from the late 80s onward. Aspects like context were now given higher priority, and meaning and truth were sought in the interaction between man and machine, or as a product of the wider context. For a short review of the
contextual turn, see Jensen and Soegaard (2004).

Human factors, or limitations, include:

 Impatience
 Limited memory
 Need analogies
 Limited concentration
 Changes in mood
 The need for motivation
 Prejudices
 Fears
 Make errors
 Misjudgement
 Prefer speech
 Process information non-linearly
 Near-sightedness
 Colour-blindness
 Distraction
 Can only perform a limited number of concurrent tasks
 Short-term memory works differently than long-term memory
 Users are all different
 Think in terms of ideas composed of words, numbers, multimedia, and
intuitions.
 Fatigue
 Must see and hear to understand
 Physical inability
 Need information presented in sets of threes
 Need complex information presented hierarchically
 Confined to one physical location at a time
 Require practice to become good at doing things
 Embarrassment can act as a limitation to accomplishing some tasks
 Tend to do things the easy way
 Resistance to change
 Can be physically harmed by some tasks
 Prefer to learn by doing than by explanation
 Have difficulty converting ideas into modes of communication
 Have difficulty converting modes of communication into ideas
 Act irrationally
 Sometimes affected adversely by stimuli such as colour and patterns

 Become nervous
 Miss details when tasks are memorized and performed cursorily
 Can be affected by socio/political climate.
 Prefer standard ways of doing things
 Constrained by time
 Incentive driven
 Work better in groups than individually (1+1=3)
 Require tasks to be modularized in order to work in groups
 Use intuitions to construe information that is sometimes wrong
 Rely on tools to complete tasks (like spell checking) thus causing dependency
 Must delegate responsibility in order to free the mind of complexity
 Become addicted
 Associate unrelated things
 Sometimes do not trust what is not understood
 Death (typically a concern in trains or aeroplanes)
The list is adapted from Michael Osofsky, published source unknown.

Human Factors Evaluation Methods


Ethnographic analysis: Using methods derived from ethnography, this process
focuses on observing the uses of technology in a practical environment. It is a
qualitative and observational method that focuses on "real-world" experience and
pressures, and the usage of technology or environments in the workplace.
Focus Groups: Focus groups are another form of qualitative research in which one
individual facilitates discussion and elicits opinions about the technology or process
under investigation. This can be done on a one-to-one interview basis, or in a group
session. This method can be used to gain a large quantity of deep qualitative data,
though due to the small sample size, it can be subject to a higher degree of individual
bias. It can be used at any point in the design process, as it is largely dependent on
the exact questions to be pursued and the structure of the group. But it can be
extremely costly.
Iterative design: Also known as prototyping, the iterative design process seeks to
involve users at several stages of design, in order to correct problems as they emerge.
As prototypes emerge from the design process, they are subjected to other forms of
analysis, and the results are then incorporated into the new design. Trends amongst
users are analyzed, and products redesigned. This can become a costly process, and
needs to be done as early as possible in the design process, before designs become
too concrete.
Meta-analysis: A supplementary technique used to examine a wide body of already
existing data or literature in order to derive trends or form hypotheses in order to aid
design decisions. As part of a literature survey, a meta-analysis can be performed in
order to discern a collective trend from individual variables.

Subjects-in-tandem: Two subjects are asked to work concurrently on a series of tasks
while vocalizing their analytical observations. This is observed by the researcher, and
can be used to discover usability difficulties. This process is usually recorded.
Surveys and Questionnaires: A commonly used technique outside of Human Factors
as well, surveys and questionnaires have the advantage that they can be
administered to a large group of people for relatively low cost, enabling the researcher
to gain a large amount of data. The validity of the data obtained is, however, always in
question, as the questions must be written and interpreted correctly and are, by
definition, subjective. Those who actually respond are in effect self-selecting as well,
widening the gap between the sample and the population further.
Task analysis: A process with roots in activity theory, task analysis is a way of
systematically describing human interaction with a system or process to understand
how to match the demands of the system or process to human capabilities. The
complexity of this process is generally proportional to the complexity of the task being
analyzed, and so can vary in cost and time involvement. It is a qualitative and
observational process.
Think aloud protocol: Also known as "concurrent verbal protocol", this is the process
of asking a user to execute a series of tasks or use technology while continuously
verbalizing their thoughts, so that a researcher can gain insight into the user's
analytical process. It can be useful for finding design flaws that do not affect task
performance but may have a negative cognitive effect on the user. It is also useful for
drawing on experts in order to better understand procedural knowledge of the task in
question. This method is less expensive than focus groups, but tends to be more
specific and subjective.
User analysis: This process is based around designing for the attributes of the
intended user or operator, establishing the characteristics that define them and
creating a persona for the user. A user analysis will attempt to predict the most
common users and the characteristics they would be assumed to have in common.
This can be problematic if the design concept does not match the actual user, or if the
identified characteristics are too vague to make clear design decisions from. This
process is, however, usually quite inexpensive, and commonly used.
Wizard of Oz: This is a comparatively uncommon technique but has seen some use in
mobile devices. Based upon the Wizard of Oz experiment, this technique involves an
operator who remotely controls the operation of a device in order to imitate the
response of an actual computer program. It has the advantage of producing a highly
changeable set of reactions, but can be quite costly and difficult to undertake.

While user interface designs are key outputs of any information technology project,
the user interface design process itself can also contribute other, ancillary benefits to
the development process, quite apart from the final designs.

Human Factors in Interactive Software
As Shneiderman summarizes, the ultimate success or failure of a computerized
system is highly dependent upon the man-machine interaction. Even though
interactive software was developed to improve the productivity of the user, the user's
productivity is often reduced by "cumbersome data entry procedures, obscure error
messages, intolerant error handling, inconsistent procedures, and confusing
sequences of cluttered screens".
The most important feature of modern interactive computer systems is the level of
support they provide for the end user. A lot of work has been done in the field of
system designing but human performance capabilities, skill limitation, and response
tendencies were not adequately considered in the designs of the new systems. This
results in the user frequently becoming puzzled or irritated when trying to interact
with the system. The following human factors activities should be considered when
designing and implementing interactive software:
Study Phase Activities: User Involvement: Early user involvement in a system can
be very beneficial. End users may have little knowledge of interactive software
development, but they can be very good resources for describing system needs and
problems associated with their job. In order to determine these needs, one member of
the design team should act as liaison with the end user. It has been suggested that
the interface design responsibility be limited to one person in order to ensure
continuity. It is imperative that the interface liaison establish a good rapport with both
the end user and other members of the design team. Early user involvement is also an
effective method of overcoming resistance to change. "If the initiators involve the
potential resistors in some aspect of the design and implementation of the change,
they can often forestall resistance. With a participative change effort, the initiators
listen to the people the change involves and use their advice."

Problem Definition: The design team can next proceed to define the problems by
combining their own observations with the information obtained. Because the design
team's problem definition may vary from that of management or the end user, both
groups should be consulted to determine if they concur. This variance is compounded
with interactive software in that the future user of the system may not fully
understand the nature of real time systems. If either management or the end users do
not agree with the problem definition, representatives from each group must meet to
resolve any conflict. Once the problem definition is agreed upon, alternative solutions
to the problem can then be formulated. Once formulated, the design team should
again consult management and the end user to determine the best solution to the
problem.

Design Phase Activities: Interface Design: Novice and experienced users differ in
both competence and their demands on an interactive system. While frequent users
emphasize usability, novices or infrequent users emphasize ease of learning. Software
designers must consider both factors when designing the user interface.

Test Requirements: During this phase procedures are to be established for testing
each screen format, logical branch and system message for both logic and spelling
errors. Since most interactive software packages consist of more than one module or
program, with several calls between modules, all logic errors may not be discovered
during simple module testing. Integration test procedures need to be established for
this purpose.
User Consultation: Design Phase activities may require many months and user
requirements may change during that period. Major changes in personnel, technology,
or business strategies can promote a change in design requirements. Before
proceeding to the Development Phase, management and the user should determine
whether or not any changes should be implemented during this release, or deferred for
implementation at a later date. Almost all interactive designs require some trade-offs.
Hardware limitations, personnel, time, and money are the major reasons alternative
designs or procedures are necessary. Since the end users are ultimately affected, they
should be included in major trade-off evaluations.
Development/ Operation Phase Activities-Documentation: Documentation is
considered a necessary evil among interactive software designers. Although they
recognize the necessity of good documentation, they prefer not to do it. They tend to
focus more on technical aspects of the design, rather than how to use the interactive
software. Good documentation is especially critical with an interactive system,
because user productivity can be highly affected when time is spent looking for
information which either does not exist, or is poorly documented. For these reasons,
technical writers should be consulted to write user interface manuals, as they are
more skilled in explaining how a system operates. Operation manuals for interactive
software have traditionally consisted of an alphabetical listing of commands and their
definitions. Manuals of this type provide easy reference for the experienced user, but
novice users find them very difficult to understand. One alternative is to present the
method of performing the most frequently used procedures, then listing and defining
the computer commands necessary to perform the procedure. It is also useful to
provide a listing of the commands in alphabetical order. Although this method is
redundant, it is much more easily understood by novice users.
Module and Integration Testing: Good design and thorough testing require time and
money, but are worthwhile investments. A thoroughly tested, well-designed system is
more easily implemented and requires less maintenance. As Shneiderman states,
"Faster task performance, lower error rates, and higher user satisfaction should be
paramount goals of the designer". Each screen format, logical branch, and system
message is to be thoroughly tested. Any modules containing errors should be
corrected and retested. Module integration testing should be performed when all
modules have passed individual tests. Some changes may be necessary to protect the
system from the user. Users of interactive systems do not always utilize the system in
the same manner in which it was designed. This is especially true when they are
confused or frustrated. They start issuing a variety of commands, or pressing keys at
random to get out of their present dilemma. This could be catastrophic. Although it is
impossible to predict all user actions, the test team should attempt to place the
system under rigorous testing. If a catastrophic situation should occur, code changes
would be necessary to remedy the situation.
System Testing: Once integration testing procedures are complete, the system should
be tested with the anticipated user population as the subjects. By this time the design
team is familiar with the software, and not likely to use the directions and
documentation. The novice user is highly dependent upon good directions and
documentation and is therefore more likely to detect confusing or erroneous
information. If the design team is able to unobtrusively observe the users during
system test, they may witness situations which could lead to user frustration and/or
error. They should interview the test subjects to determine other possible problems. At
this time, minor changes to the software can be made. Major design changes which
enhance the software, but not correct errors, should be scheduled for implementation
at a later date.

User Training: Holt and Stevenson state that when human errors occur, interactive
system designers tend to blame the human interface or data display designs. They cite
one example of a company which had an influx of human errors when a new system
was implemented. The system designers' solution to the problem was to replace the
data entry clerks with more highly skilled personnel and substitute CRT terminals for
input forms. Before implementing this solution, however, a special study revealed that
the data entry clerks were not properly trained in the use of the system, and that the
input forms were cumbersome. System performance soon rose above the acceptable level when input
forms were redesigned and users were trained. In addition to the initial training
session, ongoing training to accommodate new or transferred employees needs to be
considered. Computer assisted instruction (CAI) is an effective method of training
multiple users. The major factor to be considered with CAI training is that it is critical
that the instruction modules be updated each time significant changes are made to
the system. User experimentation and browsing should be encouraged. Prior to total
implementation of the system, a pilot system should be implemented to allow this.
Pilot systems help the end user overcome any fears of the system. They are able to
become acquainted with the system without the fear of accidently destroying it.
Code and Documentation Changes: Regardless of how careful one has been, there
will always be unforeseen code and documentation changes. Some changes may be
due to actual errors, while others may be due to user confusion or frustration.

Self check 2

I. Enumerate and explain in your own words the (sometimes problematic) assumptions
of Human Factors.
II. Enumerate and explain in your own words the Human Factors Evaluation Methods.

LESSON V

Predictive Evaluation

Evaluation Techniques
Evaluation

 Tests usability and functionality of system


 Occurs in laboratory, field and/or in collaboration with users
 Evaluates both design and implementation
 Should be considered at all stages of the design life cycle
Goals of Evaluation

 Assess extent of system functionality


 Assess effect of interface on user
 Identify specific problems

Evaluating Designs
Cognitive Walkthrough proposed by Polson et al.

 Evaluates design on how well it supports user in learning task


 Usually performed by expert in cognitive psychology
 Expert ‘walks through’ design to identify potential problems using psychological
principles.
 Forms used to guide analysis.
Cognitive Walkthrough (ctd)

 For each task walkthrough considers


o What impact will interaction have on user?
o What cognitive processes are required?
o What learning problem may occur?
 Analysis focuses on goals and knowledge: does the design lead the user to
generate the correct goals?
Heuristic Evaluation Proposed by Nielsen and Molich.

 Usability criteria (heuristics) are identified
 Design examined by experts to see if these are violated
 Examples heuristics
o System behaviour is predictable
o System behaviour is consistent
o Feedback is provided
 Heuristic evaluation ‘debugs’ design.
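The idea of treating heuristic violations as 'design bugs' can be sketched as a simple record of findings. In the sketch below, the heuristics echo the examples above, while the screens and severity ratings are invented purely for illustration.

```python
# A minimal sketch of logging heuristic-evaluation findings as 'design bugs';
# the screen names and severity ratings below are invented for illustration.
heuristics = [
    "System behaviour is predictable",
    "System behaviour is consistent",
    "Feedback is provided",
]

findings = [
    {"heuristic": "Feedback is provided", "screen": "save dialog", "severity": 3},
    {"heuristic": "System behaviour is consistent", "screen": "menu bar", "severity": 2},
]

# Rank violations so the most severe ones are addressed first.
ranked = sorted(findings, key=lambda f: -f["severity"])
for f in ranked:
    print(f["severity"], f["heuristic"], "-", f["screen"])
```

In practice several experts evaluate the design independently and their findings are merged before ranking.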
Review-based Evaluation

 Results from the literature are used to support or refute parts of the design.
 Care is needed to ensure results are transferable to the new design.

Model-based Evaluation

 Cognitive models are used to filter design options
o e.g. GOMS prediction of user performance.
 Design rationale can also provide useful evaluation information

Evaluating Implementations
Experimental Evaluation

 Controlled evaluation of specific aspects of interactive behaviour


 Evaluator chooses hypothesis to be tested
 A number of experimental conditions are considered which differ only in the
value of some controlled variable.
 Changes in the behavioural measure are attributed to the different conditions

Experimental Factors

 Subjects
o Who – representative, sufficient sample
 Variables
o Things to modify and measure
 Hypothesis
o What you’d like to show
 Experimental Design
o How you are going to do it
Variables

 independent variables (IV)


o characteristics changed to produce different conditions
o e.g. interface style, number of menu items
 Dependent Variable (DV)
o Characteristics measured in the experiment
o e.g. time taken, number of errors.

Hypothesis

 prediction of outcome
o framed in terms of IV and DV
o e.g. “Error rate will increase as font size decreases”
 null hypothesis
o states no difference between conditions
o aim is to disprove this
o e.g. null hyp. = “no change with font size”
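To make the IV/DV/hypothesis terminology concrete, the sketch below compares error counts observed under two font-size conditions using Welch's t statistic. The data are invented purely for illustration, and a real analysis would also compute degrees of freedom and a p-value before rejecting the null hypothesis.

```python
import statistics

# Hypothetical errors-per-task counts (the DV) under two levels of the IV
# (font size); these numbers are invented purely for illustration.
small_font = [6, 7, 5, 8, 7, 6]   # IV level 1: small font
large_font = [3, 2, 4, 3, 2, 4]   # IV level 2: large font

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

t = welch_t(small_font, large_font)
# A large |t| is evidence against the null hypothesis "no change with font size".
print(round(t, 2))
```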

LESSON VI

Understanding Users, Req. Gathering

Explore New Ways of Understanding Users


This will require the articulation of diverse methodologies. Over the last decade we
have seen, for example, techniques rooted in design-based practices (such as cultural
probes) come to prominence. These have complemented existing techniques of
understanding that have emerged from scientific and engineering traditions – Human
Factors and Cognitive Science, for instance. Other ways of extending and
complementing existing techniques will be required beyond design; these may include
views from more diverse disciplines and cultural traditions. The use of conceptual
analysis as the first stage of a new HCI is a case in point. This technique derives from
analytic philosophy, and entails clarifying the systems of meaning and value that any
particular set of activities involves.
Requirements Gathering
A great user experience is all about enabling users to achieve their objective when using
your artifact – be it a website, a software system or anything that you create. Now take
a step back. Trying to understand how to make it easy for users to achieve their goals
would be pointless if you don’t place it within the context of what you know about
your users. The more you understand your users, their work and the context of their
work, the more you can support them in achieving their goals – and hence, the more
usable your system will be! So, you inevitably ask the question “how would I know
what my users’ needs are?” This lesson is about requirements gathering … and it
answers this question.
What are Requirements?
A requirement is a statement about an intended product that specifies what it should
do or how to do it. For requirements to be effectively implemented and measured, they
must be specific, unambiguous and clear. For example, a requirement may be that a
specific button must enable printing of the contents of the current screen.

Diagrammatic representation of the different types of requirements (Source: SatheesPractice)
Since this lesson focuses on requirements gathering for systems, we will focus on the
two types of System Requirements:

Techniques used in requirements gathering

 Interviews and questionnaires


o Discussion with commissioning client
o Interview or questionnaire to:
 end-users
 relevant professionals
 other interactive designers in the field
 Observation
o in the natural work setting to understand organisational and social
characteristics
 Document analysis
 Task analysis techniques for existing products
 Prototyping can help in requirements gathering

Requirements gathering outcome

 A representation of the problem with the current system


 A representation of the requirements of the new system to be developed

Functional requirements

 In HCI context functional requirements specify both what the system and the
user must do
 Functional specification: document resulting from gathering and analysing
functional requirements
o Partitioned in a hierarchical manner into modules
 Top level of hierarchy consists of abstract description of the
system
 Bottom level of hierarchy specifies in more detail the function
 Functional requirements cannot be gathered completely at the start of the
design: iteration and some design are needed before full requirements gathering
 Functional requirements are often specified with charting techniques, e.g.
dataflow diagram
 Constraints must be specified

o System's constraints
o Development process constraints

Data requirements

 Specify the meaning and structure of the data


 Gathered using, interviews, observation, document analysis, etc.
 Data analysis establishes:

o What data is required
o How it is structured
o How it is logically stored
 Entity: group of basic data elements
 Data requirements seek to represent the entities and relationships in an
application and the constraints that apply to the data
 Use of entity relationship (ER) diagrams and data dictionary
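As an illustration of the entities, relationships, and constraints an ER diagram captures, the sketch below models two hypothetical entities; the entity names and fields are assumptions for illustration, not taken from the text.

```python
from dataclasses import dataclass

# Two illustrative entities, each a group of basic data elements.
@dataclass
class User:
    user_id: int
    name: str

@dataclass
class Document:
    doc_id: int
    owner_id: int   # relationship: each Document is owned by exactly one User
    title: str

u = User(1, "Ada")
d = Document(10, u.user_id, "Draft report")
# The constraint that a document's owner must be an existing user would be
# recorded in the data dictionary alongside these entity definitions.
print(d.owner_id == u.user_id)
```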

Usability requirements
 Focus more on the usability of the system
 To be tested, usability is expressed in terms of:
o Learnability: time and effort to reach a specified level of user performance
(ease of learning)
o Throughput: for tasks accomplished by experienced users, the speed of
task execution and the errors made (ease of use)
o Flexibility: multiplicity of ways the user and system exchange
information and the extent to which the system can accommodate
changes beyond those initially specified
o Attitude: the positive attitude created in users by the system
 Gathering techniques: observation, interviews, questionnaires, etc.
 Three types of analysis for determining usability requirements
o Task analysis: cognitive characteristics required of the users by the
system
o User analysis - user modelling: intellectual ability, cognitive processing
ability, previous experience, physical ability, etc.
o Environment analysis
 Usability requirements may be expressed in terms of usability metrics such as:
o Completion time for specified tasks
o Number of errors per task
o Time spent to recover from errors
o Time spent using documentation
o Number of times users express satisfaction
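Usability metrics like those listed above can be computed directly from a log of observed task performance. In the sketch below, the log format and field names are assumptions for illustration.

```python
# A hypothetical task log from a usability test; the format is illustrative.
log = [
    {"task": "print", "seconds": 42, "errors": 1, "used_docs": False},
    {"task": "print", "seconds": 35, "errors": 0, "used_docs": False},
    {"task": "save",  "seconds": 90, "errors": 3, "used_docs": True},
]

def mean(entries, key):
    return sum(e[key] for e in entries) / len(entries)

completion_time = mean(log, "seconds")    # mean completion time per task
errors_per_task = mean(log, "errors")     # mean number of errors per task
doc_use_rate = sum(e["used_docs"] for e in log) / len(log)  # documentation use

print(completion_time, errors_per_task, round(doc_use_rate, 2))
```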

Self-check

I. Enumerate the Techniques used in requirements gathering


II. Enumerate the data requirements
III. Enumerate the Functional Requirements
IV. Enumerate the Usability Requirements

LESSON VII

Design process and Tasks Analysis

HCI Design

HCI design is considered a problem-solving process with components such as
planned usage, target area, resources, cost, and viability. It decides on the required
features of the product while balancing such trade-offs.

The following points are the four basic activities of interaction design –

 Identifying requirements
 Building alternative designs
 Developing interactive versions of the designs
 Evaluating designs
Three principles for user-centered approach are −

 Early focus on users and tasks


 Empirical Measurement
 Iterative Design

Design Methodologies

Various methodologies have materialized since the inception of the field that outline
techniques for human–computer interaction. Following are a few design methodologies:
 Activity Theory − This is an HCI method that describes the framework where
the human-computer interactions take place. Activity theory provides
reasoning, analytical tools and interaction designs.

 User-Centered Design − It provides users the center-stage in designing where


they get the opportunity to work with designers and technical practitioners.

 Principles of User Interface Design − Tolerance, simplicity, visibility,


affordance, consistency, structure and feedback are the seven principles used
in interface designing.

 Value Sensitive Design − This method is used for developing technology and
includes three types of studies − conceptual, empirical and technical.
o Conceptual investigations work towards understanding the values of the
stakeholders who use the technology.
o Empirical investigations are qualitative or quantitative design research
studies that show the designer’s understanding of the users’ values.

o Technical investigations contain the use of technologies and designs in
the conceptual and empirical investigations.
Participatory Design

Participatory design process involves all stakeholders in the design process, so that
the end result meets the needs they are desiring. This design is used in various areas
such as software design, architecture, landscape architecture, product design,
sustainability, graphic design, planning, urban design, and even medicine.

Participatory design is not a style, but a focus on the processes and procedures of
designing. It is seen as a way of distributing design accountability and origination
beyond the designers themselves.

Task Analysis

Task Analysis plays an important part in User Requirements Analysis.

Task analysis is the procedure of studying the users, the abstract frameworks and
patterns used in workflows, and the chronological implementation of interaction with
the GUI. It analyzes the ways in which the user partitions tasks and sequences
them.

What is a TASK?

Human actions that contribute to a useful objective, aimed at the system, constitute a
task. Task analysis defines the performance of users, not computers.

Hierarchical Task Analysis

Hierarchical Task Analysis is the procedure of disintegrating tasks into subtasks that
could be analyzed using the logical sequence for execution. This would help in
achieving the goal in the best possible way.

"A hierarchy is an organization of elements that, according to prerequisite


relationships, describes the path of experiences a learner must take to achieve any
single behavior that appears higher in the hierarchy. (Seels & Glasgow, 1990, p. 94)".
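Such a task hierarchy can be sketched as nested data, with the goal at the top and executable actions at the leaves. The recording example and task names below are illustrative, not taken from the text.

```python
# A hierarchical task analysis as nested (task, subtasks) tuples; the
# programme-recording example and its task names are purely illustrative.
hta = ("0. Record a programme", [
    ("1. Prepare recorder", [
        ("1.1 Insert tape", []),
        ("1.2 Select channel", []),
    ]),
    ("2. Set timer", [
        ("2.1 Enter start time", []),
        ("2.2 Enter stop time", []),
    ]),
    ("3. Confirm recording", []),
])

def leaves(node):
    """Return the leaf subtasks, i.e. the actions actually executed."""
    name, subtasks = node
    if not subtasks:
        return [name]
    return [leaf for t in subtasks for leaf in leaves(t)]

print(leaves(hta))
```

Walking the leaves in order recovers the logical sequence of execution that the analysis is meant to expose.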

Techniques for Analysis

 Task decomposition − Splitting tasks into sub-tasks and in sequence.


 Knowledge-based techniques − Any instructions that users need to know.
‘User’ is always the beginning point for a task.
 Ethnography − Observation of users’ behavior in the use context.
 Protocol analysis − Observation and documentation of actions of the user. This
is achieved by authenticating the user’s thinking. The user is made to think
aloud so that the user’s mental logic can be understood.

Engineering Task Models

Unlike Hierarchical Task Analysis, Engineering Task Models can be specified formally
and are more useful.

Characteristics of Engineering Task Models

 Engineering task models have flexible notations, which describe the possible
activities clearly.

 They have organized approaches to support the requirement, analysis, and use
of task models in the design.

 They support the reuse of existing design solutions to problems that occur
throughout applications.

 Finally, they make automatic tools accessible to support the different phases of
the design cycle.

ConcurTaskTree (CTT)

CTT is an engineering methodology used for modeling a task and consists of tasks
and operators. Operators in CTT are used to portray temporal relationships
between tasks. Following are the key features of a CTT −

 Focus on actions that users wish to accomplish.


 Hierarchical structure.
 Graphical syntax.
 Rich set of sequential operators.
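A toy sketch of these features — hierarchical structure plus temporal operators between sibling tasks — might look as follows. The node layout, task names, and operator symbols are assumptions for illustration, not the format of any official CTT tool.

```python
# A toy ConcurTaskTree-style node: each child task carries the CTT-style
# temporal operator relating it to the NEXT sibling (">>" enabling, "|||"
# interleaving, None for the last child). Names and layout are illustrative.
ctt = {
    "task": "Access museum info",
    "children": [
        ("Select artwork", ">>"),    # must complete before the next task starts
        ("Show description", "|||"), # may interleave with the next task
        ("Show related works", None),
    ],
}

# Tasks followed by the enabling operator impose a strict order on execution.
ordered = [name for name, op in ctt["children"] if op == ">>"]
print(ordered)
```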

LESSON VIII

DOET

The two gulfs of human-computer interaction

Gulf of Evaluation and Gulf of Execution


The terms 'Gulf of Evaluation' and 'Gulf of Execution' were introduced in Norman (1986)
and popularised by his book, The Design of Everyday Things (Norman 1988), originally
published as The Psychology of Everyday Things.

The Gulf of Execution

The gulf of execution is the degree to which the interaction possibilities of an artifact,
a computer system or likewise correspond to the intentions of the person and what
that person perceives is possible to do with the artifact/application/etc. In other
words, the gulf of execution is the difference between the intentions of the users and
what the system allows them to do or how well the system supports those actions
(Norman 1988). For example, if a person only wants to record a movie currently being
shown with her VCR, she imagines that it requires hitting a 'record' button. But if the
necessary action sequence involves specifying the time of recording and selection of a
channel, there is a gulf of execution: a gap between the psychological language (or
mental model) of the user's goals and the very physical action-object language of the
controls of the VCR via which it is operated. In the language of the user, the goal of
recording the current movie can be achieved by the action sequence "Hit the record
button," but in the language of the VCR the correct action sequence is:

1) Hit the record button.


2) Specify time of recording via the controls X, Y, and Z.
3) Select channel via the channel-up-down control.
4) Press the OK button.

Thus, to measure or determine the gulf of execution, we may ask how well the action
possibilities of the system/artifact match the intended actions of the user.

In the rhetoric of the GOMS model (see this), bridging the gulf of execution means that
the user must form intentions, specify action sequences, execute actions, and select
the right interface mechanisms (GOMS stands for Goals, Operators, Methods and
Selection Rules).
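The VCR example can be sketched informally in code: the gulf of execution shows up as the steps the system requires beyond what the user intends. The action sequences and the simple step-count "measure" below are invented for illustration only and are not part of Norman's formulation:

```python
# The user's mental action sequence for "record the current movie".
user_plan = ["hit record button"]

# The action sequence the VCR actually requires.
vcr_required = [
    "hit record button",
    "specify time of recording",   # via controls X, Y, and Z
    "select channel",              # via the channel-up-down control
    "press OK button",
]

# Steps the system demands that the user did not anticipate:
extra_steps = [step for step in vcr_required if step not in user_plan]
print(f"Gulf of execution: {len(extra_steps)} unanticipated step(s)")
for step in extra_steps:
    print(" -", step)
```

The wider the gap between the two sequences, the more translation work the user must perform to bridge the gulf.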

The Gulf of Evaluation

The gulf of evaluation is the degree to which the system/artifact provides
representations that can be directly perceived and interpreted in terms of the
expectations and intentions of the user (Norman 1988). Or, put differently, the gulf of
evaluation is the difficulty of assessing the state of the system and how well the
artifact supports the discovery and interpretation of that state (Norman 1991). "The
gulf is small when the system provides information about its state in a form that is
easy to get, is easy to interpret, and matches the way the person thinks of the system"
(Norman 1988: p. 51).

Thus, if the system does not "present itself" in a way that lets the user derive which
sequence of actions will lead to the intended goal or system state, or derive whether
previous actions have moved the user closer to her goal, there is a large gulf of
evaluation. In this case, the person must exert a considerable amount of effort and
expend significant attentional resources to interpret the state of the system and
derive how well her expectations have been met. In the VCR example from above, the
design of the controls of the VCR should thus 'suggest' how they are to be used and be
easily interpretable (e.g., when recording, the 'record' control should signal that it is
activated, or a display should indicate it).

To sum up, the gulfs of evaluation and of execution refer to the mismatch between our
internal goals on the one side and, on the other side, the expectations and the
availability of information specifying the state of the world (or an artifact) and how we
may change it (Norman 1991).

The idea that the discrepancy between user and system should be conceived of as gulfs
came from Jim Hollan and Ed Hutchins during a revision of a chapter on direct
manipulation in the book User-Centered System Design (1986).

Further notes

Donald Norman's writings must be understood within the theoretical framework of
information-processing cognitive psychology. The fundamental ontological and
epistemological assumptions of this paradigm, as well as some of its methods, are
considered very problematic by many. As such, Norman's theories described above are
subject to many of the criticisms that have been raised against the information-
processing approach to human cognition.

The Seven Stages of User Action

Seven stages of action is a term coined by the usability consultant Donald Norman. He
explains this phrase in chapter two of his book The Design of Everyday Things, in the
context of explaining the psychology of a person behind the task performed by him or
her.

History
The history behind the action cycle starts from a conference in Italy attended by
Donald Norman. This excerpt has been taken from the book The Design of Everyday
Things:

I am in Italy at a conference. I watch the next speaker attempt to thread a film
onto a projector that he never used before. He puts the reel into place, then
takes it off and reverses it. Another person comes to help. Jointly they thread
the film through the projector and hold the free end, discussing how to put it on
the take up reel. Two more people come over to help and then another. The
voices grow louder, in three languages: Italian, German and English. One
person investigates the controls, manipulating each and announcing the result.
Confusion mounts. I can no longer observe all that is happening. The
conference organizer comes over. After a few moments he turns and faces the
audience, who had been waiting patiently in the auditorium. "Ahem," he says,
"is anybody expert in projectors?" Finally, fourteen minutes after the speaker
had started to thread the film (and eight minutes after the scheduled start of
the session) a blue-coated technician appears. He scowls, then promptly takes
the entire film off the projector, rethreads it, and gets it working.

Norman pondered the reasons that made something like threading a projector
difficult to do. To examine this, he wanted to know what happens when someone
does something. In order to do that, he examined the structure of an action. To get
something done, a notion of what is wanted – the goal that is to be achieved – needs
to be stated. Then, something is done to the world, i.e., action is taken to move oneself
or to manipulate someone or something. Finally, one checks whether the goal was
achieved. This led to the formulation of the stages of execution and evaluation.

Stages of Execution
Execution formally means to perform or do something. Norman explains that a person
sitting in an armchair reading a book at dusk might need more light as the room
becomes dimmer and dimmer. To do that, he needs to switch on a lamp, i.e., get more
light (the goal). To do this, one must specify how to move one's body, how to stretch to
reach the light switch, and how to extend one's finger to push the button. The goal has
to be translated into an intention, which in turn has to be made into an action
sequence.

Thus, formulation of stages of execution:

 Start at the top with the goal, the state that is to be achieved.
 The goal is translated into an intention to do some action.
 The intention must be translated into a set of internal commands, an action
sequence that can be performed to satisfy the intention.
 The action sequence is still a mental event: nothing happens until it is
executed, performed upon the world.

Stages of Evaluation
Evaluation formally means to examine and calculate. Norman explains that after
turning on the light, we evaluate whether it has actually turned on. A careful
judgement is then passed on how the light has affected our world, i.e., the room in
which the person is sitting in the armchair while reading a book.

The formulation of the stages of evaluation can be described as:

 Evaluation starts with our perception of the world.
 This perception must then be interpreted according to our expectations.
 Then it is compared (evaluated) with respect to both our intentions and our
goals.

Seven Stages of Action

The Seven Stages of Action comprise the goal, three stages of execution, and three
stages of evaluation.

1. Forming the goal
2. Forming the intention
3. Specifying an action
4. Executing the action
5. Perceiving the state of the world
6. Interpreting the state of the world
7. Evaluating the outcome
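The seven stages above can be written down as an ordered cycle. The stage names follow Norman's list, but the code structure itself is only an illustrative sketch:

```python
from enum import Enum, auto

class Stage(Enum):
    FORM_GOAL = auto()          # 1. forming the goal
    FORM_INTENTION = auto()     # 2. forming the intention
    SPECIFY_ACTION = auto()     # 3. specifying an action
    EXECUTE_ACTION = auto()     # 4. executing the action
    PERCEIVE_STATE = auto()     # 5. perceiving the state of the world
    INTERPRET_STATE = auto()    # 6. interpreting the state of the world
    EVALUATE_OUTCOME = auto()   # 7. evaluating the outcome

GOAL = [Stage.FORM_GOAL]
EXECUTION = [Stage.FORM_INTENTION, Stage.SPECIFY_ACTION,
             Stage.EXECUTE_ACTION]
EVALUATION = [Stage.PERCEIVE_STATE, Stage.INTERPRET_STATE,
              Stage.EVALUATE_OUTCOME]

def action_cycle():
    """One pass through the action cycle: goal, execution, evaluation."""
    return GOAL + EXECUTION + EVALUATION

for stage in action_cycle():
    print(stage.name)
```

In practice the cycle repeats: the evaluation stages feed back into a revised goal or intention on the next pass.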

Norman's Three Levels of Design

In the human mind there are numerous areas responsible for what we refer to as
emotion; collectively, these regions comprise the emotional system. Don Norman
proposes the emotional system consists of three different, yet interconnected levels,
each of which influences our experience of the world in a particular way. The three
levels are visceral, behavioral, and reflective. The visceral level is responsible for the
ingrained, automatic and almost animalistic qualities of human emotion, which are
almost entirely out of our control. The behavioral level refers to the controlled aspects
of human action, where we unconsciously analyze a situation so as to develop goal-
directed strategies most likely to prove effective in the shortest time, or with the fewest
actions, possible. The reflective level is, as Don Norman states, "...the home of
reflection, of conscious thought, of learning of new concepts and generalisations about
the world". These three levels, while classified as separate dimensions of the emotional
system, are linked and influence one another to create our overall emotional
experience of the world.

In Emotional Design: Why we love (or hate) everyday things, Don Norman (a prominent
academic in the field of cognitive science, design, and usability engineering)
distinguishes between three aspects, or levels, of the emotional system (i.e. the sum of
the parts responsible for emotion in the human mind), which are as follows: the
visceral, behavioral and reflective levels. Each of these levels or dimensions, while
heavily connected and interwoven in the emotional system, influences design in its
own specific way. The three corresponding levels of design are outlined below:
Visceral Design – "Concerns itself with appearances". This level of design refers to the
perceptible qualities of the object and how they make the user/observer feel. For
example, a grandfather clock offers no more features or time-telling functions than a
small, featureless mantelpiece clock, but the visceral (deep-rooted, unconscious,
subjective, and automatic feelings) qualities distinguish the two in the eyes of the
owner. Much of the time spent on product development is now dedicated to visceral
design, as most products within a particular group (e.g., torches/flashlights, kettles,
toasters, and lamps) tend to offer the same or a similar set of functions, so the
superficial aspects help distinguish a product from its competitors. What we are
essentially referring to here is 'branding'—namely, the act of distinguishing one
product from another, not by the tangible benefits it offers the user but by tapping
into the users’ attitudes, beliefs, feelings, and how they want to feel, so as to elicit
such emotional responses. This might be achieved by using pictures of children,
animals or cartoon characters to give something the appearance of youthfulness, or by
using colors (e.g., red for 'sexy' and black for 'scary'), shapes (e.g., hard-lined shapes)
or even styles (e.g., Art Deco) that are evocative of certain eras. Visceral design aims to
get inside the user's/customer's/observer's head and tug at his/her emotions either to
improve the user experience (e.g., improving the general visual appeal) or to serve
some business interest (e.g., emotionally blackmailing the customer/user/observer to
make a purchase, to suit the company's/business's/product owner's objectives).

Behavioral Design – "...has to do with the pleasure and effectiveness of use."
Behavioural design is probably more often referred to as usability, but the two terms
essentially refer to the practical and functional aspects of a product or anything
usable we are capable of using in our environment. Behavioral design (we shall use
this term in place of usability from now on) is interested in, for example, how users
carry out their activities, how quickly and accurately they can achieve their aims and
objectives, how many errors the users make when carrying out certain tasks, and how
well the product accommodates both skilled and inexperienced users. Behavioral
design is perhaps the easiest to test, as performance levels can be measured once the
physical (e.g., handles, buttons, grips, levers, switches, and keys) or usable parts of an
object are changed or manipulated in some way. For instance, buttons responsible for
two separate operations might be positioned at varying distances from one another so
as to test how long it takes the user to carry out the two tasks consecutively.
Alternatively, error rates might be measured using the same manipulation(s).
Examples of experiences at the behavioral level include the pleasure derived from
being able to find a contact and make a call immediately on a mobile phone, the ease
of typing on a computer keyboard, the difficulty of typing on a small touchscreen
device, such as an iPod Touch, and the enjoyment we feel when using a well-designed
computer game controller (such as, in my humble opinion, the N64 control pad). The
behavioral level essentially refers to the emotions we feel as a result of either
accomplishing or failing to complete our goals. When products/objects enable us to
complete our goals with the minimum of difficulty and with little call for conscious
effort, the emotions are likely to be positive ones. In contrast, when products restrict
us, force us to translate or adjust our goals according to their limitations, or simply
make us pay close attention when we are using them, we are more inclined to
experience some negative emotion.
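As a concrete illustration of the kind of measurement described above, the sketch below computes mean completion time and errors per trial for two imagined button layouts. The layout names and all the data are hypothetical, invented purely for the example:

```python
# Hypothetical usability-trial data: layout name -> list of
# (completion_time_seconds, error_count), one tuple per participant.
trials = {
    "buttons_close": [(2.1, 0), (2.4, 1), (1.9, 0), (2.2, 0)],
    "buttons_far":   [(3.0, 1), (3.4, 2), (2.8, 0), (3.1, 1)],
}

for layout, results in trials.items():
    times = [t for t, _ in results]
    errors = [e for _, e in results]
    mean_time = sum(times) / len(times)
    error_rate = sum(errors) / len(results)
    print(f"{layout}: mean time {mean_time:.2f}s, "
          f"errors per trial {error_rate:.2f}")
```

Comparing such summary statistics across layout variants is the basic logic of a behavioral-level (usability) test.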

Reflective Design – "...considers the rationalization and intellectualization of a
product. Can I tell a story about it? Does it appeal to my self-image, to my pride?" This
is the highest level of emotional design, representing the conscious thought layer,
where we consciously approach a design, weighing up its pros and cons, judging it
according to our more nuanced and rational side, and extracting information to
determine what it means to us as an individual. Reflective thinking allows us to
rationalise environmental information to influence the behavioural level. Take, for
example, smartwatches. On that note, researchers Jaewon Choi and Songcheol Kim at
Korea University examined users’ intentions to adopt smartwatches under two
major factors, namely a user’s perception of the device as a technological innovation
and as a luxury fashion product. Users’ view of the smartwatch as a technological
innovation relates to their perception of the device’s usefulness and ease of use (the
behavioral level). On the other hand, the users’ view of the smartwatch as a luxury
fashion product relates to their perception of how much they would enjoy the
smartwatch and the levels of self-expressiveness the device will afford to them, i.e. the
ability to express themselves and to enhance their image. Both enjoyment and self-
expressiveness are influenced by the visceral level ("Does the watch look beautiful?")
but also very much by the reflective level ("What will my friends think when they see
me wearing this watch? "). The reflective level mediates the effects of the behavioral
level – users may well put up with difficulties and shortcomings in the usability of the
smartwatch because they believe they will gain other, non-functional benefits from it.
Apple’s first version of their smartwatch was riddled with functional problems and
usability issues, but that didn’t stop the company from generating the second largest
worldwide revenue in the watch industry within the first year of selling it!

Self-check

Enumerate and explain in your own words the following:

I. Design methodologies
II. Techniques for analysis
III. The two gulfs of human-computer interaction
IV. The Seven Stages of User Action
V. Norman's Three Levels of Design

LESSON IX

Design

What is Interaction Design?

Interaction design is an important component within the giant umbrella of user
experience (UX) design. In this lesson, we'll explain what interaction design is, describe
some useful models of interaction design, and briefly outline what an interaction
designer usually does.

A simple and useful understanding of interaction design

Interaction design can be understood in simple (but not simplified) terms: it is the
design of the interaction between users and products. Most often when people talk
about interaction design, the products tend to be software products like apps or
websites. The goal of interaction design is to create products that enable the user to
achieve their objective(s) in the best way possible.

If this definition sounds broad, that’s because the field is rather broad: the interaction
between a user and a product often involves elements like aesthetics, motion, sound,
space, and many more. And of course, each of these elements can involve even more
specialised fields, like sound design for the crafting of sounds used in user
interactions.

As you might already realise, there’s a huge overlap between interaction design and
UX design. After all, UX design is about shaping the experience of using a product,
and most of that experience involves some interaction between the user and the
product. But UX design is more than interaction design: it also involves user research
(finding out who the users are in the first place), creating user personas (why, and
under what conditions, would they use the product), performing user testing and
usability testing, etc.

The 5 dimensions of interaction design

The 5 dimensions of interaction design is a useful model for understanding what
interaction design involves. Gillian Crampton Smith, an interaction design academic,
first introduced the concept of four dimensions of an interaction design language, to
which Kevin Silver, senior interaction designer at IDEXX Laboratories, added the fifth.

1D: Words
Words—especially those used in interactions, like button labels—should be
meaningful and simple to understand. They should communicate information to
users, but not so much that it overwhelms them.

2D: Visual representations


This concerns graphical elements like images, typography and icons that users
interact with. These usually supplement the words used to communicate information
to users.

3D: Physical objects or space


Through what physical objects do users interact with the product? A laptop, with a
mouse or touchpad? Or a smartphone, with the user’s fingers? And within what kind
of physical space does the user do so? For instance, is the user standing in a crowded
train while using the app on a smartphone, or sitting on a desk in the office surfing
the website? These all affect the interaction between the user and the product.

4D: Time
While this dimension sounds a little abstract, it mostly refers to media that changes
with time (animation, videos, sounds). Motion and sounds play a crucial role in giving
visual and audio feedback to users’ interactions. Also of concern is the amount of time
a user spends interacting with the product: can users track their progress, or resume
their interaction some time later?

5D: Behaviour
This includes the mechanism of a product: how do users perform actions on the
website? How do users operate the product? In other words, it’s how the previous
dimensions define the interactions of a product. It also includes the reactions—for
instance emotional responses or feedback—of users and the product.

Digital materials and interaction design

Digital things are what interaction design shapes. This is essentially to say that
interaction designers work in digital materials - software, electronics, communication
networks, and the like. And, as pointed out above, the digital materials pose specific
requirements on, e.g., sketching practices. When designing an innovative interaction
technique, where there is not much previous experience to rely on, it is sometimes
necessary to experiment with constructions in software and/or hardware. Such
constructions should be made with a sketching mindset, however, which among other
things means that each is quickly made, focuses on behaviors and effects, is
disposable, and ideally is one among many variations on the same theme.

Historically, the digital things made by interaction designers were largely tools -
contraptions intended to be used instrumentally, for solving problems and carrying
out tasks, and mostly to be used individually. Much of our ingrained best-practice
knowledge in the field emanates from this time, expressed in concepts such as user
goals, task flows, usability and utility. However, it turns out that digital technology in
society today is mostly used for communication, i.e., as a medium. And as a medium,
it has characteristics that set it apart from previously existing personal and mass
communication media. For example, it lowers the thresholds of media production to
include virtually anyone, it provides many-to-many communication with persistent
records of all exchanges that transpire, and it offers access to ongoing modifications of
its infrastructures. These characteristics of what we might call collaborative media are
only beginning to be understood in interaction design, and one might expect that this
will be one of the most significant areas for future conceptual developments in our
field.

By limiting the scope of interaction design to digital things (including media), we also
exclude large parts of service design, organizational design, sociopolitical intervention,
and so on. A historical analogy may be the typical experience of an enterprise systems
consultant in the 1980s whose client asked for a new system to manage payroll.
Analyzing the current situation might have turned up the insight that the old system
as such had no major shortcomings, but that the workflow of the personnel
department was severely convoluted and crippled. Would the consultant propose a
new system anyway, or instead point out the need for an organizational development
consultant? Or perhaps even try her own hand at organizational intervention?

Similar situations are legion in contemporary interaction design, as the use of digital
technology is often deeply intertwined with other aspects of everyday life in the design
situations approached by the interaction designer. What I propose - that interaction
design creates digital things - should be understood as a recognition of the
complexities and professional demands involved in related disciplines such as service
design, urban development and political change. Essentially, the position adopted here
is that when an interaction design process moves into the territory of non-digital
intervention, the ideal scenario would see the establishment of a multidisciplinary
design team. In practical work, however, this is not always a feasible option. The
short-term benefits of being able to deliver must then be weighed against the potential
long-term risks of doing a less-than-professional job in a related field.

LESSON X

Graphic Design

What is Graphic Design?

Graphic design is the craft of creating visual content to communicate messages.


Applying visual hierarchy and page layout techniques, graphic designers use
typography and pictures to meet users’ specific needs and focus on the logic of
displaying elements in interactive designs to optimize the user experience.

Graphic Design – Molding Users’ Experience Visually


Graphic design is an ancient craft, dating back past Egyptian hieroglyphs to 17,000-
year-old cave paintings. As a term originating in the 1920s’ print industry and
covering a range of activities including logo creation, it concerns aesthetic appeal and
marketing – attracting viewers using images, color and typography. However, graphic
designers working in user experience (UX) design must justify stylistic choices
regarding, say, image locations and font with a human-centered approach, focusing
on—and seeking maximum empathy with—users while creating good-looking designs
that maximize usability. Aesthetics must serve a purpose – in UX design we don’t
create art for art’s sake. So, when doing graphic design for UX, you should consider
the information architecture of your interactive designs, to ensure accessibility for
users, and leverage graphic design skills in creating output that considers the entire
user experience, including users’ visual processing abilities. For instance, if an
otherwise pleasing mobile app can’t offer users what they need in several thumb-
clicks, its designers will have failed to marry graphic design to user experience. The
scope of graphic design in UX covers creating beautiful designs that users find highly
pleasurable, meaningful, and usable.

“Design is a solution to a problem. Art is a question to a problem.”

— John Maeda, President of Rhode Island School of Design

Graphic Design is Emotional Design
Although the digital age entails designing with interactive software, graphic design still
revolves around age-old principles. Striking the right chord with users from the first
glance is crucial. As a graphic designer, you should have a firm understanding of color
theory and how vital the right choice of color scheme is. Color choices must reflect not
only the organization (e.g., blue suits banking) but also users’ expectations (e.g., red
for alerts; green for notifications to proceed). You should design with an eye for how
elements match the tone (e.g., sans-serif fonts for excitement/happiness) and overall
effect, noting how you shape users’ emotions as you guide them from, say, a landing
page to a call to action. Often, graphic designers are involved in motion design for
smaller screens, carefully monitoring how the work’s aesthetics match users’
expectations and enhance usability in a flowing, seamless experience by anticipating
their needs and mindsets. With user psychology in mind, some especially weighty
graphic design considerations are:

 Symmetry and Balance (including symmetry types)
 Flow
 Repetition
 Pattern
 The Golden Ratio (i.e., proportions of 1:1.618)
 The Rule of Thirds (i.e., how users’ eyes recognize good layout)
 Typography (encompassing everything from font choice to heading weight)
 Audience Culture (re Color Use and Reading Pattern)
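Two of the items above, the Golden Ratio and the Rule of Thirds, are simple layout arithmetic, and can be illustrated with a short sketch. The canvas dimensions are made up for the example:

```python
# Illustrative layout arithmetic for a hypothetical 960x600-pixel canvas.
PHI = (1 + 5 ** 0.5) / 2   # the golden ratio, approximately 1.618

width, height = 960, 600

# Golden-ratio split of the width into a larger and a smaller panel.
major = width / PHI
minor = width - major
print(f"golden split: {major:.0f}px / {minor:.0f}px")

# Rule-of-thirds guide lines divide each axis into three equal parts.
thirds_x = [width / 3, 2 * width / 3]
thirds_y = [height / 3, 2 * height / 3]
print("vertical guides:", [round(x) for x in thirds_x])
print("horizontal guides:", [round(y) for y in thirds_y])
```

Placing key elements on or near these guide lines is the practical use of both rules in page layout.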

User Experience – UX

"User Experience", often abbreviated "UX", is the quality of experience a person has
when interacting with a specific design.

Originally used in reference to human-computer interactions - and still largely
associated with those disciplines - the term is now used to refer to any specific
human-design interaction, ranging from a digital device, to a sales process, to an
entire conference. Perhaps due to its organic development and lack of formalization,
"User Experience" may be defined by, and the responsibility of, very different
departments from organization to organization: in some organizations, it is owned by
marketing; in others, it falls under information technology (IT). Then, from a solutions
perspective, some organizations base their "User Experiences" around the research
and academic-based approaches of human-computer interaction (HCI); others treat
interface and/or product design as the source for "User Experience," while still others
let marketing or IT drive it.

An early example of the use of the term "User Experience" is E.C. Edwards and D.J.
Kasik's "User Experience With the CYBER Graphics Terminal". Subsequently, there
are numerous other examples of "User Experience" in use through the late 1970s and
early 1980s, largely restricted to the human-computer interaction communities and
particularly in the context of user-centered design (UCD).

"User Experience" was popularized by Don Norman's self-selected title of User
Experience Architect at Apple Computer, Inc. in 1993. Because of Norman's status as
a thought leader in the HCI community, this unconventional title raised awareness of
the term. By the mid-1990s, many technology companies used the term to represent a
commitment to and focus on higher quality human-computer interactions as a key
product differentiator. As the dot-com technology boom reached its apex in 2000, a
variety of books that included "User Experience" in the title - almost exclusively
focused on elements of web design - were published.

During its growth in the 1990s, "User Experience" suffered from confusion due to the
popularity of two other, similar terms, "User-Centered Design" (UCD) and "Experience
Design." Over time, UCD has been further clarified as a specific process and approach
to product design, while Experience Design has largely shifted to describe a hybrid
design discipline that focuses on environmental and multi-sensorial design,
particularly in the context of digital displays and installations.

In recent years, "User Experience" has transcended simple interactions within
computing environments and is used as a qualifier for various on- and offline
experiences, ranging from person-to-person interactions, such as customer service, to
analogue products such as the automobile. Many companies today have "User
Experience" teams and departments, and the term has assumed a broader meaning.
An industry association, the User Experience Network (UXnet), is dedicated to
furthering this emergent discipline. User experience has thus evolved from HCI to
broader issues of customer satisfaction and competitive differentiation, suggesting
that it will remain a pertinent issue for design and business in the future.

IxD and other disciplines

Recently, many designers in other fields, such as industrial design, graphic design,
and architecture, have changed careers to IxD. This is quite reasonable, given the
strong demand for such roles.

Industrial design versus IxD?


In traditional industrial design, designers focus on the design of static form rather
than interactivity, such as a cup, a kettle, or a chair. The traditional discipline does not
have a language with which to discuss the design of rich, dynamic behavior and
changing user interfaces. A chair does not respond to people's behavior, unless it
breaks when someone sits on it.
Now many industrial designers have turned to designing digital products, such as
laptops, smartphones, and Bluetooth earphones. As the intelligence of devices grows,
the interaction between users and devices becomes increasingly complicated. For
example, an automobile designer who joins a multidisciplinary team working on self-
driving vehicles needs to study users' behaviors in cars when they talk to intelligent
systems or switch to manual override. Such a designer can be considered an
interaction designer.

There are five dimensions in IxD: words, visual representations, physical objects or
space, time, and behavior.

IxD involves more than interactions on a 2D screen; it extends into the physical world.
Interaction can happen with physical objects, sounds, gestures, body movement, even
eye contact. IxD is such a broad area that it touches a huge number of disciplines,
practices, and approaches.

This breadth offers opportunities for people in other fields who want to change
careers: audio engineers, developers, psychologists, cognitive scientists. As long as you
have basic knowledge of computer science, human factors, or design, and a passion for
digital products, you can learn to become an interaction designer.

One reason IxD is attractive is the chance to meet people from varied backgrounds
and work in a multidisciplinary team, which is a constant source of inspiration.

Is IxD the same as UX?


In many companies, the working processes of UX designers and interaction designers
overlap, depending on how the organization is structured. If we step outside any
particular company, though, we can see that the two roles still differ.
Designers’ output
In industry, the process of product design is similar: Research → Analysis → Design →
Develop → Test → Iterate.
As far as I know, interaction designers focus on design work and creating prototypes,
while UX designers do more research and analysis.

I have worked as an interaction designer in a start-up team. My job included
producing interaction flows, making prototypes, and UI design. Later I joined a
multidisciplinary team in the R&D department of an engineering company as a UX
designer, where I interviewed users, collected and analyzed qualitative data, made
prototypes, and tested them. As an interaction designer, I needed to give as many
details about the product as possible: I spent most of my time designing the specific
user interface and writing interaction documentation. As a UX designer, I did more
research work, prototyping, and testing.
So I think that in industry IxD focuses more on creating something new, while UX
takes a longer view of the product, especially after it becomes mature.

Self-check

Enumerate and explain in your own words the following:

I. The 5 dimensions of interaction design

LESSON XI

Handling errors & help Data types

Human error (slips and mistakes)

James Reason (1990) has extensively analysed human errors and distinguishes
between mistakes and slips. Mistakes are errors in choosing an objective or specifying
a method of achieving it, whereas slips are errors in carrying out an intended method
for reaching an objective (Sternberg 1996). As Norman (1986: p. 414) explains: "The
division occurs at the level of the intention: A person establishes an intention to act. If
the intention is not appropriate, this is a mistake. If the action is not what was
intended, this is a slip."
For example, a mistake would be to buy a Microsoft Excel licence because you want to
store data that should be made accessible to web clients through SQL queries, as
Microsoft Excel is not designed for that purpose. In other words, you choose the wrong
method for achieving your objective. However, if you installed a PostgreSQL server for
the same reason but in your haste forgot to give the programme privileges to go
through your firewall, that would be a slip. You chose the right method of achieving
your objective, but you made an error in carrying out the method.

Both Reason (1990) and Norman (1988) have described several kinds of slips (see
'related terms' below). According to Sternberg (1996), "slips are most likely to occur (a)
when we must deviate from a routine, and automatic processes inappropriately
override intentional, controlled processes; or (b) when automatic processes are
interrupted - usually as a result of external events or data, but sometimes as a result
of internal events, such as highly distracting thoughts." See the glossary term Capture
Error for an example.

Overall, it should be noted that "The designer shouldn't think of a simple dichotomy
between errors and correct behavior: rather, the entire interaction should be treated
as a cooperative endeavor between person and machine, one in which misconceptions
can arise on either side." (Norman, 1988: p. 140)

Human-Data Interaction
BY RICHARD MORTIER, HAMED HADDADI, TRISTAN HENDERSON, DEREK MCAULEY, JON
CROWCROFT AND ANDY CRABTREE

We have moved from a world where computing is siloed and specialised, to a world
where computing is ubiquitous and everyday. In many, if not most, parts of the world,
networked computing is now mundane as both foreground (e.g., smartphones, tablets)
and background (e.g., road traffic management, financial systems) technologies. This

has permitted, and continues to permit, new gloss on existing interactions (e.g., online
banking) as well as distinctively new interactions (e.g., massively scalable distributed
real-time mobile gaming). An effect of this increasing pervasiveness of networked
computation in our environments and our lives is that data are also now ubiquitous:
in many places, much of society is rapidly becoming “data driven”.

Many of the devices we use, the networks through which they connect – not just the
Internet but also alternative technologies such as fixed and cellular telephone
networks – and the interactions we experience with these technologies (e.g., use of
credit cards, driving on public highways, online shopping) generate considerable trails
of data. These data are created both consciously by us – whether volunteered via,
e.g., our Online Social Network (OSN) profiles, or observed as with our online
shopping behaviour (World Economic Forum 2011) – and they are inferred and
created about us by others – not just other people but, increasingly, machines and
algorithms, too.

The Evolution of Human-Computer Interaction


We observe that Human-Computer Interaction (HCI) has grown out of and been
traditionally focused on the interactions between humans and computers as artefacts,
i.e., devices to be interacted with. As described by Jonathan Grudin (1990a, 1990b), a
Principal Researcher at Microsoft Research in the field of HCI, the focus of work in HCI
has varied from psychology (Card, Moran, and Newell 1983) to hardware to software to
interface, and subsequently deeper into the organisation. This trend, moving the focus
outward from the relatively simple view of an operator using a piece of hardware,
continued with consideration of the richness of the inter-relationships between users
and computer systems as those systems have pervaded organisations and become
networked, and thus the need to “explode the interface”, e.g., Bowers and Rodden
(1993), professors at Newcastle and Nottingham Universities respectively.

The evolution of Human-Computer Interaction:

We believe that the continuing and accelerating trend towards truly ubiquitous and
pervasive computing points to a need to emphasise another facet of the very general
topic of how people interact with computer systems: how people should interact with
data. That is, not so much the need for us to interact directly with large quantities of
data (still a relatively rare occupation), but the need for us all to have some

understanding of the ways in which our behaviours, the data they generate, and the
algorithms which process these data increasingly shape our lives. A complex
ecosystem, often collaborative but sometimes combative (Brown 2014), is forming
around companies and individuals engaging in the use of these data. The nascent,
multi-disciplinary field of Human-Data Interaction (HDI) responds to this by placing
the human at the centre of these data flows, and it is concerned with providing
mechanisms for people to interact explicitly with these systems and data.

We think that it’s crucial to understand 1) how our behaviours, 2) how the data our
behaviours generate, and 3) how the algorithms which process these data
increasingly shape our lives. Human-Data Interaction (HDI) places the human at the
centre of these data flows, and HDI provides mechanisms which can help individuals
and groups of people to interact explicitly with these systems and data.

In this article we will next go into more detail as to why HDI deserves to be named as
a distinct problematic (§2) before defining just what it is we
problematic (§2) before defining just what it is we
might mean by HDI (§3). We will then give our story of
the development of HDI to its state by the mid-2010s, starting with Dataware, an early
technical attempt to enable HDI (§4). We follow this with a deeper discussion of what
exactly the “I” in HDI might mean – how interaction is to be construed and
constructed in HDI – and a recent second attempt at starting to define a technical
platform to support HDI with that understanding in mind (§5 and §6 respectively). We
conclude with a brief discussion of some exciting areas of work occurring in the
second half of the 2010s that we identify (§7), though there are no doubt many more!
Finally, after summarising (§8), we give a few indications of where to go to learn more
(§9).

Why Do We Need HDI?

Life Goes On: We Still Need Privacy

Privacy is not an outdated model. We need it more than ever.

“One thing should be clear, even though we live in a world in which we share personal
information more freely than in the past, we must reject the conclusion that privacy is
an outmoded value ...we need it now more than ever.”

– Barack Obama, President of the USA (US Consumer Privacy Bill of Rights 2012)

Privacy has long remained a topic of widespread societal interest and debate as digital
technologies generate and trade in personal data on an unprecedented scale.
Government and Industry proclaim the social and economic benefits to be had from
personal data, against a counterpoint of a steady flow of scare stories detailing misuse
and abuse of personal data. Industry efforts to quell anxiety proffer encryption as the
panacea to public concerns, which in turn becomes a matter of concern to those
charged with state security. Use of encryption in this way also glosses, hides or at
least renders opaque a key threat to consumer or user privacy: the ability to “listen in”
and stop devices “saying” too much about us. As Professor of Computer Science and
Law at Stanford University Keith Winstein (2015) puts it,

“Manufacturers are shipping devices as sealed-off products that will speak, encrypted,
only with the manufacturer’s servers over the Internet. Encryption is a great way to
protect against eavesdropping from bad guys. But when it stops the devices’ actual
owners from listening in to make sure the device isn’t tattling on them, the effect is
anti-consumer.”

– Keith Winstein

Many Internet businesses rely on extensive, rich data collected about their users,
whether to target advertising effectively or as a product for sale to other parties. The
powerful network externalities that exist in rich data collected about a large set of
users make it difficult for truly competitive markets to form. We can see a concrete
example in the increasing range and reach of the information collected about us by
third-party websites, a space dominated by a handful of players, including Google,
Yahoo, Rubicon Project, Facebook and Microsoft (Falahrastegar et al. 2014, 2016).
This dominance has a detrimental effect on the wider ecosystem: online service
vendors find themselves at the whim of large platform and Application Programming
Interface (API) providers, hampering innovation and distorting markets.

The Paradox of Privacy: The More We Reveal, the More Privacy We Desire
Personal data management is, however, considered an intensely personal matter: e.g.,
professor of Informatics Paul Dourish (2004) argues that individual attitudes towards
personal data and privacy are very complex and context dependent. Studies have
shown that the more people disclose on social media, the more privacy they say they
desire, e.g., Taddicken and Jers (2011), of the Universities of Hamburg and
Hohenheim respectively. This paradox implies dissatisfaction about what participants
received in return for exposing so much about themselves online and yet, “they
continued to participate because they were afraid of being left out or judged by others
as unplugged and unengaged losers”. This example also indicates the inherently social
nature of much “personal” data: as Andy Crabtree, Professor of Computer Science at
the University of Nottingham, and Richard Mortier, University Lecturer in the

Cambridge University Computer Laboratory (2015) note, it is impractical to withdraw
from all online activity just to protect one’s privacy.

Context sensitivity, opacity of data collection and drawn inferences, trade of personal
data between third parties and data aggregators, and recent data leaks and privacy
infringements all motivate means to engage with and control our personal data
portfolios. However, technical constraints that ignore the interests of advertisers and
analytics providers, and so remove or diminish revenues supporting “free” services and
applications, will fail (Vallina-Rodriguez et al. 2012; Leontiadis et al. 2012).

The Internet of Things Reshaped the Nature of Data Collection: From Active to
Passive
The Internet of Things (IoT) further complicates the situation, reshaping the nature of
data collection from an active feature of human-computer interaction to a passive one
in which devices seamlessly communicate personal data to one another across
computer networks. Insofar as encryption is seen as the panacea to privacy concerns –
and it is not: consumer data remains open to the kinds of industry abuses that we are
all becoming increasingly familiar with – this gives rise to “walled gardens” in which
personal data is distributed to the cloud before it is made available to end-users. Open
IoT platforms, such as Samsung’s ARTIK, do not circumvent the problem either: they
are only open to developers. This is not an IoT specific objection. However, IoT throws
it into sharp relief: while security is clearly an important part of the privacy equation,
it is equally clear that more is required.

Reclaiming Humanity: Active Players not Passive Victims of the Digital Economy
There is need in particular to put the end-user into the flow of personal data; to make
the parties about whom personal data is generated into active rather than passive
participants in its distribution and use. The need to support personal data
management is reflected in a broad range of legal, policy and industry initiatives, e.g.,
Europe’s General Data Protection Directive (European Parliament 2014), the USA’s
Consumer Privacy Bill of Rights (US Consumer Privacy Bill of Rights 2012) and
Japan’s revision of policies concerning the use of personal data (Strategic
Headquarters for the Promotion of an Advanced Information and Telecommunications
Network Society 2014).

Here, issues of trust, accountability, and user empowerment are paramount. They
speak not only to the obligations of data controllers – the parties who are responsible
for processing personal data and ensuring compliance with regulation and law – but
seek to shift the locus of agency and control towards the consumer in an effort to
transform the user from a passive “data subject” into an active participant in the
processing of personal data. That is, into someone who can exercise control and
manage their data and privacy, and thus become an active player or participant in –
rather than a passive victim of – the emerging data economy.

Having discussed why HDI is a topic that should concern us, we now turn to a more
detailed discussion of just what it is that we might mean when we use the term HDI.

LESSON XII

Prototyping & UI

PROTOTYPING

What is Prototyping?
Prototyping involves creating a basic system which is a (semi- or fully) working
version of the current incarnation of the system - a concrete but partial
implementation of a system design. It

 Is realistic and professional-looking
 Can be used for formal acceptance with the client
 Identifies problems early

Different flavours of Prototyping

Full prototype

 whole system description


 Full functionality
 but lower performance

Horizontal prototype

 All or most aspects of user interface, little functionality


 only top level functions work
 Everything on screen should work, but not 'go' anywhere

Vertical prototype

 One or two threads of interaction in depth


 from top to bottom
 Only some things on screen should work - links to interaction you want
user/tester to follow
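
The horizontal/vertical distinction can be sketched in code. The following Python fragment is a hypothetical illustration (the "photo album" features and function names are invented for this sketch, not taken from any real system): the horizontal prototype exposes every top-level option but none of them "goes" anywhere, while the vertical prototype implements a single thread of interaction in full depth.

```python
# Hypothetical sketch: horizontal vs. vertical prototyping of a photo-album app.

def horizontal_prototype(choice):
    """Breadth: every top-level feature is visible, but none is implemented."""
    features = {"view": "View album", "edit": "Edit photo", "share": "Share"}
    if choice in features:
        # The screen appears, but the feature does not "go" anywhere.
        return f"[{features[choice]}] screen shown - not implemented yet"
    return "Unknown option"

def vertical_prototype(photo, caption):
    """Depth: one thread - captioning a photo - works from top to bottom."""
    album = {}
    album[photo] = caption   # store
    return album[photo]      # retrieve: the whole interaction path works

print(horizontal_prototype("edit"))
print(vertical_prototype("cat.jpg", "My cat"))
```

A real horizontal prototype would of course be a clickable UI rather than a function, but the shape is the same: all surface, no depth in one case; full depth on one path in the other.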

Incremental prototype

 Designed, developed and evaluated stage-by-stage, eventually becomes


the final product
 Evolutionary prototyping and rapid prototyping are similar concepts
 This is useful when the user is unsure what the system should be like. It
is a simple way of ensuring that the design is right. It allows the user to
evaluate the system as it develops and user feedback can then be used in
the further development of the system.

High-fidelity prototype
 Mockups and models
 Makes computers disappear: e.g., build boxes to house computer screens
 Generally uses a high-quality, realistic representation medium - video,
computer animation
 May use software tools such as HTML, PowerPoint, Visual Basic, Director

Low-fidelity prototype

 Does not look much like the finished product
 Simple, cheap and easy to modify
 Often used in conceptual design to encourage discovery
 Generally uses a medium far removed from the final implementation
 Never integrated directly into the finished product
 Example: paper prototype

What is User Interface (UI) Design?

User interface (UI) design is the process of making interfaces in software or
computerized devices with a focus on looks or style. Designers aim to create designs
users will find easy to use and pleasurable. UI design typically refers to graphical user
interfaces but also includes others, such as voice-controlled ones.

Designing UIs for User Delight


User interfaces are the access points where users interact with designs. Graphical
user interfaces (GUIs) are designs’ control panels and faces; voice-controlled interfaces
involve oral-auditory interaction, while gesture-based interfaces witness users
engaging with 3D design spaces via bodily motions. User interface design is a craft
that involves building an essential part of the user experience; users are very swift to
judge designs on usability and likeability. Designers focus on building interfaces users
will find highly usable and efficient. Thus, a thorough understanding of the contexts
users will find themselves in when making those judgments is crucial. You should
create the illusion that users aren’t interacting with a device so much as they’re trying
to attain goals directly and as effortlessly as possible. This is in line with the intangible
nature of software – instead of depositing icons on a screen, you should aim to make
the interface effectively invisible, offering users portals through which they can
interact directly with the reality of their tasks. Focus on sustaining this “magic” by
letting users find their way about the interface intuitively – the less they notice they
must use controls, the more they’ll immerse themselves. This dynamic applies to
another dimension of UI design: Your design should have as many enjoyable features
as are appropriate.

Facebook’s easy-to-use layout affords instant brand recognition.

Self-check

I. Choose one user interface design you use regularly and explain why it is effective.


II. Enumerate and explain the different flavours of prototyping.

LESSON XIII

Interactions Styles

Interaction Styles

The concept of Interaction Styles refers to all the ways the user can communicate or
otherwise interact with the computer system. The concept belongs in the realm of HCI,
or at least has its roots in the computer medium, usually in the form of a workstation
or a desktop computer. These concepts do, however, retain some of their descriptive
power outside the computer medium. For example, you can talk about menu
selection (defined below) in mobile phones.

In HCI textbooks, such as Shneiderman (1997) and Preece et al. (1994), the types of
interaction styles mentioned are usually command language, form fillin, menu
selection, and direct manipulation.

Command language (or command entry)


Command language is the earliest form of interaction style and is still being used,
though mainly on Linux/Unix operating systems. These "Command prompts" are used
by (usually) expert users who type in commands and possibly some parameters that
will affect the way the command is executed. The following screen dump shows a
command prompt - in this case, the user has logged on to a (mail) server and can use
the server's functions by typing in commands.

Figure 1: Command prompt. The command "ls -al" has just been executed
('ls' stands for 'list' and the parameters '-al' specify that the list command should
display a detailed list of files).

Command language places a considerable cognitive burden on the user in that the
interaction style relies on recall as opposed to recognition memory. Commands as well
as their many parameterised options have to be learned by heart and the user is given
no help in this task of retrieving command names from memory. This task is not made
easier by the fact that many commands (like the 'ls' command in the above example)
are abbreviated in order to minimize the number of necessary keystrokes when typing
commands. The learnability of command languages is generally very poor.

Advantages and disadvantages of Command Language
Some of the following points are adapted from Shneiderman (1997) and Preece et al.
(1994)

Advantages
 Flexible.
 Appeals to expert users.
 Supports creation of user-defined "scripts" or macros.
 Is suitable for interacting with networked computers even with low bandwidth.

Disadvantages
 Retention of commands is generally very poor.
 Learnability of commands is very poor.
 Error rates are high.
 Error messages and assistance are hard to provide because of the diversity of
possibilities plus the complexity of mapping from tasks to interface concepts
and syntax.
 Not suitable for non-expert users.
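
As a minimal sketch of the command-language style, consider this toy interpreter (invented for illustration; the commands 'ls' and 'rm' only mimic their Unix namesakes). Note how the user must recall the abbreviated command names from memory, and how a terse error message is all the help an unrecognised command receives:

```python
# Toy command-language interpreter (illustrative, not a real shell).
files = ["notes.txt", "draft.doc"]

def execute(command_line):
    parts = command_line.split()
    if not parts:
        return ""
    cmd, args = parts[0], parts[1:]
    if cmd == "ls":                      # abbreviated name: must be recalled, not recognised
        return ", ".join(files)
    if cmd == "rm" and args:
        files.remove(args[0])
        return f"removed {args[0]}"
    return f"{cmd}: command not found"   # terse error, typical of the style

print(execute("ls"))      # -> notes.txt, draft.doc
print(execute("list"))    # recall failure -> list: command not found
```

The second call shows the style's main weakness: typing the unabbreviated word "list" instead of the memorised "ls" fails outright, with no guidance toward the correct command.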

Form fillin
The form fillin interaction style (also called "fill in the blanks") was aimed at a different
set of users than command language, namely non-experts users. When form fillin
interfaces first appeared, the whole interface was form-based, unlike much of today's
software that mix forms with other interaction styles. Back then, the screen was
designed as a form in which data could be entered in the pre-defined form fields. The
TAB-key was (and still is) used to switch between the fields and ENTER to submit the
form. Thus, there was originally no need for a pointing device such as a mouse and
the separation of data into fields allowed for validation of the input. Form fillin interfaces
were (and still are) especially useful for routine clerical work or for tasks that require a
great deal of data entry. Some examples of form fillin are shown below.

Figure 2.A: Classic form fillin via a terminal.
Figure 2.B: More modern-day form fillin, e.g., from a web page.

Even today, a lot of computer programs like video rental software, financial systems,
pay roll systems etc. are still purely forms-based.

Advantages and disadvantages of Form Fillin


Some points below are adapted from Shneiderman (1997) and Preece et al. (1994).

Advantages
 Simplifies data entry.
 Shortens learning in that the fields are predefined and need only be
'recognised'.
 Guides the user via the predefined rules.

Disadvantages
 Consumes screen space.
 Usually sets the scene for rigid formalisation of the business processes.

Please note that "form fillin" is not an abbreviation of "form filling". Instead, it
should be read "form fill-in".
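
The way predefined fields both guide the user and permit per-field validation can be sketched as follows (the field names and validation rules here are illustrative assumptions, not taken from the module):

```python
# Form fillin sketch: predefined fields allow each entry to be validated
# before the form is submitted (fields and rules are illustrative).
FIELDS = {
    "name":  lambda v: len(v) > 0,   # must not be empty
    "age":   lambda v: v.isdigit(),  # must be a number
    "email": lambda v: "@" in v,     # crude well-formedness check
}

def validate(form):
    """Return the names of fields whose values fail their validation rule."""
    return [f for f, check in FIELDS.items() if not check(form.get(f, ""))]

print(validate({"name": "Ada", "age": "36", "email": "ada@example.org"}))  # []
print(validate({"name": "Ada", "age": "old", "email": "nope"}))            # ['age', 'email']
```

This is the property the text describes: because the fields are predefined, the system can reject bad input field by field instead of parsing free-form commands.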

Menu selection

A menu is a set of options displayed on the screen where the selection and execution
of one (or more) of the options results in a state change of the interface (Paap and
Roske-Hofstrand, 1989, as cited in Preece et al. 1994). Using a system based on
menu-selection, the user selects a command from a predefined selection of commands
arranged in menus and observes the effect. If the labels on the menus/commands are
understandable (and grouped well) users can accomplish their tasks with negligible
learning or memorisation as finding a command/menu item is a recognition as
opposed to recall memory task (see recall versus recognition). To save screen space
menu items are often clustered in pull-down or pop-up menus. Some examples of
menu selection are shown below.

Figure 3.A: Contemporary menu selection (Notepad by Microsoft Corporation).
Figure 3.B: Menu selection in the form of a webpage (microsoft.com). Webpages in
general can be said to be based on menu selection.

Advantages and disadvantages of Menu Selection
Some points below are adapted from Shneiderman (1997) and Preece et al. (1994).

Advantages
 Ideal for novice or intermittent users.
 Can appeal to expert users if display and selection mechanisms are rapid and if
appropriate "shortcuts" are implemented.
 Affords exploration (users can "look around" in the menus for the appropriate
command, unlike having to remember the name of a command and its spelling
when using command language.)
 Structures decision making.
 Allows easy support of error handling as the user's input does not have to be
parsed (as with command language).

Disadvantages
 Too many menus may lead to information overload or complexity of
discouraging proportions.
 May be slow for frequent users.
 May not be suited for small graphic displays.
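
A minimal sketch of menu selection (the menu items are invented for illustration) shows why error handling is easy in this style: the input never needs parsing, only matching against the displayed options, and the user recognises commands rather than recalling them.

```python
# Menu-selection sketch: options are displayed, so the user recognises
# rather than recalls them (menu contents are illustrative).
MENU = {"1": "Open file", "2": "Save file", "3": "Quit"}

def render_menu():
    """Show every option - the interface itself is the memory aid."""
    return "\n".join(f"{key}. {label}" for key, label in MENU.items())

def select(choice):
    """No parsing needed: a choice is either a displayed key or an error."""
    if choice in MENU:
        return f"Selected: {MENU[choice]}"
    return "Invalid choice, please pick 1-3"

print(render_menu())
print(select("2"))
```

Contrast this with the command-language style: an invalid entry can be caught immediately and answered with a message that lists exactly what is allowed.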

Direct manipulation

Direct manipulation is a central theme in interface design and is treated in a separate
encyclopedia entry. Below, direct manipulation is only briefly described.

The term direct manipulation was introduced by Ben Shneiderman in his keynote
address at the NYU Symposium on User Interfaces (Shneiderman 1982) and more
explicitly in Shneiderman (1983) to describe a certain ‘direct’ software interaction style
that can be traced back to Sutherland’s Sketchpad (Sutherland 1963). Direct
manipulation captures the idea of “direct manipulation of the object of interest”
(Shneiderman 1983: p. 57), which means that objects of interest are represented as
distinguishable objects in the UI and are manipulated in a direct fashion.

Direct manipulation systems have the following characteristics:

 Visibility of the object of interest.
 Rapid, reversible, incremental actions.
 Replacement of complex command language syntax by direct manipulation of
the object of interest.

Figure 4.A: The text-book example of direct manipulation, the Windows File Explorer,
where files are dragged and dropped.
Figure 4.B: One of the earliest commercially available direct manipulation interfaces
was MacPaint.

Advantages and disadvantages of Direct Manipulation


Some points below are adapted from Shneiderman (1997) and Preece et al. (1994).

Advantages
 Visually presents task concepts.
 Easy to learn.
 Errors can be avoided more easily.
 Encourages exploration.
 High subjective satisfaction.
 Recognition memory (as opposed to cued or free recall memory)
Disadvantages
 May be more difficult to programme.
 Not suitable for small graphic displays.
 Spatial and visual representation is not always preferable.
 Metaphors can be misleading since the “the essence of metaphor is
understanding and experiencing one kind of thing in terms of another” (Lakoff
and Johnson 1983: p. 5), which, by definition, makes a metaphor different from
what it represents or points to.
 Compact notations may better suit expert users.
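
The characteristic of "rapid, reversible, incremental actions" can be sketched with an undo stack (the Canvas class and icon name below are hypothetical, purely for illustration): each drag is one small step whose effect is immediately visible, and every step can be undone, which is what encourages exploration.

```python
# Sketch of reversible, incremental direct-manipulation actions
# (the Canvas class and object names are invented for illustration).
class Canvas:
    def __init__(self):
        self.positions = {"file_icon": (0, 0)}
        self.history = []            # stack of (object, previous position)

    def drag(self, obj, to):
        """One incremental action: move an object, remembering where it was."""
        self.history.append((obj, self.positions[obj]))
        self.positions[obj] = to     # effect is immediately visible

    def undo(self):
        """Reversibility: pop the last action and restore the old position."""
        obj, prev = self.history.pop()
        self.positions[obj] = prev

c = Canvas()
c.drag("file_icon", (10, 5))
c.undo()
print(c.positions["file_icon"])      # -> (0, 0)
```

Because every action is small and reversible, the user can try things freely; errors are easy to avoid and easy to recover from, which is exactly the advantage listed above.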

Conceptual Interaction Models

Preece, Rogers and Sharp in Interaction Design propose that understanding users’
conceptual models for interaction can guide HCI designers to the proper interaction
techniques for their system

The most important thing to design is the user’s conceptual model. Everything
else should be subordinated to making that model clear, obvious, and
substantial. That is almost exactly the opposite of how most software is
designed. (David Liddle, 1996, Design of the conceptual model, In Bringing
Design to Software, Addison-Wesley, 17-31)

The HCI designer’s goal is to understand the interaction in terms of the users’
understanding of it. Preece, Rogers and Sharp propose four conceptual models for
interaction concepts, based on the type of activities users perform during the
interaction.

 Instructing – issuing commands to the system
 Conversing – the user asks the system questions
 Manipulating and Navigating – users interact with virtual objects or
environments
 Exploring and Browsing – the system provides structured information

I propose additional conceptual interactions that are more passive:

 Passive Instrumental – the system provides passive feedback to user actions


 Passive informative – the system provides passive information to the users.

User may interact with a system using more than one conceptual interaction model.

Instructional Interactions
Issuing commands is an example of instructional interactions. Instructional
interactions are probably the most common form of conceptual interactions. They
allow the user the most control over the system. Specific examples range from using
a VCR to programming. In most cases, operating system interactions are instructional
interactions. Icons, menus and control keys are examples of improving the usability of
command-line instructional interaction. Instructional interactions tend to be quick
and efficient.

Conversational Interactions
Conversational interactions model the interaction as a user-system dialog. Examples
of systems that are primarily conversational are help systems and search engines.
Agents (such as the paper clip) use conversational interaction. Implementing the
conversational model may require voice recognition and text parsing, or could use
forms. The advantage of the conversational model is that it can be more natural, but it
can also be a slower interaction. For example, using automated phone-based systems
is a slow conversational interaction interface. Another disadvantage of conversational
interaction is that the user may believe that the system is smarter than it really is,
especially if the system uses an animated agent.

Manipulating and Navigational Interactions


This model describes the interaction of manipulating virtual objects or navigating
virtual worlds. Navigational interactions are popular in computer games. Manipulating
interactions occur in drawing software. Navigational interactions occur even in word
processors, for example zooming and using the scroll bar. Direct manipulations are
manipulating interactions. Ben Shneiderman (1983) coined the phrase and posed three
properties:

 continuous representations of objects


 rapid reversible incremental actions with immediate feedback
 physical actions

66
Intro. To Human Computer Interaction
Apple was the first computer company to design an operating system using direct
manipulation on the desktop. Direct manipulation and navigational interactions have
many benefits: they are easy to learn, easy to recall, tend to produce fewer errors, give
immediate feedback, and produce less user anxiety. But they have several
disadvantages: the interactions are slower, and the user may believe that the
interaction is more capable than it really is. Poor metaphors, such as moving the icon
of a floppy to eject the floppy, can confuse the user.

Explorative and Browsing Interactions


Explorative and browsing interactions refer to searching structured information.
Examples of systems using explorative interactions are music CDs, movie DVDs, the
web, and portals. Not much progress has been made in this conceptual model for
interactions, probably because structuring information is more than a trivial task and
is hard to model.

Passive Instrumental Interactions


Passive instrumental interactions are similar to instruments, for example the
speedometer in an auto dashboard. They can provide feedback on users’ actions or
movements, such as a GPS interface. They can also provide information about
changes in the environment, such as a light meter or an image in a viewfinder.
Smartphones are frequently used as instruments and make use of passive
instrumental interactions.

Passive Informative Interactions


Smartphones are frequently used to read books. The interaction is very passive and
one-way: the system provides information to the user, who primarily gestures to
progress through the book. Viewing images is another passive informative
interaction, with interactions only for zooming and panning. Passive informative
interaction may be a simplified form of manipulating and navigational interactions.

The table below summarizes interaction styles and conceptual interaction models.
The related conceptual interaction model is the most common model supported by
the interaction style; implementation indicates how hard the style is for the designer
to implement.

Interface           User speed    Flexibility   Learning    Implementation   Conceptual Model
Command Line        fast          high          slow        easy             instructing
Menus               slow          medium        medium      medium           instructing
Dialog              slow          none          fast        easy             conversing
Forms               medium        not much      fast        easy             instructing, conversing, browsing
Spreadsheet         slow          high          slow        hard             instructing, conversing
Point & Click       slow-fast     none          fast        easy             instructing, manipulating, browsing
Natural Language    fast          very          slow-fast   hard             conversing
Command Gesturing   fast-medium   medium        hard        hard             instructing
General Gesturing   fast          high          slow-fast   hard             manipulating

Mobile Interactions
We will explore the interactions possible in mobile web apps via the technologies that
enable them.

 HTML
 CSS
 Twitter Bootstrap (as an example of a CSS framework)
 HTML 5
 Various other JavaScript Libraries

HTML
HTML is the lowest-level technology enabling interactions on the web. As the base-level
technology it is very lean on enhanced features. The client browser implements HTML,
so the exact interaction varies with the browser.

The best resource for HTML is W3Schools.

http://www.w3schools.com/html/default.asp

Views and Layout Tools


The primary challenge for HTML is to provide a syntax that is independent of the
browser and window dimensions. Consequently, HTML only offers the most primitive
layout tools, which basically flow from the top of the page to the bottom.
Some of the layout tools are:

Paragraph tag – basic sectioning of the page
Image tag – non-text layout
Table tags – advanced layout for displaying data
Layout tags – advanced layout for conveying semantic information
IFrame tag – layout that enables displaying one page within another

The original intent of the table tags was to display data similar to a spreadsheet.
Tables were not designed to be a layout tool, but in early HTML the table was the only
tool to control layout. The table is an awkward syntax for expressing layout, so older
HTML editors and IDEs focused on making the table tags usable by web designers to
express layout. The main fault with tables for expressing layout is that the layouts are
not responsive to different window dimensions.

Interactions
The basic HTML interactions consist of links and forms:

 Links
 Button
 Text Fields
 Radio Buttons
 Check Boxes
 Browser Back
 Browser Closed
 Browser Address

The original HTML interaction technique was traversing a link. The link concept was
developed by Nelson (1965) as an extension of Bush's (1945) Memex description.
Originally the link was a powerful tool for relating documents. Later the link was used
to initiate a user-generated event for the browser to detect.

When message boards were developed for HTML, form tags were developed. The
form input tags include text fields, radio buttons, and check boxes. The typical
interaction is for the user to use input tags to define the input data and then click
submit, which generates a POST request to the server. The form data is mapped into
the body of the request.
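A minimal form illustrating these input tags might look like the following sketch (the action URL and field names are hypothetical):

```html
<!-- Hypothetical sign-up form. Clicking submit generates a POST request
     to /signup whose body carries the name/value pairs of the inputs. -->
<form action="/signup" method="post">
  <input type="text" name="username">            <!-- text field -->
  <input type="radio" name="plan" value="free"> Free
  <input type="radio" name="plan" value="pro"> Pro
  <input type="checkbox" name="newsletter"> Newsletter   <!-- check box -->
  <input type="submit" value="Sign Up">          <!-- submit button -->
</form>
```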

We should also include the interactions that the browser itself offers, which include the
back button, the browser address field, and the close button.

CSS
In original HTML, styling was expressed as attributes of the element, coded in the tag.
Clearly this made it hard to maintain styles for a large website: to change a style one
had to search for and edit all the element tags. The Cascading Style Sheet (CSS) syntax
was developed to locate the styles away from the tags and all in one place. "Cascading"
comes from the priority given to style definitions based on the location of the rules:
user defined, inline, in the page, or in a separate file.

CSS can express more than just static style. It can express animation or changes in
style for common events such as hovering, clicking, etc. A good resource for CSS is

http://www.w3schools.com/css/
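As a sketch of a style change on a common event, the rule below recolors a button smoothly on hover (the class name and colors are assumptions):

```css
/* Hypothetical button class: the background color animates over 0.3s
   whenever the pointer hovers over the element. */
.nav-button {
  background-color: #337ab7;
  transition: background-color 0.3s ease;
}
.nav-button:hover {
  background-color: #23527c;
}
```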

Twitter Bootstrap
Twitter Bootstrap is an example of a CSS framework. In essence, Twitter Bootstrap is a
style sheet with some JavaScript code. Twitter Bootstrap has a very complete style set
and was one of the first frameworks to be "mobile first." CSS frameworks provide
consistent styling for all the HTML tags.

The best resource for Twitter Bootstrap is at the official website and W3Schools.

http://getbootstrap.com/

http://www.w3schools.com/bootstrap/default.asp

Views and Layouts


Besides styling for HTML, Twitter Bootstrap offers advanced views and layouts:

grids
Jumbotron
The grid is used to define the responsiveness of the layout, meaning how elements
should be laid out for different window widths. A row is a horizontal layout, and
columns divide the row. The column class specification defines the column width and
the window-width break points. Twitter Bootstrap is a "mobile first" design, meaning
that generally each column becomes its own row on a small device, and the break
points define the column sizes for larger window widths.
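A sketch of a grid row (class names as in Bootstrap 3) illustrates the idea; the content text is arbitrary:

```html
<!-- On narrow screens the three columns stack vertically; at the "md"
     break point and wider they share the 12-unit row as 4 + 4 + 4. -->
<div class="container">
  <div class="row">
    <div class="col-md-4">First column</div>
    <div class="col-md-4">Second column</div>
    <div class="col-md-4">Third column</div>
  </div>
</div>
```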

Jumbotron is a large display area. The name originally referred to Sony's 1985 giant
display. Jumbotrons are popular for home page titles and images.

Interactions
Besides the interactions provided by HTML, Twitter Bootstrap offers some advanced
interaction widgets.

 Navigation Bar (Navbar) – menu bar that can be located at the top (static or fixed) or
on the side
 Drop-down menu – set of links dropping down from menu items or buttons
 Notices – panels that display conditionally
 Modal – a window that overlays the page and holds focus until the user responds
 Accordions – collapsing panels
 Progress bar

The navbar enables a menu similar to what users are familiar with in desktop
applications. In essence, the navbar helps enable the idea of web apps. The
combination of navbar and drop-downs can make a web app's functionality as vast as
a desktop application's.

Modal windows grab focus and force the user to respond. They are good for alerting the
user before they might delete a database entry, but modals should be used with
caution. If you find yourself writing a modal with just one button, reconsider your
design: a notice might be better, or at least add a check box with "do not show again."
Modals or overlays are also used to show expanded views. Accordions are good for
concealing and revealing detailed information in a list.

HTML 5
HTML 5 is a set of proposed standards extending HTML functionality. Many of the
proposed standards have already been implemented by major browsers. Some of these
standards add additional interaction techniques via JavaScript.

Interactions
The APIs listed below add interaction techniques to most modern browsers. The Google
Chrome browser supports all of them:

 Geolocation – http://dev.w3.org/geo/api/spec-source.html
 Multimedia – http://www.w3.org/html/wg/drafts/html/master/embedded-content.html#media-elements
 Canvas – http://www.w3.org/html/wg/drafts/html/master/scripting-1.html#the-canvas-element
 SVG – http://www.w3.org/Graphics/SVG/
 Motion Sensors – http://w3c.github.io/deviceorientation/spec-source-orientation.html
 Form Virtual Keyboards – http://www.w3.org/html/wg/drafts/html/master/Overview.html
 Touch Events – http://www.w3.org/TR/touch-events/
 CSS 3 Transitions – http://www.w3.org/TR/css3-transitions/
 CSS 3 Animations – http://dev.w3.org/csswg/css-animations/
 WebGL – https://www.khronos.org/webgl/
 HTML Media Capture – http://www.w3.org/TR/html-media-capture/
 Web Speech API – https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html
 Vibration API – http://www.w3.org/TR/vibration/

The Geolocation API gives access to the GPS even when the browser is offline.
Multimedia provides audio and video tags. Motion Sensors give access to the device's
accelerometers and consequently can be used to implement a compass or level. Form
Virtual Keyboards give mobile devices different keyboards depending on the text field
attributes. Touch Events give access to continuous X-Y page and screen coordinates,
so they can be used to implement drawing and gesturing. HTML Media Capture gives
access to the camera in the device so that photos and videos can be captured. The Web
Speech API provides speech recognition, and the Vibration API vibrates the device, but
these are implemented only in Chrome.
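Because support varies by browser, a page typically feature-detects an API before using it. The helper below is a sketch under that assumption; it takes the navigator and window objects as parameters so the logic can also be exercised with mock objects outside a browser:

```javascript
// Sketch of feature detection for a few HTML5 interaction APIs.
// In a real page, call detectApis(navigator, window); mocks work too.
function detectApis(nav, win) {
  return {
    geolocation: !!(nav && nav.geolocation),
    vibration: !!(nav && typeof nav.vibrate === "function"),
    touch: !!(win && "ontouchstart" in win),
    speech: !!(win && (win.SpeechRecognition || win.webkitSpeechRecognition)),
  };
}

// Example with mock objects standing in for a touch-capable phone browser:
const caps = detectApis(
  { geolocation: {}, vibrate: function () {} },
  { ontouchstart: null }
);
```

An app would consult such a capability map before offering, say, a "vibrate on alert" option.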

Most but not all of these APIs are implemented by the major modern browsers; you
should check the implementation status at

http://mobilehtml5.org/

The above APIs give web apps nearly the same functionality as native apps. Some of
the interactions are:

 GPS Location
 Touch
 Gesturing
 Orientation
 Photo
 Vibration
 2D drawing
 3D rendering
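As one example of what Touch Events make possible, a swipe gesture can be classified from the start and end touch coordinates. The function below is an illustrative sketch; the 30-pixel threshold is an assumption, not a standard value:

```javascript
// Sketch of a swipe classifier built on Touch Event coordinates.
// start and end are {x, y} points taken from touchstart/touchend events.
function classifySwipe(start, end, threshold) {
  threshold = threshold || 30; // assumed minimum distance in pixels
  const dx = end.x - start.x;
  const dy = end.y - start.y;
  if (Math.abs(dx) < threshold && Math.abs(dy) < threshold) return "tap";
  if (Math.abs(dx) >= Math.abs(dy)) return dx > 0 ? "swipe-right" : "swipe-left";
  return dy > 0 ? "swipe-down" : "swipe-up";
}

// A mostly horizontal drag to the right classifies as a right swipe:
const gesture = classifySwipe({ x: 10, y: 100 }, { x: 180, y: 110 });
```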

Other JavaScript Libraries


There are many JavaScript libraries that build on these base features to provide
advanced interaction techniques. Below is a short list of libraries that you may find useful.

Google Maps API


The Google Maps API is a convenient JavaScript library for displaying maps and map
icons. The displays offer user controls, including locating icons.

The best resources are the Google Developers and W3Schools websites:

 https://developers.google.com/maps/
 http://www.w3schools.com/googleAPI/default.asp

JQuery and JQueryUI


JQueryUI is an advanced library built on top of the very popular JQuery library. More
than 500 plugin widgets (for example, autocompletion) offer advanced user interface
interactions. Below is a short list of useful resources:

 http://www.w3schools.com/jquery/default.asp
 http://jquery.com/
 http://plugins.jquery.com/
 http://jqueryui.com/
 http://learn.jquery.com/

Interactions
The table below lists interaction techniques available on smart phones together with
their opportunities and constraints.

Interaction         Opportunities                                Constraints/Difficulties

Viewing             Anywhere and anytime                         Small screen; low resolution
Touch               Basic input                                  Space for only a few buttons; small buttons
Long Touch          Context menu                                 Users are unaware; requires time
Gesturing           More expression than touch                   Small space; limited gestures
Keying              Text input                                   Small keyboard; error prone; slow
Spinner             Alternative to text input                    Only a few selections
Auto Completion     Assists text input                           Error prone; complex use
GPS Location        Location; documentation                      Low resolution (about 30 meters); slow
Orientation         Alternative input; provides direction        Noisy; user imprecision
Microphone          Alternative text input; other inputs         Poor quality; transcription hard and error prone
Speaker             Alternative output; feedback                 Poor quality; inappropriate use in public
Time                Documentation                                Little use
GPS Motion          Alternative input; direction; area           Imprecise; slow
Accelerometer       Alternative input; measure activity          Small vocabulary; imprecise
Photo               Documentation; vast information;             Hard to interpret; large storage space; slow
                    alternative (text) input
Vibration           Low-noise output; does not require view      Small vocabulary; imprecise; unnoticed
WiFi                Vast information                             Slow; not always available; small screen; links hard to touch
Bluetooth           Local communication; transfer information    Public; complicated connection protocol; insecure
Bluetooth Devices   Many opportunities                           More than one device; complicated communications
Near Field          Tangible interfaces; more secure             Range about 1 meter

In general, mobile apps frequently use:

 Viewing
 Touch
 Spinners
They avoid using:

 Keying
 Audio

When the opportunity arises, they should make smart use of:

 Gestures
 GPS location
 Orientation
 Time
 Photos
 Vibrations
 WiFi
 Bluetooth

New opportunities for interaction techniques are provided by:

 GPS
 Accelerometers
 Photos
 Vibrations
 Bluetooth
 Bluetooth devices
 Near Field

CONCEPTUAL MODELS

For many designers, especially those new to user-interface design, the next step is to
sketch the control panels and dialog boxes of their product or the pages of their Web
service. Such initial sketches are usually high-level and low-fidelity—showing only
gross layout and organization.

If you begin your design phase by sketching, we believe you've missed a step.
Sketching amounts to starting to design how the system presents itself to users. It is
better to start by designing what the system is to them. That is, by designing a
conceptual model.

Let's consider examples of conceptual models. Assume you are designing:

 a Web site. Is the site


a) a collection of linked pages, or
b) a hierarchy of pages with some crosslinks?
 breadcrumbs for Web site navigation. Do they show
a) the history of pages you have gone through to arrive here, or
b) the place of this page in the hierarchy of pages?
 support for discussion grouped around topics. Is the structure
a) a set of threaded lists, one for each subject, or
b) a set of postings each with potentially related subjects?
 an application for creating newsletters. Is a newsletter
a) a list of items, or
b) a set of pages each with layout of items?
 A platform for creating questionnaires. Is the questionnaire
a) a linear list of questions, or
b) a branching tree of questions?

These decisions matter. Depending on how you choose, users will think of things
differently, the objects will be different, the operations users can do on them will be
different, and how users work will be different. If you try to avoid choosing, to have it
both ways (and, of course, most designs have more than two ways of going), users will
get a confused understanding of the system and confused direction on how to think
about their work. Not choosing is tempting, because these decisions are almost always

difficult to make: usually they involve tradeoffs between simplicity and power (tough
call!). In addition, they always depend on what the user is doing, which means being
clear about the users' tasks. But in the end some sort of decision on the conceptual
model will be made, even if only as a side-effect (often bent and uncertain) of the
rest of the design process.

Tough decisions, but essential, as we see it. And better done right up front when it is
not made even more difficult by being encumbered with lots of dependent details. Our
position: Get the bone structure right, then flesh it out.

By carefully crafting an explicit conceptual model focused squarely on the target task-
domain, and then, and only then, designing a user interface from that, the resulting
product or service will be simpler, more coherent, and easier to learn. In contrast, if
you jump straight into designing the user interface, you are much more likely to
develop a product or service that seems arbitrary, incoherent, and overly complex, not
to mention heavily laden with computer-isms. (For an example, see Sidebar 1: A Web
App Without A Task-Based Conceptual Model.)

Designers with strong backgrounds in human-computer interaction and user-interface


design are probably well aware of the value of conceptual models. However, our
experience with our clients indicates that conceptual models of this sort are almost
completely unknown outside of the HCI community, especially among Web designers
and software programmers.

What a Conceptual Model Is


A conceptual model is a high-level description of how a system is organized and
operates. It specifies and describes:

 the major design metaphors and analogies employed in the design, if any.
 the concepts the system exposes to users, including the task-domain data-
objects users create and manipulate, their attributes, and the operations that
can be performed on them.
 the relationships between these concepts.
 the mappings between the concepts and the task-domain the system is
designed to support.

In using an interactive system (electronic appliance, software program, or Web


service), reading its documentation, and talking with other people who use it, users
construct a model in their minds of the system and how it works. This allows them to
predict its behavior and generalize what they learn to new situations. If the designers
take the trouble to design and refine a conceptual model for the system before they
design a user interface for it, users will be able to more quickly "figure it out."
Furthermore, the model they "figure out" will be more like the one the designers
intended. A conceptual model of an interactive system is therefore:

 an idealized view of how the system works—the model designers hope users
will internalize;
 the ontological structure of the system: the objects, their relationships, and
control structures;

 the mechanism by which users accomplish the tasks the system is intended to
support.

For example, suppose you are designing an online library catalog. The conceptual
model might include:

 metaphors and analogies: e.g., the information is organized as in a physical


card-catalogue.
 concepts: e.g., item (with attributes: title, ISBN, status; with actions: check-
out, check-in, reserve), subtypes of item (e.g., book, periodical issue, LP, video),
periodical volume, user account (with attributes: name, items checked out),
librarian;
 relationships: e.g., a book is one type of item, periodical volumes contain
issues;
 mappings: e.g., each item in the system corresponds to a physical item in the
library;
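The concepts above can be sketched as user-understood objects and actions. Everything in this sketch (class names, fields, the placeholder title and ISBN) is an illustrative assumption, not part of the catalog design itself:

```javascript
// Sketch of the catalog's conceptual objects: an Item with user-visible
// attributes and actions, and Book as a subtype of Item.
class Item {
  constructor(title, isbn) {
    this.title = title;        // attribute
    this.isbn = isbn;          // attribute
    this.status = "on-shelf";  // attribute
  }
  checkOut() { this.status = "checked-out"; } // user-understood action
  checkIn() { this.status = "on-shelf"; }     // user-understood action
}

class Book extends Item {} // relationship: a book is one type of item

const book = new Book("Example Title", "000-0-00-000000-0"); // placeholder ISBN
book.checkOut();
```

Note that nothing here mentions screens or widgets; the sketch captures only task-domain concepts, as the text prescribes.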

Simple: A conceptual model should be as simple as possible while providing the
required functionality. An important guideline for designing a conceptual model is:
"Less is more." If, for example, you're designing a search facility for the Web, do your
intended users really need full Boolean search capability? If not (if a simpler search
mechanism covers the users' needs), don't burden the design with the more complex
capability. Similarly, if you're designing a route-following application, is "turn NNE"
needed, or only "turn right"? And beware, simple ain't simple: it often takes a lot of
thinking (and testing) to decide which model will be simplest!

Task-Focused: The more direct the mapping between the system's operation and the
task-domain it serves, the greater the chances that the designers' target conceptual
model will be correctly reproduced and adopted by the users (Norman, 1986).

For example:

You are designing a software product for creating and managing organization charts.
Is an organization chart

 a collection of boxes, box labels, box layout, connector lines, and attributes
thereof, or
 a collection of organizations, sub-organizations, employees, and attributes
thereof?

Model "b" maps more directly to the users' task-domain, and so will be easier to
master for users who presumably already understand organizations. In contrast, Model
"a" focuses on the graphic appearance of an organization chart rather than on its
function.

What a Conceptual Model Is Not


The conceptual model of an interactive system is not the user interface. It is not about
how the software looks or how it feels. It does not mention keystrokes and mouse-
actions, screen graphics and layout, commands, navigation schemes, dialog boxes,

controls, data presentation, or error messages. It does not say whether the software is
operated through a GUI on a personal computer or by voice-commands over a
telephone. It describes only what people can do with the system and what concepts
they need to understand to operate it. It refers only to task-domain objects, attributes,
and actions.

The conceptual model is not the users' mental model of the system. Users' mental
models of systems are not accessible to designers in any objective sense. Designers
should not waste time trying to determine what the users' "mental models" of the
system are (Nardi, 1993). Different users are likely to have different mental models of a
given interactive system anyway. Conceptual models are more usefully thought of as a
design tool: a way for designers to straighten out their thinking before they start laying
out widgets. It is the designers' responsibility to devise a conceptual model that makes
sense to users based on users' understanding of the task domain. In other words, a
conceptual model may be the basis for users' mental models of the system, but that is
not its primary purpose.

Conceptual models are not use cases (also known as task-level scenarios). Use
cases are stories about the domain tasks that users will have to carry out in their
work. They are supposed to be expressed in a system-neutral way, so as not to specify
the design of the system. Use cases emerge from study and analysis of the task
domain through interviews, ethnographies, focus groups, contextual inquiry, and
other methods. They can either be input to the design of the conceptual model or they
can emerge from it; therefore, they are often included in documents about conceptual
models. However, a set of use cases is not a conceptual model: use cases focus on
tasks; the conceptual model focuses on the system.

Finally, a conceptual model is not an implementation architecture. An implementation


architecture contains concepts (objects, attributes, actions, and control structures) that
are required to implement the system. Some of these concepts in the implementation
architecture may correspond to concepts in the conceptual model (e.g., a Bank
Account class vs. the concept of a bank account), but if so, one is a technical object
while the other is an abstract construct. Of course, an implementation architecture
will also include implementation objects that are of no concern to users (e.g., streams
to the file system), which should have no place in the conceptual model.

Object/Actions Analysis
An important component of a conceptual model is an Objects/Actions analysis: an
enumeration of all the concepts in the model, that is, all the user-understood objects in
the system, the user-understood attributes of those objects, and the actions that users
can perform on each of those objects (Johnson et al., 1989; Card, 1996). The
Objects/Actions analysis, therefore, is a declaration of the concepts that are exposed
to users. Follow this rule: "If it isn't in the conceptual model, the system should not
require users to be aware of it."

Because computer-based systems often provide new capabilities, concepts not found
in the task domain (especially a pre-computerized one) often creep into the conceptual
model. For example, hard-copy documents in a physical filing system can only be
organized one way, but files in an electronic document system can easily be organized
in multiple ways simultaneously.

However, each new concept comes at a high cost, for two reasons:

 It adds a concept that users who know the task domain will not recognize and
therefore must learn.
 It potentially interacts with every other concept in the system. As concepts are
added to a system, the complexity of the system rises not linearly, but
exponentially!
Therefore, additional concepts should be strongly resisted, and admitted into the
conceptual design only when they provide high benefit and their cost can be
minimized through good user-interface design (see the discussion of Quicken in
Sidebar 2: Managing Checking Accounts: Objects, Attributes, Actions). Remember:
Less is more!

Relationships Between Concepts


Enumerating the objects and actions of the task-domain allows designers to notice
actions that are shared among objects. Designers can then use the same user
interface for actions across a variety of objects. For example, consider a drawing
application that allows users to manipulate both rectangles and ellipses. If creation
works the same way for both types of objects, then a user who knows how to create a
rectangle and wants to create an oval already knows how to do it. Similarly, if
users can constrain rectangles to be squares they should also be able to constrain
ellipses to be circles. This makes for a conceptual model that has fewer distinct
concepts, is simpler and more coherent, and is more easily mastered.
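The rectangle/ellipse example can be sketched as shared actions on a common type; all names here are assumptions for illustration:

```javascript
// Sketch: rectangles and ellipses share the same "constrain" action,
// so a single UI mechanism can square a rectangle or circle an ellipse.
class Shape {
  constructor(width, height) {
    this.width = width;
    this.height = height;
  }
  constrain() { this.height = this.width; } // shared action
}
class Rectangle extends Shape {}
class Ellipse extends Shape {}

const oval = new Ellipse(40, 20);
oval.constrain(); // the same action a user learned on rectangles
```

Because the action lives on the shared type, the conceptual model has one "constrain" concept instead of two.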

If objects in a task-domain share actions, they can probably be organized in a


specialization or type hierarchy, in which certain conceptual objects are
specializations of others. If so, making that hierarchy explicit in the conceptual model
may help users comprehend it more easily. While only programmers understand
object-oriented analysis, most users can understand the idea of specialization. For
example, a checking account is a type of bank account, and a book is one type of
product or item a store might sell.

Depending on the application, objects may also be related by a containment hierarchy,


in which some objects can contain other objects. For example, an email folder contains
email messages, and an organization can contain employees.

Finally, concepts in a task-domain are related to each other in importance. Some


concepts are encountered by users more frequently than others. For example, closing
a checking account is an infrequent operation compared to, say, entering a
transaction into an account. The relative importance can be used to focus the design:
It is more important to make frequent operations easy, even at the expense of less
frequent ones.

From Conceptual Model to Completed Project

Developing a conceptual model as the first design step provides several benefits in
later steps:

Lexicon. Once the development team assigns names to the objects, actions, and
attributes enumerated in the conceptual model, they have a lexicon of terms to be

used in the application and its documentation. As the interface is developed, the
software coded, and the documentation written, the lexicon can be consulted to
ensure that terms are used consistently throughout.

Although the entire team develops the lexicon, it is best managed and enforced by the
team's technical writer. This lexicon-manager—whoever gets the job—should
constantly be on the lookout for inconsistencies in what things are called. For
example: "Yo, Bill. We called this thing a 'cell' in this dialog box, but we call it a
'container' in this other dialog box. Our official name for them is 'cells,' so we need to
fix that inconsistency." Software developed without a lexicon often suffers from two
common user interface "bloopers": 1) multiple terms for a given concept, and 2) the
same term for multiple distinct concepts (Johnson, 2000).

It is also the lexicon-manager's role to be on the lookout for user-visible concepts in


the interface, software or documentation that aren't in the lexicon, and to resist them.
For example: "Hey Sue, I see that this window refers to a 'hyper-connector.' That isn't
in our conceptual model or lexicon. Is it just the wrong name for something we already
have in our conceptual model, or is it something new? If it's something new, can we
get rid of it, or do we really, really need it?"

Task scenarios or use-cases. A conceptual model allows the development team to


write scenarios of the product in use, at a level of description that matches the target
task-domain. Such scenarios are often called use-cases. They are useful in checking
the soundness of the design. They can be used in product documentation, in product
functional reviews, and as scripts for usability tests. They also provide the basis for
more detailed scenarios written at the level-of-detail of the eventual interface design.

Once a conceptual model has been crafted, one can write use-cases or task-scenarios
depicting people using the application, using only terminology from the conceptual
model. In the case of the checkbook application, for example, it should be possible to
write scenarios such as:

John uses the program to check his checking account balance. He then deposits a
check in his account and transfers funds into the account from his savings account.

Note that this scenario refers to task-domain objects and actions only, not to specifics
of any user interface. The scenario does not say whether John is interacting with a
GUI on a personal computer or a voice-controlled interface over a telephone.

User-interface. A conceptual model gives the designer a clear target for what the
interface has to deliver to the user: The look and feel of the objects and actions have to
be created, the relationships embodied in the design. The conceptual model then offers
the basis for tests of how well the user interface works: Can the users manipulate the
objects through their representations as the designer intended. (Note: It is tempting to
think that the user can tell you about the conceptual model of the system that they
have formed in these tests. Resist it! That is setting the bar way too high, and for no
reason. It is not at all necessary for the successful use of most systems for users
either to have the conceptual model "right," or to be able to talk clearly about it. Doing
does not require talking!)

The user interface design translates the abstract concepts of the conceptual model
into concrete presentations, controls, and user-actions. The user interface should be
designed after the conceptual model has been designed. Task-scenarios can then be
rewritten at the level of the user-interface design, for example:

John double-clicks on the icon for his account to open it. A separate window opens
showing the current balance. He then clicks in the blank entry field below the last
recorded entry and enters the name and amount of a check he recently received.

Implementation. Readers who are programmers will have noticed the similarity
between the object/action analysis described here and the object-oriented analysis
that is a common early step in software engineering. Although object/action analysis
is restricted to user-understood concepts while object-oriented analysis is not, having
done an object/actions analysis provides a first cut at the object-oriented analysis.
Therefore, developing a conceptual model is not a simple added cost for a project; it
produces outputs that save costs in the software development stage.

Documentation. A conceptual model provides the documentation team with the


material that they will have to provide to the user to help with learning the system
(help material, documentation). A clearly defined conceptual model is a good place to
start, and should be coupled at all points with the descriptions of tasks and interface
actions.

Design process. Because almost everyone on the development team is orienting to the
conceptual model, the conceptual model can also be a central coordination point for
members of the team as they design and develop the system.

The centrality of the conceptual model and its potential role in orchestrating the
design process has one very strong implication for design activities and their
relationship with the conceptual model: Unilateral additions of concepts to the
conceptual model by any team member are not allowed.

For example, if a programmer thinks a new concept needs to be added to the software,
she must first persuade the team to add the concept to the conceptual model; only
then should it appear in the software. Or again, if a documenter finds that they have
to introduce an additional concept to explain the system, that change must be
reflected first in the conceptual model (with the whole team's agreement), and then it
will appear in the documentation.

The process will usually not be linear. As design proceeds from conceptual model to
user interface to implementation, it is most likely that these downstream designs will
reveal problems in the conceptual model. (It is tough to get it right the first, or even
the fifth time!) Early usability testing can, and should, be designed to accelerate this
process. Low fidelity, quick prototypes can be focused on the important parts of, and
questions in, the conceptual model. Lightweight usability testing can thus evaluate the
conceptual model as well as the UI design.

If testing exposes problems in the conceptual model, go back and change it. Resist the
temptation to treat the conceptual model as "dead" after an initial UI has been
designed from it. If you don't keep the conceptual model current as you improve the

design, you will regret it in the end, when you have no single coherent high-level
description on which to base user documentation, training, or later system
enhancements.

Of course, changing the conceptual model is painful: it affects the user interface, the
documentation, and the implementation. The entire team is affected. But the
conceptual model is the single most important part of your design. Therefore, it pays
to make it as simple and task-oriented as you can, then do whatever you need to do to
reconcile the rest of the design with it. Otherwise, your poor users will have little
chance of understanding the user interface, because it will be based on a muddled
conceptual model.

Conclusion

Good user interfaces start with clean, simple, task-oriented conceptual models. The
conceptual model is the bones of the design. One nice thing about this is that the
conceptual model is much smaller than the whole design. It is something that can be
held in mind and worked on. Get the conceptual model in hand before adding all the
complexity of everything else.

Once you have the conceptual design, all the other design and implementation
activities can and should be grounded in it, feeding it further (task scenarios,
evaluation), building on it (user interface, lexicon, implementation, documentation,
evaluation). Because the conceptual model is so central, it is important to ensure that
everyone agrees on it. In addition, because changes that affect the conceptual model
affect everyone, all changes must be made jointly. The conceptual model is the central
point of discussion and site of debate.

So at the outset, and throughout, let the sketching follow the modeling. Before you
design, design what you are designing: Design a conceptual model.

81
Intro. To Human Computer Interaction
Self-check

Complete the ff. table

Interface User speed Flexibility Learning Implementation Conceptual Model


Command

Line

Menus

Dialog

Forms

Spreadsheet

Point & Click

Natural

Language

Command

Gesturing

General

Gesturing

82
Intro. To Human Computer Interaction
LESSON XV

User Models

Standardizing User Models (Universal Access in Human-Computer Interaction)

A model can be defined as "a simplified representation of a system or phenomenon


with any hypotheses required to describe the system or explain the phenomenon, often
mathematically". The concept of modelling is widely used in different disciplines of
science and engineering ranging from models of neurons or different brain regions in
neurology to construction model in architecture or model of universe in theoretical
physics. Modelling human or human systems is widely used in different branches of
physiology, psychology and ergonomics. A few of these models are termed as user
models when their purpose is to design better consumer products. By definition a user
model is a representation of the knowledge and preferences of users that the system
believes the user posses.

There was a plethora of systems developed during the last three decades that are
claimed to be user models. Many of them modelled users for certain applications -
most notably for online recommendation and e-learning systems. These models in
general have two parts – a user profile and an inference machine (Figure 1). The user
profile section stores detail about user relevant for a particular application and
inference machine use this information to personalize the system. A plethora of
examples of such models can be found at the User Modelling and User-Adapted
Interaction journal and proceedings of User Modelling, Adaptation and Personalization
conference.

On a different dimension, ergonomics and computer animation follow a different


view of user model. Instead of modelling human behaviour in detail, they aim to
simulate human anatomy or face which can be used to predict posture, facial
expression and so on.

Fig. 1. Simplistic view of a user model

83
Intro. To Human Computer Interaction
Finally, there is a bunch of models which merges psychology and artificial intelligence
to model human behaviour in detail. In theory they are capable of modelling any
behaviour of users while interacting with environment or a system. This type of models
is termed as cognitive architecture (e.g. SOAR [6], ACT-R/PM [1], EPIC [5] and so on)
and has also been used to simulate human machine interaction to both explain and
predict interaction behaviour. A simplified view of these cognitive architectures is
known as the GOMS model [4] and still now is most widely used in human computer
interaction.

Considering all these approaches together, it becomes challenging to define what a


user model actually is. This lack of definition also makes the interoperability of user
models difficult. On the other hand, there was a plethora of standards about human
factors, user interface design, interface description language, workplace ergonomics
and so on [7] that can be used to develop user models.

In this paper we have taken an approach to standardize the different user modelling
approaches. One novelty of our approach is that we have also considered user models
for people with disabilities which is not studied in as detail as their able bodied
counterparts. In the rest part of the paper, we will try to consolidate different user
modeling approaches and standards into a single set of standardization features and
also describe a case study to explain our concept.

Areas of Standardization
We have identified the following features of the user model and corresponding
development process and applications for standardization. Through these features, we
aim to develop a common set of vocabulary that can be used to disseminate
information and data across different user modeling systems. They are as follows:

• Conceptualization: This feature defines the user model and sets up the context of
its development and application for discussing the other features.

• Development process: This feature summarizes different stages of the user model
development process and aims to bring synergy among user models at different stages
of development and developed for different purposes and applications.

• User study: User model development process always involves plenty of user studies
to test the accuracy of the model. This feature ensures that data and results gathered
in one user study can be shared with others.

• Data storage: This feature further elaborates dissemination of data collected in


different user studies.

• Evaluation: This feature highlights how accurate a user model is and how it should
be used in other projects besides the one it is developed for.

In the following sections we have further elaborated these concepts.

84
Intro. To Human Computer Interaction
Conceptualization
Definition. Since there is a lot of ambiguity about what does it mean by a user model,
the definition of the model should be cleared at first.

Purpose. After the definition, the purpose of the model should be specified. A few
examples of developing user models are as follows:

• Visualization

• Explanation

• Measurement

• Prediction of interaction patterns

Development Process
We have developed standards for each phase of a user model development process. We
can identify the following four phases of model development.

1. Specification

2. Design

3. Calibration

4. Validation

Ideally, all of these phases should maintain a standard. The standard will help to
reuse the model by other partners irrespective of its stage of development. In the
following paragraphs, we propose a standard for each of these phases.

Specification. This phase should primarily identify the scope, requirement and goal of
the model. According to our classification this stage should identify whether it is an

• Ergonomic model

• Application specific models

• Cognitive models and so on.

This phase also roughly specify the data requirement for the user profile.

Design. A simplistic view of the user model is shown in Figure 1. Following that figure,
the standards in design phase should identify:

1. The structure of the user profile

2. The format of the user profile

3. The type or the specific system used as the inference machine

85
Intro. To Human Computer Interaction
4. Parameters used in the inference machine

Calibration. The calibration phase will populate the model with real life data. The data
can come from an existing store like published results on anthropomorphic data or
can be collected through new studies. In the first case the source of the data should be
specified, in the later case the data collection process should follow an existing
standard like ISO 9241.

Validation. The validation process proves the accuracy of the model. Like the
calibration process, it can be validated with existing data or a new study. In either
case, the same standard should be followed as in the calibration stage.

User Study
The user studies should be described in such a way so that they can be replicated by
other researchers. The following points can be followed to describe the study.

• Design

• Procedure / Method

• Material

• Participants

• Result

In particular, the task, environmental context of the study and detail about
participants should be described in detail which also corresponds to the final
application of the user model.

Data Storage
One main aim of the whole standardization process as a whole is to share data
collected during the development process. The following list gives a comprehensive list
of features, however it should be extended based on the context of the model.

• User Profile

- Demographic detail

- Structural features

- Cognitive features

- Impairments and corresponding disability

• Interfaces used in the study and its properties like (refer figure 2)

- Dimension

86
Intro. To Human Computer Interaction
- Forecolour

- Backcolour

- Controls

o Location
o Size
o Forecolour
o Backcolour
o Highlighting colour
o Font size
• Task

- Description

• Instrument

- Technical specification

• Movement trace

- Co-ordinates

- Timestamp

87
Intro. To Human Computer Interaction
Fig. 2. An interface with its detail stored in xml

Evaluation Specification
Any interactive system can be evaluated according to the following features.

• Performance evaluation

- Task completion time

- Number of errors / Error rate

- Idle time

• Subjective experiences

• Quality of life

A user model should specify which criteria it may help to evaluate. For example, a
cognitive model can be used to evaluate cognitive performance while a physical
simulation may predict ease of use and thus subjective experience of users while
using a tool.

The following case study demonstrates a user model and shows how it can be specified
in terms of the standardization features.

A Case Study
We have developed a simulator that explains the effect of physical impairment on
interaction with electronic interfaces. It embodies both the internal state of a device
and also the perceptual, cognitive and motor processes of its user. Figure 3 shows the
architecture of the simulator.

The Application model represents the task (like opening an application in a laptop
or changing the channel in a TV) currently undertaken by the user by breaking it up
into a set of simple atomic tasks.

The Interface model decides the type of input and output devices (like mouse,
trackball, keyboard, remote control and so on) to be used by a particular user and sets
parameters for an interface.

The User model simulates the interaction patterns of users for undertaking a task
analysed by the task model under the configuration set by the interface model. It
consists of a perception model, a cognitive model and a motor behaviour model.

• The perception model simulates the visual perception of interface objects. It is


based on the theories of visual attention.

88
Intro. To Human Computer Interaction
• The cognitive model determines an action to accomplish the current task. It is
more detailed than the GOMS model [John and Kieras, 1996] but not as complex as
other cognitive architectures.
 The motor behaviour model predicts the completion time and possible interaction
patterns for performing that action. It is based on statistical analysis of screen
navigation paths of disabled users.

The details about users are store in xml format in the user profile following the
ontology shown in figure 4 below. The ontology stores demographic detail of users like
age and sex and divide the functional abilities in perception, cognition and motor
action. The perception, cognitive and motor behaviour models takes input from the
respective functional abilities of users. Table 1 summarizes the model in terms of the
standardization features.

Fig. 3. Architecture of the


simulator

Fig. 4. User Ontology

Table 1. Our user model in terms of the standardization features

89
Intro. To Human Computer Interaction
Conceptualization
Definition The simulator embodies both the internal state of a device
and also the perceptual, cognitive and motor processes of its
user.
It has user interfaces to simulate interaction patterns of disabled
and elderly users and will also later be part of a simulation and
Purpose
adaptation platform where interface designers can develop and
test adaptation systems.
Development Stage
Specification Works for interaction with electronic interface.
The user model works by modelling basic perceptual, cognitive
Design and motor capability of users and simulate tasks involving any
electronic interface.
It is and will be calibrated through standard tasks by ISO and
Calibration
cognitive psychology.
The models are initially validated through ISO 9241 and visual
search tasks involving different participants. Later output from the
Validation simulation will be compared to the existing guidelines to validate
the results and later the simulation output will augment the
existing guidelines in the form of the GUIDE handbook.
We followed ISO 9241 pointing task and standard visual search
User study task to collect data. Data was collected from people with and
without visual and mobility impairment.
Data Storage

User profile

Table1.

Interface
definition

90
Intro. To Human Computer Interaction
Movement trace

Evaluation
Performance Till now, the models are evaluated by measuring
evaluation
• correlation among actual and predicted task completion time

• relative error in prediction

• effect size of different design alternatives in predicted task completion times.


Subjective We are also working on interface designers in the GUIDE project and observing
evaluation how the modeling tool helps them in long term.

Benefits of Standardization

User trials are always expensive in terms of both time and cost. A design evolves
through an iteration of prototypes and if each prototype is to be evaluated by a user
trial, the whole design process will be slowed down. Buxton has also noted that "While
we believe strongly in user testing and iterative design. However, each iteration of a
design is expensive. The effective use of such models means that we get the most out
of each iteration that we do implement’. Additionally, user trials are not representative
in certain cases, especially for designing inclusive interfaces for people with special
needs. A good simulation with a principled theoretical foundation can be more useful
than a user trial in such cases. Exploratory use of modelling can also help designers
to understand the problems and requirements of users, which may not always easily
be found through user trials or controlled experiments.

However as we pointed out at the beginning, the concept of user modeling is still
pretty diverse among researchers. Each user model is useful for their own application,
however there should be ways to use it in other applications as well. It will reduce the
cost of reengineering new models and also help to incorporate user models in more
applications. The recent EU initiative of setting up the VUMS (Virtual User modeling
and simulation [8]) project cluster also supports this claim. The VUMS project cluster
aims to standardize user modeling efforts to increase interoperatibility among user
models developed for a wide variety of applications like designing automobile, washing
machine, digital television interface and so on.

91
Intro. To Human Computer Interaction
LESSON XVI

Predictive Models

Predictive Models

There are 6 behaviour models that help HCI designers to predict the way an interface
will behave, and if it is effective enough to be used on a computer or device. These 6
behaviour models are split into 2 catorgories, predictive and descriptive.

The predictive models are:


 Keystroad-level model (KLM)
 Throughput (TP)
 Fitt's law
The discriptive models are:
 Key-action model
 buxton's three state model
 Guiard's model of bimanual skill
I am going to look at 1 predictive model and one descriptive model.

keystroke level model (predictive model)


The keystroke level model was described by Card, Moran, and Newell in the early
1980s. The model focuses on how long it takes users to actually use the HCI via
hardware. The keystroke level model features 11 steps that is used by individual
people and organizations, they use this to estimate how long it takes to perform simple
tasks involving the input of a human via hardware. Normally companies who cannot
afford specialists use this method.

The keystroke level model defines and measures how long it takes to press and release
a key on the keyboard (measured in words per minute, and categorized into fast,
novice and slow typists), how long it takes to point the mouse on the screen, how long
it takes to press or release a mouse click, how long it takes to switch hardware
devices, ie keyboard and mouse, how long it takes for the human brain to prepare to
peform an action within an HCI, how long it takes to type a string of characters, how
long the user has to wait for the system to perform the action in the HCI.

Fitts's Law
First of all it is not Fitt’s Law. The name of the famous researcher is Paul Fitts, so one
should be careful on spelling. Fitts's Law is basically an empirical model explaining
speed-accuracy tradeoff characteristics of human muscle movement with some
analogy to Shannon’s channel capacity theorem. Today, with the advent of graphical
user interfaces and different styles of interaction, Fitts’ Law seems to have more
importance than ever before.

92
Intro. To Human Computer Interaction
The early experiments on pointing movements targeted the tasks that might be related
to the worker efficiency problem, such as production line and assembly tasks. They
were not targeted to HCI field because there was no GUI. Paul Fitts extended
Woodworth's research which focused on telegraph operator performance substantially
with his famous reciprocal tapping task to define the well-referenced Fitts’ Law (Fitts
1954). In the Fitts’ Law description of pointing, the parameters of interest are:

a. The time to move to the target


b. The movement distance from the starting position to the target center
c. Target width

Fitts started his work making an analogy of the human motor system to the well-
known Shannon's channel capacity theorem. He started with the following theorem:

in the above equation, C represents the effective information capacity of the


communication channel, B represents the bandwidth of the channel, S and N
represent signal and allowable noise power levels respectively. Fitts claimed that the
distance (A) can be thought as signal power, the width of the target (W) can be thought
as the allowable noise. As powerful transmitters carry more information, it becomes
harder to receive when the allowable noise level increases. Similarly, it takes longer to
hit targets which are further away and smaller. With this analogy, he derived the
following equation, which is now known as Fitts’ Law:

here, MT represents the movement time to hit the target, a and b are empirically
determined constants. A represents the amplitude, which is the distance of the center
of the target from the starting location and W is the target width which is shown in
Figure 2.

Figure 2. The basic pointing task variables A


and W.

The empirical constants a and b are found using a regression analysis on the
movement time data. A typical Fitts’ regression line appears as follows:

93
Intro. To Human Computer Interaction
Figure 3. Fitts’ regression line.

In almost all of the research following the


Fitts' original experiment, the empirically
determined constant a, is usually considered
to include a constant time, such as
depressing a mouse button depending on
the experiment. (Note that the definition of
movement time has no strong specification
for the boundary conditions). Fitts defined
the term Index of Difficulty (ID, shown in
Figure 3), as a measure of the task difficulty as follows:

Mackenzie (MacKenzie 1992), suggested a more stable model for the Fitts Law, which
works better -also more like the Shannon’s original formula- for the small values of ID
as follows:

Index of difficulty is measured in terms of "bits", which comes from the analogy with
Shannon's information theorem. In addition to the index of difficulty, Fitts also defined
a measure for the performance, named “Index of Performance” (IP) which is as follows:

Index of performance is measured in bits per second (bits/sec), similar to the


performance indices of the electronic communication devices (e.g. modems). Fitts
claimed that under ideal circumstances the term a in Equation 2 would be zero,
therefore the index of performance (IP) can be simply taken as ID/MT from Equation 2.
However, later by other researchers, the constant a is proven to be a significant factor
emphasizing the need for a more detailed analysis. The constant term a is also shown
to be highly affected by the learning curve (Card 1991) of the input device and the
task.

Welford, suggested a better model by separating A and W into two terms. He indicated
that the effect of the target width and the target distance is not proportional and his
model yields a better correlation coefficient (Welford 1968). Later researchers
suggested the same. However, there is no simple index of performance associated with
Welford’s model.

94
Intro. To Human Computer Interaction
Fitts had subjects move a stylus alternately back and forth between two separate
target regions, tapping one or the other target at the end of each movement. These
types of tasks are called “continuous tasks” where the subject is not expected to stop
after finishing one movement but instructed to repeat the same task symmetrically as
quickly as possible. In continuous tasks, the total time is divided by the number of
movements to determine the average movement time for a particular target size and
distance. The other type, “discrete tasks”, on the other hand, are tasks where the
subject is instructed to stop after one movement and the time is measured between
the start and the end-points. Subjects were instructed to make their aimed
movements as rapidly as possible but land within the boundaries of the targets on at
least 95% of the taps. Fitts varied the size and distance of the targets in his
experiments. For his reciprocal tapping task, he obtained an ID value of about
10bits/sec.

Paul Fitts conducted three different experiments in his study; the famous reciprocal
stylus tapping task with two different stylus weights (1oz and 1lb), the disc transfer
task and the pin transfer tasks. The latter two tasks were more demanding in terms of
the endpoint difficulties, and resulted higher constant values for the regression
coefficient for similar values of indices of performances. This in fact was the first
indication of the variability of the endpoint selection time in Fitts' experiment.
However, Fitts did not mention this effect in detail in his original publication (Fitts
1954). Fitts’ original data was later reviewed by many researchers (Fitts 1964),
(Sheridan 1979), (Welford 1986), (MacKenzie 1992) and different opinions criticized the
validity of the Fitts model.

Fitts later repeated the original study by himself (Fitts 1964) in discrete form and
concluded that the original formula also holds for the discrete tapping tasks. Other
researchers repeated similar experiments and found that Fitts’ Law holds for discrete
tasks as well.

Fitts' experiment and the Fitts’ Law equation highlight the points that are important in
pointing tasks such as pointing speed, target distance, target size and accuracy. Fitts’
Law gives us a way to compare tasks, limbs and devices both in manual as well as in
computer pointing. Therefore one can conclude that devices with higher indices of
performance would be faster and presumably better.

On the other hand, we also have to state that, Fitts’ Law does not provide any
prediction of the performance of a limb or device. It does not provide information
without conducting an experiment. It does not provide an absolute measure for a limb
using particular device, rather, it is a comparative tool for studying different devices,
tasks, interaction techniques or limbs. Attempts to define such a universal standard
exist but are still being studied (MacKenzie 1999).

With the advent of new smaller technological devices, small screens, and highly loaded
user interfaces, Fitts’ Law again becomes an important tool to measure what is better
what is not, in terms of interface design. For example, increased screen density (DPI)
due to higher resolution graphics cards usually result in smaller menu buttons on
identical monitors. Yet, advantage of having more pixels on same screen bounces back
to user as longer click times and higher error rates. The problem becomes more severe
as user clicks via low performer input devices such as trackpads.

95
Intro. To Human Computer Interaction
Using bigger LCD monitors does not improve the condition: Although the target size
returns to normal, users are expected to travel longer distances on screen (Not only
with input device but also with their eyes).

The key factor is, to reduce required travel distance from one location to another as
user navigates through the interface and maintaining a proper size affordance for
clicking. Repositioning the cursor has been suggested for this reason, however, it must
be done carefully without causing user to lose locus of control. Reorganization of
navigational elements, menus, buttons so that frequently used elements are placed
closer to neutral cursor position is helpful to increase performance. Elimination of very
narrow hierarchical menu height or elimination of hierarchical menus altogether
where possible can also improve interface performance due to Fitts’s Law
characteristics of such. Dynamic approaches such as zooming icons (Mac OS X
desktop) and gravity wells are known as effective performance improvement methods
but must be used with great caution without causing other side effects.

The key-action model (KAM) (discriptive)


Computer keyboards today contain a vast array of buttons, the buttons are either
symbol keys, executive keys, or modifier keys. Symbol keys deliver graphic symbols —
typically, letters, numbers, or punctuation symbols — to an application such as a text
editor. Executive keys invoke actions in the application or at the system-level or meta-
level. Examples include ENTER, F1, or ESC. Modifier keys do not generate symbols or
invoke actions, but, rather, set up a condition necessary to modify the effect of a
subsequently pressed key. Examples include SHIFT or ALT. Basicly, this model looks
at how users interact with HCI's using keyboards and what shortcut keys HCI's use.

Self-check

I. Enumerate and explain the User Models


II. Enumerate and explain the predictive Models

96
Intro. To Human Computer Interaction
LESSON XVII

Universal Design

The Seven Principles of Universal Design

It can be easy to forget that users don’t come in a standard format when designing
products. We’re getting better at catering for different personas or demographics but
the industry still lags a long way behind design that is accessible to as many disabled
people as possible.

The principle of “Design for All” is one that begins with the Seven Principles of
Universal Design. These were founded at North Carolina State University back in 1997
by a team of design specialists across multiple disciplines which was headed by
Ronald Mace. The Seven Principles help designers evaluate the effectiveness of their
designs to be used by as many people as possible.

Equitable Use
Is your design going to be useful to a wide range of people; including those with
different physical and mental capacities from your test users? Can it be marketed to
those people with different capacities?

Flexible Use
When your design is put into use; can it be used in many different ways? Will adapt to
the way one person wants to use it or has to use it because of differing abilities?

97
Intro. To Human Computer Interaction
Simple and Intuitive Use

How easy is your design for someone to pick up and start using immediately without
instruction? The easier it is for someone to use irrespective of their previous skills,
experiences or learning and irrespective of their ability to concentrate for long periods
of time; the easier it will be for a wide-range of user to use it.

Perceptible Information

Does your design give the user enough


information to make the most efficient use
of your product? Is this true in all
conditions?

Error Tolerances

Have you tried to make your design


“foolproof”? In that, no matter how it is
used, there are minimal errors and
minimal consequences for those errors?
This is vital for those with differing
abilities; they may make mistakes
compared to other users but they should
not be unduly inconvenience for those
mistakes either.

Minimal Physical Effort

Has your design tried to minimize the


physical effort needed to get the best use
from your product? Have you tried to keep
motions that may cause fatigue in the user
to a minimum?

Size and Space in Both Approach and Usage


Finally, have you considered the environment that the product will be used in? Does
your design allow for the right amounts of space for a user to approach and
manipulate the product? Is this true for a wheelchair user or someone on crutches?

98
Intro. To Human Computer Interaction
LESSON XVIII

Information Virtualization

What is Information Visualization?

Information visualization is the process of representing data in a visual and


meaningful way so that a user can better understand it. Dashboards and scatter plots
are common examples of information visualization. Via its depicting an overview and
showing relevant connections, information visualization allows users to draw insights
from abstract data in an efficient and effective manner.

Information visualization plays an important role in making data digestible and


turning raw information into actionable insights. It draws from the fields of human-
computer interaction, visual design, computer science, and cognitive science, among
others. Examples include world map-style representations, line graphs, and 3-D
virtual building or town plan designs.

The process of creating information visualization typically starts with understanding


the information needs of the target user group. Qualitative research (e.g., user
interviews) can reveal how, when, and where the visualization will be used. Taking
these insights, a designer can determine which form of data organization is needed for
achieving the users’ goals. Once information is organized in a way that helps users
understand it better—and helps them apply it so as to reach their goals—visualization
techniques are the next tools a designer brings out to use. Visual elements (e.g., maps
and graphs) are created, along with appropriate labels, and visual parameters such as
color, contrast, distance, and size are used to create an appropriate visual hierarchy
and a visual path through the information.

Information visualization is becoming increasingly interactive, especially when used in a website or application. Being interactive allows for manipulation of the visualization by users, making it highly effective in catering to their needs. With interactive information visualization, users are able to view topics from different perspectives, and manipulate their visualizations of these until they reach the desired insights. This is especially useful if users require an explorative experience.
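In a real application this manipulation would be driven by UI widgets (dropdowns, sliders); as a rough sketch under that assumption, a single function can stand in for the user choosing a perspective on invented sales data:

```python
# Hedged sketch: simulating interactive "re-viewing" of the same data.
# The sales records and dimension names are invented for illustration.

sales = [
    {"region": "North", "product": "A", "units": 40},
    {"region": "North", "product": "B", "units": 25},
    {"region": "South", "product": "A", "units": 10},
    {"region": "South", "product": "B", "units": 55},
]

def totals_by(dimension, records):
    """Aggregate units by whichever dimension the user chose to explore."""
    view = {}
    for record in records:
        key = record[dimension]
        view[key] = view.get(key, 0) + record["units"]
    return view

# The same data, seen from two different perspectives:
print(totals_by("region", sales))   # {'North': 65, 'South': 65}
print(totals_by("product", sales))  # {'A': 50, 'B': 80}
```

Switching the `dimension` argument is the programmatic equivalent of the user pivoting an interactive chart until the desired insight appears.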

Literature on Information Visualization


99
Intro. To Human Computer Interaction
Have you ever thought about how much data flows past each of us in an ordinary day? From the newspaper you read at breakfast, to the e-mails you receive throughout the day, to the bank statements generated whenever you withdraw money or spend it, to the conversations we have, and so on?

There is a tidal wave of data associated with each aspect of our lives, and in addition to that personal data, there is data available on nearly every aspect of life.

Over the last few decades computing and the internet have revolutionized our ability to create, store and retrieve information on a whim. A global economy and instant communication have created an explosion in the volumes of data to which we are exposed. Yet this flood of data can easily lead to confusion and decision paralysis: there is more data available than we can comfortably process.

Information visualization, the art of representing data in a way that is easy to understand and to manipulate, can help us make sense of information and thus make it useful in our lives. From business decision making to simple route navigation, there is a huge (and growing) need for data to be presented so that it delivers value.

An Example of Everyday Information Visualization


This map, generated in Google maps, offers two simple ways of representing the route
from Chiang Mai in Northern Thailand to the capital of Thailand, Bangkok, in the
center of the country.

The first representation consists of written instructions on how to go from Chiang Mai
to Bangkok (as you can see it’s a pretty simple drive – though it’s worth noting that it
would be more complex if we were to be moving between specific points within each
city). The second representation is an image of the route itself imposed on a map.

Both representations offer value to different people. The first, the instructions, is highly useful to people who need to get from Chiang Mai to Bangkok directly, such as a businessman driving to a meeting.

The second, the map data on the other hand, could be really useful to a tourist who
intends not to drive straight from A to B but rather wants to know “what’s on the
way?” This lets the tourist look for potential break points in the journey and start to
research what their options are in those places.

Both of these representations are examples of information visualization. The first relies on clear, simple instructions and a minimum of graphical content; it conveys a simple set of useful directions. The second conveys rather more data, in a visual form that allows for rapid cognitive processing, enabling us to quickly digest the information we see.

Common Uses for Information Visualization


There are some very common uses for information visualization and these include:

Presentation (for Understanding or Persuasion)


“Use a picture. It’s worth a thousand words.” Tess Flanders, Journalist and Editor,
Syracuse Post Standard, 1911.

Journalists have known for a very long time that some ideas are simply too awkward
to communicate in words and that a visual representation can help someone
understand concepts that might otherwise be impossible to explain.

One of the most famous information visualizations in the world is the map of the London Underground. It is only a “map” in the loosest sense of the word, in that the geography above ground is very different to the way it is shown on the underground map. However, it enables pretty much anyone to quickly understand how to get from one point in London to any other using the underground system.

In simple terms, the underground map presents complex data for the purposes of
understanding that data to make it useful.

There is a “dark side” to the presentation of information for understanding, and it is the presentation of information to persuade. As the saying goes, there are “lies, damned lies and statistics” (usually attributed to Mark Twain, who himself attributed it to the British Prime Minister Benjamin Disraeli, though there is no trace of Disraeli having said it; the saying has also been attributed to others). By choosing what information to represent and what information to leave out, a presenter can now create “lies, damned lies and information visualizations”.

It is up to the presenter to decide where the ethical boundaries are in persuading people through information visualization. For example, you could show a graph that states “70% of people who use homeopathy feel better than those who don’t” but omit the fact that “70% of people who take a placebo feel better than those who don’t”.

Explorative Analysis

The image above portrays the frequency of lung cancer within the United States by
geographic region. Mapping disease data like this enables researchers to explore the
relationship between a disease and geography. It’s important to note that this data
doesn’t explain why there is a spike in cancer rates in the South East of the United
States but it does indicate that there is a spike which is worthy of further
investigation.

Explorative analysis through information visualization allows you to see where relationships in data may exist.

Confirmation Analysis
Information visualization can also be used to help confirm our understanding and
analysis of data. For example, if you perceive a relationship between two stock prices,
you can plot the data and see if the two are related.

This graph (above) might be used to show the similarities in Brownian motion between
sets of particles or it might be used to question the break in the relationship towards
the end of the graph, for example.
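The stock-price example above can be checked numerically as well as visually. As a rough sketch (the two price series below are invented for illustration), the Pearson correlation coefficient quantifies how strongly two series move together before you commit to a conclusion from a plot:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

# Invented closing prices for two hypothetical stocks.
stock_a = [10.0, 10.5, 11.2, 11.0, 11.8, 12.4]
stock_b = [20.1, 20.9, 22.0, 21.7, 23.2, 24.3]

r = pearson(stock_a, stock_b)
print(f"correlation: {r:.3f}")  # a value near 1 suggests the series move together
```

A coefficient close to +1 or −1 supports the relationship the plot suggests; a value near 0 warns that the visual pattern may be coincidental.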

The Take Away


Information visualization is designed to help us make sense of data. It can be used to explore relationships between data, to confirm ideas we hold about data, or to explain data in an easy-to-digest manner. It may also be used, rightly or wrongly, to help persuade someone with data.

As the volume of data available to us increases exponentially in every field of endeavor, information visualization is becoming increasingly important as a skill in the workplace and in academia.

Self-Check

I. Explain in your own words the following:

A. The seven principles of Universal Design:

1. Equitable use
2. Flexibility in use
3. Simple and intuitive use
4. Perceptible information
5. Tolerance for error
6. Low physical effort
7. Size and space for approach and use

B. Information Visualization:

1. Information visualization
2. Literature on information visualization

LESSON XIX

WEB

Emotion and website design

BY DIANNE CYR
This chapter is about hedonic or affective elements (footnote 1) of website design and
the potential of such design to elicit emotion in the user. In an online environment
hedonic elements of website design include color, images, shapes, and use of
photographs, among other characteristics, which are expected to provide the user with
emotional appeal, a sense of the aesthetic, or a positive impression resulting from the
overall graphical look of a website (Cyr et al., 2009; Lavie and Tractinsky, 2003;
Zhang, 2013). While it is well known that emotion is important to the interpretation of
experience, it is only in recent years that research has begun to transcend utilitarian
aspects of website design to consider empirically affective elements of design.
Therefore, not only is it important that websites are useful and easy to use, but also
that they entice the user to experience emotions such as enjoyment, involvement,
trust, or satisfaction.

In this context, I focus on an empirically based research perspective that is anchored in the tradition of information systems (IS), and more specifically on the design of websites in the human computer interaction (HCI) tradition. While much of the research considered is anchored in e-commerce, there are clearly implications for other types of applications such as e-government or social networking. In the following pages I therefore include the following: a brief retrospective examination of the development of a hedonic perspective in IS and design; an outline of some of the more commonly documented emotion-laden outcomes of website design; a consideration of graphical design elements known to elicit emotion in the user such as human images and color; an elaboration on the social elements of design; and a conclusion with a segment on future directions for research.

Emotion and Website Design: Some Background


Defining Emotion
According to Zhang (2013, p. 247), “[A]ffect is conceived of as an umbrella term for a set of more specific concepts that includes emotion, moods, feelings...” Zhang continues (ibid, p. 251):

"One of the most complex affective concepts is emotion. Put simply, emotions
are induced affective states (Clore and Schnall 2005), or core affect attributed to
stimuli (Barrett et al. 2007; Russell 2003). Emotions typically arise as reactions
to situational events in an individual’s environment that are appraised to be
relevant to his/her needs, goals, or concerns. Once activated, emotions
generate subjective feelings (such as anger or joy), generate motivational states
with action tendencies, arouse the body with energy-mobilizing responses that

prepare it for adapting to whatever situation one faces, and express the quality
and intensity of emotionality outwardly and socially to others

(Damasio 2001; Izard 1993; Reeve 2005)"

Other researchers have identified a spectrum of feeling associated with emotion. For
example, different emotions included by different authors have been anger, guilt,
sadness, and fear/anxiety (Smith and Lazarus, 1993), or joy, fear, anger, sadness,
disgust, shame, and guilt (Scherer, 1997). Emotion, therefore, has been seen to have
both negative and positive valence (Cenfetelli, 2004; Roseman et al., 1996). Emotional
responses are known to have two components: arousal and valence. Arousal reflects
the intensity of the response, while valence refers to the direct emotional response
ranging from positive to negative (Russell, 1980; Deng and Poole, 2010).

With website design, it is expected that emotion is aroused in the user based on a
response to specific design elements. Therefore, the user may feel a sense of
satisfaction when website colors are appealing, or when a graphical design elicits
enjoyment or excitement. In addition, it is important that website design meets the
needs and sensibilities of the user. This may include website design that is particular
to subgroups with specific preferences. The author’s research has identified that there
are different website design preferences for men and women, or for users in different
national locations. These differences will be elaborated in the following pages. If website design is appropriate to the user, then this arouses the action tendencies described by Zhang (2013) above, including users being more loyal to the site, and returning there in the future.

Beyond Cognitive-based Paradigms

Despite the pervasiveness of emotional reaction in the human psyche, only within the last decade have calls been made for a break with conventional cognition-driven paradigms of studying user reactions to technology (Beaudry and Pinsonneault, 2010; Zhang and Li, 2004). In their place is an expanded focus that includes not only utilitarian outcomes such as usefulness or ease of use, but also the role of affect and emotion in the examination of information and communication technology systems (Kim et al., 2007; Sun and Zhang, 2006). For instance, Hassenzahl (2006, p. 266) elaborates:

"In HCI, it is widely accepted that usability is the appropriate definition of quality. However, the focus of usability on work-related issues (e.g., effectiveness, efficiency) and cognitive information processing has been criticized. Its quite narrow definition of quality neglects additional hedonic (non-instrumental) human needs and related phenomena, such as emotion, affect and experience"

Further, Zhang (2013) outlines:

Affect is a critical factor in human decisions and behaviors within many social
contexts. In the information and communication technology context (ICT), a
growing number of studies consider the affective dimension of human
interaction with ICTs. However, few of these studies take systematic

approaches, resulting in inconsistent conclusions and contradictory advice for
researchers and practitioners.

To date, some research has been conducted in the area of emotion when users are in
online environments. For instance, hedonic outcomes have been examined in terms of
flow (Eroglu et al., 2003; Griffith et al., 2001; Ha et al., 2007; Huang, 2006; Koufaris
et al., 2002); cognitive absorption (Agarwal and Karahanna, 2000; Wakefield and
Whitten, 2006), involvement (Fortin and Dholakia, 2003; Johnson et al., 2006),
playfulness (Wakefield and Whitten, 2006), enjoyment (Dickinger et al., 2008; Lee et
al., 2000; Li et al., 2008; Lin and Bhattarcherjee, 2008;2010; Sun, 2001; Sun and
Zhang, 2006; van der Heijden, 2004; Venkatesh, 2000; Wakefield and Whitten, 2006),
hedonic outcomes (Venkatesh and Brown, 2001), pleasure (Belanger et al., 2002), happiness (Beaudry and Pinsonneault, 2010), fun (Dabholkar, 1994; Dabholkar and Bagozzi, 2002), stimulation (Fiore et al., 2005), or mystery (Rosen and Purinton, 2004), among others.

Further, various studies are emerging that examine emotion more specifically related
to design elements, including design of e-commerce websites. This is important since
Lam and Lin (2004) argue that the role of emotions in online shopping is even more
important than in traditional marketing contexts because the consumer is disengaged
from human interaction. To this end, user emotional responses have been measured
with respect to “design factors” such as shapes, textures, color (Kim et al., 2003),
visual characteristics of web pages (Lindgaard et al., 2006), or web page aesthetics
(Robins and Holmes, 2008). Additional topics covered are affective user interfaces
(Johnson and Wiles, 2003; Lisetti and Nasoz, 2002); hedonic quality (Childers et al.,
2001; Hassenzahl, 2002; van der Heijden 2003, 2004); aesthetic performance
including atmospheric cues, media richness and social presence (Lim and Cyr, 2009);
presentation richness such as symbol variety (Jahng et al., 2002); interaction richness
(Jahng et al., 2007); human images (Cyr et al., 2009); color (Cyr et al., 2010); or
vividness (Jiang and Benbasat, 2007). Cyr et al. (2006) found that design aesthetics on a mobile device resulted in enjoyment, and ultimately online loyalty. Similarly, Sony Ericsson’s website for Egypt (Figure 1) aims to promote variety and fun (Seidenspinner and Theuner, 2007).

Figure 40.1: Sony Ericsson Egypt: Promoting Variety and Fun (in Seidenspinner and Theuner, 2007).

A note on the use of illustrations in this chapter: In many cases these figures are reproduced from the original article where they appeared. These figures are used for general illustrative purposes only, and it is not intended that full readability is required.

Over the years, beauty has been a precursor to emotional responses. In website design various researchers have focused on aesthetic beauty (footnote 2) (e.g. Karvonen, 2000; Lavie and Tractinsky, 2004). Schenkman and Jonsson (2000) examined beauty on a sample of web pages (Figure 2) using multi-dimensional analysis. They found four categories that viewers used to judge a web page: beauty; illustrations versus text; overview (i.e. lucid, clear, and easy to understand); and structure. Overall, the researchers found the best predictor of overall judgment of the website was beauty. Based on user judgments, these websites were scored for perceived beauty out of a maximum of seven: National Geographic (5.08), Disney (4.51), Greenpeace (3.93), L’Oreal (5.55), and Krook Consulting (2.70).

Figure 40.2: Sample Web Pages related to Beauty (in Schenkman and Jonsson, 2000).

Deng and Poole (2010) examined web page visual complexity and order and found a relationship to user emotions and behavior. Website complexity was dependent on the number of links, graphics, and the amount of text that appears on the web page. Figure 3 shows an example of low website complexity (12 links/2 graphics/33 text) versus the high website complexity shown in Figure 4 (54 links/14 graphics/118 text).
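Counts like Deng and Poole's can be tallied mechanically. The following is an illustrative sketch using Python's standard-library HTML parser, not their actual instrument; the tag choices and the sample page are assumptions made for the example:

```python
from html.parser import HTMLParser

class ComplexityCounter(HTMLParser):
    """Tally simple visual-complexity signals: links, graphics, and words."""
    def __init__(self):
        super().__init__()
        self.links = 0
        self.graphics = 0
        self.words = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += 1      # each anchor is a link the eye must process
        elif tag == "img":
            self.graphics += 1   # each image adds graphical complexity

    def handle_data(self, data):
        self.words += len(data.split())  # visible text, counted in words

# An invented, deliberately simple page for illustration.
page = """
<html><body>
  <h1>Plain page</h1>
  <p>A short paragraph with very little going on.</p>
  <a href="/about">About</a>
  <img src="logo.png" alt="logo">
</body></html>
"""

counter = ComplexityCounter()
counter.feed(page)
print(counter.links, counter.graphics, counter.words)
```

Running the same counter over a dense portal page would yield far larger numbers, which is the kind of contrast the low- versus high-complexity figures illustrate.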

Figure 40.3: Sample of a Web Page Low in Visual Complexity (in Deng and Poole, 2010).

Figure 40.4: Sample of a Web Page High in Visual Complexity (in Deng and Poole, 2010).

Order refers to the logical organization, coherence, and clarity of the web page content.
An additional component of the research is the degree to which the meta-motivational
state of the user (e.g. extent to which a user seeks stimulation from the site) influences
the user’s impression of the website. It was found that whether a user was more
focused or more relaxed in approach when viewing a website did have an effect on
whether the website was perceived as pleasant. This second finding suggests that not
only is the actual design of the website important to eliciting an emotional reaction
from the user, but also that the user’s motivational state will have some bearing on
how the website is viewed and evaluated.

Scholars have also examined how social elements such as pictures of people or emotive text on websites empirically impact users’ impressions such as enjoyment (Cyr et al., 2006; Gefen and Straub, 2003; Hassanein and Head, 2007). (footnote 3) Many of these studies, particularly in the e-commerce realm, have user outcomes of trust (Bhattacherjee, 2002; Chen and Dhillon, 2003; Cheung and Lee, 2006; Cyr, 2008; Everard and Galletta, 2006; Gefen et al., 2003; Jarvenpaa et al., 2000; Komiak and Benbasat, 2004; Koufaris and Hampton-Sosa, 2004; Wang and Benbasat, 2005) and satisfaction (Agarwal and Venkatesh, 2002; Fogg et al., 2002; Hoffman and Novak, 1996; Koufaris, 2002; Lindgaard and Dudek, 2003; Nielsen, 2001; Palmer, 2002; Szymanski and Hise, 2000; Yoon, 2002).

Further research has aimed to develop models that incorporate hedonic elements.
Related to affect, Loiacono and Djamasbi (2010) proposed the relevance of mood (such
as sadness, fear, or happiness) for system usage models that could be applied online.
They further outlined a model in which mood is intended to influence perception,
evaluation, and cognitive effort resulting in variable levels of IS usage behavior. While
this model is not tested, it is a useful framework from which to investigate emotion
empirically.

In addition, Lowry et al. (forthcoming 2013) developed a hedonic system adoption model focused on a user’s intrinsic motivations. Using an immersive gaming environment, various games were rigorously tested to determine perceived levels of joy and ease of use. A game that scored “low” on both dimensions was a text-based adventure game with minimal graphical content (Figure 5).

Figure 40.5: Gaming Environment Low in Hedonic Value (from Lowry et al., 2013).

In contrast, the game scored by users as “high” for joy and ease of use was highly
interactive, with complex colors and graphics (Figure 6). Using various game
interfaces, these researchers determined that perceived ease of use resulting in
behavioral intention to use, or the user’s experience of immersion, is mediated by
hedonic constructs such as curiosity or joy (along with perceived usefulness and
control). Hence the role of affectively based constructs is central to understanding the
user experience in a variety of web-based platforms, including gaming.

Figure 40.6: Gaming Environment High in Hedonic Value (from Lowry et al., 2013).

Summary

It is encouraging that over the past decade, researchers have expanded beyond utilitarian models of online experience to encompass how users are also emotionally engaged. While substantial progress has been made in this area, only recently have researchers created comprehensive models to evaluate how hedonic systems operate. Hence, there is considerable scope for future work that empirically explores variables that elicit hedonic or affective responses in the user, with a goal of theory development in this domain. The spectrum for such research is broad, and encompasses e-commerce, gaming, e-health, and many additional contexts.

In general, emotional responses are triggered by an ability to engage the user in an online environment which is aesthetically pleasing. This places the elicitation of emotion firmly in the realms of visual design and interaction design. However, there is no clear definition as to what represents a hedonic outcome, and, based on the literature to date, these outcomes have varied widely. As such, there is scope for studies that add clarity and consistency to how hedonic systems impact the user. In an effort to consolidate the literature to date, in the following sections I have chosen to discuss four hedonic outcomes: enjoyment, involvement, trust, and satisfaction. These have particular importance in an e-commerce setting, but are relevant in online contexts more generally. These constructs have received considerable use and have been validated in numerous studies.

Outcome Variables that Elicit Emotion


Enjoyment

As early as 2003, Blythe and Wright (2003) (p. xvi) argued that in HCI “traditional
usability approaches are too limited and must be extended to encompass enjoyment”.
Perhaps more than any other construct, enjoyment has been used to measure user

hedonic perceptions and expectations on websites (e.g. Dellaert and Dabholkar, 2009; Gretzel and Fersnemaier, 2006; Hassanein and Head, 2005; Koufaris et al., 2001; Koufaris, 2002; Lee et al., 2000; Li et al., 2008; Sun, 2001; Sun and Zhang, 2006; Qiu and Benbasat, 2009; Venkatesh, 2000). Other work (e.g. Warner, 1980 (footnote 4)) has suggested enjoyment encompasses three dimensions: engagement, positive affect, and fulfillment. Enjoyment has also been subsumed under the concept of flow (as originally identified by Csikszentmihalyi, 1989 (footnote 5)). Although Dabholkar (1994) and Dabholkar and Bagozzi (2002) employed the term “play” in place of enjoyment in their research, they admitted that the meaning of “play” is no different from that of enjoyment. (footnote 6)

Although enjoyment is a commonly used construct to measure user reactions to hedonic content on the web, it is surprising that the accurate measurement of enjoyment has tended to be elusive. For example, as recently as 2008, Lin et al. (2008, p. 41) noted: “[W]hen we came to the question of assessing the degree to which enjoyment arises from a Web encounter, we found remarkably little to guide us...and no instrument for assessing enjoyment of Web experiences could be found.” To this end, they created and validated an instrument for online enjoyment with three dimensions: engagement, positive affect, and fulfillment, as suggested earlier by Warner (1980). This instrument is a positive step forward in creating ways to accurately measure user responses, and thus inform web managers and marketers as to what constitutes meaningful and enjoyable website design.

Further, the concept of enjoyment has been revealed to be “a strong predictor of attitude in the web-shopping context” (Childers et al., 2001, p. 526; Cyr et al., 2006, 2007; Hassanein and Head, 2006; Lankton and Wilson, 2007; Koufaris et al., 2002; van der Heijden, 2003; Zhang and von Dran, 2002). In online settings, a primary goal of vendors is to entice users to purchase from websites or to revisit them in the future, resulting in loyal behavior (Rosen and Purinton, 2004). Online loyalty (or e-loyalty) has been described as an enduring psychological attachment by a customer to a particular online vendor or service provider (Anderson and Srinivasan, 2003; Butcher et al., 2001). Jiang and Benbasat (2007) discovered that vividness and interaction of consumer product displays for a watch and Personal Digital Assistant (PDA) resulted in enjoyment and e-loyalty.

In a study of mobile interfaces as used in an e-service shopping environment, researchers found the “design aesthetics” of the interface positively impacted enjoyment, usefulness, and ease of use, which in turn positively affected user loyalty (Cyr et al., 2006). More specifically, Design Aesthetics referred to the following: attractiveness of the screen design (e.g. colors, boxes, menus); professional design; meaningful graphics; and overall “look and feel” as visually appealing (Figure 7).

Figure 40.7: Screen Shots used to Test Design Aesthetics (in Cyr et al., 2009).

Involvement

For many years involvement has been the object of considerable consumer-oriented research. Although there have been numerous definitions of involvement, Koufaris (2002, p. 211) summarized involvement as: “(a) a person’s motivational state (i.e. arousal, interest, drive) towards an object where (b) that motivational state is activated by the relevance or importance of the object in question...” If the website permits involvement for the user, then it will result in an affective response that will be greater than an elicited cognitive reaction (Fortin and Dholakia, 2003). Online, involvement implies a user emotional response that includes absorption and excitement with website characteristics (e.g. Kumar and Benbasat, 2002; Santosa et al., 2005; Singh et al., 2003), and therefore encompasses elements of “flow”. Jiang et al. (2010) refer to “affective involvement” as a heightened emotional feeling associated with a website, comprising how users feel toward the website.

In terms of website antecedents that result in involvement, website interactivity has played a prominent role (Fortin and Dholakia, 2003; Johnson et al., 2006). (footnote 7) Interactivity potentially enables the user to have augmented control of the content, and thus offers an opportunity for the user to interact with the advertiser and/or other consumers (Fortin and Dholakia, 2003). Involvement was found to have a “pivotal role” (ibid, p. 394) in understanding how consumers interact with websites, and thus influences consumer loyalty toward the site. In research that examined different levels of interactivity in a fictitious vacation website (Cyr et al., 2009), five different web-poll designs (footnote 8) (Figure 8) were tested with users. The designs range from no user interaction in Treatment 1 to high interactivity and visualization capability in Treatments 4 and 5.

Based on survey results, perceived website interactivity resulted in user perceptions of
efficiency, effectiveness, enjoyment and trust, and ultimately online loyalty (Cyr et al.,
2009). Although there were no statistically different results for the different web-poll
treatments in Figure 8, additional qualitative analysis revealed that users had more
positive impressions of the more interactive websites. Relevant to the topic of emotion
and website design, different concepts emerged, which included “Affective” and
“Aesthetic” categories. For instance, for Aesthetics users noted the websites were
“visually appealing”, “unique”, “creative”, “stylish and innovative”. For the Affective
category, users described the sites as “exciting”, “makes the customers feel more
empowered by allowing them to influence others using the poll system”, “has a warm
feeling to it”, is “entertaining” and “fun”.

The content above was common to all treatments. There were three pages in total: the front page with a list of hotels and a summary, and two detail pages. These two (one shown as the smaller cutout) included photos, description and the actual web-poll at the top. The gray-striped area is where design varied.

Treatment 1: Control version. No user interaction with web-poll; static indicator of
other users’ rating only.

Treatment 2: Basic web-poll with conventional interaction (radio button) and simple information visualization (bar chart).

Treatment 3: Metaphor-rich web-poll. Cursor reveals foot icon across sandbox to select one of nine possible value combinations on a grid. Mini plot on front page with size of dot displaying number of votes.

Treatment 4: Flash version for enhanced user control. Cursor changes into foot icon, moving on scale continuously. Front page summary uses color lightness to represent weight. Bar levels give a positive/negative ‘slope’ for before and after.

Treatment 5: Enhanced Bar Chart version for visualizing user contribution. Users ‘viscerally’ plot their vote to the stack by adding a ‘brick’.

Figure 40.8: Five Levels of Website Interactivity (in Cyr et al., 2009)

Trust

Website usability can significantly impact trust (Flavián et al., 2006). In online environments numerous researchers have endeavored to understand the complexities inherent in trust (Bhattacherjee, 2002; Chen and Dhillon, 2003; Cheung and Lee, 2006; Gefen, 2000; Gefen et al., 2003; Jarvenpaa et al., 2000; Komiak and Benbasat, 2004; Koufaris and Hampton-Sosa, 2004; Rattanawicha and Esichaikul, 2005; Wang and Benbasat, 2005; Yoon, 2002). (footnote 9) Online trust relates to consumer confidence in a website and willingness to rely on the vendor in conditions where the consumer may be vulnerable to the seller (Jarvenpaa et al., 1999). Trust has an emotional component, and according to Komiak and Benbasat (2006, p. 943), “[E]motional trust is defined as the extent to which one feels secure and comfortable about relying on the trustee”. Unlike the vendor-shopper relationship established in traditional retail settings, where trust is assessed in a direct and personal encounter, online the primary communication interface with the vendor is an information technology artifact, the website. An absence of trust is one of the most frequently cited reasons that consumers refrain from purchasing from Internet vendors (Grabner-Kräuter and Kaluscha, 2003).

Various studies have shown that website trust is fundamental to e-loyalty, including online purchase intentions (Flavián et al., 2006; Gefen, 2000; McKnight et al., 2004) and willingness by consumers to buy from an online vendor (Flavián et al., 2006; Laurn and Lin, 2003; Pavlou, 2003). Antecedents to user trust in websites vary, and include website design characteristics (Flavián et al., 2006) and design credibility (Green and Pearson, 2011). Everard and Galletta (2005-6) conducted a study in which they examined how presentation flaws on websites (e.g. errors, poor style, incompleteness) influence user perceptions. More specifically, websites were experimentally created to demonstrate good versus poor quality (Figure 9). Results of the investigation found that user perceptions of flaws on the website related to perceived quality, which was in turn directly related to trust and intention to purchase from the store. Thus, careful attention to detail and the elimination of design flaws have a positive impact on the user.

Figure 40.9: Samples of Web Pages for Good (upper) versus Poor Style (in Everard and Galletta, 2005-6).

A relationship exists between beauty of a website and trust (Karvonen, 2000). More specifically, images have the power to enhance consumer trust in a vendor. In this vein, jewelry retailer Tiffany and Co. invested in digital imaging technology to ensure images of jewelry are presented on its website in such a way as to instill trust in potential buyers (Srinivasan et al., 2002). Refer to Figure 10.

Figure 40.10: A Sample Web page from Tiffany and Co.

Additional triggers of online trust include vendor size (van der Heijden et al., 2000), perceived vendor reputation (Jarvenpaa et al., 2000; Koufaris and Hampton-Sosa, 2004; Van der Heijden et al., 2000), service quality (Gefen, 2002), social presence (Gefen and Straub, 2003), and perceived security control (Koufaris and Hampton-Sosa, 2004).

Satisfaction
Satisfaction on the web relates to "stickiness" and the sum of all the website qualities
that induce visitors to remain at the website rather than move to another site
(Hoffman and Novak, 1996). An effectively designed website engages and attracts
online consumers, resulting in online satisfaction (Agarwal and Venkatesh, 2002; Fogg
et al., 2002; Hoffman and Novak, 1996; Koufaris, 2002; Lindgaard and Dudek,
2003;Nielsen, 2001 ; Palmer, 2002; Szymanski and Hise, 2000; Yoon, 2002).

Elements of website design that contribute to satisfaction are numerous and varied.
Palmer (2002) validated design metrics for websites and found that site organization,
information content, and navigation are important to website success—including user
intent to return to the site. In other research, website design and the "ambience
associated with the site itself and how it functions" is an antecedent to satisfaction
(Straub, 1989). In alignment with other researchers (e.g. Agarwal and Venkatesh,
2002; Cronbach, 1971; Falk and Miller, 1992), website satisfaction is defined here as
overall contentment with the online experience. Website satisfaction is frequently a
predictor of e-loyalty (Flavián et al., 2006; Kim and Benbasat, 2006; Lam et al., 2004;
Laurn and Lin 2003; Yoon, 2002).

Summary
From the preceding, it is clear that if websites are effective and are able to arouse
responses in users such as enjoyment, involvement, trust, or satisfaction, then they
will be successful in enticing users to return to the site. These findings are intuitive,
but in the present article they are also founded on considerable systematic and
rigorous investigation by researchers—mostly in the information systems area.
However, beyond pure research, these results have merit for practitioners such as web
strategists and designers. As a powerful communication mechanism for commercial or
other use, effective website design has the ability to persuade. Hence, a worthy goal is
the amalgamation of work from both the academic research and design communities
to forge new and deeper understandings as to how emotion and websites are tied
together. This calls for integrated and multidisciplinary approaches—as well as
multiple methods for assessing user reactions to website design elements.

Graphical Website Design Elements: A Focus on Color and Images
Although a number of website elements could be considered in depth concerning their
ability to impact the user, this chapter focuses on the website characteristics of color
and imagery. These two elements are not only central in website design, but they also
have interesting cultural implications for the user that merit future consideration.
Thus we begin with an overview of color and website design—including how color
preferences vary by country. This is followed by an overview of images, similarly with
some discussion as to how imagery is perceived by users from different countries.

Color and Emotion


The study of color and color appeal has interdisciplinary connections. Color has been
studied by a variety of researchers—from artists to zoologists—including the use of
color in art (Fornell, 1982) or visual perception (Gorn et al., 1997). Psychologists have
been interested in the effect of color on individual preferences (Goldberg et al., 2002).
Some colors are able to arouse and excite an individual, while other colors elicit
relaxation. Research on color suggests hue (as in primary colors red, blue, yellow),
brightness (light colors such as white versus dark colors such as black or gray), and
saturation (intense versions of a color versus pastels) all have an effect on individual
reactions and perceptions (Latomia and Happ, 1987).
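The three dimensions above correspond closely to the hue-saturation-value (HSV) color model. As a rough illustration of my own (not part of the original module), Python's standard colorsys library shows how varying saturation or brightness transforms the same hue:

```python
import colorsys

# HSV maps onto the three dimensions discussed above: hue (which
# color), saturation (intense vs. pastel), and value/brightness
# (light vs. dark). colorsys uses floats in [0, 1].
pure_red   = colorsys.hsv_to_rgb(0.0, 1.0, 1.0)  # saturated and bright
pastel_red = colorsys.hsv_to_rgb(0.0, 0.5, 1.0)  # lower saturation -> pastel
dark_red   = colorsys.hsv_to_rgb(0.0, 1.0, 0.5)  # lower brightness -> dark

print(pure_red)    # (1.0, 0.0, 0.0)
print(pastel_red)  # (1.0, 0.5, 0.5)
print(dark_red)    # (0.5, 0.0, 0.0)
```

Note how lowering saturation mixes white into the hue (a pastel), while lowering brightness darkens it, matching the distinctions drawn in the research above.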

"Colors are known to possess emotional and psychological properties" (Lichtle, 2007,
p. 91), and have the potential to convey commercial meaning in products, services,
packaging, and Internet design. For many years, marketers have utilized the power of
color for logos or displays to build consumer confidence in the corporate brand (Lui et
al., 2004; Rivard and Huff, 1988). Cool colors such as blue and green are generally
viewed more favorably than warm colors such as yellow or red (Goldberg and Kotval,
1999; Latomia and Happ, 1987; Marcus and Gould, 2000). Blue is generally
associated with "wealth, trust, and security" (Lichtle, 2007) and is universally liked
(Carte and Russell, 2003; Meyers-Levy and Peracchio, 1995; Nielsen and Del Galdo,
1996). In part, this explains the use of blue by corporate entities such as banks (at
least in North America) or IBM to establish a professional and credible image. Red is
the color symbolizing Coca Cola. Alternately, orange denotes "cheapness" (Lichtle,
2007).

Although color has the potential to elicit emotions or behaviors, in only a handful of
studies are various website color treatments empirically tested regarding their impact
on user trust or satisfaction. However, Cyr (2008) found that visual design of the
website (which includes color) resulted in trust, satisfaction, and loyalty. Further, Kim
and Moon (1998) examined color and four other design factors on an online banking
interface. Color elements examined were color tone (e.g. warm or cool), main color (e.g.
primary or pastel), background color, brightness, and symmetry (how color was
organized). The findings show that color has a main effect on trustworthiness of the
interface. Color likewise has an influence on behavioral intentions such as customer
loyalty, with blue producing stronger buying intentions than red (Becker, 2002;
Latomia and Happ, 1987).

In a study aimed to investigate the impact of color related to user emotion and
perceptions of website appeal, Bonnardel et al. (2011) tested a variety of colors to
determine whether the user found the site to be pleasing, appealing, and appropriate
(Figure 11). By varying hue and intensity, 23 pages were created. (footnote 10) Testing

with groups of participants with varied backgrounds, including web designers, yielded
consistent color effects—with blue as the most preferred color, and gray as the least
preferred. Additionally, color impacted how users navigated the site and the
information they retained.

Figure 40.11: Samples of Diverse Color Use on Websites (in Bonnardel et al., 2011)

Culture and Color


Two opposing views exist when culture, color, and cognition intersect. A Universalistic
view prescribes generic cognitive processing of color perception. Alternately, Cultural
Relativism suggests that color perception is mostly shaped by culturally specific
language associations and perceptual learning (Berlin and Kay, 1969; Kay et al.,
1991). For the Internet, the prevailing view is more aligned to Cultural Relativism in
that ideally websites should be developed for specific cultural and user groups—
termed website localization (Barber and Badre, 1998; Cyr and Trevor-Smith, 2004).
When localizing a website, in addition to language translation, details such as
currency, color sensitivities, product or service names, images, gender roles, and
geographic examples are considered (footnote 11). Localized website design creates
alignment to user expectations, and according to an ISO quality standard this is a
critical success factor for effective and efficient task completion.
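In practice, localization details like these are often factored into per-locale configuration rather than hard-coded into pages. The sketch below is purely illustrative; the locale keys and setting values are hypothetical, not drawn from the studies cited above:

```python
# Hypothetical per-locale design settings, illustrating the kinds
# of details (currency, color sensitivities, date formats) that a
# localized website adjusts alongside language translation.
LOCALES = {
    "en-US": {"currency": "USD", "accent_color": "blue",
              "date_format": "%m/%d/%Y"},
    "ja-JP": {"currency": "JPY", "accent_color": "yellow",
              "date_format": "%Y/%m/%d"},
}

def design_settings(locale, default="en-US"):
    """Look up a locale's design settings, falling back to a default."""
    return LOCALES.get(locale, LOCALES[default])

print(design_settings("ja-JP")["currency"])  # JPY
print(design_settings("de-DE")["currency"])  # USD (fallback to en-US)
```

Keeping such settings in one structure makes the localization choices explicit and auditable, rather than scattered across templates.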

Color connotes different meaning in different cultures (Bagozzi and Yi, 1989; Barber
and Badre, 1998; Singh et al., 2003). Red means happiness in China, but danger in
the United States. In a cross-cultural study focused on understanding the meaning
and preferences for ten colors across eight countries, blue was generally considered
"peaceful" or "calming" (Madden et al., 2000). In contrast, brown and black had
associations of "sad" and "stale" across cultures. Other colors, such as yellow and
orange, showed less cross-cultural consistency in terms of how they were perceived.

In the information systems and e-commerce realm, a growing number of studies have
been published with respect to culture and website design, and the subsequent effect
on users. In a report in which 27 research studies published in 16 different journals
were evaluated for website cultural congruency (e.g. cultural adaption), strong
empirical support was provided for the positive impact of cultural congruency on
performance measures including website effectiveness (Vyncke and Brengman, 2010).
(footnote 12) Other investigations support unique user preferences for website design
characteristics in different countries and cultures. For instance, for a study in which
domestic and Chinese websites for 40 American-based companies were systematically
compared, significant cultural differences were uncovered for all major categories
tested (Singh et al., 2003). (footnote 13)

Especially rare are rigorous studies which are primarily focused on color in website
design across cultures. This area represents an important and overlooked topic.
According to Noiwan and Norcio (Noiwan and Norcio 2006, pp. 103, 104), “[E]mpirical
investigations on the impacts of cultural factors on interface design are absolutely
vital... Interface designers need to understand color appreciation and color responses
of people in different cultures and regions."

In a study comparing Indian and U.S. websites on language, pictures, symbols, and colors, substantial differences were found in the use of color (Kulkarni et al., 2012). The Indian portal (Figure 12) had multiple colors on a white background, while the U.S. website (Figure 13) hosted only blue and red, which, as the authors point out, are represented on the national flag.

Figure 40.12: An Indian Portal (in Kulkarni et al., 2012)

Figure 40.13: A U.S. Portal (in Kulkarni et al., 2012)

From this study it appears that the use of specific colors on websites in different countries can also impact color appeal, trust, and satisfaction.

Cyr and Trevor-Smith (2004) examined design elements, including the use of color, using 30 municipal websites in each of Germany, Japan, and the U.S. (90 websites in total). Use of symbols and graphics, color preferences, site features (links, maps, search functions, page layout), language, and content were examined, and significant differences were determined in each website design category. Colors used on a website were matched to a color wheel and assigned a numerical value by independent raters based on the percentage of the page on which a color appears. (footnote 14) Fifteen colors were used across the websites. Relevant to the countries in this investigation, blue was most popular on German websites, while gray was the color most often appearing on American websites. Japanese are known to prefer brighter colors such as yellow (also supported by Noiwan and Norcio, 2006).
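The rating procedure described above (matching a page's colors to a color wheel and scoring each by the share of the page it covers) can be approximated computationally. The following is a minimal sketch of my own, not the authors' actual tooling, assuming the page has already been reduced to a list of color labels:

```python
from collections import Counter

def color_coverage(pixels):
    """Return each color's share of the page as a percentage.

    `pixels` is any iterable of hashable color labels (e.g. names
    already matched to a color wheel, or quantized RGB tuples).
    """
    counts = Counter(pixels)
    total = sum(counts.values())
    return {color: 100.0 * n / total for color, n in counts.items()}

# Toy "page": 6 white pixels, 3 blue, 1 gray.
page = ["white"] * 6 + ["blue"] * 3 + ["gray"]
print(color_coverage(page))  # {'white': 60.0, 'blue': 30.0, 'gray': 10.0}
```

A real pipeline would first quantize each pixel's RGB value to its nearest color-wheel entry before counting, but the percentage step is the same.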

Building on empirical results from the few studies in which color is examined on
websites, Cyr et al. (2010) examined the influence of blue, gray, and yellow on color
appeal, trust, satisfaction, and online loyalty. Figure 14 illustrates a sample of how
color is adapted. (footnote 15) Testing was conducted on 30 participants from each of
Canada, Germany, and Japan (90 total). These countries were chosen based on known
cultural diversity (e.g. Hofstede, 1980). Data was collected using a questionnaire,
interviews, and an eye-tracker. (footnote 16) Website color appeal was found to be a
significant determinant of website trust and satisfaction, with differences across
cultures. More specifically, Canadians, Germans, and Japanese all tend to dislike
yellow on websites. Users in all three countries most prefer the blue sites, contrary to
expectations that Japanese would prefer a bright color.

Of particular relevance to a discussion of emotion and websites, five concepts emerged from the interview analyses related to use of color on the website: (1) aesthetics concerning artistic appeal, (2) affective or emotional quality of the color, (3) functional quality, (4) harmony (i.e. balance), and (5) appropriateness (Cyr et al., 2010). As already noted, color has the ability to elicit an emotional or affective response toward a website, and this varied across the various groups. For instance, in all three countries blue had positive affective quality, while in none of the countries did users mention this was the case for gray.

Figure 40.14: Sample Websites indicating Color Zone Treatments and Look Zones for a German Website (preliminary experimental treatments adapted from Cyr et al., 2010)

Imagery and Emotion

Image design for websites may include elements of balance, emotional appeal, aesthetics, and uniformity of the overall graphical look of the site. This encompasses web elements such as use of photographs, colors, shapes, or font type (Garrett, 2003). The aesthetics of website design are considered related to the “overall enjoyable user experience” (Tarasewich, 2003, p. 12). The use of photographs in websites has been debated among usability experts in a discussion of whether photographs unnecessarily clutter up the website, slow it down, and disrupt its functionality (Riegelsberger, 2002). Alternately, images have been found to attract viewer attention (Riegelsberger, 2002) and increase credibility (Fogg et al., 2002).

In advertising, images are used to convey product and brand information—and to elicit emotional responses from consumers (Branthwaite, 2002; Kamp and MacInnis, 1995; Swinyard, 1993). In one study, a picture of a spray bottle of window cleaner composed of purple berries aligned with the verbal statement “bring home a fresh fruit orchard” generated “positive inferences” from viewers (Phillips and McQuarrie, 2005). In other research, happy or angry faces were flashed on a screen while people examined Chinese ideographs. The type of face affected “liking ratings” of the ideographs. Even small alterations in an image can impact product evaluations. For instance, changing the camera angle of a product can influence the viewer’s attitude toward the product (Meyers-Levy and Peracchio, 1992).

Online, the visual design of an e-commerce website is important because it improves website aesthetics and emotional appeal (Garrett, 2003; Liu et al., 2001; Park et al., 2005), which may in turn lead to more positive attitudes toward an online store (Fiore et al., 2005). Loiacono et al. (2007) created an instrument to measure consumer evaluation of websites, and found that visual appeal and consistent images resulted in entertainment for the user, ultimately leading to intentions to reuse the site in the future. In research on banner ads, different formats using either text or images were manipulated. Viewers consistently rated the versions with images as more positive and effective (Yoon, 2002).

Age makes a difference as to how images are processed and appreciated. For example,
some studies suggest that web aesthetics and visual design may be especially
important to Generation Y users (e.g. born 1977 to 1990) (Tractinsky, 2004; 2006).
Other studies have shown that visual appeal of the homepage for an online vendor
impacts impressions of vendor image and merchandise quality for Generation Y users
(Oh et al., 2008). Users younger than age 25 seek fun when shopping online, and
respond positively to personalized product offers, custom-designing products, and
seeing the profiles of previous customers who purchased an item. In a two-pronged
study that included surveys and investigations where participants looked at a web
page using an eye-tracking device, Generation Y users exhibited specific preferences
for a large main page, images of celebrities, little text, and a search feature (Djamasbi
et al., 2010). In research that compared Generation Y users and Baby Boomers (e.g.
born 1946 to 1964) using an eye-tracking device and self-report measures, both
generations reported similar aesthetic preferences, and liked pages with images and
little text (Djamasbi et al., 2011). However, viewing patterns differed, with Baby
Boomers having significantly more eye fixations that covered more of the pages.

Human Images and Image Appeal


Online, images of people are used to induce favorable emotional responses
(Riegelsberger et al., 2003) and to draw attention (Tullis et al., 2009). Adding images of
players engaged in an online text chatting game increased cooperation (Zheng et al.,
2002). The use of human images is thought to increase the website’s aesthetics and
playfulness—and therefore positively influence the user (Liu et al., 2001).

For many years advertising has relied on imagery using “friendly faces” to build a
positive attitude toward products (Giddens, 1990), as shown with the use of faces on
sample banner ads, lifestyle photos, and opinion articles in Figure 15.

Figure 40.15: Sample Use of Faces (in Tullis et al., 2009)

Online trust can be established through virtual re-embedding of content and social cues (Riegelsberger et al., 2003; 2005). In a study using pages from the online shop of a well-known British supermarket chain, identical pages were created with one exception: one page contained a photograph of a human face while another had a box of text the identical size as the photograph. Viewers were more attracted to the photograph, leading to the conclusion that “the face is a very important source of socio-emotional cues.... Advertisers have found that photographs of faces attract attention and create an immediate affective response that is less open to critical reflection than text we read” (Riegelsberger, 2002, p. 1). For an online banking website, inclusion of employee photographs resulted in attributions of trustworthiness (Steinbrück et al., 2002). A photograph of the author in an online magazine article resulted in greater perceived trustworthiness of the article (Fogg et al., 2001).

Using an eye-tracking device, Djamasbi et al. (2012) examined the impact of facial
images on viewing behavior, and the number of user eye fixations on web pages
(footnote 17). Using the theory of visual hierarchy (e.g. the order in which information
is communicated to users) objects on the website were manipulated related to the
presence or absence of facial images, and their location either above or below the mid-
point on the page. Typically, users follow an F-shaped pattern when viewing web
pages—that is, they look along the left hand portion of the page, and particularly the
top left hand area (Buscher et al., 2009). Figure 16 shows the pages used in
Djamasbi’s study.

Figure 40.16: Web Pages with Faces and their Location (in Djamasbi et al., 2012)

Findings from the above study (Djamasbi et al., 2012) revealed that faces did not necessarily increase viewing time or the number of people who viewed the areas with images. However, fixation patterns in these areas were affected. For instance, viewing was more dispersed when faces were present, and thus users scanned faces, titles, and text (Figure 17, a and b). Also, faces above the mid-point on the page attracted significantly longer fixations, which negatively affected performance on a task since users diverted their attention from key information such as titles.

Figure 40.17: Heat Maps when Browsing Faces or Text (in Djamasbi et al., 2012)
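Heat maps such as those in Figure 17 are produced by aggregating individual eye fixations over a spatial grid. The sketch below is a generic illustration of that aggregation step under my own assumptions, not Djamasbi et al.'s actual pipeline; `fixation_grid` and its parameters are hypothetical names:

```python
def fixation_grid(fixations, page_w, page_h, cols=4, rows=4):
    """Bin (x, y, duration) eye fixations into a rows x cols grid.

    Each cell accumulates the total fixation duration that fell
    inside it -- the raw counts behind a heat-map visualization.
    """
    grid = [[0.0] * cols for _ in range(rows)]
    for x, y, duration in fixations:
        c = min(int(x / page_w * cols), cols - 1)
        r = min(int(y / page_h * rows), rows - 1)
        grid[r][c] += duration
    return grid

# Toy data: most fixation time lands top-left, echoing the
# F-shaped viewing pattern described earlier in the text.
fixations = [(50, 40, 0.5), (80, 60, 0.25), (700, 500, 0.2)]
grid = fixation_grid(fixations, page_w=800, page_h=600)
print(grid[0][0])  # 0.75 -> the top-left cell dominates
```

A rendering step would then map each cell's total onto a color scale to produce the familiar heat-map image.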

Social Elements and Website Design

Building on the preceding, but broader than human imagery, there are a variety of
socially oriented website elements which can elicit hedonic responses in users.
Website social elements, and more specifically website social presence as mentioned above, are known to result in user enjoyment (Cyr et al., 2007; Hassanein and Head,
2007). In the following, perceived social presence is discussed, including differences
between men and women regarding desired levels of social presence. In addition,
cultural implications for social design elements are outlined, although little research
has been conducted in this area.

Perceived Social Presence


Perceived social presence has been defined as “the extent to which a medium allows
users to experience others as being psychologically present” (Gefen and Straub, 2003,
p. 11). Perceived social presence implies a psychological connection with the user who
perceives the website as “warm”, personal, sociable, thus creating a feeling of human
contact (Yoo and Alavi, 2001). (footnote 19) In one study, social presence has been
segmented into “affective social presence” and “cognitive social presence” (Shen and
Khalifa, 2009), although most research uses a uni-dimensional construct. Affective
social presence refers to emotional responses aroused in the user by virtual social
interaction. Cognitive social presence refers to the user’s belief regarding relationships
with others in the social context.

Examples of website features that encourage social presence are socially-rich text
content, personalized greetings (Gefen and Straub, 2003), human audio (Lombard and
Ditton, 1997), or human video (Kumar and Benbasat, 2002). Gefen and Straub (2003)
suggested that pictures and text are able to convey personal presence in the same
manner as do personal photographs or letters. In addition to perceived social presence
resulting in online enjoyment, social presence has implications for website
involvement (Kumar and Benbasat, 2002; Witmer et al., 2005); website trust (Cyr et
al., 2007; Gefen and Straub, 2003; Hassanein and Head, 2007); and utilitarian
outcomes such as perceived usefulness (Hassanein and Head 2005-6; 2007).

In research on Internet auctions, two conditions of social influence were presented to participants: (1) interpersonal information in the form of text, or (2) “virtual presence” that included pictures of other bidders’ faces (Rafaeli and Noy, 2005) (Figure 21).

Figure 40.21 A-B: Different Levels of Social Influence on Websites (in Rafaeli and Noy, 2005)

Results indicated the effect of interpersonal information on bidding behavior was not as important as the effect of virtual presence. The authors explained their results as being related to “the enthusiasm with facial cues [by users] and perception of other’s presence” (Rafaeli and Noy, 2005, p. 172). The incorporation of human or human-like faces in online environments provides online participants with a stronger sense of community (Donath, 2001).

Previous research has manipulated Internet shopping conditions to investigate online social presence on an apparel website (Hassanein and Head, 2007). In a low social presence condition, functional text and a basic product picture appeared; in the medium condition, a basic product picture appeared with emotive/descriptive text; and in the high social presence condition, pictures depicted human figures interacting with the product as well as rich and emotive text (Figure 22).

Figure 40.22 A-B-C: Different Levels of Social Presence on Websites (in Hassanein and Head, 2007)

Hassanein and Head (2007) concluded that for shopping websites featuring apparel, higher levels of social presence, created in part through human figures, positively impacted perceived usefulness, trust, and enjoyment. Further, as the degree of perceived social presence increases, there is an increased impact on emotions and behavior (Argo et al., 2005). Alternately, the type of website determines whether or not the development of social presence is necessary. For instance, Hassanein and Head (2005-6) conducted another study in which an apparel website was compared to a website selling headphones. For the website selling headphones, higher levels of social presence did not positively impact user attitudes, since the user was primarily seeking detailed product information.

LESSON XX

Embodied Agents

Situated and Embodied Cognition

The development of computers also gave rise to a new model of human thinking. Cognitive science was initially driven by the analogy that the brain functions like a computer, so that thinking amounts to symbol processing and manipulation. All meaning arose via correspondences between symbols (words, mental representations) and things in the external world. The mind was seen as a mirror of nature, and human thought as abstract and disembodied. Another paradigm is the parallel distributed processing approach, or connectionism, where thinking is modeled as associations in artificial neural networks. Some connectionist models are directly based on developments in neuroscience, while others are more general models of cognitive processes such as concept formation. The computational side of cognitive science explores the nature and limits of our cognitive and information-processing system, and seeks principles for designing effective systems to support individuals, groups, and organizations. But intelligent behavior is not just symbolic manipulation and deductive reasoning; it is also interaction with others. This requires awareness of one's relationship with the world, physical coordination, intelligent action, creativity, and other affective responses. Everyday human intelligence includes all of these capacities and many more, which we can understand through embodied and situated cognition, an alternative paradigm now emerging in cognitive science.
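The connectionist idea of thinking as learned associations can be illustrated with the classic Hebbian learning rule ("units that fire together wire together"). The toy sketch below is my own illustration, not a model from the cited literature:

```python
def hebbian_update(weight, pre, post, rate=0.1):
    """Hebbian rule: dw = rate * pre * post.

    The connection strengthens only when the pre- and post-synaptic
    units are active together -- an "association" in miniature.
    """
    return weight + rate * pre * post

# Four trials; only the two co-active (1, 1) trials strengthen w.
w = 0.0
for pre, post in [(1, 1), (1, 1), (1, 0), (0, 1)]:
    w = hebbian_update(w, pre, post)
print(w)  # 0.2
```

Real connectionist models chain many such weighted connections into networks and add normalization, but the core idea, strengthening associations through co-activity, is the same.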

This paradigm finds applications in various research projects and areas, notably modern studies of robotics, autonomous agents, and interactive interfaces. The situated view of cognition is linked to distributed and augmented cognition approaches. These research directions stress enhancing the limits of the information-processing capacities of human and machine. In this regard, present research explores complex work environments to study cognition, which may result in a computational model of a particular task or some general framework for information handling. How do we explore mental processes and physical embodiment together? Situatedness and embodiment have become important concepts in practically all areas of cognitive science since the late 1980s. Many researchers have emphasized the importance of studying cognition in the context of agent-environment interaction and sensorimotor activity [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12]. The viewpoint known as embodied or situated cognition treats cognition as an activity that is structured by the body and its situatedness in its environment, resulting in embodied action. In this view, cognition arises from experiences through a body with sensorimotor capacities. These capacities are embedded in an encompassing biological, psychological, and cultural context.
Perception is understood as perceptually guided action. Such behavior is facilitated through elaborate feedback mechanisms between the sensory and motor apparatus, as shown in Figure 1. Hence, cognitive structures emerge from the recurrent sensorimotor patterns that enable the perceiver to guide his or her actions in the local situation. That is, the emergent reinforced neural connections between the senses and the motor system form the basis for cognition. The mind’s embodiment provides natural biases for inductive models and representations, and thus automatically grounds cognitive processes that might normally be considered disembodied. This view provides a sharp contrast to the standard information-processing viewpoint, in which cognition is seen as a problem of recovering details of a pre-given outer world [6].
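The perception-action feedback loop just described can be sketched as a minimal control loop in which each action changes the world, and therefore the next perception. This is a generic illustration under my own assumptions, not a model from the cited work:

```python
def run_agent(position, target, steps=10, gain=0.5):
    """Minimal perception-action loop: each action changes the
    world, which changes the next perception (sensorimotor feedback)."""
    for _ in range(steps):
        error = target - position   # perceive the current situation
        position += gain * error    # act, guided by that perception
    return position

print(run_agent(position=0.0, target=8.0))  # 7.9921875, converging on 8.0
```

Nothing here plans over an internal world model; competent behavior (closing in on the target) emerges purely from the repeated sense-act cycle, which is the point the embodied view makes.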
In this light, the mind is no longer seen as passively reflecting the outside world, but rather as an active constructor of its own reality. In this perspective, the fundamental building blocks of cognitive processes are control schemata for motor patterns that arise from perceptual interaction with the body’s environment. The motivation for such interaction comes from within the cognitive system, in the form of needs and goals. This can also be linked to an organization. We can make more sense of our brains and bodies if we view the nervous system as a system for producing motor output. The cerebellum is connected almost directly to all areas of the brain: sensory transmissions, reticular (arousal/attention) systems, the hippocampus (episodic memories), and the limbic system (emotions, behavior). It is largely accepted in AI that embodiment has strong implications for the control strategies for generating purposive and intelligent behavior in the world.

Self-check

I. In your own words, explain the following:


A. Emotion and website design
B. Emotion and Website Design: Some Background
C. Graphical Website Design Elements: A Focus on Color and Images
D. Social Elements and Website Design
E. Embodied agents
F. Situated and Embodied Cognition

LESSON XX

Embodied Agents

Computer Supported Cooperative Work

Computer Supported Cooperative Work (CSCW) is a community of behavioral


researchers and system builders at the intersection of collaborative behaviors and
technology. The collaboration can involve a few individuals or a team, it can be within
or between organizations, or it can involve an online community that spans the globe.
CSCW addresses how different technologies facilitate, impair, or simply change
collaborative activities.

The CSCW community revolves around a journal and two conference series, one
typically held in North America and one in Europe. Books and academic courses
followed, and relevant papers appear in other conferences as well. Pointers to these
resources conclude this chapter.

The Emergence of CSCW


In 1984 Irene Greif and Paul Cashman coined the acronym CSCW for an invited
workshop focused on understanding and supporting collaboration. Technology capable
of supporting a group of people was so expensive that workplace deployment was the
sole focus. A major topic was email, which in 1984 was poorly designed, not
interoperable across different platforms, and used primarily by researchers. The first
open CSCW conference was held in 1986. CSCW soon became the principal research
forum for the collaboration that was newly enabled by emerging client-server PC and
workstation networks. Despite the severe processing and memory constraints in those
early days, these networks created new possibilities.

What inspired these researchers? In 1988, Greif published Computer-Supported Cooperative Work: A Book of Readings (Greif, 1988). Four of the first five papers
describe the inspirational research led by Douglas Engelbart between 1963 and 1984.
Engelbart is best known for inventing the mouse, but he had a far broader vision for
augmenting human intellect and building high-performance teams through
technology. The NLS system he and his colleagues developed and used (See Figure 1
A-B) included many features that took decades to become widely used, including
desktop and video conferencing.

Figure 27.1 A-B: Douglas Engelbart and staff using NLS to support a 1967 meeting with
sponsors - probably the first computer-supported conference. The facility was rigged
for a meeting with representatives of the ARC's research sponsors NASA, Air Force,
and ARPA. A U-shaped table accommodated CRT displays positioned at the right
height and angle. Each participant had a mouse for pointing. Engelbart could display
his hypermedia agenda and briefing materials, as well as the documents in his
laboratory's knowledge base.

Another source of inspiration was more recent behavioral research (also included in
Greif’s book), much of it centered on minicomputer office automation systems. These
systems supported computer-mediated communication, a term still in use, and the
technologies were often called groupware, reflecting the early focus on small groups or
teams. Use of this label declined as organization-wide deployment became common
and as collaboration features were integrated into more applications.

In theory, CSCW could cover any aspect of cooperative work in which digital
technology plays a role. In practice, the CSCW research field reflects the interests of its
participants. For example, by the mid-1980s, database systems were already a
maturing technology used in many organizations, and they were not covered within CSCW.
Research centered on communication, such as use of email and videoconferencing
prototypes, and on small-group interaction, such as collaborative text editing and
drawing. Over time, technological advances and shifting interests of CSCW researchers
broadened the scope of CSCW. It came to cover interaction within units of all sizes,
using both fixed and mobile technologies. Social media are now a major focus.

The terms computer, support, cooperative, and work have all been transcended.
CSCW encompasses collaboration that uses technologies we do not call computers,
collaboration in which technology plays a central rather than a support role, uses that
involve conflict, competition, or coercion rather than cooperation, and studies of
entertainment and play.

A strong European branch of CSCW formed with a somewhat different focus. Several
papers in the 1988 conference from Nordic countries described participatory or
cooperative design approaches. In 1989 the European conference series began, with
strong German and British participation. Liam Bannon and Kjeld Schmidt (Bannon
and Schmidt 1989) outlined a vision, some of which came to pass and some of which
did not. Whereas much North American involvement was initially from commercial
software developers and telecommunications companies, Europe drew mainly from
government and academic research focused on large enterprises that at that time
typically designed and developed software in-house.

The European approach was also more firmly grounded in theory, with activity theory
a particularly strong influence (for example, see Engeström and Middleton, 1998).

CSCW Technology

The framework in Table 27.1 is a useful way to conceptualize collaboration technologies
and their similarities and differences. Human behaviors that contribute to
collaboration may be roughly divided into three categories: communication, sharing
information, and coordination. People may engage in these behaviors at the same time
(real time collaboration) or at different times (asynchronous collaboration).
Technologies or technology features have been developed to support each of these six
components of collaborative behavior.

                       Real time                     Asynchronous

Communication           Telephone                    Email
                        Video conferencing           Voice mail
                        Instant messaging            Blogs
                        Texting                      Social networking sites

Information sharing     Whiteboards                  Document repositories
                        Application sharing          Wikis
                        Meeting facilitation         Web sites
                        Virtual worlds               Team workspaces

Coordination            Floor control                Workflow management
                        Session management           CASE tools
                        Location tracking            Project management
                                                      Calendar scheduling

Table 27.1: A two-dimensional collaboration framework with examples of technology
features or products found within each cell.
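The table's two-dimensional structure maps naturally onto a simple data structure. As an illustrative sketch (the dictionary keys and the lookup function below are our own, not part of any CSCW system), the framework can be expressed as a mapping from (behavior, timing) cells to example technologies:

```python
# A minimal model of Table 27.1: collaboration technologies indexed by
# behavior (rows) and timing (columns). Tool lists come from the table;
# the key names are illustrative.
FRAMEWORK = {
    ("communication", "real_time"): ["telephone", "video conferencing",
                                     "instant messaging", "texting"],
    ("communication", "asynchronous"): ["email", "voice mail", "blogs",
                                        "social networking sites"],
    ("information_sharing", "real_time"): ["whiteboards", "application sharing",
                                           "meeting facilitation", "virtual worlds"],
    ("information_sharing", "asynchronous"): ["document repositories", "wikis",
                                              "web sites", "team workspaces"],
    ("coordination", "real_time"): ["floor control", "session management",
                                    "location tracking"],
    ("coordination", "asynchronous"): ["workflow management", "CASE tools",
                                       "project management", "calendar scheduling"],
}

def classify(tool):
    """Return the (behavior, timing) cell a tool belongs to, or None."""
    for cell, tools in FRAMEWORK.items():
        if tool in tools:
            return cell
    return None
```

A lookup such as `classify("email")` lands in the communication/asynchronous cell, mirroring how the framework is read in the text.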

Communication tools remain central to CSCW, even as email is studied less and
microblogging more. Communication via voice, video, text conferencing, instant
messaging, and text messaging has been explored. Waves of research into prototype
desktop video systems appeared in the late 1980s, mid-1990s, and early 2000s. As
video communication finally blossoms, past CSCW studies covering a range of complex
social and interface issues will likely contribute (Poltrock and Grudin, 2005). Social
networking sites such as Twitter and Facebook (see Figure 27.2) blend
communication and information sharing features.

Figure 27.2: A user profile in the Facebook social networking site as it looked in 2010

Information repositories provide a way to share information. Early studies focused on
document management systems; more recently, attention has shifted to wikis and
Wikipedia, a mountain of freely accessible information, complete edit history included,
that is swarmed over by graduate students who analyze it in diverse ways. An early
influential study combined visualization and analysis to examine conflict as revealed
through the history of edit changes (Viégas et al, 2004). Other topics include
information reliability, contributor reliability, incentive systems, image contribution,
and Wikipedia administration. Team workspaces such as Microsoft’s SharePoint or
Google Wave (see Figure 27.3) provide a managed repository for a team’s artifacts and
tools for communicating and sharing information with one another.

Figure 27.3 A-B: Google Wave conversation and collaboration. In 2009, Google started
beta testing Google Wave, a real-time collaboration environment that Google hoped
would eventually displace email and instant messaging. However, Google announced
in August 2010 that it had decided to stop developing Wave as a standalone project,
due to insufficient user adoption.

Coordination technologies employed in the workplace such as meeting support
systems, group calendars, workflow management systems, and computer-aided
software engineering systems were an early focus of CSCW. They gave way to studies
of how people coordinate in the absence of (or despite) coordination management
technologies. For example, Bowers et al (1995) studied the problems that deployment
of workflow technology created in a large printing enterprise. Social networking also
enables a new generation of coordination technologies, whether mobile, location-aware,
and real-time (e.g., Foursquare and Google Latitude; see Figure 27.4) or asynchronous
(e.g., Groupon’s coordination of purchasing decisions).

Figure 27.4: Google Latitude (initial release February 5, 2009) shows your friends on a
map, as long as they've agreed to share their location.
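The caption's qualifier, "as long as they've agreed to share their location," is the crux of such systems: a position is revealed only with consent. A hypothetical sketch of that consent check (names and data shapes are illustrative, not a real API):

```python
# Consent-gated location sharing in the style of Google Latitude:
# a friend's position is visible only to viewers that friend has
# explicitly agreed to share with.
def visible_friends(friends, viewer):
    """Return locations only for friends who opted in to share with viewer."""
    return {f["name"]: f["location"]
            for f in friends if viewer in f["shares_with"]}
```

A friend with an empty `shares_with` set simply never appears on anyone else's map, which is the privacy default the caption describes.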

LESSON XXI

Ubicomp

Ubiquitous computing

Ubiquitous computing (or "ubicomp") is a concept in software engineering and
computer science where computing is made to appear anytime and everywhere. In
contrast to desktop computing, ubiquitous computing can occur using any device, in
any location, and in any format. A user interacts with the computer, which can exist
in many different forms, including laptop computers, tablets and terminals in
everyday objects such as a refrigerator or a pair of glasses. The underlying
technologies to support ubiquitous computing include Internet, advanced middleware,
operating system, mobile code, sensors, microprocessors, new I/O and user interfaces,
computer networks, mobile protocols, location and positioning, and new materials.

This paradigm is also described as pervasive computing, ambient intelligence, or
"everyware". Each term emphasizes slightly different aspects. When primarily
concerning the objects involved, it is also known as physical computing, the Internet
of Things, haptic computing, and "things that think". Rather than propose a single
definition for ubiquitous computing and for these related terms, a taxonomy of
properties for ubiquitous computing has been proposed, from which different kinds or
flavors of ubiquitous systems and applications can be described.

Ubiquitous computing touches on distributed computing, mobile computing, location
computing, mobile networking, sensor networks, human–computer interaction,
context-aware smart home technologies, and artificial intelligence.

Core concepts
Ubiquitous computing is the concept of using small internet connected and
inexpensive computers to help with everyday functions in an automated fashion. For
example, a domestic ubiquitous computing environment might interconnect lighting
and environmental controls with personal biometric monitors woven into clothing so
that illumination and heating conditions in a room might be modulated, continuously
and imperceptibly. Another common scenario posits refrigerators "aware" of their
suitably tagged contents, able to both plan a variety of menus from the food actually
on hand, and warn users of stale or spoiled food.
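The "aware refrigerator" scenario boils down to matching data from tagged items against the current date. A minimal sketch, with hypothetical item names and data shapes:

```python
from datetime import date

# Hypothetical sketch of the "aware refrigerator": each tagged item
# carries an expiry date, and the system warns about spoiled food.
# In a real deployment the inventory would come from RFID-tagged
# contents rather than a hard-coded list.
inventory = [
    {"item": "milk",   "expires": date(2024, 1, 10)},
    {"item": "eggs",   "expires": date(2024, 2, 1)},
    {"item": "cheese", "expires": date(2024, 1, 5)},
]

def spoiled(items, today):
    """Return the names of items whose expiry date has passed."""
    return [i["item"] for i in items if i["expires"] < today]
```

Called with the date January 8, 2024, this flags only the cheese; planning menus from the remaining items would be the complementary half of the scenario.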

Ubiquitous computing presents challenges across computer science: in systems design
and engineering, in systems modelling, and in user interface design. Contemporary
human-computer interaction models, whether command-line, menu-driven, or GUI-
based, are inappropriate and inadequate to the ubiquitous case. This suggests that
the "natural" interaction paradigm appropriate to a fully robust ubiquitous computing
has yet to emerge – although there is also recognition in the field that in many ways
we are already living in a ubicomp world (see also the main article on natural user
interfaces). Contemporary devices that lend some support to this latter idea include
mobile phones, digital audio players, radio-frequency identification tags, GPS, and
interactive whiteboards.

Mark Weiser proposed three basic forms for ubiquitous computing devices:

 Tabs: a wearable device that is approximately a centimeter in size
 Pads: a hand-held device that is approximately a decimeter in size
 Boards: an interactive larger display device that is approximately a meter in size
Ubiquitous computing devices proposed by Mark Weiser are all based around flat
devices of different sizes with a visual display. Expanding beyond those concepts there
is a large array of other ubiquitous computing devices that could exist. Some of the
additional forms that have been conceptualized are:

 Dust: miniaturized devices can be without visual output displays, e.g. micro
electro-mechanical systems (MEMS), ranging from nanometres through
micrometers to millimetres. See also Smart dust.
 Skin: fabrics based upon light emitting and conductive polymers, organic
computer devices, can be formed into more flexible non-planar display surfaces
and products such as clothes and curtains; see OLED display. MEMS devices
can also be painted onto various surfaces so that a variety of physical world
structures can act as networked surfaces of MEMS.
 Clay: ensembles of MEMS can be formed into arbitrary three dimensional
shapes as artefacts resembling many different kinds of physical object (see also
tangible interface).
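Weiser's three canonical forms differ mainly in physical scale. A toy classifier makes the taxonomy concrete; the numeric thresholds are our own illustrative choices, since Weiser gave only approximate sizes:

```python
def weiser_form(size_m):
    """Classify a flat display device by Weiser's three scales.

    Thresholds are illustrative: tabs are roughly centimeter-sized,
    pads roughly decimeter-sized, and boards roughly meter-sized.
    """
    if size_m < 0.05:        # up to a few centimeters
        return "tab"
    elif size_m < 0.5:       # up to a few decimeters
        return "pad"
    else:                    # around a meter and beyond
        return "board"
```

By this scheme a badge-sized display is a tab, a tablet is a pad, and a wall display is a board; dust, skin, and clay fall outside it precisely because they abandon the flat visual display.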
In Manuel Castells' book The Rise of the Network Society, Castells puts forth the
concept that there is going to be a continuous evolution of computing devices. He
states we will progress from stand-alone microcomputers and decentralized
mainframes towards pervasive computing. Castells' model uses the example of the
Internet as the start of a pervasive computing system.
The logical progression from that paradigm is a system where that networking logic
becomes applicable in every realm of daily activity, in every location and every context.
Castells envisages a system where billions of miniature, ubiquitous inter-
communication devices will be spread worldwide, "like pigment in the wall paint".

Ubiquitous computing may be seen to consist of many layers, each with its own role,
which together form a single system:

 Layer 1: Task management layer
o Monitors the user's task, context, and index
o Maps the user's task to the services needed in the environment
o Manages complex dependencies
 Layer 2: Environment management layer
o Monitors a resource and its capabilities
o Maps service needs to user-level states of specific capabilities
 Layer 3: Environment layer
o Monitors the relevant resources
o Manages the reliability of the resources
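The three layers above can be sketched as cooperating components. Class names, method names, and the sample task-to-service table are hypothetical, not drawn from a real system:

```python
class EnvironmentLayer:
    """Layer 3: monitors resources and tracks which are reliable (up)."""

    def __init__(self, resources):
        # resources: name -> {"up": bool, "capabilities": set of strings}
        self.resources = resources

    def available(self):
        return {n: r for n, r in self.resources.items() if r["up"]}


class EnvironmentManagementLayer:
    """Layer 2: maps a service need to resources offering that capability."""

    def __init__(self, env):
        self.env = env

    def resources_for(self, capability):
        return [n for n, r in self.env.available().items()
                if capability in r["capabilities"]]


class TaskManagementLayer:
    """Layer 1: maps the user's task to the services it needs."""

    # Illustrative task-to-service table.
    TASK_NEEDS = {"adjust_lighting": ["dimmer"], "play_music": ["speaker"]}

    def __init__(self, management):
        self.management = management

    def plan(self, task):
        needs = self.TASK_NEEDS.get(task, [])
        return {need: self.management.resources_for(need) for need in needs}
```

The dependency flows one way: the task layer asks the management layer for capabilities, which in turn consults the environment layer for resources that are currently up, matching the monitoring roles listed above.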

Self-check

I. Explain the following in your own words:
a. Computer Supported Cooperative Work
b. The Emergence of CSCW
c. CSCW Technology
d. HCI in ubicomp

REFERENCES

https://www.interaction-design.org

