
COGNITIVE ASPECTS

Why do we need to understand users?
 Interacting with technology is cognitive
 Need to take into account the cognitive processes involved and the cognitive limitations of users
 Provides knowledge about what users can and cannot be expected to do
 Identifies and explains the nature and causes of problems users encounter
 Supplies theories, modelling tools, guidance and methods that can lead to the design of better interactive products

Cognitive processes
 Attention
   Selecting things to concentrate on at a point in time from the mass of stimuli around us
   Allows us to focus on information that is relevant to what we are doing
   Involves audio and/or visual senses
   Focussed and divided attention enables us to be selective in terms of the mass of competing stimuli, but limits our ability to keep track of all events
   Information at the interface should be structured to capture users’ attention, e.g. use perceptual boundaries (windows), colour, reverse video, sound and flashing lights

Multitasking and attention
 Is it possible to perform multiple tasks without one or more of them being detrimentally affected?
 Ophir et al (2009) compared heavy vs light multi-taskers
   heavy multi-taskers were more prone to being distracted than those who infrequently multitask
   heavy multi-taskers are easily distracted and find it difficult to filter irrelevant information

Design implications for attention
 Make information salient when it needs attending to (see the sketch after this list)
 Use techniques that make things stand out, like color, ordering, spacing, underlining, sequencing and animation
 Avoid cluttering the interface with too much information
 Search engines and form fill-ins that have simple and clean interfaces are easier to use
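
A minimal, illustrative sketch of the first two implications, assuming a browser environment and the standard DOM and Web Animations APIs; the function name and element id below are invented for this example, not taken from the source:

function markNeedsAttention(el: HTMLElement): void {
  // A salient colour plus a brief animation draws the eye to the field,
  // then the animation ends so the interface does not stay cluttered.
  el.style.outline = "2px solid #d9534f";
  el.animate(
    [{ transform: "scale(1)" }, { transform: "scale(1.03)" }, { transform: "scale(1)" }],
    { duration: 300, iterations: 2 }
  );
}

const emailField = document.getElementById("email"); // hypothetical form field
if (emailField instanceof HTMLElement) {
  markNeedsAttention(emailField);
}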
ACTIVITY
 Tullis (1987) found that the two screens produced quite different results
   1st screen - took an average of 5.5 seconds to search
   2nd screen - took 3.2 seconds to search
 Why, since both displays have the same density of information?
 Spacing
   In the 1st screen the information is bunched up together, making it hard to search
   In the 2nd screen the characters are grouped into vertical categories of information, making it easier

Perception
 How information is acquired from the world and transformed into experiences
 Obvious implication is to design representations that are readily perceivable, e.g.
   Text should be legible
   Icons should be easy to distinguish and read

Activity
 Weller (2004) found people took less time to locate items for information that was grouped
   using a border (2nd screen) compared with using color contrast (1st screen)
 Some argue that too much white space on web pages is detrimental to search
   Makes it hard to find information

Design implications for Perception
 Icons should enable users to readily distinguish their meaning
 Bordering and spacing are effective visual ways of grouping information (see the sketch after this list)
 Sounds should be audible and distinguishable
 Speech output should enable users to distinguish between the set of spoken words
 Text should be legible and distinguishable from the background
 Tactile feedback should allow users to recognize and distinguish different meanings
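
Following the Tullis (1987) and Weller (2004) findings and the bordering/spacing implication above, here is a minimal, illustrative sketch of grouping items into bordered, spaced categories on a web page; the type and function names are invented for this example, and plain DOM APIs are assumed:

interface CatalogueItem {
  category: string; // e.g. "Flights", "Hotels"
  label: string;    // the text the user is scanning for
}

function renderGrouped(items: CatalogueItem[], container: HTMLElement): void {
  // Group items by category so related information sits in one visual block.
  const byCategory = new Map<string, CatalogueItem[]>();
  for (const item of items) {
    const group = byCategory.get(item.category) ?? [];
    group.push(item);
    byCategory.set(item.category, group);
  }

  for (const [category, group] of byCategory) {
    const box = document.createElement("section");
    // A visible border and generous spacing mark the perceptual boundary
    // of each group, rather than relying on colour contrast alone.
    box.style.border = "1px solid #999";
    box.style.padding = "12px";
    box.style.marginBottom = "16px";

    const heading = document.createElement("h3");
    heading.textContent = category;
    box.appendChild(heading);

    for (const item of group) {
      const row = document.createElement("div");
      row.textContent = item.label;
      row.style.marginBottom = "4px"; // spacing keeps rows from bunching up
      box.appendChild(row);
    }
    container.appendChild(box);
  }
}

The border marks each group’s perceptual boundary and the spacing keeps rows from bunching up, which is what made the second screen in the activity faster to search.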
Memory
 Involves first encoding and then retrieving knowledge
 Context is important in affecting our memory (i.e. where, when)
 We recognize things much better than being able to recall things
 we remember less about objects we have photographed than when we observe them with the naked eye (Henkel, 2014)

Processing in memory
 Encoding is first stage of memory
   determines which information is attended to in the environment and how it is interpreted
 The more attention paid to something…
 The more it is processed in terms of thinking about it and comparing it with other knowledge…
 The more likely it is to be remembered
   e.g. when learning about HCI, it is much better to reflect upon it, carry out exercises, have discussions with others about it, and write notes than just passively read a book, listen to a lecture or watch a video about it

Context is important
 Context affects the extent to which information can be subsequently retrieved
 Sometimes it can be difficult for people to recall information that was encoded in a different context:
   “You are on a train and someone comes up to you and says hello. You don’t recognize him for a few moments but then realize it is one of your neighbours. You are only used to seeing your neighbour in the hallway of your apartment block and seeing him out of context makes him difficult to recognize initially”

Activity
 People are very good at remembering visual cues about things
   e.g. the color of items, the location of objects and marks on an object
 They find it more difficult to learn and remember arbitrary material
   e.g. birthdays and phone numbers
Recognition versus recall
 Command-based interfaces require users to recall from memory a name from a possible set of 100s
 GUIs provide visually-based options that users need only browse through until they recognize one
 Web browsers, MP3 players, etc., provide lists of visited URLs, song titles, etc., that support recognition memory

The problem with the classic ‘7±2’
 George Miller’s (1956) theory of how much information people can remember
 People’s immediate memory capacity is very limited
 Many designers think this is a useful finding for interaction design
 But…

What some designers get up to…
 Present only 7 options on a menu
 Display only 7 icons on a tool bar
 Have no more than 7 bullets in a list
 Place only 7 items on a pull-down menu
 Place only 7 tabs on the top of a website page
 But this is wrong

Why?
 Inappropriate application of the theory
 People can scan lists of bullets, tabs, and menu items for the one they want
 They don’t have to recall them from memory, having only briefly heard or seen them
 Sometimes a small number of items is good
 But it depends on the task and the available screen real estate

Digital content management
 Is a growing problem for many users
   vast numbers of documents, images, music files, video clips, emails, attachments, bookmarks, etc.
   where and how to save them all, then remembering what they were called and where to find them again
 Naming is the most common means of encoding them
   but names can be difficult to remember, especially when you have 1000s and 1000s
 Memory involves 2 processes
   recall-directed and recognition-based scanning
 File management systems should be designed to optimize both kinds of memory processes
   e.g. a search box and a history list (see the sketch after this list)
 Help users encode files in richer ways
   Provide them with ways of saving files using colour, flagging, image, flexible text, time stamping, etc.
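
A minimal, illustrative sketch of a file index that supports both memory processes mentioned above; the class and method names (FileIndex, search, recentlyOpened) are invented for this example rather than taken from any particular file manager:

interface FileEntry {
  name: string;
  path: string;
  lastOpened: Date;
}

class FileIndex {
  private entries: FileEntry[] = [];

  add(entry: FileEntry): void {
    this.entries.push(entry);
  }

  // Recall-directed: the user types (part of) a name they remember.
  search(query: string): FileEntry[] {
    const q = query.toLowerCase();
    return this.entries.filter((e) => e.name.toLowerCase().includes(q));
  }

  // Recognition-based scanning: the user browses a short "recently opened"
  // list until they recognize the file they want, like a history list.
  recentlyOpened(limit = 10): FileEntry[] {
    return [...this.entries]
      .sort((a, b) => b.lastOpened.getTime() - a.lastOpened.getTime())
      .slice(0, limit);
  }
}

Here search() serves recall-directed retrieval (the user types what they remember of a name), while recentlyOpened() supports recognition-based scanning of a short list.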
Digital Forgetting
 When might you wish to forget something that is online?
   When you break up with a partner
   Emotionally painful to be reminded of them through shared photos, social media, etc.
 Sas and Whittaker (2013) suggest new ways of harvesting and deleting digital content
   e.g. making photos of an ex into an abstract collage
   helps with closure

Memory aids
 SenseCam, developed by Microsoft Research Labs (now Autographer)
   a wearable device that intermittently takes photos without any user intervention while worn
   digital images taken are stored and revisited using special software
   has been found to improve the memory of people suffering from Alzheimer’s

Design implications of memory
 Don’t overload users’ memories with complicated procedures for carrying out tasks
 Design interfaces that promote recognition rather than recall
 Provide users with various ways of encoding information to help them remember, e.g. categories, color, flagging, time stamping (a sketch follows this list)
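
A minimal, illustrative sketch of what such richer encoding could look like as a data structure for saved items; the type and field names are invented for this example:

interface SavedItem {
  name: string;
  categories: string[];   // e.g. ["tax", "2024"]
  colourLabel?: string;   // e.g. "red"
  flagged: boolean;       // quick "important" marker
  savedAt: Date;          // time stamp as an additional retrieval cue
}

// Retrieval can then work off whichever cue the user happens to remember.
function findByColour(items: SavedItem[], colour: string): SavedItem[] {
  return items.filter((i) => i.colourLabel === colour);
}

function findFlagged(items: SavedItem[]): SavedItem[] {
  return items.filter((i) => i.flagged);
}

function findSavedBetween(items: SavedItem[], from: Date, to: Date): SavedItem[] {
  return items.filter((i) => i.savedAt >= from && i.savedAt <= to);
}

Whichever cue the user later remembers, a colour, a flag, a category, or roughly when the item was saved, can then be used to find it again.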
Learning
 How to learn to use a computer-based application
 Using a computer-based application or YouTube video to understand a given topic
 People find it hard to learn by following instructions in a manual

Design implications of Learning
 Design interfaces that encourage exploration
 Design interfaces that constrain and guide learners
 Dynamically linking concepts and representations can facilitate the learning of complex material
Reading, speaking, and listening
 The ease with which people can read, listen, or speak differs
   Many prefer listening to reading
 Reading can be quicker than speaking or listening
 Listening requires less cognitive effort than reading or speaking
 Dyslexics have difficulties understanding and recognizing written words

Applications
 Speech-recognition systems allow users to interact with them by asking questions
   e.g. Google Voice, Siri
 Speech-output systems use artificially generated speech
   e.g. written-text-to-speech systems for the blind
 Natural-language systems enable users to type in questions and give text-based responses
   e.g. the Ask search engine

Design implications of Reading, Speaking, and Listening
 Speech-based menus and instructions should be short
 Accentuate the intonation of artificially generated speech voices, as they are harder to understand than human voices (see the sketch after this list)
 Provide opportunities for making text large on a screen
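
As a small, illustrative example of the speech-output and intonation points above, the following sketch uses the browser’s Web Speech API; the function name and prompt text are invented for this example, and the available voices vary by browser:

function speakPrompt(text: string): void {
  // Keep spoken prompts short, and adjust rate and pitch slightly so the
  // synthetic voice is easier to follow than a flat, fast default.
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.rate = 0.9;
  utterance.pitch = 1.1;
  utterance.lang = "en-GB";
  window.speechSynthesis.speak(utterance);
}

// Example: a short, menu-style prompt rather than a long instruction.
speakPrompt("Say the name of the person you want to call.");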
Problem-solving, planning, reasoning and decision-making
 All involve reflective cognition
   e.g. thinking about what to do, what the options are, and the consequences
 Often involve conscious processes, discussion with others (or oneself), and the use of artefacts
   e.g. maps, books, pen and paper
 May involve working through different scenarios and deciding which is the best option

Design implications
 Provide additional information/functions for users who wish to understand more about how to carry out an activity more effectively
 Use simple computational aids to support rapid decision-making and planning for users on the move

Dilemma
 The app mentality developing in the psyche of the younger generation is making it harder for them to make their own decisions because they are becoming risk averse (Gardner and Davis, 2013)
 Relying on a multitude of apps means that they are becoming increasingly more anxious about making decisions by themselves.
