KCAU - Distance Learning - HCI Module


P.O. Box 56808 - 00200 Nairobi


Email: kca@kca.ac.ke
Web: www.kca.ac.ke

SCHOOL OF PURE AND APPLIED SCIENCES


DEPARTMENT OF INFORMATION TECHNOLOGY

BACHELOR OF SCIENCE IN INFORMATION TECHNOLOGY


(BSc.IT)

COURSE CODE: BIT 2308


COURSE TITLE: HUMAN COMPUTER INTERACTION

INSTRUCTIONAL MANUAL FOR BSc. IT – DISTANCE LEARNING

PREPARED BY PETER KIARIE

Page 1
Page 2
BIT 2308 Human Computer Interaction
Contact Hours 48
Pre-requisite N/A
Purpose/Aim This course presents the principles of creating systems that are
usable by and useful to human beings.
Course Objective  The learner can give an overview of the main criteria for
(Indicative Learning system usability
Outcomes)  The learner can apply human computer interaction
principles to the design of user interfaces
Course Content  Introduction
 Usability paradigms and principles
 The design process
 Models of the user in design
 Task analysis
 Dialogue notations and design
 Models of the system
 Implementation support
 Evaluation techniques
 Help and documentation
 Groupware
 Computer-supported cooperative work and social issues
 Current developments in HCI
 Hypertext, multimedia and the world-wide web

Learning & Teaching Methodologies: Lectures, tutorials and computer laboratory exercises
Instructional Materials/Equipment: Classroom with audio visual aids; computer laboratory
Course Assessment Type Weighting (%)
Examination 70
Continuous Assessment 30
Total 100
Recommended Reading: Dix A., Finlay J. et al (1998), Human Computer Interaction,
Prentice Hall Europe
Additional Reading: Dix A. J., Finlay J. E., Abowd G. D. and Beale R. (1998),
Human-Computer Interaction (2nd Edition), Prentice Hall;
Carroll J. M. (2002), Human Computer Interaction in the New Millennium,
Pearson Education
Other Support A variety of multimedia systems and electronic information
Material resources as prescribed by the lecturer. Various application
manuals and articles, URL search and journals.

Page 3
Introduction
The purpose of this unit is to introduce students to concepts in human computer interfaces
and the various techniques used to build interactive computer systems.

This unit includes prerequisites, assignments/self tests and examinations.

Objectives
By the end of this unit you should be able to:
i. Give an overview of the main criteria for system usability.
ii. Apply human computer interaction principles to the design of user interfaces.
iii. Define human computer interaction.
iv. Describe HCI interface development.
v. Provide an introduction to theories and methodologies in use in human computer interaction,
concentrating on the area of user interface design.
vi. Gain practical experience in designing and evaluating interfaces.
vii. Develop a simple user interface.

1. Explain the capabilities of both humans and computers from the viewpoint of human
information processing.
2. Describe typical human–computer interaction (HCI) models, styles, and various
historic HCI paradigms.
3. Apply an interactive design process and universal design principles to designing HCI
systems.
4. Describe and use HCI design principles, standards and guidelines.
5. Analyze and identify user models, user support, socio-organizational issues, and
stakeholder requirements of HCI systems.
6. Discuss tasks and dialogs of relevant HCI systems based on task analysis and dialog
design.
7. Analyze and discuss HCI issues in groupware, ubiquitous computing, virtual reality,
multimedia, and World Wide Web-related environments.

1.7. Suggestion for further reading


1) Carroll J. M., Human Computer Interaction in the New Millennium, Pearson Education
2) Preece J. et al, Human Computer Interaction, Addison Wesley
3) Dix A. et al, Human Computer Interaction, Prentice Hall Europe

Text books for further reading


1. Pressman R. S., Software Engineering: A Practitioner's Approach, McGraw Hill
2. Shneiderman B., Designing the User Interface: Strategies for Effective Human Computer
Interaction, Addison Wesley
3. Beyer H. and Holtzblatt K., Contextual Design: Defining Customer-Centered Systems, Morgan
Kaufmann

Page 4
4. Redmond-Pyle D. and Moore A., Graphical User Interface Design and Evaluation, Prentice Hall.

Table of Contents

LECTURE ONE: INTRODUCTION TO HCI........................................................................10


Lecture Objectives................................................................................................................10
Lecture Outline.....................................................................................................................10
1.1. Definition...................................................................................................................10
1.2. The Goals of HCI......................................................................................................12
1.3. HCI Usability.............................................................................................................15
1.4. Factors Affecting HCI Usability...............................................................................17
1.5. Usability Principles...................................................................................................17
1.6. HCI and its Evolution................................................................................................19
1.6.1 Dynabook...................................................................................................................20
1.6.2. Star............................................................................................................................20
1.6.3. Lisa by Apple...........................................................................................................20
1.7. Disciplines Contributing to HCI................................................................................21
1.8. HCI Guidelines, Principles and Theories..................................................................22
1.8.1. Guidelines..............................................................................................................22
1.8.2. Design Principles...................................................................................................23
1.8.3. Theories.................................................................................................................25
1.9. Lecture Summary.....................................................................................................27
1.10. Lecture Review Activities......................................................................................28
1.11. Suggestion for further reading...............................................................................28
LECTURE 2: HCI PARADIGMS..........................................................................................29
Lecture Objectives................................................................................................................29
2.1 Introduction....................................................................................................................29
2.2. HCI Paradigms...............................................................................................29
2.2.1. Time sharing...............................................................................................................29
2.2.2. Video display units.....................................................................................................30
2.2.3. Programming toolkits.................................................................................................31
2.2.5. Personal Computing...................................................................................................31
2.2.6. Window Systems and the WIMP Interface................................................................31
2.2.7. WIMP interfaces........................................................................................................32
2.2.8. 3D Interfaces............................................................................................................32
2.2.9. The Metaphor...........................................................................................................32
2.2.10. Direct Manipulation.................................................................................................33

Page 5
2.3. Chapter review questions...........................................................................................37
2.4. Lecture Summary.....................................................................................................37
2.5. Suggestion for further reading...................................................................................37
LECTURE 3: HCI HUMAN FACTORS - COGNITION......................................................38
Lecture Objectives...............................................................................................................38
3.1 Introduction....................................................................................................................38
3.2 Cognitive Psychology....................................................................................................39
3.3. Cognition and Cognitive Frameworks.......................................................................39
3.3.1. Cognition Modes....................................................................................................40
3.4. Cognitive Frameworks..............................................................................................41
3.4.1. Human Information Processing.............................................................................41
3.4.1.1. The Extended Human Information Processing model.......................................42
3.4.1.2. The Model Human Processor.............................................................................43
3.4.2. Mental Models.......................................................................................................45
3.4.3. Gulfs of Execution and Evaluation........................................................................45
3.4.4. Distributed Cognition............................................................................................47
3.4.5. External Cognition.................................................................................................48
3.5. Recent Development in Cognitive Psychology.........................................................49
3.6. Lecture Summary......................................................................................................50
3.7. Review Questions......................................................................................................50
3.8. Suggestion for Further Reading.................................................................................50
LECTURE 4: HCI HUMAN FACTORS – Perception and Representation............................52
4.2. The Gestalt Laws of perceptual organization (Constructivist)......................................53
4.3. Affordances (Ecological)..............................................................................................55
4.4. Affordances in Software................................................................................................55
4.4.1. Perceived Affordances in Software............................................................................56
4.5. Link affordance in web sites.........................................................................................56
4.6. Influence of Theories of perception in HCI..................................................................56
4.7. Lecture Summary......................................................................................................57
4.8. Lecture review questions...........................................................................................57
4.9. Further Reading.........................................................................................................57
LECTURE 5: HUMAN FACTORS -ATTENTION AND MEMORY..................................58
Lecture Objectives................................................................................................................58
5.1 Introduction....................................................................................................................58
5.2. Attention........................................................................................................................58
5.3. Models of Attention......................................................................................................59
5.3.1. Focused Attention......................................................................................................59

Page 6
5.3.2. Divided Attention.......................................................................................................60
5.4. Focusing attention at the interface................................................................................60
5.4.1. Structuring Information..............................................................................................61
5.5. Multitasking and Interruptions......................................................................................61
5.6. Automatic Processing....................................................................................................62
5.7. Memory Constraints.....................................................................................................62
5.8. Levels of Processing Theory........................................................................................63
5.8.1. Meaningful Interfaces...............................................................................................63
5.8.2. Meaningfulness of Commands..................................................................................64
5.8.3. Meaningfulness of Icons............................................................................................64
5.9. Lecture Summary......................................................................................................65
5.10. Lecture review questions.......................................................................................65
5.11. Further Reading.....................................................................................................65
LECTURE 6: MENTAL MODELS AND KNOWLEDGE.................................................66
Lecture Objectives................................................................................................................66
6.1. Mental Models..............................................................................................................66
6.2. Types of Mental Models...............................................................................................67
6.3. Applicability in HCI..................................................................................................68
6.4. Mental Models in HCI...............................................................................................68
6.5. Knowledge Representation........................................................................................70
6.6. Knowledge in the Head vs. Knowledge in the World...............................................71
6.7. Lecture Summary......................................................................................................72
6.8. Lecture Review Questions.........................................................................................72
6.9. Further Reading.........................................................................................................72
LECTURE 7: COMPUTER FACTORS IN HCI.....................................................................73
USER INTERFACE & USER SUPPORT..............................................................................73
Lecture Objectives................................................................................................................73
7.1. Introduction...................................................................................................................73
7.2. Input Devices.............................................................................................................74
7.3.1. Keyboard................................................................................................................75
7.3.2. QWERTY keyboard..............................................................................................75
7.3.3. Alphabetic keyboard..............................................................................................75
7.3.4. Phone pad and T9 entry.........................................................................................76
7.3.5. Dvorak Keyboard...................................................................................................76
7.3.6. Chord Keyboards...................................................................................................76
7.3.7. Handwriting Recognition.......................................................................................76
7.3.8. Speech Recognition...............................................................................................77

Page 7
7.3.9. Dedicated Buttons..................................................................................................78
7.3.10. Positioning, Pointing and Drawing....................................................................78
7.3.11. Cursor Keys........................................................................................................81
7.4. Choosing Input Devices...........................................................................................81
7.5. Output Devices..........................................................................................................82
7.5.1. Purposes of Output.................................................................................................83
7.6. Sound Output.............................................................................................................83
7.6.1. Natural sounds.......................................................................................................84
7.6.2. Speech....................................................................................................................84
7.7. User Support..............................................................................................................84
7.10. Designing user Support Systems...........................................................................89
7.11. User Support Presentation issues...........................................................................89
7.12. Lecture Summary...................................................................................................91
7.13. Lecture review questions.......................................................................................91
7.14. Further Reading.....................................................................................................91
LECTURE 8: HUMAN COMPUTER INTERACTION DESIGN.........................................92
8.1. Interaction Cycle and Framework.................................................................................92
8.1.1. The interaction framework.........................................................................................93
8.2. Ergonomics...................................................................................................................95
8.3. Interaction Styles...........................................................................................................97
8.3.1. Command Line Languages.........................................................................................97
8.3.2. Menus........................................................................................................................97
8.3.3. Direct manipulation....................................................................................................98
8.3.4. Form fill-in.................................................................................................................98
8.3.5. Natural Language......................................................................................................99
8.3.6. Question/Answer and Query Dialogue.....................................................................99
8.3.7. WIMP interface.......................................................................................................100
8.3.9. Buttons.....................................................................................................................101
8.3.10. Toolbars..................................................................................................................101
8.3.11. Palettes...................................................................................................................101
8.3.13. Dialog boxes...........................................................................................................102
8.4. Interaction Design.......................................................................................................102
8.5. Interaction Design versus Interface Design...............................................................103
8.6. Why Good HCI design................................................................................................103
8.6.1. Increase in Worker Productivity, Satisfaction, and Commitment............................104
8.6.2. Reduction in Training Costs, Errors, and Production Costs....................................105
8.7. Design Principles.......................................................................................................106

Page 8
8.8. Lecture Summary....................................................................................................107
8.9. Lecture Review Questions.......................................................................................107
8.10. Further Reading...................................................................................................108
LECTURE 9: HCI DESIGN PROCESS................................................................................109
Lecture Objectives..............................................................................................................109
9.1. The Design Process.....................................................................................................109
9.2. HCI Design Approaches.............................................................................................110
9.3. HCI Design Rules, Principles, Standards and Guidelines..........................................114
9.3.1. HCI Design Rules....................................................................................................114
9.3.2. Principles of Human-Computer Interface Design:..................................................115
9.3.3. HCI Design Standards..............................................................................................119
9.3.4. Guidelines............................................................................................................119
9.4. Web Interface Design Considerations.....................................................................120
9.5. Information Architecture and Web Navigation..............................................120
9.6. Web Sites Navigation Systems................................................................................123
9.6.1. Global navigation.....................................................................................................123
9.6.2. Local navigation.......................................................................................................123
9.6.3. Supplementary navigation........................................................................................124
9.6.4. Contextual navigation..........................................................................................124
9.6.5. Courtesy navigation.............................................................................................124
9.6.6. Personalisation and Social Navigation.................................................................125
9.8. Navigation Aids.......................................................................................................126
9.9. Diagramming Information Architecture and Navigation........................................127
9.9.1. Blueprints.................................................................................................................127
9.9.2. Wireframes...............................................................................................................127
9.10. Lecture Summary.................................................................................................128
9.11. Lecture Review Questions...................................................................................128
9.12. Further Reading...................................................................................................128

Page 9
LECTURE ONE: INTRODUCTION TO HCI

Lecture Objectives
The aim of this lecture is to introduce you to the study of Human Computer Interaction.
After studying it you should be able to:
i. Describe the formal definition of HCI
ii. Describe the goals of HCI
iii. Define Usability goals
iv. Discuss the History and Evolution of HCI
v. Explain the HCI Guidelines, Principles and Theories

Lecture Outline
1.1. Definition
1.2. The Goals of HCI
1.3. HCI Usability
1.4. Factors Affecting HCI Usability
1.5. Usability Principles
1.6. HCI and its Evolution
1.7. Disciplines Contributing to HCI
1.8. HCI Guidelines, Principles and Theories

1.1. Definition
Human Computer Interaction (HCI) is the study of how people interact with computers and to
what extent computers are or are not developed for successful interaction with human beings.
The Association for Computing Machinery defines human-computer interaction as "a discipline
concerned with the design, evaluation and implementation of interactive computing systems
for human use and with the study of major phenomena surrounding them."

Other definitions include:


 a discipline concerned with the design, evaluation, and implementation of computing
systems for human use and with the study of major phenomena surrounding them
 the discipline of designing, evaluating and implementing interactive computer
systems for human use, as well as the study of major phenomena surrounding this
discipline
 HCI involves the design, implementation and evaluation of interactive systems in the
context of the users' tasks and work

Page 10
Human Computer Interaction (HCI) involves the study, planning, and design of the
interaction between people and computers. Because human–computer interaction studies a
human and a machine in conjunction, it draws from supporting knowledge on both the
machine and the human side. HCI is also sometimes referred to as man–machine interaction
(MMI) or computer–human interaction (CHI). HCI can be viewed as two powerful
information processors (human and computer) attempting to communicate with each other
via a narrow-bandwidth, highly constrained interface.
As its definition implies, HCI consists of three parts: the user, the computer itself, and
the ways they work together (the interaction).

 User: By "user", we may mean an individual user, a group of users or a sequence of users
in an organization working together.
 Computer: When we talk about the computer, we're referring to any technology ranging
from desktop computers, to large scale computer systems. For example, if we were
discussing the design of a Website, then the Website itself would be referred to as "the
computer". Devices such as mobile phones or VCRs can also be considered to be
“computers”.
 Interaction: There are obvious differences between humans and machines. In spite of
these, HCI attempts to ensure that they both get on with each other and interact
successfully. In order to achieve a usable system, you need to apply what you know about
humans and computers, and consult with likely users throughout the design process. In
real systems, it is vital to find a balance between what would be ideal for the users and
what is feasible in reality.

Common interaction styles


 Command line interface: This is a way of expressing instructions to the computer directly,
using function keys, single characters, short abbreviations, whole words, or a
combination. This interaction style is suitable for repetitive tasks, is better for expert
users than novices, and offers direct access to system functionality. Command names and
abbreviations should be meaningful.
 Menus: A list of options is presented to the user and the required option is selected
by typing some code or pointing at it. Selection of options can be done using numbers,
letters, arrow keys, the mouse or a combination of these methods.

Page 11
The selection relies on recognition of the option names, so the names used should be
meaningful. The visibility of options makes the interface easier to use and reduces the
need for recall. Menus also allow for hierarchical grouping of the options.
 Direct Manipulation: Direct manipulation is a style of interaction which features a natural
representation of task objects and actions promoting the notion of people performing a
task themselves (directly) not through an intermediary like a computer
 Question/answer and query dialogue: In this kind of interaction questions are asked one at
a time and the next question may depend on the previous answer. Question and answer
dialogues are often used in tasks where information is elicited from users in a prescribed
and limited form; the user is led through the interaction via a series of questions. This
style is suitable for novice users but has restricted functionality. Query dialogues are
often based on a fourth-generation language and are used in information systems.
 Form-fills and spreadsheets: Used for data entry and retrieval, based on the paper form
metaphor. They are used for both input and output.
 WIMP: The most common interaction style on PCs. Uses:
• Windows
• Icons
• Menus
• Point and click

1.2. The Goals of HCI


The term Human Computer Interaction (HCI) was adopted in the mid-1980s as a means of
describing this new field of study. This term acknowledged that the focus of interest was
broader than just the design of the interface and was concerned with all those aspects that
relate to the interaction between users and computers.

The goals of HCI are to produce usable and safe systems, as well as functional systems.
These goals can be summarized as ‘to develop or improve the safety, utility, effectiveness,
efficiency and usability of systems that include computers’ (Interacting with computers,
1989). In this context the term 'system' derives from systems theory and refers not just to
the hardware and software but to the entire environment (be it an organization of people at
work, at home or engaged in leisure pursuits) that uses or is affected by the computer technology in

Page 12
question. Utility refers to the functionality of a system or, in other words, the things it can do.
Improving effectiveness and efficiency are self-evident and ubiquitous objectives. The
promotion of safety in relation to computer systems is of paramount importance in the design
of safety-critical systems. Usability, a key concept in HCI, is concerned with making systems
easy to learn and easy to use. A poorly designed computer system can be extremely annoying
to its users, as the incidents described above illustrate.

A basic goal of HCI is to improve the interactions between users and computers by making
computers more usable (usability), and receptive to the user's needs (functionality).
 Functionality of a system is defined by the set of actions or services that it provides to its
users. However, the value of functionality becomes visible only when the user is able to
utilise it efficiently
 Usability of a system with a certain functionality is the range and degree by which the
system can be used efficiently and adequately to accomplish certain goals for certain
users. The actual effectiveness of a system is achieved when there is a proper balance
between the functionality and usability of a system

Specifically, the goal of HCI is to design a user interface that meets the following
characteristics;
a) Effectiveness: effective to use - Concerned with whether the system is doing what it
generally says it will do
b) Efficiency: efficient to use
c) Safety: It involves protecting the users from dangerous conditions and undesirable
situations, i.e. safe to use.
 Preventing the user from making serious errors by reducing the risk of wrong
keys/buttons being mistakenly activated (an example is not placing the quit or delete-
file command right next to the save command on a menu)
 Providing users with various means of recovery should they make errors. Safe
interactive systems should engender confidence and allow users the opportunity to
explore the interface to carry out new operations. The HCI should provide means of
recovering from errors, such as undo options and confirmation dialogs
d) Utility: It refers to the extent to which the system provides the right kind of functionality
so that users can do what they need or want to do. The HCI should have sufficient
functionality to accommodate the range of users' tasks. An example of a system with high
utility is an accounting software package providing a powerful computational tool that
accountants can use to work out tax returns.
e) Learnability: It refers to how easy a system is to learn to use. It is well known that people
do not like spending a long time learning how to use a system. They want to get started
straight away and become competent at carrying out tasks without too much effort. This is
especially so for interactive products intended for everyday use (for example, interactive
TV, email) and, to a certain extent, those used only infrequently (for example, video
conferencing).
f) Memorability: It refers to how easy a system is to remember how to use, once learned.
This is especially important for interactive systems that are used infrequently. If users
haven’t used a system or an operation for a few months or longer, they should be able to
remember or at least rapidly be reminded how to use it. Users shouldn’t have to keep
relearning how to carry out tasks. Unfortunately, this tends to happen when the operations
required are obscure, illogical, or poorly sequenced. Users need to be
helped to remember how to do tasks. There are many ways of designing the interaction to
support this. For example, users can be helped to remember the sequence of operations at
different stages of a task through meaningful icons, command names, and menu options.
Also, structuring options and icons so they are placed in relevant categories of options
(for example, placing all the drawing tools in the same place on the screen) can help the
user remember where to look to find a particular tool at a given stage of a task.
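The recovery mechanisms mentioned under safety above (undo options, confirmation dialogs) are commonly built on a history of earlier states. A minimal, snapshot-based undo/redo sketch in Python; the text-editor domain and all names are hypothetical:

```python
# Undo/redo sketch: every change records the previous state so the
# user can recover from errors. A hypothetical text-editor example.

class Editor:
    def __init__(self):
        self.text = ""
        self._undo = []  # earlier states, most recent last
        self._redo = []  # states undone, available for redo

    def insert(self, s):
        self._undo.append(self.text)  # remember the state before the change
        self.text += s
        self._redo.clear()            # a new action invalidates redo history

    def undo(self):
        if self._undo:
            self._redo.append(self.text)
            self.text = self._undo.pop()

    def redo(self):
        if self._redo:
            self._undo.append(self.text)
            self.text = self._redo.pop()
```

Snapshots keep the sketch short; a real editor would typically store reversible commands rather than whole copies of the text.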

To meet these characteristics, HCI is thus concerned with


a) Methodologies and processes for designing interfaces (i.e., given a task and a class of
users, design the best possible interface within given constraints, optimizing for a desired
property such as learnability or efficiency of use)
b) Methods for implementing interfaces
c) Techniques for evaluating and comparing interfaces
d) Developing new interfaces and interaction techniques
e) Developing descriptive and predictive models and theories of interaction

A long term goal of HCI is to design systems that minimize the barrier between the human's
cognitive model of what they want to accomplish and the computer's understanding of the
user's task. In order to produce computer systems with good usability, developers must
attempt to:
a) understand the factors that determine how people use technology
b) develop tools and techniques to enable building suitable systems
c) achieve efficient, effective, and safe interaction
d) put people first

Underlying the whole theme of HCI is the belief that people using a computer system should
come first. Their needs, capabilities and preferences for conducting various tasks should
direct developers in the way that they design systems. People should not have to change the
way that they use a system in order to fit in with it. Instead, the system should be designed to
match their requirements.

1.3. HCI Usability


Usability is one of the key concepts in HCI. It is concerned with making systems easy to
learn and use. It involves optimizing the interactions people have with interactive product to
enable them to carry out their activities at work, school, and in their everyday life. ISO
defines usability as "the effectiveness, efficiency and satisfaction with which specified users
can achieve specified goals in particular environments".

A usable system is:


a) easy to learn: It refers to how easy a system is to learn to use. It is well known that people
do not like spending a long time learning how to use a system. They want to get started
straight away and become competent at carrying out tasks without too much effort.
b) easy to remember how to use (memorability): It refers to how easy a system is to
remember how to use, once learned. This is especially important for interactive systems
that are used infrequently. If users haven’t used a system or an operation for a few months
or longer, they should be able to remember or at least rapidly be reminded how to use it.
Users shouldn’t have to keep relearning how to carry out tasks.
c) effective to use: a general goal; it refers to how good a system is at doing what it is
supposed to do.
d) efficient to use: refers to the way a system supports users in carrying out their tasks.
e) safe to use: involves protecting the users from dangerous conditions and undesirable
situations.
f) enjoyable to use
g) Have good utility: refers to the extent to which the system provides the right kind of
functionality so that user can do what they need or want to do.

The goals of usability are shown in the diagram below;

Fig 1. HCI Usability goals

Why is usability important?


Usability is most often defined as the ease of use and acceptability of a system for a particular
class of users carrying out specific tasks in a specific environment. Ease of use affects the
user’s performance and their satisfaction, while acceptability affects whether the product is
used. Many everyday systems and products seem to be designed with little regard to usability.
This leads to frustration, wasted time and errors.

For example, a photocopier might have buttons like these on its control panel.

Imagine that you just put your document into the photocopier and set the photocopier to make
15 copies, sorted and stapled. Then you push the big button with the "C" to start making your
copies.

What do you think will happen?
(a) The photocopier makes the copies correctly.
(b) The photocopier settings are cleared and no copies are made.

If you selected (b) you are right! The "C" stands for clear, not copy. The copy button is
actually the button on the left with the "line in a diamond" symbol. This symbol is widely
used on photocopiers, but is of little help to someone who is unfamiliar with it. Everyday
systems should be easy, effortless, and enjoyable to use.

1.4. Factors Affecting HCI Usability


The main factors affecting usability are:
a) Format of input
b) Feedback
There is need to continuously inform the user about the system’s state, how it is
interpreting the user’s input. The user should at all times be aware of what is going on
c) Visibility
d) Affordance

The principles of visibility and affordance were identified by HCI pioneer Donald Norman.
a) Visibility is the mapping between a control and its effect. For example, controls in cars
are generally visible – the steering wheel has just one function, there is good feedback
and it is easy to understand what it does. Mobile phones and VCRs often have poor
visibility – there is little visual mapping between controls and the users’ goals, and
controls can have multiple functions.
b) The affordance of an object is the sort of operations and manipulations that can be done
to it. A door affords opening; a chair affords support. Important types include:
 Perceived Affordances are the actions a user perceives to be possible. For example,
does the design of a door suggest that it should be pushed or pulled open?
 Real Affordances are the actions which are actually possible.

1.5. Usability Principles


There are several historical principles of usability that are evolving into new usability
paradigms each day. Usability paradigms reveal how humans interact with computers in
contemporary applications, while usability principles describe how these paradigms work. In
their chapter on usability paradigms and principles, Dix, Finlay, Abowd and Beale propose
three principles of usability design:
a) Learnability – the ease with which new users can begin effective interaction and achieve
maximal performance.
b) Flexibility – the multiplicity of ways in which the user and system exchange information.
c) Robustness – the level of support provided to the user in determining successful
achievement and assessment of goals.
Details of these principles will be discussed in Lecture 9 – HCI Design Process.

Jakob Nielsen also proposed ten general principles for user interface design. They are
called "heuristics" because they are more in the nature of rules of thumb than specific
usability guidelines.
1. Visibility of system status: The system should always keep users informed about what is
going on, through appropriate feedback within reasonable time.
2. Match between system and the real world: The system should speak the users' language,
with words, phrases and concepts familiar to the user, rather than system-oriented terms.
Follow real-world conventions, making information appear in a natural and logical order.
3. User control and freedom: Users often choose system functions by mistake and will need
a clearly marked "emergency exit" to leave the unwanted state without having to go
through an extended dialogue. Support undo and redo.
4. Consistency and standards: Users should not have to wonder whether different words,
situations, or actions mean the same thing. Follow platform conventions.
5. Error prevention: Even better than good error messages is a careful design which prevents
a problem from occurring in the first place. Either eliminate error-prone conditions or
check for them and present users with a confirmation option before they commit to the
action.
6. Recognition rather than recall: Minimize the user's memory load by making objects,
actions, and options visible. The user should not have to remember information from one
part of the dialogue to another. Instructions for use of the system should be visible or
easily retrievable whenever appropriate.
7. Flexibility and efficiency of use: Accelerators (unseen by the novice user) may often
speed up the interaction for the expert user, such that the system can cater to both
inexperienced and experienced users. Allow users to tailor frequent actions.
8. Aesthetic and minimalist design: Dialogues should not contain information which is
irrelevant or rarely needed. Every extra unit of information in a dialogue competes with
the relevant units of information and diminishes their relative visibility.
9. Help users recognize, diagnose, and recover from errors: Error messages should be
expressed in plain language (no codes), precisely indicate the problem, and constructively
suggest a solution.
10. Help and documentation: Even though it is better if the system can be used without
documentation, it may be necessary to provide help and documentation. Any such
information should be easy to search, focused on the user's task, list concrete steps to be
carried out, and not be too large.

1.6. HCI and its Evolution


HCI takes place within a social and organizational context. Different kinds of applications are
required for different purposes, and care is needed to divide tasks between humans and
machines, making sure that routine activities are allocated to machines. Knowledge of human
psychological and physiological abilities and, more important still, their limitations is
important. This involves knowing about such things as human information processing,
language, communication, interaction and ergonomics. Similarly, it is essential to know about
the range of possibilities offered by computer hardware and software so that knowledge about
humans can be mapped on to the technology appropriately. The main issues for consideration
on the technology side involve input techniques, dialogue techniques, dialogue genre or style,
computer graphics and dialogue architecture. This knowledge has to be brought together
somehow into the design and development of computer systems with good HCI. Tools and
techniques are needed to realize such systems. Evaluation also plays an important role in this
process by enabling designers to check that their ideas really are what users want.

Three systems that provide landmarks along this evolutionary path are the Dynabook, the Star
and the Apple Lisa, predecessor of today’s Apple Macintosh machines. An important
unifying theme present in all three computer systems is that they provided a form of
interaction that proved effective and easy for novices and experts alike. They were also easy
to learn, and provided a visual-spatial interface whereby, in general, objects could be directly
manipulated, while the system gave immediate feedback.

1.6.1 Dynabook
Alan Kay designed the first object-oriented programming language in the 1970s. Called
Smalltalk, it was the basis for what is now known as windows technology: the ability to
open more than one program at a time on a personal computer. However, when he
first developed the idea, personal computers were only a concept. In fact, the idea of personal
computers and laptops also belongs to Kay. He envisioned the Dynabook - a notebook sized
computer, with a keyboard on the bottom and a high resolution screen at the top.
1.6.2. Star
The Xerox Star was born out of PARC's creative ferment, designing an integrated system
that would bring PARC's new hardware and software ideas into a commercially viable
product for use in office environments. The Star drew on the ideas that had been developed,
and went further in integrating them and in designing for a class of users who were far less
technically knowledgeable than the engineers who had been both the creators and the prime
users of many PARC systems (one of PARC's favorite mottoes was "Build what you use, use
what you build.") The Star designers were challenged to make the personal computer usable
for a community that did not have previous computer experience.

1.6.3. Lisa by Apple


The Lisa popularized interaction through a GUI (graphical user interface): if you are sitting
in front of a computer with a mouse and pull-down menus, you owe it to this machine. Alongside
developments in interactive graphic interface, interactive text processing systems were also
evolving at a rapid rate. Following in the footsteps of line and display editors was the
development of systems that allowed users to create and edit documents that were
represented fully on the screen. The underlying philosophy of these systems is captured by
the term WYSIWYG, which stands for ‘what you see is what you get’ (pronounced ‘whizzee-
wig’). In other words, the documents were displayed on the screen exactly as they would look
in printed form. This was in stark contrast to earlier document editors, where commands were
embedded in the text and it was impossible to see what the document would look like without
printing it.

This section lists some of the key developments and people in the evolution of HCI.
a) Human factors engineering (Frank Gilbreth, post World War 1) – study of operator’s
muscular capabilities and limitations.

b) Aircraft cockpits (World War 2) – emphasis switched to perceptual and decision making
capabilities
c) Symbiosis (J.C.R. Licklider, 1960’s) - human operator and computer form two distinct
but interdependent systems, augment each other’s capabilities
d) Cognitive psychology (Donald Norman and many others, late 1970’s, early 1980’s) -
adapting findings to design of user interfaces
e) Development of GUI interface (Xerox, Apple, early 1980’s)
f) Field of HCI came into being (mid 1980’s) – key principles of User Centred Design and
Direct Manipulation emerged.
g) Development of software design tools (e.g. Visual Basic, late 1980’s, early 1990’s)
h) Usability engineering (Jakob Nielsen, 1990’s) - mainly in industry rather than academic
research.
i) Web usability (late 1990’s) – the main focus of HCI research today.

1.7. Disciplines Contributing to HCI


The field of HCI covers a wide range of topics, and its development has relied on
contributions from many disciplines. Some of the main disciplines which have contributed to
HCI are:
1. Computer Science
 technology
 software design, development & maintenance
 User Interface Management Systems (UIMS) & User Interface Development
Environments (UIDE)
 prototyping tools
 graphics
2. Cognitive Psychology
 information processing
 capabilities
 limitations
 cooperative working
 performance prediction
3. Social Psychology
 social & organizational structures

4. Ergonomics/Human Factors
 hardware design
 display readability
5. Linguistics
 natural language interfaces
6. Artificial Intelligence
 intelligent software
7. Philosophy, Sociology & Anthropology
 Computer supported cooperative work (CSCW)
8. Engineering & Design
 graphic design
 engineering principles

1.8. HCI Guidelines, Principles and Theories


1.8.1. Guidelines
Guidelines are best practice, based on practical experiences or empirical studies. They are
software development documents which offer application developers a set of
recommendations. Their aim is to improve the experience for the users by making application
interfaces more intuitive, learnable, and consistent.

They include issues such as terminology, appearance, action sequences, input/output
formats, and dos and don'ts of graphic styles. They provide a good starting point but need
management processes to facilitate their enforcement. Written guidelines help to develop a
“shared language” and, based on it, promote consistency among multiple designers and
designs/products. Examples of guidelines include:
a) How to provide ease of interface navigation;
b) how to organize the display.
c) how to draw user’s attention, and
d) how to best facilitate data entry
Guidelines will be discussed in detail in Lecture 9 – HCI Design Process.

1.8.2. Design Principles
Design principles are more fundamental, widely applicable and enduring than guidelines
(i.e., guidelines often need to be individually specified for every project/organization, while
principles are project-independent). There are five tasks/principles that may be
performed/followed;
1. Determine your target audience (in particular user skill-level)
Every designer should start with trying to understand the intended user (e.g., population
profile, gender, physical and cognitive abilities, education, cultural/language background,
knowledge, usage pattern, attitudes, etc.). Specifying the intended user is a very important yet
very difficult task. One particular challenge is: Many applications cater for several diverse
user groups at the same time. Determining the user skill level is especially important. There
are 2 skill levels:
A. Skills in regard to interface usage in general
B. Skills in regard to the particular application/task domain

Users can be classified into one of the following 3 groups:


a) Novice or first-time users: Novice users know neither the interface concepts nor the
task domain, while first-timers are familiar with interface concepts but are new to the task
domain. Some specific guidelines for this group are:
 Provide instructions, dialog boxes, online help
 Restrict the vocabulary to a small number of familiar and consistently used terms
 Use only a small number of actions (simple tasks)
 Provide sufficient informative feedback about accomplished tasks
 Provide constructive, very specific error messages
 Provide user manuals, video demonstrations, and task oriented online tutorials
b) Knowledgeable intermittent users: These users have stable knowledge of task
concepts and broad knowledge of general interface concepts, but may have difficulty
retaining the structure of menus or the location of features over time. Some specific
guidelines for this group are:
 Reduce burden on memory
 Orderly structure menus
 Use consistent terminology and sequences of actions
 Use meaningful messages
 Provide guides to frequent pattern of usage

 Provide context dependent help
 Use well-organized reference manuals
c) Expert frequent users: Expert frequent users are thoroughly familiar with the task concept
and general interface concepts. Main goal of such “power users” is to get work done fast
and efficiently. Some specific guidelines for this group are:
 Ensure rapid response times
 Provide brief and non-distracting feedback
 Provide shortcuts to carry out frequent actions
 Provide string commands and abbreviations

It is crucial that system users are defined. Where there are multiple diverse user groups, the
following approaches can be used during the design process;
 Use a “multi layer” design/approach to learning: Structure your UI in a way that limited
users may only work with a small set of functions and objects while advanced users can
access advanced functionality
 Give the user control over the density of informative feedback (e.g., error or confirmation
messages), the number of elements on the display, or the pace of interaction
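The "multi-layer" approach above can be sketched as an interface that filters its command set by the user's declared layer. The commands and layer thresholds below are hypothetical, invented for illustration:

```python
# Multi-layer design sketch: limited users work with a small set of
# functions, while advanced users unlock the full functionality.
# The commands and layer thresholds are hypothetical.

COMMANDS = {
    "open":   1,  # value = lowest layer at which the command appears
    "save":   1,
    "print":  1,
    "macro":  2,  # intermediate layer
    "script": 3,  # advanced layer
}

def visible_commands(user_layer):
    """Return, sorted, the commands available at the user's layer."""
    return sorted(cmd for cmd, layer in COMMANDS.items()
                  if layer <= user_layer)
```

A user at layer 1 sees only the basic commands; raising the layer progressively reveals the advanced functionality without cluttering the novice's view.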
2. Identify the tasks that users perform
This process often involves interviewing and observing the user, which also helps to
understand the task frequencies and sequences
 Be careful regarding the extent of provided functionality (inadequate vs. cluttered)
 Start with high-level tasks, decompose them into smaller steps and finally atomic
actions
 Choose the right level of granularity (e.g., depending on task frequencies)
3. Choose an appropriate interaction style
For example, natural language may be used to accomplish the required task:
 Pros: Users do not need to learn a syntax, may be successful in applications where the
scope is limited and task sequences are clear from the beginning
 Cons: Clarification dialog required (what to do next?), hard to determine the context,
may be unpredictable, in applications with broad scope most likely less efficient
4. Apply the “8 golden rules” of HCI Design
Shneiderman’s 8 Golden Rules (1987):
a) Strive for consistency

b) Enable frequent users to use shortcuts
c) Offer informative feedback
d) Design dialogs to yield closure
e) Offer error prevention and simple error handling
f) Permit easy reversal of actions
g) Support internal locus of control
h) Reduce short-term memory load
5. Always try to prevent user errors
General issues that should be considered in order to prevent errors:
 Use functionally organized screens and menus
 Design menu choices and commands to be distinctive
 Make it difficult for users to perform irreversible actions
 Provide feedback about the state of the UI
 Design for consistency of actions
 Consider universal usability
Other more specific issues:
 Prevent incorrect use of the UI (e.g., grey out unavailable functions, provide pull-
down menus for specific data entry tasks, etc.)
 Aggregate a sequence of steps into one single action (e.g., specify a lengthy task once,
save it and execute it again later)
 Provide reminders that indicate that specific actions are required to complete a task
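The first of the specific issues above, preventing incorrect use by greying out unavailable functions, amounts to computing the set of valid actions from the current state. A hypothetical sketch; the document-editor states and action names are invented:

```python
# Error-prevention sketch: actions that are invalid in the current UI
# state are simply not offered (i.e., they would be greyed out).
# The document-editor states and action names are hypothetical.

def available_actions(state):
    """Map the current UI state to the set of actions a user may invoke."""
    actions = {"new", "open"}            # always available
    if state.get("document_open"):
        actions |= {"save", "close", "print"}
    if state.get("unsaved_changes"):
        actions.add("undo")
    return actions
```

By deriving the menu from the state rather than trusting the user to avoid invalid choices, whole classes of errors become impossible to make.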

1.8.3. Theories
Theories are more high-level than principles, and are largely very abstract. HCI theories
serve two purposes: to support collaboration and teaching through the provision of
consistent terminologies for objects and actions (descriptive, explanatory), and to enhance the
quality of the HCI by providing predictions for motor task performance, perceptual activities,
execution times, error rates, emotional reactions, etc. Predictive characteristics enable
designers, for instance, to better serve users’ needs and to compare proposed designs more
objectively.
Four classes of theories;
1. Stages-of-action models (SOAM)

These deal with explaining the stages that users go through when using the user interface,
i.e. the HCI interactive cycle. The user formulates a plan of action, which is then executed at the
computer interface. When the plan, or part of the plan, has been executed, the user
observes the computer interface to evaluate the result of the executed plan, and to
determine further actions.

The interactive cycle can be divided into two major phases: execution and evaluation.
These can then be subdivided into further stages, seven in all;
a) Forming the goal
b) Forming the intention
c) Specifying the action
d) Executing the action
e) Perceiving the system state
f) Interpreting the system state
g) Evaluating the outcome

Based on this model, four design principles are pursued:


 Make system state and action alternatives visible
 Provide a good conceptual model with a consistent system image
 The interface should include good mappings that reveal the relationships between
stages
 Users should receive continuous feedback; this may help, for instance, to study user
failure.
Potential causes for failure are:
 The user forms an inadequate goal
 The user forms an adequate goal but does not find the right UI object to execute it
(e.g., use of inadequate labels)
 The user forms an adequate goal but may not know how to specify or execute a
desired action
 The user receives inappropriate feedback
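The seven stages can also be encoded explicitly, for example to record where in the cycle an observed interaction broke down. This is a diagnostic sketch for illustration only, not a standard technique from the source:

```python
# Norman's seven stages of action as an ordered list. Given how many
# stages a user completed, we can name the stage where the interaction
# failed. A diagnostic sketch for illustration only.

STAGES = [
    "forming the goal",
    "forming the intention",
    "specifying the action",
    "executing the action",
    "perceiving the system state",
    "interpreting the system state",
    "evaluating the outcome",
]

def first_failed_stage(completed):
    """Return the stage at which the interaction failed, or None if the
    user completed the whole execution/evaluation cycle."""
    if completed >= len(STAGES):
        return None
    return STAGES[completed]
```

Mapping observed failures onto stages in this way helps distinguish, say, a user who formed an inadequate goal from one who received inappropriate feedback.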
2. Goals, Operators, Methods, and Selection rules (GOMS)
GOMS stands for goals, operator, methods, and selection rules. It is a family of
techniques that analyze the user complexity of interactive systems. Goals are what the
user must accomplish. An operator is an action performed in pursuit of a goal. A method
is a sequence of operators that accomplishes a goal. Selection rules specify which method
satisfies a given goal, based on context. This theory decomposes user actions into small,
measurable steps:
 User formulates a goal (e.g., publish draft to obtain client feedback) and sub-goals
(e.g., upload graphic)
 User executes a variety of operators in order to change his/her mental state or to affect
the task environment (e.g., locate file on hard drive - in the user’s mind)
 An operator is an elementary perceptual, motor, or cognitive act
 User achieves goals by using methods (e.g., move mouse to click appropriate button
on the UI to start the upload procedure)
 User bases his/her decision as to what method to use in order to accomplish a goal on
selection rules (e.g., mouse vs. key)
 Works well for describing steps in the decision making process of users while they
carry out interaction tasks
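GOMS analyses are often made concrete with the keystroke-level model (KLM), which assigns standard times to elementary operators and predicts a method's execution time as their sum. The operator times below are approximate values commonly cited in the KLM literature, and the two methods are made-up examples:

```python
# Keystroke-level sketch of a GOMS analysis: a method is a sequence of
# operators, and its predicted execution time is the sum of operator
# times. The times below are approximate values from the KLM literature.

OPERATOR_TIME = {
    "K": 0.2,   # keystroke (skilled typist)
    "P": 1.1,   # point with the mouse
    "B": 0.1,   # press or release a mouse button
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_time(method):
    """Sum the operator times for a method, e.g. ['M', 'P', 'B', 'B']."""
    return round(sum(OPERATOR_TIME[op] for op in method), 2)

# Two hypothetical methods for the same goal (deleting a file):
menu_method = ["M", "H", "P", "B", "B", "P", "B", "B"]  # mouse and menus
key_method  = ["M", "K", "K"]                           # keyboard shortcut
```

Comparing the two predictions shows why selection rules matter: an expert who knows the shortcut accomplishes the same goal several times faster.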
3. Widget-level theories (WLT)
This is an alternative approach to “hierarchical decomposition”. Hierarchical decomposition
reduces complexity by breaking down tasks into atomic actions, or complex UIs into
atomic components. WLTs instead deal with higher-level components that are self-contained
and reusable.
4. Context-of-use theories
Many theories are based on controlled lab experiments, isolated environments and only
consider isolated phenomena
 Problem: the context (physical and social environment) in which the user operates
may also play an important role in influencing usage patterns
 Context ≈ Users’ interactions with other people (e.g., colleagues) and resources,
unexpected interruptions, ...
 Theory: Knowledge is not always in the user’s mind but rather distributed in his/her
environment (e.g., manuals, Internet, etc.)
 The relevance of Context-of-Use theories increases with growing use of mobile or
context-aware devices
1.9. Lecture Summary

1.10. Lecture Review Activities
1. Suggest some ways in which the design of the copier buttons on page 3 could be
improved.
2. Consider the factors involved in the design of a new library catalogue system using HCI
principles.
3. Use the internet to find information on the work of Donald Norman and Jakob Nielsen.

1.11. Suggestion for further reading


1. Pressman R.S., Software Engineering: A Practitioner's Approach, McGraw Hill
2. Shneiderman B., Designing the User Interface: Strategies for Effective Human-Computer
Interaction, Addison Wesley
3. Beyer H. and Holtzblatt K., Contextual Design: Defining Customer-Centered Systems,
Morgan Kaufmann
4. Redmond-Pyle D. and Moore A., Graphical User Interface Design and Evaluation, Prentice Hall

LECTURE 2: HCI PARADIGMS

Lecture Objectives
The aim of this lecture is to introduce you to Human Computer Interaction paradigms, so that
after studying it you will be able to:
i. Describe the Human computer Paradigms
ii. Explain the principal historical advances in interactive designs.

2.1 Introduction
The great advances in computer technology have increased the power of machines and
enhanced the bandwidth of communication between human and computer. The impact of the
technology alone, however, is not sufficient to enhance its usability. As our machines have
become more powerful, the key to increased usability has come from the creative and
considered application of the technology to accommodate and augment the power of the
human. Paradigms for interaction have for the most part been dependent upon technological
advances and their creative application to enhance interaction.

Recent trend has been to promote paradigms that move beyond the desktop. With the advent
of wireless, mobile, and handheld technologies, developers started designing applications that
could be used in a diversity of ways besides running only on an individual’s desktop
machine.

2.2. HCI Paradigms


2.2.1. Time sharing
In the 1940s and 1950s, the significant advances in computing consisted of new hardware
technologies. Mechanical relays were replaced by vacuum electron tubes. Tubes were
replaced by transistors, and transistors by integrated chips, all of which meant that the amount
of sheer computing power was increasing by orders of magnitude. By the 1960s it was
becoming apparent that the explosion of growth in computing power would be wasted if there
were not an equivalent explosion of ideas about how to channel that power.

A researcher, J. C. R. Licklider, who became the director of the Information Processing
Techniques Office of the US Department of Defense’s Advanced Research Projects Agency
(ARPA), made an effort to finance various research centers across the United States in order
to encourage new ideas about how best to apply the burgeoning computing technology. One
of the major contributions to come out of this new emphasis in research was the concept of
time sharing, in which a single computer could support multiple users. Previously, the
human (or more accurately, the programmer) was restricted to batch sessions, in which
complete jobs were submitted on punched cards or paper tape to an operator who would then
run them individually on the computer. Time-sharing systems of the 1960s made
programming a truly interactive venture and brought about a subculture of programmers
known as ‘hackers’ – single-minded masters of detail who took pleasure in understanding
complexity. Though the purpose of the first interactive time-sharing systems was simply to
augment the programming capabilities of the early hackers, it marked a significant stage in
computer applications for human use.

2.2.2. Video display units


As early as the mid-1950s researchers were experimenting with the possibility of presenting
and manipulating information from a computer in the form of images on a video display unit
(VDU). These display screens could provide a more suitable medium than a paper printout
for presenting vast quantities of strategic information for rapid assimilation. It was not until
1962, however, when a young graduate student at the Massachusetts Institute of Technology
(MIT), Ivan Sutherland, astonished the established computer science community with the
Sketchpad program, that the capabilities of visual images were realized.

Sketchpad allowed a computer operator to use the computer to create, very rapidly,
sophisticated visual models on a display screen that resembled a television set. The visual
patterns could be stored in the computer’s memory like any other data, and could be
manipulated by the computer’s processor. Sketchpad demonstrated two important ideas. First,
computers could be used for more than just data processing. They could extend the user’s
ability to abstract away from some levels of detail, visualizing and manipulating different
representations of the same information. Those abstractions did not have to be limited to
representations in terms of bit sequences deep within the recesses of computer memory.
Rather, the abstractions could be made truly visual. To enhance human interaction, the
information within the computer was made more amenable to human consumption. The
computer was made to speak a more human language, instead of the human being forced to
speak more like a computer. Secondly, Sutherland's efforts demonstrated how important the
contribution of one creative mind (coupled with a dogged determination to see the idea
through) could be to the entire history of computing.

2.2.3. Programming toolkits


Douglas Engelbart's ambition since the early 1950s was to use computer technology as a
means of complementing human problem-solving activity. Engelbart’s idea as a graduate
student at the University of California at Berkeley was to use the computer to teach humans.
This dream of naïve human users actually learning from a computer was a stark contrast to
the prevailing attitude of his contemporaries that computers were purposely complex
technology that only the intellectually privileged were capable of manipulating.

According to Engelbart the secret to producing computing equipment that aided human
problem solving ability was in providing the right toolkit. The idea of building components of
a computer system that will allow you to rebuild a more complex system is called
bootstrapping and has been used to a great extent in all of computing. The power of
programming toolkits is that small, well-understood components can be composed in fixed
ways in order to create larger tools. Once these larger tools become understood, they can
continue to be composed with other tools, and the process continues.

2.2.5. Personal Computing


Programming toolkits provide a means for those with substantial computing skills to increase
their productivity greatly. But Engelbart’s vision was not exclusive to the computer literate.
The decade of the 1970s saw the emergence of computing power aimed at the masses,
computer literate or not. One of the first demonstrations that the powerful tools of the hacker
could be made accessible to the computer novice was a graphics programming language for
children called LOGO. The inventor, Seymour Papert, wanted to develop a language that was
easy for children to use.

2.2.6. Window Systems and the WIMP Interface


With the advent and immense commercial success of personal computing, the emphasis for
increasing the usability of computing technology focused on addressing the single user who
engaged in a dialog with the computer in order to complete some work. Humans are able to
think about more than one thing at a time, and in accomplishing some piece of work, they
frequently interrupt their current train of thought to pursue some other related piece of work.

A personal computer system which forces the user to progress in order through all of the
tasks needed to achieve some objective, from beginning to end without any diversions, does
not correspond to that standard working pattern. If the personal computer is to be an effective
dialog partner, it must be as flexible in its ability to change the topic as the human is.

One presentation mechanism for achieving this dialog partitioning is to separate physically
the presentation of the different logical threads of user–computer conversation on the display
device. The window is the common mechanism associated with these physically and logically
separate display spaces. Interaction based on windows, icons, menus and pointers – the
WIMP interface – is now commonplace.

2.2.7. WIMP interfaces


WIMP (Windows, Icons, Menus and Pointing device) interfaces are often used in desktop
computing applications and are easy to learn and implement. WIMP interfaces have several
standard elements that include visual controls designed based on metaphors to real world
systems. Icons in such interfaces represent actions and feedback mechanisms. Menus and
controls include dropdown menus, popup menus, radio buttons, checkboxes, selection lists
and dialog boxes. WIMP interfaces are used to achieve usability principles and a certain set
of usability measures can be used to evaluate such systems. The usability principles followed
by WIMP interfaces include efficiency, ease of learning and recoverability from errors,
robustness, and customizability. WIMP interfaces can also be considered portable and
flexible.
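The standard WIMP elements described above can be modeled as a simple widget structure. The following is a minimal sketch; the class and menu names are illustrative, not taken from any real GUI toolkit:

```python
# Minimal sketch of WIMP-style widgets as a data structure.
# All names here are illustrative, not from a real GUI toolkit.

class MenuItem:
    def __init__(self, label, action):
        self.label = label      # text shown to the user
        self.action = action    # callable run when the item is selected

class DropdownMenu:
    def __init__(self, title, items):
        self.title = title
        self.items = items

    def select(self, label):
        """Pointer-driven selection: find the item and run its action."""
        for item in self.items:
            if item.label == label:
                return item.action()
        raise KeyError(f"No menu item labelled {label!r}")

# A 'File' menu built on the metaphor of real-world documents
file_menu = DropdownMenu("File", [
    MenuItem("Open", lambda: "opening document"),
    MenuItem("Save", lambda: "saving document"),
])

print(file_menu.select("Save"))   # -> saving document
```

The point of the sketch is that every visible control maps to a small, well-defined object, which is what makes WIMP interfaces comparatively easy to learn and to implement.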

2.2.8. 3D Interfaces
3D interfaces are also used as part of WIMP interfaces, including buttons, scroll bars, and
icons. These elements are made to appear as if they have three dimensions. This type of
interface addresses the same usability principles as the WIMP interface.

2.2.9. The Metaphor


Metaphor is used quite successfully to teach new concepts in terms of ones which are already
understood. It is no surprise that this general teaching mechanism has been successful in
introducing computer novices to relatively foreign interaction techniques. Metaphor is used
to describe the functionality of many interaction widgets, such as windows, menus, buttons
and palettes. Tremendous commercial successes in computing have arisen directly from a
judicious choice of metaphor. The Xerox Alto and Star were the first workstations based on
the metaphor of the office desktop. The majority of the management tasks on a standard
workstation have to do with file manipulation.

2.2.10. Direct Manipulation


In the early 1980s as the price of fast and high-quality graphics hardware was steadily
decreasing, designers were beginning to see that their products were gaining popularity as
their visual content increased. As long as the user communicated with the system through a
command-line prompt, computing was going to stay within the minority population of hackers
who reveled in the challenge of complexity. In a standard command line interface, the only
way to get any feedback on the results of previous interaction is to know that you have to ask for it and to know how to
ask for it. Rapid visual and audio feedback on a high-resolution display screen or through a
high-quality sound system makes it possible to provide evaluative information for every
executed user action.

Rapid feedback is just one feature of the interaction technique known as direct manipulation.
Other features include:
a) Visibility of the objects of interest
b) Incremental action at the interface with rapid feedback on all actions
c) Reversibility of all actions, so that users are encouraged to explore without severe
penalties
d) Syntactic correctness of all actions, so that every user action is a legal operation
e) Replacement of complex command language with actions to manipulate directly the
visible objects.
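Feature (c), reversibility, is commonly implemented with an undo stack: every incremental action records how to reverse itself. A minimal sketch, with hypothetical names:

```python
# Sketch of reversible direct-manipulation actions via an undo stack.
# Each action records a closure that knows how to undo it.

class Editor:
    def __init__(self):
        self.text = ""
        self._undo_stack = []   # history of undo closures

    def insert(self, s):
        """Incremental action with an immediate, visible effect."""
        self.text += s
        self._undo_stack.append(lambda: self._truncate(len(s)))

    def _truncate(self, n):
        self.text = self.text[:-n]

    def undo(self):
        """Reversibility: users can explore without severe penalties."""
        if self._undo_stack:
            self._undo_stack.pop()()

ed = Editor()
ed.insert("hello ")
ed.insert("world")
ed.undo()            # removes "world"
print(ed.text)       # -> "hello "
```

Because every action is reversible, the user is free to experiment: a mistaken manipulation of the visible object is never more than one undo away from being repaired.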

2.2.11. Language versus action


Whereas it is true that direct manipulation interfaces make some tasks easier to perform
correctly, it is equally true that some tasks are more difficult, if not impossible. Contrary to
popular wisdom, it is not generally true that actions speak louder than words. One image
projected for direct manipulation was of the interface as a replacement for the underlying
system as the world of interest to the user. Actions performed at the interface replace any
need to understand their meaning at any deeper, system level. Another image is of the
interface as the interlocutor or mediator between the user and the system. The user gives the
interface instructions and it is then the responsibility of the interface to see that those
instructions are carried out. The user-system communication is by means of indirect language
instead of direct actions.

2.2.12. Hypertext
In 1945, Vannevar Bush, then the highest-ranking scientific administrator in the US war
effort, published an article entitled ‘As We May Think’ in The Atlantic Monthly. Bush was in
charge of over 6000 scientists who had greatly pushed back the frontiers of scientific
knowledge during the Second World War. He recognized that a major drawback of these
prolific research efforts was that it was becoming increasingly difficult to keep in touch with
the growing body of scientific knowledge in the literature. In his opinion, the greatest
advantages of this scientific revolution were to be gained by those individuals who were able
to keep abreast of an ever-increasing flow of information. To that end, he described an
innovative and futuristic information storage and retrieval apparatus – the memex – which
was constructed with technology wholly existing in 1945 and aimed at increasing the human
capacity to store and retrieve connected pieces of knowledge by mimicking our ability to
create random associative links.

An unsuccessful attempt to create a machine language equivalent of the memex on early
1960s computer hardware led Ted Nelson on a lifelong quest to produce Xanadu, a potentially
revolutionary worldwide publishing and information retrieval system based on the idea of
interconnected, non-linear text and other media forms. A traditional paper is read from
beginning to end, in a linear fashion. But within that text, there are often ideas or footnotes
that urge the reader to digress into a richer topic. The linear format for information does not
provide much support for this random and associated browsing task. What Bush’s memex
suggested was to preserve the non-linear browsing structure in the actual documentation.
Nelson coined the phrase hypertext in the mid 1960s to reflect this non-linear text structure.
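The non-linear structure Nelson described can be pictured as a graph of nodes (documents) and named links between them. A minimal sketch; the node titles and attribute names are hypothetical:

```python
# Sketch of hypertext as a graph: each node holds text plus
# named links to other nodes, so reading need not be linear.

class Node:
    def __init__(self, title, body):
        self.title = title
        self.body = body
        self.links = {}          # anchor text -> target Node

    def link(self, anchor, target):
        self.links[anchor] = target

intro = Node("Introduction", "Hypertext allows associative browsing...")
memex = Node("Memex", "Bush's 1945 proposal for associative links...")
xanadu = Node("Xanadu", "Nelson's worldwide publishing system...")

intro.link("memex", memex)       # the reader may digress here...
intro.link("Xanadu", xanadu)     # ...or here, instead of reading on

# Following a link is just a lookup on the current node:
print(intro.links["memex"].title)    # -> Memex
```

A linear document would be a chain of nodes each linking only to the next; hypertext simply allows arbitrarily many outgoing links per node, mirroring the associative digressions a reader makes.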

2.2.13. Multi-modality
The majority of interactive systems still use the traditional keyboard and a pointing device,
such as a mouse, for input, and are restricted to a color display screen with some sound
capabilities for output. Each of these input and output devices can be considered as
communication channels for the system and they correspond to certain human
communication channels. A multi-modal interactive system is a system that relies on the use
of multiple human communication channels. Each different channel for the user is referred to
as a modality of interaction. In this sense, all interactive systems can be considered multi-
modal, for humans have always used their visual and haptic channels in manipulating a
computer. In fact, we often use our audio channel to hear whether the computer is actually
running properly.

However, genuine multi-modal systems rely to an extent on simultaneous use of multiple


communication channels for both input and output. Humans quite naturally process
information by simultaneous use of different channels.

2.2.14. Computer-Supported Cooperative Work


Another development in computing in the 1960s was the establishment of the first computer
networks, which allowed communication between separate machines. Personal computing
was all about providing individuals with enough computing power so that they were liberated
from dumb terminals, which operated on time-sharing systems. It is interesting to note that as
computer networks become widespread, individuals retained their powerful workstations but
now wanted to reconnect themselves to the rest of the workstations in their immediate
working environment, and even throughout the world. One result of this reconnection was the
emergence of collaboration between individuals via the computer called computer-supported
cooperative work, or CSCW.

The main distinction between CSCW systems and interactive systems designed for a single
user is that designers can no longer neglect the society within which any single user operates.
CSCW systems are built to allow interaction between humans via the computer. The prime
example of a CSCW system is electronic mail – email – yet another metaphor by which
individuals at physically separate locations can communicate via electronic messages that
work in a similar way to conventional postal systems.

2.2.15. The World Wide Web


Probably the most significant recent development in interactive computing is the World Wide
Web, often referred to as just the web, or WWW. The web is built on top of the Internet, and
offers an easy to use, predominantly graphical interface to information, hiding the underlying
complexities of transmission protocols, addresses and remote access to data.

The Internet is simply a collection of computers, each linked by any sort of data connections,
whether it be a slow telephone line and modem or a high-bandwidth optical connection. The
computers of the Internet all communicate using common data transmission protocols and
addressing systems. This makes it possible for anyone to read anything from anywhere, in
theory, if it conforms to the protocol. The web builds on this with its own layer of network
protocol, a standard markup notation for laying out pages of information and a global naming
scheme. Web pages can contain text, color images, movies, sound and, most important,
hypertext links to other web pages. Hypermedia documents can therefore be published by
anyone who has access to a computer connected to the Internet.

2.2.16. Ubiquitous Computing


In the late 1980s, a group of researchers at Xerox PARC, led by Mark Weiser, initiated a
research program with the goal of moving human-computer interaction away from the
desktop and out into our everyday lives. Weiser observed:

'The most profound technologies are those that disappear. They weave themselves into the
fabric of everyday life until they are indistinguishable from it.' These words have inspired a
new generation of researchers in the area of ubiquitous computing. Another popular term for
this emerging paradigm is pervasive computing, first coined by IBM. The intention is to
create a computing infrastructure that permeates our physical environment so much that we
do not notice the computer any longer. A good analogy for the vision of ubiquitous
computing is the electric motor. When the electric motor was first introduced, it was large,
loud and very noticeable. Today, the average household contains so many electric motors that
we hardly ever notice them anymore. Their utility led to ubiquity and, hence, invisibility.

2.2.17. Sensor-based and context-aware interaction


The yard-scale, foot-scale and inch-scale computers are all still clearly embodied devices
with which we interact, whether or not we consider them ‘computers’. There are an
increasing number of proposed and existing technologies that embed computation even
deeper, but unobtrusively, into day-to-day life. Weiser's dream was that we would no longer
notice the computer at all, and the term ubiquitous computing encompasses a wide range of
technologies, from mobile devices to more pervasive environments.

2.3. Chapter review questions
1. Explain the following HCI Usability Paradigms
i. Ubiquitous Computing
ii. Direct Manipulation
iii. Time sharing
iv. WIMP
2. What new paradigms do you think may be significant in the future of interactive
computing?

2.4. Lecture Summary


In this lecture we have discussed paradigms that promote the usability of interactive systems.

2.5. Suggestion for further reading


1. Pressman R.S., Software Engineering: A Practitioner's Approach, McGraw Hill
2. Shneiderman B., Designing the User Interface: Strategies for Effective Human-Computer
Interaction, Addison Wesley
3. Beyer H. and Holtzblatt K., Contextual Design: Defining Customer-Centered Systems,
Morgan Kaufmann
4. Redmond-Pyle D. and Moore A., Graphical User Interface Design and Evaluation, Prentice Hall

LECTURE 3: HCI HUMAN FACTORS - COGNITION
Lecture Objectives
The aim of this lecture is to introduce you to the role of human cognition in Human
Computer Interaction, so that after studying it you will be able to:
i. Explain cognitive psychology with respect to HCI designs
ii. Understand the importance of Cognition in HCI design
iii. Explain types of Human Cognition
iv. Understand different cognitive frameworks used in HCI design.
v. Describe Cognitive Psychology theories with respect to HCI

3.1 Introduction
Human/computer interaction is characterized as a dialogue or interchange between the human
and the computer because the output of one serves as the input for the other in an exchange of
actions and intentions. Early characterizations of the human-computer interface were more
computer-centric. Today, the interface is viewed primarily from the other direction. Starting
from human tasks and intentions, we follow the path of actions until we come to the machine.
Human-computer interaction (HCI) involves the activities of humans using computers.
Interaction refers to a dialogue generated by the command and data input to the computer and
the display output of the computer and the sensory/perceptual input to the human and motor
response output of the human. Interaction takes place at the interface, which is made up of a
set of hardware devices and software tools from the computer side and a system of sensory,
motor, and cognitive processes from the human side.

Displays are human-made artifacts designed to support the perception of relevant system
variables and to facilitate further processing of that information. Before a display is designed,
the task that the display is intended to support must be defined (e.g. navigating, controlling,
decision making, learning, entertaining, etc.). A user or operator must be able to process
whatever information that a system generates and displays; therefore, the information must be
displayed according to principles in a manner that will support perception, situation
awareness, and understanding.

Unless human characteristics are considered when designing or implementing technologies,


the consequences can be errors and a lack of human productivity. Historically, many
technologies have not been designed with users in mind. Many technologies do not fit users'
tasks. Technology systems need to be built to effectively support human tasks. Failing to
design and develop information technologies with user characteristics in mind can lead to a
lack of system functionality, increase in user dissatisfaction, and increase in ineffective work
practices.

When designing HCI, useful principles can be drawn from the sub-domains of sensation,
perception, attention, memory, and decision-making to guide us on issues surrounding screen
layout, information grouping, menu length, depth and breadth.

3.2 Cognitive Psychology


Psychology is concerned primarily with understanding human behavior and the mental
processes that underlie it. The science of psychology has been very influential in Human
Computer Interaction. Psychology's contribution to interface design is manifold, involving
nearly all aspects of the discipline from sensation and perception of the computer screen and
auditory output, learning and memory of commands and procedures, individual differences in
experience and cognitive abilities using the computer and attitudes toward the computer, and
developmental changes in appropriateness and usability of the computer. Research in HCI
has primarily focused on the cognitive processes involved on the part of the computer "user."

Cognitive psychologists have attempted to apply relevant psychological principles to HCI by using
a variety of methods, including development of guidelines, the use of models to predict human
performance and the use of empirical methods for testing computer systems. A key aim of HCI
is to understand how humans interact with computers, and to represent how knowledge is
passed between the two.

3.3. Cognition and Cognitive Frameworks


Cognition refers to the processes by which we (users) become acquainted with things or, in
other words, how we gain knowledge. These include understanding, remembering, reasoning,
attending, being aware, acquiring skills and creating new ideas. Cognition is studied in
various disciplines such as psychology, philosophy, linguistics and computer science.

In order to make human-computer interactions that are easy to learn, easy to remember,
and easy to apply to new problems, computer scientists must understand something about
human learning, memory, and problem solving. While designing user interface of these
systems, the cognitive processes whereby users interact with computers must be taken into
account because usually users' attributes do not match computer attributes. Also we should
take into account that computer systems can have non-cognitive effects on the user, for
example the user’s response to virtual worlds.

The processes which contribute to cognition include:


a) understanding
b) remembering
c) reasoning
d) perception and recognition
e) memory
f) learning
g) attention
h) being aware
i) acquiring skills
j) creating new ideas
k) reading, speaking and listening
l) problem solving and decision making.

3.3.1. Cognition Modes


There are two general modes of Cognition
a) Experiential cognition: The state of mind in which we perceive, act, and react to events
around us effectively and effortlessly. It requires reaching a certain level of expertise and
engagement. Examples include driving a car, reading a book, having a conversation and
playing a video game.
b) Reflective cognition: Reflective cognition involves thinking, comparing, and decision-
making. This kind of cognition is what leads to new ideas and creativity. Examples
include designing, learning, and writing a book. Norman points out that both modes are
essential for everyday life but that each requires different kinds of technological support.

3.4. Cognitive Frameworks
A number of conceptual frameworks and theories have been developed to explain and predict
user behavior based on theories of cognition. In this section, we outline three early internal
frameworks that focus primarily on mental processes together with three more recent external
ones that explain how humans interact and use technologies in the context in which they
occur. These are:
Internal
• Information processing
• Mental models
• Gulfs of execution and evaluation
External
• Distributed cognition
• External cognition
• Embodied interaction

3.4.1. Human Information Processing


Human-computer interaction can be viewed as two powerful information processors (human
and computer) attempting to communicate with each other via a narrow-bandwidth, highly
constrained interface.

One of the many approaches to conceptualizing how the mind works has been to use
metaphors and analogies. A number of comparisons have been made, including
conceptualizing the mind as a reservoir, a telephone network, and a digital computer. One
prevalent metaphor from cognitive psychology is the idea that the mind is an information
processor: everything that is sensed (sight, hearing, touch, smell, and taste) is considered to
be information, which the mind processes. Information is thought to enter and exit the mind
through a series of ordered processing stages.

HCI is fundamentally an information-processing task. The human information processing


approach is based on the idea that human performance, from displayed information to a
response, is a function of several processing stages. The nature of these stages, how they are
arranged, and the factors that influence how quickly and accurately a particular stage
operates, can be discovered through appropriate research methods.

Human information processing analyses are used in HCI in several ways.
• Basic facts and theories about information-processing capabilities are taken into
consideration when designing interfaces and tasks.
• Information-processing methods are used in HCI to conduct empirical studies
evaluating the cognitive requirements of various tasks in which a human uses a
computer.
• Computational models developed in HCI are intended to characterize the information
processing of a user interacting with a computer, and to predict, or model, human
performance with alternative interfaces.

The idea of human information processing is that information enters and exits the human
mind through a series of ordered stages, as shown below:

Figure 2. Human Information Processing stages


a) Stage1 encodes information from the environment into some form of internal
representation.
b) In stage 2, the internal representation of the stimulus is compared with memorized
representations that are stored in the brain.
c) Stage 3 is concerned with deciding on a response to the encoded stimulus. When an
appropriate match is made, the process passes on to
d) Stage 4, which deals with the organization of the response and the necessary action.

The model assumes that information flow is unidirectional and sequential and that each of the
stages takes a certain amount of time, generally thought to depend on the complexity of the
operation performed.
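The four unidirectional stages described above can be sketched as a simple pipeline, each stage a function whose output feeds the next. The stimulus, the memorized representation, and the chosen response below are all illustrative placeholders:

```python
# Sketch of the four-stage information processing model as a pipeline.
# Each stage is a placeholder function; real models attach a time to each.

def encode(stimulus):            # Stage 1: environment -> internal form
    return {"percept": stimulus}

def compare(percept):            # Stage 2: match against stored representations
    known = {"red light": "stop signal"}
    percept["match"] = known.get(percept["percept"], "unknown")
    return percept

def decide(percept):             # Stage 3: choose a response to the stimulus
    percept["response"] = "brake" if percept["match"] == "stop signal" else "ignore"
    return percept

def execute(percept):            # Stage 4: organize and carry out the action
    return percept["response"]

# Information flows one way through the ordered stages:
action = execute(decide(compare(encode("red light"))))
print(action)                    # -> brake
```

The strict left-to-right composition mirrors the model's assumption that processing is unidirectional and sequential, with the total response time being the sum of the stage times.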

3.4.1.1. The Extended Human Information Processing model


The basic information processing model shown above does not account for the importance of:
 attention – processing only takes place when the human is focused on the task
 memory – information may be stored in memory, and information already in memory
may be used in processing the input.

In the extended model, cognition is viewed in terms of:
1. how information is perceived (perceptual processors)
2. how that information is attended to, and
3. how that information is processed and stored in memory.

The figure below illustrates the extended human information processing model. It shows that
attention and memory interact with all the stages of processing.

Figure 3. Extended Human Processing Model

An important question when researching memory is how it is structured. Memory can be
broadly categorized into three parts, with links between them, moving the information
which comes in through the senses. These parts are:
• sensory information store
• short-term memory (more recently known as working memory)
• long-term memory

3.4.1.2. The Model Human Processor


An important concept from cognitive psychology is the model human processor (MHP)
(Card, Moran, and Newell, 1983). This describes the cognitive process that people go through
between perception and action. It is important to the study of HCI because cognitive
processing can have a significant effect on performance, including task completion time,
number of errors, and ease of use. This model was based on the human information
processing model.

Based on the information-processing model, cognition is conceptualized as a series of
processing stages where perceptual, cognitive, motor processors are organized in relation to
one another. The model predicts which cognitive processes are involved when a user interacts
with a computer, enabling calculations to be made of how long a user will take to carry out
various tasks. This can be very useful when comparing different interfaces. For example, it
has been used to compare how well different word processors support a range of editing
tasks. The information processing approach is based on modeling mental activities that
happen exclusively inside the head. However, most cognitive activities involve people
interacting with external kinds of representations, like books, documents, and computers, not
to mention one another. For example, when we go home from wherever we have been we do
not need to remember the details of the route because we rely on cues in the environment
(e.g., we know to turn left at the red house, right when the road comes to a T-junction, and so
on.). Similarly, when we are at home we do not have to remember where everything is
because information is “out there.” We decide what to eat and drink by scanning the items in
the fridge, find out whether any messages have been left by glancing at the answering
machine to see if there is a flashing light, and so on.

The MHP model was used as the basis for the GOMS (Goals, Operators, Methods, and
Selection Rules) family of techniques proposed for quantitatively modeling and describing
human task performance.
a) Goals: These are the user’s goals, describing what the user wants to achieve. Further, in
GOMS the goals are taken to represent a ‘memory point’ for the user, from which he can
evaluate what should be done and to which he may return should any errors occur.
b) Operators: These are the lowest level of analysis. They are the basic actions that the user
must perform in order to use the system. They may affect the system (e.g., press the ‘X’
key) or only the user’s mental state (e.g., read the dialogue box). There is still a degree of
flexibility about the granularity of operators; we may take the command level “issue the
select command” or be more primitive; “move mouse to menu bar, press center mouse
button….”
c) Methods: As we have already noted, there are typically several ways in which a goal can
be split into subgoals.
d) Selection: Selection rules are the means of choosing between competing methods.
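GOMS-style analyses are often used quantitatively: each primitive operator is assigned an estimated time, and a method's predicted duration is the sum of its operators. A sketch using commonly cited keystroke-level operator averages; treat the specific times and the example methods as illustrative rather than definitive:

```python
# Sketch of a keystroke-level GOMS estimate: each operator gets a
# time, and a method's predicted duration is the sum of its operators.
# Times (seconds) are commonly cited averages, used here illustratively.

OPERATOR_TIMES = {
    "K": 0.2,    # press a key or button
    "P": 1.1,    # point with the mouse at a target
    "H": 0.4,    # home hands between keyboard and mouse
    "M": 1.35,   # mental preparation
}

def predict(method):
    """Sum operator times for a sequence like 'H P K'."""
    return sum(OPERATOR_TIMES[op] for op in method.split())

# Method 1: select 'Save' from a menu – home to mouse, prepare,
# point at the menu, click, point at the item, click.
menu_save = "H M P K P K"
print(round(predict(menu_save), 2))      # -> 4.35

# Method 2 (competing): keyboard shortcut – prepare, press two keys.
shortcut_save = "M K K"
print(round(predict(shortcut_save), 2))  # -> 1.75
```

Comparing the two predicted times is exactly the kind of analysis used to compare how well different interfaces support the same editing task.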

One of the problems of abstracting a quantitative model from a qualitative description of user
performance is ensuring that the two are connected. In particular, it has been noted that the
form and content of the GOMS family of models are relatively unrelated to the form and
content of the model human processor, and that GOMS also oversimplifies human behavior.

The problems with the Model Human Processor approach include:
 It models performance as a series of processing steps – is that always appropriate?
 It is too focused on one person performing one task.
 It is an overly simplistic view of human behavior that ignores the environment and
other people.

3.4.2. Mental Models


Mental models are representations in the mind of real or imaginary situations. Conceptually,
the mind constructs a small scale model of reality and uses it to reason, to underlie
explanations and to anticipate events. These models can be constructed from perception,
imagination, or interpretation of discourse. A mental model represents explicitly what is true,
but not what is false. The greater the number of mental models a task suggests, and the greater
the complexity of each model, the poorer the performance. These models are more than just
pictures or images, sometimes the model itself cannot be visualized or the image of the model
depends on underlying models. Models can also represent abstract notions like negation or
ownership which are impossible to visualize. Mental Models will be discussed later in lecture
6.
3.4.3. Gulfs of Execution and Evaluation
Closely related to mental models is the idea of gulfs between the interface of a system and its
users. The figure below shows the cycle of interaction between a user and a system:

Page 45
Figure 4. Cycle of interaction between a user and a system
The Gulf of Evaluation is the amount of effort a user must exert to interpret the physical state
of the system and how well their expectations and intentions have been met.
1. Users can bridge this gulf by changing their interpretation of the system image, or
changing their mental model of the system.
2. Designers can bridge this gulf by changing the system image.

The Gulf of Execution is the difference between the user’s goals and what the system allows
them to do – it describes how directly their actions can be accomplished.
1. Users can bridge this gulf by changing the way they think and carry out the task
toward the way the system requires it to be done
2. Designers can bridge this gulf by designing the input characteristics to match the
users’ psychological capabilities.

Design considerations:
Systems should be designed to help users form correct, productive mental models.
Common design methods include the following factors:
1. Affordance: Clues provided by an object’s properties about how the object can be used
and manipulated.
2. Simplicity: Frequently accessed functions should be easily accessible. The interface
should be simple and transparent enough for the user to concentrate on the actual task at
hand.
3. Familiarity: As mental models are built upon prior knowledge, it is important to use this
fact in designing a system. Relying on the user’s familiarity with an old, frequently

Page 46
used system gains the user’s trust and helps them accomplish a large number of tasks. Metaphors
in user interface design are an example of applying the familiarity factor within the
system.
4. Availability: Since recognition is easier than recall, an efficient interface should
always provide cues and visual elements to relieve the user from the memory load
necessary to recall the functionality of the system.
5. Flexibility: The user should be able to use any object, in any sequence, at any time.
6. Feedback: Provide complete and continuous feedback from the system throughout the course of
the user’s actions. Fast feedback helps the user assess the correctness of their sequence of
actions.

3.4.4. Distributed Cognition


Distributed cognition is a framework proposed by Hutchins (1991). Its basis is that to explain
human behavior you have to look beyond the individual human and the individual task.
Distributed cognition is an emerging theoretical framework whose goal is to provide an
explanation that goes beyond the individual, to conceptualizing cognitive activities as
embodied and situated within the work context in which they occur. Primarily, this involves
describing cognition as it is distributed across individuals and the setting in which it takes
place.

A main goal of the distributed cognition approach is to analyze how the different components
of the functional system are coordinated. This involves analyzing how information is
propagated through the functional system in terms of technological, cognitive, social and
organizational aspects. To achieve this, the analysis focuses on the way information moves
and transforms between different representational states of the objects in the functional
system and the consequences of these for subsequent actions.

One property of distributed cognition that is often discovered through analysis is situation
awareness (Norman, 1993), the silent and inter-subjective communication that is
shared among a group. When a team is working closely together, the members will monitor
each other to keep abreast of what each member is doing. This monitoring is not explicit;
rather, the team members monitor each other through glancing and inadvertent overhearing.

The two main concerns of distributed cognition are:


Page 47
• To map out how the different representational states are coordinated across time,
location and objects
• To analyze and explain breakdowns

More recent research in cognitive frameworks has focused on:


a) Knowledge Representation Models: how knowledge is represented in the mind.
b) Mental Models: how mental models (the representations people construct in
their minds of themselves, others, objects and the environment to help them know what to
do in current and future situations) develop and are used in HCI.
c) User Interaction Learning Models: how users learn to interact with, and become
experienced in using, computer systems.
With respect to applying this knowledge to HCI design, there has been considerable
research in developing:
d) Conceptual Models: the various ways in which systems are understood by different
people, used to help designers develop appropriate interfaces.
e) Interface Metaphors: GUIs that consist of electronic counterparts to physical objects in
the real world, designed to match the knowledge requirements of users.

3.4.5. External Cognition


External cognition is concerned with explaining the cognitive processes involved when we
interact with different external representations. A main goal is to explicate the cognitive benefits
of using different representations for different cognitive activities and the processes involved. The
main processes include:
a) externalizing to reduce memory load: A number of strategies have been developed for
transforming knowledge into external representations to reduce memory load. One such
strategy is externalizing things we find difficult to remember, such as birthdays,
appointments and addresses. Externalizing, therefore, can help reduce people’s memory
burden by:
• reminding them to do something (e.g., to get something for their mother’s birthday)
• reminding them of what to do (e.g., to buy a card)
• reminding them of when to do something (send it by a certain date)
b) computational offloading: Computational offloading occurs when we use a tool or device in
conjunction with an external representation to help us carry out a computation

Page 48
c) Annotating and cognitive tracing: Another way in which we externalize our cognition is
by modifying representations to reflect changes that are taking place which we wish to
mark. For example, people often cross things off in a to-do list to show that they have been
completed. They may also reorder objects in the environment, say by creating different
piles as the nature of the work to be done changes. These two kinds of modification are
called annotating and cognitive tracing:
 Annotating involves modifying external representations, such as crossing off or
underlining items.
 Cognitive tracing involves externally manipulating items into different orders or structures.

3.5. Recent Development in Cognitive Psychology


With the development of computing, the activity of the brain has been characterized as a series of
programmed steps, using the computer as a metaphor. Concepts such as buffers, memory stores
and storage systems, together with the types of process that act upon them (such as parallel
versus serial, top-down versus bottom-up), provided psychologists with a means of developing
more advanced models of information processing, which was appealing because such models
could be tested. However, since the 1980s there has been a move away from the information-
processing framework within cognitive psychology.

This has occurred in parallel with the reduced importance of the model human processor
within HCI and the development of other theoretical approaches. These cognitive theories are
classed as either computational or connectionist.
 The computational approach
This approach uses the computer as a metaphor for how the brain works, similar to the
information processing models described above. Computational approaches continue to
adopt the computer metaphor as a theoretical framework, but they no longer adhere to the
information-processing framework. Instead, the emphasis is on modeling human
performance in terms of what is involved when information is processed rather than when
and how much. Primarily, computational models conceptualize the cognitive system in
terms of the goals, planning and action that are involved in task performance. These
aspects include modeling: how information is organized and classified, how relevant
stored information is retrieved, what decisions are made and how this information is

Page 49
reassembled. Thus, tasks are analyzed not in terms of the amount of information processed
per se at the various stages, but in terms of how the system deals with new information.
 Connectionist Approaches
The connectionist approach rejects the computer metaphor in favour of the brain
metaphor, in which cognition is represented by neural networks. This approach, otherwise
known as neural networks or parallel distributed processing, simulates behavior
using programmed models. It differs from computational approaches in that
it rejects the computer metaphor as a theoretical framework. Instead, it adopts the
brain metaphor, in which cognition is represented at the level of neural networks
consisting of interconnected nodes. Hence all cognitive processes are viewed as
activations of the nodes in the network and the connections between them, rather than the
processing and manipulation of information.

3.6. Lecture Summary


In this lecture you have learnt about the cognitive frameworks used in HCI, including the
model human processor and GOMS, mental models, the gulfs of execution and evaluation,
distributed cognition, external cognition, and the recent computational and connectionist
approaches in cognitive psychology.

3.7. Review Questions


1. Compare and contrast the types of cognitive models discussed in this lecture
2. Discuss the cognition frameworks;
i. Distributed cognition
ii. Information processing.
iii. Mental models
iv. External cognition

3. What are the characteristics of mental models?

3.8. Suggestion for Further Reading


1. Lindsay, P. H. & Norman, D. A., Human Information Processing: An Introduction to
Psychology, 1977
2. Preece, J., Interaction Design
3. Preece, J., Human Computer Interaction

Page 50
4. Shneiderman B., Designing the User Interface: Strategies for Effective Human-Computer
Interaction, Addison Wesley
5. Beyer H. and Holtzblatt K., Contextual Design: Defining Customer-Centered Systems,
Morgan Kaufmann

Page 51
LECTURE 4: HCI HUMAN FACTORS – Perception and Representation
Lecture Objectives
The aim of this lecture is to introduce you to the role of human perception in HCI, so
that after studying it you will be able to:
i. Explain the role of human perception in HCI designs

4.1. Perception
Perception is the process, or the capability, of attaining awareness and understanding of the
environment surrounding us by interpreting, selecting and organizing different types of
information. All perception involves stimuli in the central nervous system. These stimuli
result from the stimulation of our sense organs, such as auditory stimuli when one hears a
sound, or taste when one eats something.

Perception is not purely passive: it can be shaped by our learning, experiences and education.
By training your brain and your cognitive abilities you can improve the different skills that
you use to perceive the world around you, become more aware and improve your learning
capacity.

An understanding of the way humans perceive visual information is important in the design
of visual displays in computer systems. Several competing theories have been proposed to
explain the way we see. These can be split into two classes: constructivist and ecological.
 Constructivist theorists believe that seeing is an active process in which our view is
constructed from both information in the environment and from previously stored
knowledge. Perception involves the intervention of representations and memories. What
we see is not a replica or copy; rather a model that is constructed by the visual system
through transforming, enhancing, distorting and discarding information.
 Ecological theorists believe that perception is a process of ‘picking up’ information from
the environment, with no construction or elaboration needed. Users intentionally engage
in activities that cause the necessary information to become apparent. We explore objects
in the environment.

Page 52
4.2. The Gestalt Laws of perceptual organization (Constructivist)
These laws imply that users’ ability to interpret the meaning of scenes and objects is based on
innate human laws of organization. Look at the following figure:

[Figure: the words ‘THE CAT’, with the H of THE and the A of CAT drawn as the same
ambiguous character]

What does it say? What do you notice about the middle letter of each word?

You probably read this as ‘the cat’. You interpreted the middle letter in each word according
to the context. Your prior knowledge of the world helped you make sense of ambiguous
information. This is an example of the constructivist process.

Gestalt psychology is a movement in experimental psychology that began just prior to World
War I. It made important contributions to the study of visual perception and problem solving.
The Gestalt approach emphasizes that we perceive objects as well-organized patterns rather
than separate component parts. The Gestalt psychologists were constructivists.

The focal point of Gestalt theory is the idea of "grouping," or how we tend to interpret a
visual field or problem in a certain way. According to the Gestalt laws of perceptual
organization the main factors that determine grouping are:
 proximity - how elements tend to be grouped together depending on their closeness. Dots
appear as groups rather than a random cluster of elements.

 similarity - the tendency for elements of the same shape or colour to be seen as belonging
together, i.e. how items that are similar in some way tend to be grouped together.
Similarity can be in shape, colour, etc.

Page 53
 closure - missing parts of a figure are filled in to complete it, so that, for example, an
incomplete circle still appears as a whole circle, i.e. items are grouped together if they tend
to complete a pattern.

 good continuation/continuity - the stimulus appears to be made of two lines of dots
traversing each other, rather than a random set of dots. We tend to assign objects to an
entity that is defined by smooth lines or curves.

Example in user interface design – proximity used to give structure in a form:
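The proximity effect in a form layout can also be sketched computationally: fields placed close together vertically are perceived as one group. The field names, y-coordinates and gap threshold below are illustrative assumptions, not measured values.

```python
# A minimal sketch of the proximity law applied to a form: fields whose
# vertical gap is small are perceived as belonging to one group.

def group_by_proximity(fields, gap_threshold=20):
    """Group (label, y) pairs whenever the vertical gap between
    consecutive fields is at or below the threshold."""
    ordered = sorted(fields, key=lambda f: f[1])
    groups, current = [], [ordered[0]]
    for field in ordered[1:]:
        if field[1] - current[-1][1] <= gap_threshold:
            current.append(field)   # close together: same perceived group
        else:
            groups.append(current)  # large gap: a new group starts
            current = [field]
    groups.append(current)
    return [[label for label, _ in g] for g in groups]

# Hypothetical layout: name fields close together, address fields
# separated from them by a larger vertical gap.
form = [("First name", 0), ("Surname", 15),
        ("Street", 60), ("City", 75), ("Postcode", 90)]
print(group_by_proximity(form))
# → [['First name', 'Surname'], ['Street', 'City', 'Postcode']]
```

The larger gap between “Surname” and “Street” is what makes the user see two groups (name details and address details) rather than one undifferentiated list.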

Page 54
• Symmetry - regions bounded by symmetrical borders tend to be perceived as coherent
figures

4.3. Affordances (Ecological)


The ecological approach argues that perception is a direct process, in which information is
simply detected rather than being constructed. A central concept of the ecological approach
is the idea of affordance (Norman, 1988). The possible behaviour of a system is the
behaviour afforded by the system. A door affords opening, for example. A vertical scrollbar
in a graphical user interface affords movement up or down. The affordance is a visual clue
that suggests that an action is possible. When the affordance of an object is perceptually
obvious, it is easy to know how to interact with it.

Norman's first and ongoing example is that of a door. With some doors it is difficult to see
whether they should be pushed or pulled; with other doors it is obvious. The same is true of
ring controls on a cooker: how do you turn on the right rear ring?

"When simple things need labels or instructions, the design is bad."

4.4. Affordances in Software


Look at these two possible designs for a vertical scroll bar.

Page 55
Both scrollbars afford movement up or down.
What visual clues in the design on the right make this affordance obvious?

4.4.1. Perceived Affordances in Software


The following list suggests the actions afforded by common user interface controls:
• Buttons are to push.
• Scroll bars are to scroll.
• Checkboxes are to check.
• List boxes are to select from. etc.

In some of these cases the affordances of GUI objects rely on prior knowledge or learning.
We have learned that something that looks like a button on the screen is for clicking. A text
box is for writing in, etc. For example, saying that a button on a screen affords clicking,
whereas the rest of the screen does not, is inaccurate. You could actually click anywhere on
the screen. We have learned that clicking on a button-shaped area of the screen results in an
action being taken.

4.5. Link affordance in web sites


It is important for web site users to know which objects on a page can be clicked to follow
links. This is known as link affordance. Common guidelines for improving link affordance
include underlining text links, using a consistent link colour throughout the site, and making
sure that elements which are not links do not look like links.

4.6. Influence of Theories of perception in HCI


The constructivist and ecological theorists fundamentally disagree on the nature of
perception. However, interface and web designers should recognize that both theories can be
useful in the design of interfaces:
a) the Gestalt laws can help in laying out interface components to make use of the
context and prior knowledge of the user

Page 56
b) paying careful attention to the affordances of objects ensures that the information
required to use them can easily be detected by the user.

4.7. Lecture Summary

In this lecture you have learnt about the constructivist and ecological theories of perception,
the Gestalt laws of perceptual organization, and the role of affordances in the design of
software and web interfaces.

4.8. Lecture review questions


Explain the Gestalt laws of perceptual organization.

4.9. Further Reading


1. Gibson, J.J. (1979). The Ecological Approach to Visual Perception. Boston: Houghton
Mifflin.
2. Norman, D. (1988). The Psychology of Everyday Things. New York: Basic Books

Page 57
LECTURE 5: HUMAN FACTORS – Attention and Memory
Lecture Objectives
The aim of this lecture is to introduce you to the cognitive processes of attention and memory,
so that after studying it you will be able to:
i. Explain the cognitive process of Attention and Memory in the context of HCI
ii. Describe the models of attention cognitive process
iii. Describe memory models
iv. Explain the importance of attention to the Human Information Processing model

5.1 Introduction
In this lecture we will study two cognitive processes: attention and memory. You have
already seen the importance of these two in the Extended Human Processing model studied in
Lecture 3.

5.2. Attention
Attention is the process of selecting things to concentrate on, at a point in time, from the range of
possibilities available. Attention involves our auditory and/or visual senses. An example of
auditory attention is waiting in the dentist’s waiting room for our name to be called out to know
when it is our time to go in; auditory attention is based on pitch, timbre and intensity. An example
of attention involving the visual senses is scanning the football results in a newspaper to attend to
information about how our team has done; visual attention is based on colour and location.
Attention allows us to focus on information that is relevant to what we are doing. The extent to
which this process is easy or difficult depends on whether we have clear goals and whether the
information we need is salient in the environment.

Page 58
The human brain is limited in capacity. It is important to design user interfaces which take
into account the attention and memory constraints of the users. This means that we should
design meaningful and memorable interfaces. Interfaces should be structured to be attention-
grabbing and require minimal effort to learn and remember. The user should be able to deal
with information and not get overloaded. Our ability to attend to one event from what seems
like a mass of competing stimuli has been described psychologically as focused attention.
The "cocktail party effect" -- the ability to focus one's listening attention on a single talker
among a cacophony of conversations and background noise has been recognized for some
time.

We know from psychology that attention can be focused on one stream of information (e.g.
what someone is saying) or divided (e.g. focused both on what someone is saying and what
someone else is doing). We also know that attention can be voluntary (we are in an attentive
state already) or involuntary (attention is grabbed). Careful consideration of these different
states of attention can help designers to identify situations where a user’s attention may be
overstretched, and therefore needs extra prompts or error protection, and to devise
appropriate attention attracting techniques.

Sensory processes, vision in particular, are disproportionately sensitive to change and
movement in the environment. Interface designers can exploit this by, say, relying on
animation of an otherwise unobtrusive icon to indicate an attention-worthy event.

5.3. Models of Attention


There are two models of attention:
a) Focused attention
b) Divided attention

5.3.1. Focused Attention


Our ability to attend to one event from what amounts to a mass of competing stimuli in the
environment has been psychologically termed focused attention. The streams of
information we choose to attend to will tend to be relevant to the activities and intentions that
we have at that time. For example, when engaged in a conversation it is usual to attend to
what the other person is saying. If something catches our eye in the periphery of our vision,
for example, another person we want to talk to suddenly appears, we may divert our attention
Page 59
to what she is doing. We may then get distracted from the conversation we are having and as
a consequence have to ask the person we are conversing with to repeat what they said. On the other
hand, we may be skilled at carrying on the conversation while intermittently observing what
the person we want to talk to is doing.

5.3.2. Divided Attention


As we said, we may be skilled at carrying on the conversation while intermittently observing
what the person we want to talk to is doing. When we attempt to attend to more than one
thing at a time, as in the above example, it is called divided attention. Another example that is
often used to illustrate this attentional phenomenon is being able to drive while holding a
conversation with a passenger.

A further property of attention is that it can be voluntary, as when we make a conscious effort
to change our attention. Attention may also be involuntary, as when the salient
characteristics of the competing stimuli grab our attention. An everyday example of an
involuntary act is being distracted from working when we can hear music or voices in the
next room. Another thing is that frequent actions become automatic actions, that is, they do
not need any conscious attention and they require no conscious decisions.

5.4. Focusing attention at the interface


What is the significance of attention for HCI? How can an understanding of attentional
phenomena be usefully applied to interface design? Clearly, the manner in which we deploy
our attention has a tremendous bearing on how effectively we can interact with a system. If
we know that people are distracted, often involuntarily, how is it possible to get their
attention again without allowing them to miss the ‘window of opportunity’?
Techniques which can be used to guide the user’s attention include:
 Structure – grouping, based on the Gestalt laws
 Spatial and temporal cues – where things are positioned or when they appear
 Colour coding, as described in the previous chapter
 Alerting techniques, including animation or sound

Note

Page 60
1. Important information should be displayed in a prominent place to catch the user’s eye.
Less important information can be relegated to the background in specific areas – the user
should know where to look.
2. Information not often requested should not be on the screen, but should be accessible
when needed.

Note that the concepts of attention and perception are closely related.

5.4.1. Structuring Information


One way in which interfaces can be designed to help users find the information they need is
to structure the interface so that it is easy to navigate through. This requires presenting neither
too much information nor too little on a screen, as in both cases the user will have to spend
considerable time scanning through either a cluttered screen or numerous screens of
information. Instead of presenting data arbitrarily, it should be grouped and ordered into
meaningful parts. By capitalizing on the perceptual laws of grouping, information can be
meaningfully structured so that it is easier to perceive and able to guide attention readily to
the appropriate information. This helps the user to:
a) Attend to his/her task, not the interface.
b) Decide what to focus on, based on their tasks, interests, etc.
c) Stay focused; do not provide unnecessary distractions.
d) Structure his/her task, e.g. through help.
e) Create distraction only when really necessary!
f) Use alerts (only) when appropriate!
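Grouping and ordering information into meaningful parts can be illustrated with a small sketch that organizes a flat command list under category headings instead of presenting it arbitrarily. The commands and categories below are illustrative assumptions.

```python
# A minimal sketch of structuring information: a flat list of commands
# is grouped under meaningful headings so the user can guide their
# attention to the relevant section. Names here are hypothetical.

from itertools import groupby

commands = [
    ("Open", "File"), ("Save", "File"), ("Print", "File"),
    ("Cut", "Edit"), ("Paste", "Edit"),
    ("Zoom", "View"),
]

def build_menu(items):
    """Order commands by category, then group them under headings.
    groupby requires the data to be sorted by the same key first."""
    ordered = sorted(items, key=lambda c: c[1])
    return {category: [name for name, _ in grp]
            for category, grp in groupby(ordered, key=lambda c: c[1])}

for category, names in build_menu(commands).items():
    print(category, names)
```

The same grouping principle applies whether the “sections” are menus, toolbar clusters or visually separated regions of a form.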

5.5. Multitasking and Interruptions


In a work environment using computers, people are often subject to being interrupted, for
example by a message or email arriving. In addition, it is common for people to be
multitasking: carrying out a number of tasks during the same period of time by alternating
between them. This is much more common than performing and completing tasks one after
another.

In complex environments, users may be performing one primary task which is the most
important at that time, and also one or more less important secondary tasks. For example, a
pilot’s tasks include attending to air traffic control communications, monitoring flight
Page 61
instruments, dealing with system malfunctions which may arise, and so on. At any time, one
of these will be the primary task, which is said to be fore-grounded, while other activities are
momentarily suspended.

People are in general good at multitasking but are often prone to distraction. On returning to
an activity, they may have forgotten where they left off. People often develop their own
strategies, to help them remember what actions they need to perform when they return to an
activity.

Such external representations, or cognitive aids (Norman, 1992), may include writing lists or
notes, or even tying a knot in a handkerchief. Cognitive aids have applications in HCI, where
the system can be designed to:
• remind the user where he or she was
• remind the user of common tasks

5.6. Automatic Processing


Many activities are repeated so often that they become automatic – we do them without any
need to think. Examples include riding a bike, writing, typing, and so on. Automatic
cognitive processes are:
• fast
• demanding minimal attention
• unavailable to consciousness
Automatic processes are not affected by the limited capacity of the brain, do not require attention
and are difficult to change once they have been learned. If a process is not automatic, it is
known as a controlled process.

5.7. Memory Constraints


Much of our everyday activity relies on memory. As well as storing all our factual
knowledge, our memory contains our knowledge of actions or procedures. It allows us to
repeat actions, to use language, and to use new information received via our senses. It also
gives us our sense of identity, by preserving information from our past experiences.

The human memory system is very versatile, but it is by no means infallible. We find some
things easy to remember, while other things can be difficult to remember. The same is true
Page 62
when we try to remember how to interact with a computer system. Some operations are
simple to remember while others take a long time to learn and are quickly forgotten. An
understanding of human memory can be helpful in designing interfaces that people will find
easy to remember how to use.

Humans have four types of memory:


1. Iconic memory: A very short-term store holding, for example, the image left in memory
when a user closes their eyes.
2. Short-term memory: A temporary memory store in which information decays over time.
3. Working memory: A temporary memory store in which information is refreshed and
actively used.
4. Long-term memory: A store in which information is encoded more permanently.

In addition, memory can be classified as declarative memories that include facts or statements
about the world and procedural memories that are used to perform procedures. More
specifically, implicit memories are not reportable and explicit memories can be reported.
Human factors workers must also consider human learning abilities and how to design
information technologies to support different learning styles.

5.8. Levels of Processing Theory


The extent to which things can be remembered depends on their meaningfulness. In
psychology, the levels of processing theory (Craik and Lockhart, 1972) has been developed
to account for this. This says that information can be processed at different levels, from a
shallow analysis of a stimulus (for example the sound of a word) to a deep or semantic
analysis. The meaningfulness of an item determines the depth of the processing – the more
meaningful an item the deeper the level of processing and the more likely it is to be
remembered.

5.8.1. Meaningful Interfaces


This suggests that computer interfaces should be designed to be meaningful. This applies
both to interfaces which use commands and interfaces which use icons or graphical
representations for actions. In either case, the factors which determine the meaningfulness
are:
Page 63
• Context in which the command or icon is used
• The task it is being used for
• The form of the representation
• The underlying concept

5.8.2. Meaningfulness of Commands


The following guidelines are examples taken from a larger set which was compiled to suggest
how to ensure that commands are meaningful (Booth 1994, Helander, 1988):
• Syntax and commands should be kept simple and natural
• The number of commands in a system should be limited, and their format consistent
• Consider the user context and knowledge when choosing command names.
• Choose meaningful command names. Words familiar to the user
• The system should recognize synonymous and alternative forms of command syntax
• Allow the users to create their own names for commands
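The guideline that a system should recognize synonymous and alternative command forms can be sketched with a minimal dispatcher that maps several user words onto one canonical command. The vocabulary below is hypothetical, not any real system's command set.

```python
# A minimal sketch of recognizing synonymous command names: whatever
# familiar word the user types is mapped to one canonical command.
# The vocabulary is an illustrative assumption.

SYNONYMS = {
    "delete": "delete", "remove": "delete", "erase": "delete",
    "copy": "copy", "duplicate": "copy",
    "quit": "quit", "exit": "quit", "bye": "quit",
}

def canonical(command):
    """Map the word the user typed onto the canonical command name."""
    word = command.strip().lower()
    if word not in SYNONYMS:
        raise ValueError(f"unknown command: {command}")
    return SYNONYMS[word]

print(canonical("Remove"))  # → delete
print(canonical("exit"))    # → quit
```

Accepting words the user already knows, rather than forcing one exact name, reduces the memory load the command language imposes.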

Sometimes a command name may be a word familiar to the user in a different context. For
example, the word ‘CUT’ to a computer novice will mean to sever with a sharp instrument,
rather than to remove from a document and store for future use. This can make the CUT
command initially confusing.

5.8.3. Meaningfulness of Icons


Icons can be used for a wide range of functions in interfaces, for example
• Labeling e.g. toolbar item, web page link
• Warning e.g. error message
• Identifying e.g. file types, applications
• Manipulating e.g. tools for drawing, zooming
• Container e.g. wastebasket, folder

The extent to which the meaning of an icon is understood depends on how it is represented.
Representational form of icons can be classified as follows:
a. Resemblance icons – depict the underlying concept through an analogous image.
E.g. the road sign for "falling rocks" presents a clear resemblance of the roadside hazard;
in software, the icon for the Windows calculator application resembles a calculator.
b. Exemplar icons – serve as a typical example.
Page 64
E.g. a knife and fork used in a public information sign to represent "restaurant services".
The image shows the most basic attribute of what is done in a restaurant, i.e. eating. In
software, the Microsoft Outlook icon shows a clock and a letter, examples of the tasks this
application does (calendar and email tasks).
c. Symbolic icons – convey the meaning at a higher level of abstraction.
E.g. the picture of a wine glass with a fracture conveys the concept of fragility; in software,
a globe icon represents a connection to the internet, the globe conveying the concept of the
internet.
d. Arbitrary icons – bear no relation to the underlying concept.
E.g. the bio-hazard sign consists of three partially overlaid circles; in software, the icon for
the design application Enterprise Architect has no obvious meaning to tell you what task
you can do with the application.
Note that arbitrary icons should not be regarded as poor designs, even though they must
be learned. Such symbols may be chosen to be unique and/or compact, such as a red no-entry
sign with a white horizontal bar, designed to avoid dangerous misinterpretation.
e. Combination icons – pair an icon with another form of representation, such as its command name. Icons are often favoured as an alternative to commands: users who use a system infrequently tend to forget commands, but are less likely to forget icons once learnt. However, the meaning of icons can sometimes be confusing, so it is now quite common to use a redundant form of representation in which icons are displayed together with their command names.

5.9. Lecture Summary

5.10. Lecture review questions


1. Compare and contrast the two models of the attention cognitive process
2. Explain the techniques that can be used to guide a user's attention in HCI
3. Discuss the types of user memory in the context of HCI
4. Discuss the factors that determine the meaningfulness of icons in HCI

5.11. Further Reading

LECTURE 6: MENTAL MODELS AND KNOWLEDGE
Lecture Objectives
The aim of this lecture is to introduce you to the study of Human Computer Interaction, so that after studying it you will be able to:
i. Explain the concept of Mental models in HCI
ii. Explain the characteristics of Mental Models
iii. Explain the applicability of Mental Models to HCI
iv. Discuss the types of Mental Models
v. Explain Knowledge representation in Memory

6.1. Mental Models


By discovering what users know about systems and how they reason about how the systems
function, it may be possible to predict learning time, likely errors and the relative ease with
which users can perform their tasks. We can also design interfaces which support the
acquisition of appropriate user mental models.
"In interacting with the environment, with others, and with the artefacts of
technology, people form internal, mental models of themselves and of the things with
which they are interacting. These models provide predictive and explanatory power
for understanding the interaction."
- Donald Norman (1993)

Mental models are representations in the mind of real or imaginary situations. Conceptually,
the mind constructs a small scale model of reality and uses it to reason, to underlie
explanations and to anticipate events. These models can be constructed from perception,
imagination, or interpretation of discourse. A mental model represents explicitly what is true, but not what is false. The greater the number of mental models a task requires, and the more complex each model, the poorer the performance. These models are more than just pictures or images: sometimes the model itself cannot be visualized, or the image of the model depends on underlying models. Models can also represent abstract notions, like negation or ownership, which are impossible to visualize.

Within cognitive psychology the term mental model has since been explicated by Johnson-
Laird (1983, 1988) with respect to its structure and function in human reasoning and

language understanding. In terms of the structure of mental models, he argues that mental models are either analogical representations or a combination of analogical and propositional representations. They are distinct from, but related to, images. A mental model represents the
relative position of a set of objects in an analogical manner that parallels the structure of the
state of objects in the world. An image also does this, but more specifically, as a single view of a particular model.

An important difference between images and mental models is in terms of their function.
Mental models are usually constructed when we are required to make an inference or a
prediction about a particular state of affairs. In constructing the mental model a conscious
mental simulation may be ‘run’ from which conclusions about the predicted state of affairs
can be deduced. An image, on the other hand, is considered to be a one-off representation.

Having developed a mental model of an interactive product, it is assumed that people will use
it to make inferences about how to carry out tasks when using the interactive product. Mental
models are also used to fathom what to do when something unexpected happens with a
system and when encountering unfamiliar systems. The more someone learns about a system
and how it functions, the more their mental model develops.

Some of the characteristics of mental models are:


• Incomplete
• Constantly evolving
• Not accurate representations (contain errors and uncertainty)
• Provide simple representations of complex phenomena
• Can be represented by a set of if-then-else rules
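The last characteristic can be illustrated with a minimal, invented sketch (the kettle and its rules are our own example, not from the text): a user's mental model of a device, expressed as if-then-else rules that predict what happens next.

```python
def kettle_model(observed_state):
    """A simplified, invented mental model of an electric kettle,
    expressed as if-then-else rules that predict what happens next."""
    if observed_state == "switched on":
        return "water heats up"
    elif observed_state == "boiling":
        return "kettle switches itself off"
    else:
        return "nothing happens"

print(kettle_model("switched on"))  # water heats up
```

Like a real mental model, this rule set is incomplete and may be wrong (a real kettle can also be empty or faulty), but it is enough to make everyday predictions.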

6.2. Types of Mental Models


Two main types of mental model were identified in the 1980s:
a) Structural models define facts the user has about how a system works. Their basic advantage is that knowledge of how a device or system works can predict the effect of any possible sequence of actions; however, constructing such a model in the mind involves a great deal of effort.

b) Functional models, also known as task-action mapping models, are procedural knowledge
about how to use the system. The main advantage of functional models is that they can be
constructed from existing knowledge about a similar domain or system.
Structural models are context free while functional models are context sensitive.

Structural models can answer unexpected questions and make predictions, while functional
models are based round a fixed set of tasks. Most of the time users will tend to apply
functional models as they are usually easier to construct.

The theory of knowledge representation and mental models is applicable to designing everyday things. By knowing what users know about a system and how they infer the system's functionality from the interface provided, it is possible to predict and improve the learning curve, to reduce users' errors, to improve the system's ease of use and, finally, to design interfaces that support the acquisition of an appropriate user model.

6.3. Applicability in HCI


From an HCI perspective, users form mental models by interacting with a certain computer system. The content and structure of mental models are influenced by which information about the system is presented to the user and how it is presented. The interpretation of these models specifies how users interact with that system. Some major questions arise in this domain: To what extent does the form of representation used in the interface affect the way the user solves a certain problem? Is it possible to develop interfaces that facilitate problem solving and support creativity? Does a graphical programming environment support innovation because it provides information in a format that is closer to the user's mental representation of the problem?

6.4. Mental Models in HCI


Several theories exist relating the different models held by users, designers and systems. These theories propose four basic mental models that affect the way users interact with a system:
1. User's model of the system which is the model constructed at the users' side through
their interaction with the target system,

2. System's model of the user which is the model constructed inside the system as it runs
through different sources of information such as profiles, user settings, logs, and even
errors.
3. Conceptual model which is an accurate and consistent representation of the target
system held by the designer or an expert user
4. Designer's model of the user's model which is basically constructed before the system
exists by looking at similar systems or prototype or by cognitive models or task
analysis.

Figure 5. Mental Models in HCI

Several factors influence the way these models are built and maintained. At the users' side:
their physical and sensory abilities, their previous experience dealing with similar systems,
their domain knowledge and finally ergonomics and environments in which users live. At the
designers' side, the need is to influence the user's model to perceive the conceptual model
underlying the relevant aspects of the system. This can be accomplished using metaphor,
graphics, icons, language, documentation and tutorials. It is important that all these materials work together to encourage the same model.

Design considerations:
As stated above, systems should be designed to help users form the correct productive mental
models. Common design methods include the following factors:

1) Affordance: clues provided by an object's properties about how the object is to be used and manipulated.
2) Simplicity: frequently accessed functions should be easily accessible. An interface should be simple and transparent enough for the user to concentrate on the actual task at hand.
3) Familiarity: as mental models are built upon prior knowledge, it is important to use this fact in designing a system. Relying on a user's familiarity with an old, frequently used system gains the user's trust and helps them accomplish a large number of tasks. Metaphors in user interface design are an example of applying the familiarity factor within the system.
4) Availability: since recognition is always better than recall, an efficient interface should always provide cues and visual elements to relieve the user of the memory load necessary to recall the functionality of the system.
5) Flexibility: the user should be able to use any object, in any sequence, at any time.
6) Feedback: complete and continuous feedback from the system through the course of the user's actions. Fast feedback helps the user assess the correctness of a sequence of actions.

6.5. Knowledge Representation


Knowledge is stored in memory in a highly organized fashion. It usually takes the mind only a fraction of a second to answer a question such as what is the capital of France or who is the president of the United States. Knowledge is represented in memory as:
1. Analogical representations: picture-like images, e.g. a person’s face
2. Propositional representations: language-like statements, e.g. a car has four wheels

Connectionist theorists believe that analogical and propositional representations are complementary, and that we use networks of nodes where the knowledge is contained in the connections between the nodes. A connectionist network for storing information about people is shown below:
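The idea that the knowledge lies in the connections between nodes, rather than in the nodes themselves, can be sketched with a tiny invented example (the people, attributes and strengths below are made up for illustration):

```python
# Facts about people stored as weighted connections between nodes.
# The "knowledge" is the set of connection strengths, not the nodes.
connections = {
    frozenset(["Alice", "doctor"]): 1.0,   # strong association
    frozenset(["Alice", "tennis"]): 0.6,
    frozenset(["Bob", "teacher"]): 0.9,
}

def association(a, b):
    """Return the connection strength between two nodes (0.0 if none).

    frozenset keys make the lookup symmetric: the connection between
    'Alice' and 'doctor' is the same in either direction.
    """
    return connections.get(frozenset([a, b]), 0.0)

print(association("Alice", "doctor"))  # 1.0
print(association("Bob", "tennis"))    # 0.0
```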

6.6. Knowledge in the Head vs. Knowledge in the World
Psychologist and cognitive scientist Donald A. Norman published a book titled The
Psychology of Everyday Things. In it he reviews the factors that affect our ability to use the
items we encounter in our everyday lives. He relates amusing but pointed stories of people’s
attempts to use VCRs, computers, slide projectors, telephones, refrigerator controls, etc.

In his book Norman distinguishes between the elements of good design for items that are
encountered infrequently or used only occasionally and those with which the individual
becomes intimately familiar through constant use. Items encountered infrequently need to be
obvious. An example used by Norman is the common swinging door. The individual
intending to pass through the door needs to know whether to push the door or pull it. We all
have experienced doors where it was not obvious what to do, or two doors appeared the same,
but only one would swing. However, when a door is well designed, it is obvious whether one
is to push or pull. Norman refers to knowledge of how to use such items as being in the
world.

However, when one uses something frequently, efficiency and speed are important, and the
knowledge of how to use it needs to reside in the head. Most people can relate to the common
typewriter or computer keyboard. When one uses it infrequently and has never learned to

type, the knowledge of which key produces which character on the screen comes from
visually scanning the keyboard and finding the key with the desired character. The
knowledge comes from the world. However, people who frequently use a computer learn to
touch type, transferring the knowledge to the head. Their efficiency and speed far exceed that
of the hunt-and-peck typist. Norman describes a trade-off between the two: knowledge in the world is readily available and requires little learning, but using it is slower because it must be found and interpreted; knowledge in the head is efficient to use, but must first be learned and can be forgotten.

6.7. Lecture Summary

6.8. Lecture Review Questions


1. Explain the concept of Mental models in HCI
2. Explain the characteristics of Mental Models
3. Discuss the applicability of Mental Models to HCI
4. Discuss the types of Mental Models
5. Explain how Knowledge is represented in Memory

6.9. Further Reading


Preece, J., et al. (1994). Human-Computer Interaction. Addison-Wesley
Dix, A., et al. (1998). Human-Computer Interaction. Prentice Hall

LECTURE 7: COMPUTER FACTORS IN HCI
USER INTERFACE & USER SUPPORT
Lecture Objectives
The aim of this lecture is to introduce you to the study of Human Computer Interaction, so that after studying it you will be able to:
i. Describe the advantages and disadvantages of different input output devices keeping
in view different aspects of HCI
ii. Describe the Approaches to User Support in HCI
iii. Explain the requirements for User Support in HCI
iv. Explain types of User Support

7.1. Introduction
A user interface is the system by which people interact with a machine. The user
interface includes hardware (physical) and software (logical) components. User interfaces
exist for various systems and provide a means of:
a) Input, allowing the users to manipulate a system
b) Output, allowing the system to indicate the effects of the users' manipulation

The user interface is the component of a computer or its software which can be seen, heard, touched, interacted with, run and understood by the ordinary users of the computer. A computer system is made up of various elements, each of which affects the interaction:
a) input devices – text entry and pointing
b) output devices – screen (small & large), digital paper
c) virtual reality – special interaction and display devices
d) physical interaction – e.g. sound, haptic, bio-sensing
e) paper – as output (print) and input (scan)
f) memory – RAM & permanent media, capacity & access
g) processing – speed of processing, networks

The devices dictate the styles of interaction that the system supports. If we use different
devices, then the interface will support a different style of interaction.

7.2. Input Devices
An input device is “a device that, together with appropriate software, transforms information from the user into data that a computer application can process”.

Input is concerned with recording and entering data into a computer system and issuing instructions to the computer. In order to interact with computer systems effectively, users must be able to communicate their intentions in such a way that the machine can interpret them.

One of the key aims in selecting an input device and deciding how it will be used to control
events in the system is to help users to carry out their work safely, effectively, efficiently and,
if possible, to also make it enjoyable. The choice of input device should contribute as
positively as possible to the usability of the system. In general, the most appropriate input
device will be the one that:
 Matches the physiological and psychological characteristics of users, their training and
their expertise. For example, older adults may be hampered by conditions such as
arthritis and may be unable to type; inexperienced users may be unfamiliar with
keyboard layout.
 Is appropriate for the tasks that are to be performed. For example, a drawing task requires an input device that allows continuous movement; selecting an option from a list requires an input device that permits discrete movement.
 Is suitable for the intended work and environment. For example, speech input is useful where there is no surface on which to put a keyboard but is unsuitable in noisy conditions; automatic scanning is suitable if there is a large amount of data to be entered.

Many systems use two or more complementary input devices together, such as a keyboard
and a mouse. There should also be appropriate system feedback to guide, reassure and, if
necessary, correct users’ errors, for example:
a) On screen - text appearing, cursor moves across screen
b) Auditory – alarm, sound of mouse button clicking
c) Tactile – feel of button being pressed, change in pressure
7.3. Text Entry Devices
There are many text entry devices as given below:
7.3.1. Keyboard
The most common method of entering information into the computer is through a keyboard.
When considering the design of keyboards, both individual keys and grouping arrangements
need to be considered. The physical design of keys is obviously important. Alterations in the
arrangement of the keys can affect a user’s speed and accuracy. Various studies have shown that typing involves a great deal of skill: analyses of trained typists suggest that typing is not a sequential act, with each key being sought out and pressed as the letters occur in the words to be typed. Rather, the typist looks ahead, processes text in chunks, and then types it in chunks.

7.3.2. QWERTY keyboard


Most people are quite familiar with the layout of the standard alphanumeric keyboard, often called the QWERTY keyboard, the name being derived from the first six letters in the uppermost row, read from left to right. This design first became a commercial success when used for typewriters in the USA in 1874, after many different prototypes had been tested.

The arrangement of keys was chosen in order to reduce the incidence of keys jamming in the manual typewriters of the time, rather than because of any optimal arrangement for typing. For example, the letters ‘s’, ‘t’ and ‘h’ are far apart even though they are frequently used together.

7.3.3. Alphabetic keyboard


One of the most obvious layouts to be produced is the alphabetic keyboard, in which the
letters are arranged alphabetically across the keyboard. It might be expected that such a
layout would make it quicker for untrained typists to use, but this is not the case. Studies have
shown that this keyboard is no faster for properly trained typists, as we might expect, since the layout offers them no inherent advantage. Even for novice or occasional users, the alphabetic layout appears to make very little difference to the speed of typing.

These keyboards are used in some pocket electronic personal organizers, perhaps because the
layout looks simpler to use than the QWERTY one. Also, it dissuades people from attempting

to use their touch-typing skills on a very small keyboard and hence avoids criticisms of
difficulty of use.

7.3.4. Phone pad and T9 entry


With mobile phones being used for SMS text messaging and WAP, the phone keypad has
become an important form of text input. Unfortunately a phone only has digits 0-9, not a full
alphanumeric keyboard. To overcome this for text input the numeric keys are usually pressed
several times. The main number-to-letter mapping is standard, but punctuation and accented letters differ between phones. There also needs to be a way for the phone to distinguish, say, ‘dd’ from ‘e’. On some phones you need to pause for a short period between successive letters that use the same key; on others you press an additional key (e.g. ‘#’).
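The multi-tap scheme described above can be sketched as follows. This is an illustrative toy decoder, not any handset's actual firmware: real phones also handle timing pauses and the ‘#’ key, while here each letter arrives as an explicit (key, press count) pair.

```python
# Standard number-to-letter mapping on a phone keypad.
KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

def decode(presses):
    """Decode a sequence of (key, count) presses: pressing '3' once
    gives 'd', twice 'e', three times 'f'; counts wrap around."""
    letters = []
    for key, count in presses:
        group = KEYPAD[key]
        letters.append(group[(count - 1) % len(group)])
    return "".join(letters)

# 'e' is key 3 pressed twice, while two separate single presses give 'dd' --
# exactly the ambiguity the pause or '#' key resolves on a real phone.
print(decode([("3", 2)]))             # e
print(decode([("3", 1), ("3", 1)]))   # dd
```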

7.3.5. Dvorak Keyboard


The DVORAK keyboard uses a similar layout of keys to the QWERTY system, but assigns
the letters to different keys. Based upon an analysis of typing, the keyboard is designed to
help people reach faster typing speeds. It is biased towards right-handed people, in that 56%
of keystrokes are made with the right hand. The layout of the keys also attempts to ensure
that the majority of keystrokes alternate between hands, thereby increasing the potential
speed. By keeping the most commonly used keys on the home, or middle, row, 70% of
keystrokes are made without the typist having to stretch far, thereby reducing fatigue and
increasing keying speed.

The Dvorak approach is said to lead to faster typing. It was named after its inventor, Dr. August Dvorak, who also invented keyboard layouts for people with only one hand.

7.3.6. Chord Keyboards


Chord keyboards are smaller and have fewer keys, typically one for each finger and possibly
the thumbs. Instead of the usual sequential, one-at-a-time key presses, chording requires
simultaneous key presses for each character typed, similar to playing a musical chord on a
piano.

7.3.7. Handwriting Recognition


Handwriting is a common and familiar activity, and is therefore attractive as a method of text
entry. If we were able to write as we would when we use paper, but with the computer taking

this form of input and converting it to text, we can see that it is an intuitive and simple way of
interacting with the computer. However, there are a number of disadvantages with handwriting recognition. Current technology is still fairly inaccurate and so makes a significant
number of mistakes in recognizing letters, though it has improved rapidly. Moreover,
individual differences in handwriting are enormous, and make the recognition process even
more difficult.

7.3.8. Speech Recognition


Speech or voice recognition is the ability of a machine or program to recognize and carry out
voice commands or take dictation. In general, speech recognition involves the ability to
match a voice pattern against a provided or acquired vocabulary. Usually, a limited
vocabulary is provided with a product and the user can record additional words. More
sophisticated software has the ability to accept natural speech (meaning speech as we usually
speak it rather than carefully spoken speech). There are three basic uses of speech recognition:
1. Command & Control
a. give commands to the system that it will then execute (e.g., "exit application" or
"take airplane 1000 feet higher")
b. usually speaker independent
2. Dictation
a. dictate to a system, which will transcribe your speech into written text
b. usually speaker-dependent
3. Speaker Verification
a. your voice can be used as a biometric (i.e., to identify you uniquely)

Speech input is useful in applications where the use of hands is difficult, either due to the
environment or to a user’s disability. It is not appropriate in environments where noise is an
issue. Much progress has been made, but we are still a long way from the image we see in
science fiction of humans conversing naturally with computers. Speech recognition is a
promising area of text entry, but it has been promising for a number of years and is still only used in very limited situations. However, speech input offers a number of advantages over
other input methods:

 Since speech is a natural form of communication, training new users is much easier
than with other input devices.
 Since speech input does not require the use of hands or other limbs, it enables
operators to carry out other actions and to move around more freely.
 Speech input offers disabled people, such as the blind and those with severe motor impairments, the opportunity to use new technology.

However, speech input suffers from a number of problems:


 Speech input has been applied only in very specialized and highly constrained tasks.
 Speech recognizers have severe limitations: whereas a human would have little problem distinguishing between similar-sounding words or phrases, speech recognition systems are likely to make mistakes.
 Speech recognizers are also subject to interference from background noise, although
the use of a telephone-style handset or a headset may overcome this.
 Even if the speech can be recognized, the natural form of language used by people is
very difficult for a computer to interpret.

7.3.9. Dedicated Buttons


Some computer systems have custom-designed interfaces with dedicated keys or buttons for
specific tasks. These can be useful when there is a very limited range of possible inputs to the
system and where the environment is not suitable for an ordinary keyboard. In-car satellite
navigation systems and gamepads for computer games are good examples.

7.3.10. Positioning, Pointing and Drawing


Pointing devices are input devices that can be used to specify a point or path in a one-, two- or three-dimensional space and, like keyboards, their characteristics have to be considered in relation to design needs. Common pointing devices are as follows:

Mouse
The mouse has become a major component of the majority of desktop computer systems sold
today, and is the little box with the tail connecting it to the machine in our basic computer
system picture. It is a small, palm-sized box housing a weighted ball: as the box is moved over the tabletop, the ball is rolled by the table and so rotates inside the housing. This rotation is

detected by small rollers that are in contact with the ball, and these adjust the values of
potentiometers. The mouse operates in a planar fashion, moving around the desktop, and is
an indirect input device, since a transformation is required to map from the horizontal plane of the desktop to the vertical plane of the screen. Left-right motion is mapped directly, whilst up-down motion on the screen is achieved by moving the mouse away from or towards the user.
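The indirect mapping just described can be sketched as follows. The gain value and sign conventions below are invented for illustration; real pointer drivers add acceleration curves and other refinements.

```python
def mouse_to_cursor(dx_desk, dy_desk, gain=2.0):
    """Map mouse motion on the horizontal desk to cursor motion on the
    vertical screen (in pixels).

    Left-right motion maps directly.  Moving the mouse away from the
    user (negative dy_desk here) moves the cursor up the screen, which
    is negative dy in screen coordinates where y grows downwards.
    """
    return gain * dx_desk, gain * dy_desk

dx, dy = mouse_to_cursor(10, -5)  # 10 mm right, 5 mm away from user
print(dx, dy)  # 20.0 -10.0  (cursor moves right and up)
```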

Foot mouse
Although most mice are hand operated, not all are: there have been experiments with a device called the foot mouse. As the name implies, it is a foot-operated device, although more akin to an isometric joystick than a mouse. The cursor is moved by foot pressure on one side or the other of a pad. This frees the hands for the keyboard. A rare device, the foot mouse has not found common acceptance.

Touch pad
Touchpads are touch-sensitive tablets, usually around 2-3 inches square. They were first used extensively in Apple PowerBook portable computers, but are now used in many other notebook computers and can be obtained separately to replace the mouse on the desktop.
They are operated by stroking a finger over their surface, rather like using a simulated
trackball. The feel is very different from other input devices, but as with all devices users
quickly get used to the action and become proficient.

Joystick and track point


The joystick is an indirect input device, taking up very little space. Consisting of a small palm-sized box with a stick or shaped grip sticking up from it, the joystick is a simple device with which movements of the stick cause a corresponding movement of the screen cursor. There are two types of joystick: the absolute and the isometric.

In the absolute joystick, movement is the important characteristic, since the position of the joystick in the base corresponds to the position of the cursor on the screen. In the isometric joystick, the pressure on the stick corresponds to the velocity of the cursor, and when released, the stick returns to its usual upright centered position.
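The contrast between the two mappings can be sketched as follows. The screen size, gain and time step are invented for illustration; the point is only that one maps position to position and the other maps pressure to velocity.

```python
def absolute_cursor(stick_pos, screen=(800, 600)):
    """Absolute joystick: stick position (0..1 on each axis) maps
    directly to a cursor position on the screen."""
    return (stick_pos[0] * screen[0], stick_pos[1] * screen[1])

def isometric_step(cursor, pressure, dt=0.1, gain=300.0):
    """Isometric joystick: pressure on the stick maps to cursor
    velocity; zero pressure (stick released) leaves the cursor still."""
    return (cursor[0] + gain * pressure[0] * dt,
            cursor[1] + gain * pressure[1] * dt)

print(absolute_cursor((0.5, 0.5)))         # centre of the screen
print(isometric_step((400, 300), (1, 0)))  # cursor drifts right
```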

Touch Screen

Touch displays allow the user to input information into the computer simply by touching an
appropriate part of the screen. This kind of screen is bi-directional – it both receives input and outputs information. Using appropriate software, different parts of the screen can represent different responses as different displays are presented to the user.

Advantages:
a) Easy to learn – ideal for an environment where use by a particular user may only occur
once or twice
b) Require no extra workspace
c) No moving parts (durable)
d) Provide very direct interaction

Disadvantages:
a) Lack of precision
b) High error rates
c) Arm fatigue
d) Screen smudging
Touch screens are used mainly for:
1. Kiosk devices in public places, for example for tourist information
2. Handheld computers

Pen Input
Touch screens designed to work with pen devices rather than fingers have become very
common in recent years. Pen input allows more precise control, and, with handwriting
recognition software, also allows text to be input. Handwriting recognition can work with
ordinary handwriting or with purpose designed alphabets such as Graffiti.

Pen input is used in handheld computers (PDAs) and specialised devices, and more recently
in tablet PCs, which are similar to notebook computers, running a full version of the
Windows operating system, but with a pen-sensitive screen and with operating system and
applications modified to take advantage of the pen input. Pen input is also used in graphics
tablets, which are designed to provide precise control for computer artists and graphic
designers.

7.3.11. Cursor Keys
Cursor keys are available on most keyboards. Four keys on the keyboard are used to control
the cursor, one each for up, down, left and right. There is no standardized layout for the keys.
Some layouts are shown in figure but the most common now is the inverted ‘T’. Cursor keys
used to be more heavily used in character-based systems before windows and mice were the
norm. However, when logging into remote machines such as web servers, the interface is
often a virtual character-based terminal within a telnet window.

Cursor keys can be used to move a cursor, but it is difficult to accomplish dragging. Using
keys can provide precise control of movement by moving in discrete steps, for example when
moving a selected object in a drawing program. Some handheld computers have a single
cursor button which can be pressed in any of four directions.

7.4. Choosing Input Devices


The preceding pages have described a wide range of input devices. We will now look briefly at some examples which illustrate the issues that need to be taken into account when selecting devices for a system:
a) Matching devices with work
Example: Application – panning a small window over a large graphical surface (such as a
layout diagram of a processor chip) which is too large to be viewed in detail as a whole. The
choice of device is between a trackball and a joystick.
 Task 1: Panning and zooming.
The trackball is better suited, as motion of the ball is mapped directly to motion over the surface, while motion of the joystick is mapped to speed of motion over the surface. There is no obvious advantage to either device for zooming.
 Task 2: Panning and zooming simultaneously.
Zooming and panning simultaneously is possible with the joystick (just displace and twist) but virtually impossible with the trackball, so the joystick is better suited.
 Task 3: Locating an object by panning and then manipulating it by twisting, without letting the position drift.
It is difficult to locate accurately and keep stationary with the joystick, but this can be done with the trackball: when the trackball is stopped, motion stops, and the bezel around the ball can be rotated. The trackball is therefore better suited.

Note that although these three tasks are similar, the precise nature of the task can have a large
influence on the best choice of device.

b). Matching Devices with Users


Example: Eye control for users with severe disabilities
Most input devices rely on hand movements. When a user is unable to use hands at all for
input, then eye-controlled systems provide an alternative. For example, the Iriscom system
moves the mouse pointer by tracking a person's eye movement and mouse clicks are
performed by blinking. It also has an on-screen keyboard so users can input text. It can be
used by anyone who has control of one eye, including people wearing glasses or contact
lenses. A camera takes the place of the touchpad.

c). Matching devices with the environment


Example: The BMW i-drive control
The systems in expensive cars are becoming increasingly computer-controlled. Functions
which were previously provided by separate devices with their own controls are often now
controlled by a single computer interface. An in-car computer system is in a very different
environment from a desktop PC or even a laptop. The driver of the car needs to use its
functions (entertainment, climate, navigation, etc) while driving. This requires an interface
which allows the driver to operate it with one hand, with the input device close to hand. The
output must be clearly visible in a position which minimises the time the driver must spend
looking away from the road ahead. BMW’s solution to this problem is the i-drive, which
consists of a multifunction control situated where a gear lever would usually be found.
This control allows the driver to select options displayed on a screen mounted high in the
dashboard. The i-drive has been controversial, and it is likely that there will be many further
developments in the devices used to interact with computers in the driving environment.

7.5. Output Devices


Output devices convert information coming from an internal representation in a computer
system to a form understandable by humans. Most computer output is visual and two-
dimensional. Screens and hard copy from a printer are the most common. These devices have
developed greatly in the past two decades, giving greater opportunities for HCI designers.

7.5.1. Purposes of Output
Output has two primary purposes:
a) Presenting information to the user
b) Providing the user with feedback on actions

Visual Output
Visual display of text or data is the most common form of output. The key design
considerations are that the information displayed be legible and easy to locate and process.
These issues relate to aspects of human cognition described in previous chapters. Visual
output can be affected by poor lighting, eye fatigue, screen flicker and quality of text
characters.

Visual Feedback
Users need to know what is happening on the computer’s side of an interaction. The system
should provide responses which keep users well informed and feeling in control. This
includes providing information about both normal processes and abnormal situations (errors).
Where possible, a system should be able to:
a) Tell a user where he or she is in a file or process
b) Indicate how much progress has been made
c) Signify that it is the user’s turn to provide input
d) Confirm that input has been received
e) Tell the user that input just received is unsuitable.
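As a concrete illustration, the feedback rules above can be sketched in code; the function names and message formats below are invented for illustration, not taken from any particular toolkit.

```python
def feedback(step, total, filename):
    """Sketch of progress feedback: where the user is (a) and progress made (b)."""
    percent = 100 * step // total
    return f"processing {filename}, step {step} of {total} ({percent}% done)"

def confirm_input(value):
    """Confirm receipt of input (d) or flag it as unsuitable (e)."""
    if not value.strip():
        return "Input received, but it is empty - please enter a value."
    return f"Received: {value!r}"
```

For example, `feedback(3, 4, "report.doc")` tells the user both where they are in the process and how much progress has been made.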

In order to increase bandwidth for information reaching the user, it is an important goal to use
more channels in addition to visual output. One commonly used supplement for visual
information is sound, but its true potential is often not recognized. Audible feedback can
make interaction substantially more comfortable for the user, providing unambiguous
information about the system state and success or failure of interaction (e. g., a button press),
without putting still more load onto the visual channel.

7.6. Sound Output


Sound is of particular value where the eyes are engaged in some other task or where the
complete situation of interest cannot be visually scanned at one time, for example:

a) Applications where eyes and attention are required away from the screen – e.g. flight
decks, medical applications, industrial machinery
b) Applications involving process control in which alarms must be dealt with
c) Applications addressing the needs of blind or partially sighted users.
d) Data sonification – situations where data can be explored by listening, given an
appropriate mapping between data and sound. The Geiger counter is a well-known
example of sonification.

7.6.1. Natural sounds


These are natural, everyday sounds which are used to represent actions and objects within an
interface. Gaver suggested that sounds are good for providing information on background
processes or inner workings without disrupting visual attention. When that suggestion was
first made, most computer systems were capable only of very simple beeps. Developments in sound generation in
computers have made it possible to play back high quality sampled or synthesised sounds,
and a wide variety of natural sounds are used in applications.

7.6.2. Speech
Speech output is one of the most obvious means of using sound for providing users with
feedback. Successful speech output requires a method of synthesizing spoken words which
can be understood by the user. Speech can be synthesized using one of two basic methods:
concatenation, in which pre-recorded fragments of human speech are joined together, and
synthesis by rule, in which speech sounds are generated from phonetic rules. Broadly, the
two methods trade naturalness against flexibility: concatenated speech sounds more natural,
while rule-based synthesis can handle unrestricted text.

7.7. User Support


There is often an implicit assumption that if an interactive system is properly designed it will
be completely intuitive to use and the user will require little or no help or training. This may
be a grand ideal but it is far from true with even the best designed systems currently
available. The type of assistance users require varies and is dependent on many factors: their
familiarity with the system, the job they are trying to do, and so on. There are four main types
of assistance that users require:
 quick reference
 task-specific help
 full explanation
 tutorial.

Quick reference is used primarily as a reminder to the user of the details of tools a user is
basically familiar with and has used before. It may, for example, be used to find a particular
command option, or to remind the user of the syntax of the command. Task-specific help is
required when the user has encountered a problem in performing a particular task or when he
is uncertain how to apply the tool to his particular problem. The help that is offered is directly
related to what is being done. The more experienced or inquisitive user may require a full
explanation of a tool or command to enable him to understand it more fully. This explanation
will almost certainly include information that the user does not need at that time. The fourth
type of support required by users is tutorial help. This is particularly aimed at new users of a
tool and provides step-by-step instruction (perhaps by working through examples) of how to
use the tool.

Each of these types of user support is complementary – they are required at different points in
the user’s experience with the system and fulfill distinct needs. Within these types of required
support there will be numerous pieces of information that the user wants – definitions,
examples, known errors and error recovery information, command options and accelerators,
to name but a few. Some of these may be provided within the design of the interface itself but
others must be included within the help or support system.

7.8. Requirements for Support


Users have different requirements for support at different times. User support should provide:
a) Availability - continuous access, concurrent with the main application
The user needs to be able to access help at any time during his interaction with the
system. In particular, he should not have to quit the application he is working on in order
to open the help application. Ideally, it should run concurrently with any other
application. This is obviously a problem for non-windowed systems if the help system is
independent of the application that is running. However, in windowed systems there is no
reason why a help facility should not be available constantly, at the press of a button.
b) Accuracy and completeness - help matches and covers actual system behaviour
As well as providing an accurate reflection of the current state of the system, help should
cover the whole system. This completeness is very important if the help provided is to be
used effectively. The designer cannot predict the parts of the system the user will need
help with, and must therefore assume that all parts must be supported. Finding no help
available on a topic of interest is guaranteed to frustrate the user.
c) Consistency - between different parts of the help - system and paper documentation
Users require different types of help for different purposes. This implies that a help
system may incorporate a number of parts. The help provided by each of these must be
consistent with all the others and within itself. Online help should also be consistent with
paper documentation. It should be consistent in terms of content, terminology and style of
presentation. This is also an issue where applications have internal user support – these
should be consistent across the system. It is unhelpful if a command is described in one
way here and in another there, or if the way in which help is accessed varies across
applications. In fact, consistency itself can be thought of as a means of supporting the
user since it reinforces learning of system usage.
d) Robustness - correct error handling and predictable behaviour
Help systems are often used by people who are in difficulty, perhaps because the system
is behaving unexpectedly or has failed altogether. It is important then that the help system
itself should be robust, both by correct error handling and predictable behavior. The user
should be able to rely on being able to get assistance when required. For these reasons
robustness is even more important for help systems than for any other part of the system.
e) Flexibility - allows user to interact in a way appropriate to experience and task
Many help systems are rigid in that they will produce the same help message regardless
of the expertise of the person seeking help or the context in which they are working. A
flexible help system will allow each user to interact with it in a way appropriate to his
needs. This will range from designing a modularized interactive help system, through
context-sensitive help, to a full-blown adaptive help system, which will infer the user’s
expertise and task.
f) Unobtrusiveness - does not prevent the user continuing with work
The final principle for help system design is unobtrusiveness. The help system should not
prevent the user from continuing with normal work, nor should it interfere with the user’s
application. This is a problem at both ends of the spectrum. At one end the textual help
system on a non-windowed interface may interrupt the user’s work. A possible solution to
this if no alternative is available is to use a split-screen presentation.

At the other end of the spectrum, an adaptive help system that can provide help actively on its
own initiative, rather than at the request of the user, can intrude on the user and so become a
hindrance rather than a help. It is important with these types of system that the ‘suggest’
option can be overridden by the user and switched off!
7.9. Approaches to User Support
There are a number of different approaches to providing help, each of which meets a
particular need. These vary from simple captions to full adaptive help and tutoring systems.
User support comes in a number of styles:
a) Command-based methods: Perhaps the most basic approach to user support is to provide
assistance at the command level – the user requests help on a particular command and is
presented with a help screen or manual page describing it. This type of help is simple and
efficient if the user knows what he wants to know about and is seeking either a reminder
or more detailed information. However, it assumes that the user does know what he is
looking for, which is often not the case. The approach is good for quick reference.
b) Command prompts: In command line interfaces, command prompts provide help when
the user encounters an error, usually in the form of correct usage prompts. Such prompts
are useful if the error is a simple one, such as incorrect syntax, but again they assume
knowledge of the command.

Another form of command prompting, which is not specifically intended to provide help
but which supports the user to a limited degree, is the use of menus and selectable icons.
These provide an aid to memory as well as making explicit what commands are available
at a given time, so the user does not need to recall the command name in advance.
c) Context sensitive help: Some help systems are context sensitive. These range from those
that have specific knowledge of the particular user (which we will consider under
adaptive help) to those that provide a simple help key or function that is interpreted
according to the context in which it is called and will present help accordingly. Such
systems are not necessarily particularly sophisticated. However, they do move away from
placing the onus on the user to remember the command. They are often used in menu-
based systems to provide help on menu options.
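A context-sensitive help key can be approximated by a simple lookup keyed on the interface context from which help was requested; the context names and help texts below are hypothetical.

```python
# A minimal context-sensitive help lookup: the same help key returns
# different text depending on where in the interface it is pressed.
HELP_TOPICS = {
    "file_menu": "Open, save or print the current document.",
    "edit_menu": "Cut, copy and paste selected text.",
}

def context_help(current_context):
    # Interpret the help request according to the context it was called from.
    return HELP_TOPICS.get(current_context,
                           "No help is available for this part of the system.")
```

Even this simple scheme moves the burden of remembering command names off the user: pressing help in the file menu yields file-related help without the user naming a topic.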
d) On-line tutorials: Online tutorials allow the user to work through the basics of an
application within a test environment. The user can progress at his own speed and can
repeat parts of the tutorial if needed. However, most online tutorials have no real knowledge
of the individual user, and can therefore be inflexible and unforgiving.

An alternative to the traditional online tutorial is to allow the user to learn the system by
exploring and experimenting with a version with limited functionality. This is the idea
behind the Training Wheels interface proposed by Carroll and his colleagues at IBM [60].
The user is presented with a version of the full interface in which some of the
functionality has been disabled. He can explore the rest of the system freely but if he
attempts to use the blocked functionality he is told that it is unavailable. This approach
allows the user freedom to investigate the system as he pleases but without risk.
e) Online documentation (integrated with application): Online documentation effectively
makes the existing paper documentation available on computer. This makes the material
available continually (assuming the machine is running!) in the same medium as the
user’s work and, potentially, to a large number of users concurrently.

Documentation is designed to provide a full description of the system's functionality and
behavior in a systematic manner. It provides generic information that is not directed at any
particular problem. The amount of information contained in manual pages is usually high,
which can in itself create problems for the user – there is too much detail and this effectively
‘masks’ the information the user wants to find. Perhaps for this reason, online documentation
is often used by more expert users as a resource or reference, sometimes to enable them to
advise less experienced users.

The use of hypertext can make online documentation more accessible to the inexperienced
user. Hypertext stores text in a network and connects related sections of text using selectable
links.

Guidelines for online documentation


 Use clear structure with headings to provide signposting.
 Organize information according to user tasks.
 Keep sentences short, to the point and jargon free. Use simple but un-patronizing
language.
 Set out procedures in order and number steps. Highlight important steps.
 Use examples where possible.
 Support searching via an index, contents, glossary and free search.
 Include a list of error messages.
 Include Frequently Asked Questions (FAQ) with clear answers
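The guideline on supporting search can be illustrated with a minimal free-text search over help pages; the page titles and contents below are invented examples.

```python
# A minimal free search over online documentation: a query matches a page if
# it appears in either the title or the body, and matching titles are returned.
PAGES = {
    "Saving files": "Choose Save from the File menu, or press Ctrl+S.",
    "Printing":     "Choose Print from the File menu to print the document.",
}

def search(query):
    q = query.lower()
    return sorted(title for title, body in PAGES.items()
                  if q in title.lower() or q in body.lower())
```

A real help system would combine this with an index, a table of contents and a glossary, as the guidelines suggest.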

f. Wizards and assistants
A wizard is a task-specific tool that leads the user through the task, step by step, using
information supplied by the user in response to questions along the way. They are distinct
from demonstrations in that they allow the user actually to complete the task.

Wizards are common in modern applications and provide support for task completion. The
user can perform quite complex tasks safely, quickly and accurately. This is particularly
helpful in the case of infrequent tasks where the learning cost of doing the task manually may
be prohibitive or where there are many steps that must be completed in a specific sequence.
However, wizards can be unnecessarily constraining, they may not offer the user the options
he wants, or they may request information that the user does not have. A well-designed
wizard will allow the user to move back a step as well as forward, will provide a progress
indicator showing how much of the task is completed and how many steps remain, and will
offer sufficient information to allow the user to answer the questions.
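The behaviour described above (stepwise questions, a progress indicator, and the ability to move back as well as forward) can be sketched as follows; the wizard steps are hypothetical.

```python
# A sketch of a wizard: questions are answered step by step, progress is
# reported, and the user can move back to revise an earlier answer.
class Wizard:
    def __init__(self, steps):
        self.steps = steps        # list of question strings
        self.answers = {}
        self.current = 0

    def progress(self):
        # Progress indicator: how far through the task the user is.
        return f"Step {self.current + 1} of {len(self.steps)}"

    def answer(self, text):
        # Record the answer and advance to the next step.
        self.answers[self.steps[self.current]] = text
        self.current = min(self.current + 1, len(self.steps) - 1)

    def back(self):
        # Allow the user to move back a step, as a well-designed wizard should.
        self.current = max(self.current - 1, 0)
```

For example, a print wizard might be created with `Wizard(["Document name?", "Paper size?", "Copies?"])`, answered one step at a time, with `back()` available at any point.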

7.10. Designing User Support Systems


There are many ways of providing user support and it is up to the designer to decide which is
most appropriate for any given system. However, there are a number of things which the
designer should take into account. First, the design of user support should not be seen as an
‘add-on’ to system design. Ideally, the help system should be designed integrally with the rest
of the system. If this is done, the help system will be relevant and consistent with the rest of
the system. The same modeling and analytic techniques used to design the system can guide
the design of support material as well. Secondly, the designer should consider the content of
the help and the context in which it will be used before the technology that will be required.

7.11. User Support Presentation Issues


How is help requested?
The first decision the designer must make is how the user will access help. There are a
number of choices. Help may be a command, a button, a function which can be switched on
or off, or a separate application. A command (usually) requires the user to specify a topic,
and therefore assumes some knowledge, but may fit most consistently within the rest of the
interface. A help button is readily accessible and does not interfere with existing applications,
but may not always provide information specific to the user’s needs. However, if the help
button is a keyboard or mouse button, it can support context sensitivity, as we saw earlier.
The help function is flexible since it can be activated when required and disabled when not.
The separate application allows flexibility and multiple help styles but may interfere with the
user’s current application.

How is help displayed?


The second major decision that the designer must make is how the user will view the help. In
a windowed system it may be presented in a new window. In other systems it may use the
whole screen or part of the screen. Alternatively, help hints and prompts can be given in pop-
up boxes or at the command line level. The presentation style that is appropriate depends
largely on the level of help being offered and the space that it requires.

Effective presentation of help


Help screens and documentation should be designed in much the same way as an interface
should be designed, taking into account the capabilities and task requirements of the user. No
matter what technology is used to provide support, there are some principles for writing and
presenting it effectively. Help and tutorial material should be written in clear, familiar
language, avoiding jargon as much as possible. If paper manuals and tutorials exist, the
terminology should be consistent between these and the online support material. Instructional
material requires instructional language, and a help system should tell the user how to use the
system rather than simply describing the system. It should not make assumptions about what
the user knows in advance.

Implementation issues
Alongside the presentation issues the designer must make implementation decisions. Some of
these may be forced by physical constraints, others by the choices made regarding the user’s
requirements for help.

Another issue the designer must decide is how the help data is to be structured: in a single
file, a file hierarchy, a database? Again this will depend on the type of help that is required,
but any structure should be flexible and extensible – systems are not static and new topics
will inevitably need to be added to the help system. The data structure used will, to an extent,
determine the type of search or navigation strategy that is provided. Will users be able to
browse through the system or only request help on one topic at a time? The user may also

want to make a hard copy of part of the help system to study later (this is particularly true of
manuals and documentation). Will this facility be provided as part of the support system?
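As one illustration of the structuring choices above, help data can be organized as a hierarchy of topics addressed by a path, which keeps the structure flexible and easy to extend; the topic names below are invented.

```python
# Help data structured as a hierarchy rather than a single flat file:
# nested topics are addressed by a path, and new topics can be added
# to the tree without disturbing existing ones.
HELP_TREE = {
    "editing": {
        "copy":  "Select text, then press Ctrl+C.",
        "paste": "Press Ctrl+V to insert the copied text.",
    },
    "printing": {"setup": "Choose Print Setup from the File menu."},
}

def lookup(path):
    node = HELP_TREE
    for part in path.split("/"):
        if not isinstance(node, dict) or part not in node:
            return None          # topic not found: report rather than fail
        node = node[part]
    return node
```

The same tree also supports browsing (listing the keys at any node) as well as direct requests for a single topic, one of the navigation choices the designer must make.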

Finally, the designer should consider the authors of help material as well as its users. It is
likely that, even if the designer writes the initial help texts, these will be extended by other
authors at different times. Clear conventions and constraints on the presentation and
implementation of help facilitate the addition of new material.

7.12. Lecture Summary
In this lecture we have examined how input and output devices are matched to tasks, users
and environments; the purposes of visual and sound output; the four main types of
assistance users require; the requirements a help system should meet (availability,
accuracy, consistency, robustness, flexibility and unobtrusiveness); the main approaches to
providing user support, from command-based help to wizards; and the presentation and
implementation issues in designing user support systems.
7.13. Lecture review questions


1. Using your library facilities and the world wide web, investigate the benefits and
limitations of adaptive help systems. What examples of adaptive and adaptable help are
available and how useful are they?
2. What are the four main types of help that users may require? For each type, give an
example of a situation in which it would be appropriate.
3. Which usability principles are especially important in the design of help systems, and
why?
4. Describe some of the different approaches to providing user support systems, with
examples.
5. Applications are often supported by an online version of the paper documentation; in
some cases there is no paper documentation at all. What are the advantages of online
documentation? What are the disadvantages, and how can they be overcome?
6. Discuss the presentation issues involved in the design of effective and relevant help
systems.

7.14. Further Reading


Preece, J., et al. 1994. Human-Computer Interaction. Addison-Wesley.
Dix, A., et al. 1998. Human-Computer Interaction. Prentice Hall.

LECTURE 8: HUMAN COMPUTER INTERACTION DESIGN

Lecture Objectives
The aim of this lecture is to introduce human-computer interaction design, so that after
studying it you will be able to:
i. Describe the various types of interactions.
ii. Explain the HCI design principles
iii. Explain the HCI design process

8.1. Interaction Cycle and Framework


Interaction involves at least two participants: the user and the system. Norman’s model of
interaction is perhaps the most influential in Human–Computer Interaction, possibly because
of its closeness to our intuitive understanding of the interaction between human user and
computer. The user formulates a plan of action, which is then executed at the computer
interface. When the plan, or part of the plan, has been executed, the user observes the
computer interface to evaluate the result of the executed plan, and to determine further
actions.

Figure 4: Norman's model of interaction

The interactive cycle can be divided into two major phases: execution and evaluation. These
can then be subdivided into further stages, seven in all. The stages in Norman’s model of
interaction are as follows:
1. Establishing the goal.
2. Forming the intention.
3. Specifying the action sequence.
4. Executing the action.
5. Perceiving the system state.
6. Interpreting the system state.
7. Evaluating the system state with respect to the goals and intentions.
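For reference, the seven stages and their two phases can be written down directly as data; this is a transcription of the list above, not an executable model of cognition.

```python
# Norman's seven stages, grouped into the execution and evaluation phases.
STAGES = [
    ("execution",  "Establishing the goal"),
    ("execution",  "Forming the intention"),
    ("execution",  "Specifying the action sequence"),
    ("execution",  "Executing the action"),
    ("evaluation", "Perceiving the system state"),
    ("evaluation", "Interpreting the system state"),
    ("evaluation", "Evaluating the system state with respect to the goals"),
]

def phase(stage_name):
    # Look up which of the two major phases a given stage belongs to.
    return next(p for p, s in STAGES if s == stage_name)
```

Note that the first four stages make up the execution phase and the last three the evaluation phase of the cycle.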

Norman uses this model of interaction to demonstrate why some interfaces cause problems to
their users. He describes these in terms of the gulf of execution and the gulf of evaluation.
As we noted earlier, the user and the system do not use the same terms to describe the domain
and goals – remember that we called the language of the system the core language and the
language of the user the task language. The gulf of execution is the difference between the
user’s formulation of the actions to reach the goal and the actions allowed by the system. If
the actions allowed by the system correspond to those intended by the user, the interaction
will be effective. The interface should therefore aim to reduce this gulf. Correspondingly, the
gulf of evaluation is the distance between the system's presentation of its state and the user's
expectation of that state; the more easily the state can be perceived and interpreted, the
smaller this gulf.

Norman’s model is a useful means of understanding the interaction, in a way that is clear and
intuitive. It allows other, more detailed, empirical and analytic work to be placed within a
common framework. However, it only considers the system as far as the interface. It
concentrates wholly on the user’s view of the interaction. It does not attempt to deal with the
system’s communication through the interface. The interaction framework tries to address
this.

8.1.1. The interaction framework

Figure 5. General Interactive framework

The interaction framework attempts a more realistic description of interaction by including
the system explicitly, and identifies four main components in an interactive system – the
System, the User, the Input and the Output. Each component has its own language. In
addition to the User’s task language and the System’s core language, there are languages for
both the Input and Output components. Input and Output together form the Interface.

As the interface sits between the User and the System, there are four steps in the interactive
cycle, each corresponding to a translation from one component to another, as shown by the
labeled arcs in figure 6 below

Figure 6: Translation between components of Interactive Framework

The User begins the interactive cycle with the formulation of a goal and a task to achieve that
goal. The only way the user can manipulate the machine is through the Input, and so the task
must be articulated within the input language. The input language is translated into the core
language as operations to be performed by the System. The System then transforms itself as
described by the operations; the execution phase of the cycle is complete and the evaluation
phase now begins. The System is in a new state, which must now be communicated to the
User. The current values of system attributes are rendered as concepts or features of the
Output. It is then up to the User to observe the Output and assess the results of the interaction
relative to the original goal, ending the evaluation phase and, hence, the interactive cycle.
There are four main translations involved in the interaction: articulation, performance,
presentation and observation.
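The four translations can be pictured as a round trip through toy stand-in languages: articulation (task to input), performance (input to core), presentation (core to output) and observation (output back to the user's assessment). The representations below are illustrative only.

```python
# A toy round trip through the interaction framework's four translations.
def articulation(task):
    return {"command": task}                 # task language -> input language

def performance(inp):
    return f"exec:{inp['command']}"          # input language -> core language

def presentation(core):
    return core.replace("exec:", "done: ")   # core language -> output language

def observation(output):
    return "done" in output                  # output -> user's assessment of the goal
```

A full cycle such as `observation(presentation(performance(articulation("save"))))` mirrors the execution phase followed by the evaluation phase.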

ACM SIGCHI Curriculum Development Group have further developed the framework to
show other aspects related to HCI as shown below;

Fig 7. A framework of Human-Computer Interaction. Adapted from ACM
SIGCHI Curriculum Development Group

Ergonomics addresses issues on the user side of the interface, covering both input and output,
as well as the user’s immediate context. Dialog design and interface styles can be placed
particularly along the input branch of the framework, addressing both articulation and
performance.

Presentation and screen design relates to the output branch of the framework. The entire
framework can be placed within a social and organizational context that also affects the
interaction. Each of these areas has important implications for the design of interactive
systems and the performance of the user

8.2. Ergonomics
Ergonomics (or human factors) is traditionally the study of the physical characteristics of the
interaction: how the controls are designed, the physical environment in which the interaction
takes place, and the layout and physical qualities of the screen. A primary focus is on user
performance and how the interface enhances or detracts from this. In seeking to evaluate
these aspects of the interaction, ergonomics will certainly also touch upon human psychology
and system constraints. It is a large and established field, which is closely related to but
distinct from HCI, and full coverage would demand a book in its own right.

A few of the issues addressed by ergonomics include:


1. Arrangement of controls and displays: Sets of controls and parts of the display should be
grouped logically to allow rapid access by the user. The exact organization that this will

suggest will depend on the domain and the application, but possible organizations include
the following:
 functional controls and displays are organized so that those that are functionally
related are placed together;
 sequential controls and displays are organized to reflect the order of their use in a
typical interaction (this may be especially appropriate in domains where a particular
task sequence is enforced, such as aviation);
 frequency controls and displays are organized according to how frequently they are
used, with the most commonly used controls being the most easily accessible.
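Frequency-based organization, for instance, amounts to ordering controls so that the most commonly used commands are the most accessible; the usage counts below are invented for illustration.

```python
# Order menu items by usage frequency: the most-used commands come first,
# making them the most easily accessible. Counts are hypothetical.
def by_frequency(counts):
    return sorted(counts, key=counts.get, reverse=True)

usage = {"Open": 120, "Save": 300, "Export": 8, "Print": 45}
```

With these counts, `by_frequency(usage)` places Save and Open at the top of the menu and relegates the rarely used Export to the bottom.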
2. The physical environment of the interaction: As well as addressing physical issues in the
layout and arrangement of the machine interface, ergonomics is concerned with the
design of the work environment itself. Where will the system be used? By whom will it
be used? Will users be sitting, standing or moving about? Again, this will depend largely
on the domain and will be more critical in specific control and operational settings than in
general computer use.

However, the physical environment in which the system is used may influence how well
it is accepted and even the health and safety of its users. It should therefore be considered
in all design.

3. Health issues: Perhaps we do not immediately think of computer use as a hazardous
activity but we should bear in mind possible consequences of our designs on the health
and safety of users. Leaving aside the obvious safety risks of poorly designed safety-
critical systems (aircraft crashing, nuclear plant leaks and worse), there are a number of
factors that may affect the use of more general computers. Again these are factors in the
physical environment that directly affect the quality of the interaction and the user’s
performance:
 Physical position Users should not be expected to stand for long periods and, if
sitting, should be provided with back support.
 Temperature Although most users can adapt to slight changes in temperature without
adverse effect, extremes of hot or cold will affect performance and, in excessive
cases, health.
 Lighting The lighting level will again depend on the work environment. However,
adequate lighting should be provided to allow users to see the computer screen

without discomfort or eyestrain. The light source should also be positioned to avoid
glare affecting the display.
 Noise Excessive noise can be harmful to health, causing the user pain, and in acute
cases, loss of hearing. Noise levels should be maintained at a comfortable level in the
work environment.
 Time the time users spend using the system should also be controlled.
4. The use of color: Colors used in the display should be as distinct as possible and the
distinction should not be affected by changes in contrast. Blue should not be used to
display critical information. If color is used as an indicator it should not be the only cue:
additional coding information should be included. The colors used should also
correspond to common conventions and user expectations. Red, green and yellow are
colors frequently associated with stop, go and standby respectively. Therefore, red may
be used to indicate emergency and alarms; green, normal activity; and yellow, standby
and auxiliary function. These conventions should not be violated without very good
cause.
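The rule that colour should never be the only cue can be illustrated by pairing each state with both a colour and a redundant text label; the state names below are illustrative.

```python
# Colour conventions with a redundant cue: each state carries both a colour
# (following the red/green/yellow convention) and a text label, so colour is
# never the only indicator of the system state.
STATUS = {
    "emergency": ("red",    "ALARM"),
    "normal":    ("green",  "OK"),
    "standby":   ("yellow", "STANDBY"),
}

def render(state):
    colour, label = STATUS[state]
    return f"[{colour}] {label}"
```

A colour-blind user who cannot distinguish the red and green indicators can still read ALARM or OK, which is exactly the additional coding the guideline calls for.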

8.3. Interaction Styles


The fundamental interaction styles and interfaces used are;
8.3.1. Command Line Languages
This category covers interaction in which the user types commands to the computer in
response to a prompt indicating that it is ready to accept input. It provides a means of
expressing instructions to the computer directly, using function keys, single characters,
abbreviations or whole-word commands. Command line interfaces are powerful in that they
offer direct access to system functionality, and commands can be combined to apply a
number of tools to the same data. Their disadvantage is that commands are usually difficult
to learn and use: they rely on cryptic keywords and a strict syntax which the user must know
before using the system, which tends to increase the rate of errors. Commands must be
remembered, with only mnemonics available as cues, so command line interfaces are better
suited to expert users than to novices.
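The expert bias of command lines can be illustrated with a toy interpreter that accepts unambiguous abbreviations and reports syntax errors; all command names here are invented.

```python
# A toy command interpreter: commands must be recalled (with abbreviations
# allowed only when unambiguous), and anything else yields an error message.
COMMANDS = {"copy": "copying", "delete": "deleting", "list": "listing"}

def run(line):
    parts = line.split()
    if not parts:
        return "error: no command given"
    matches = [c for c in COMMANDS if c.startswith(parts[0])]
    if len(matches) != 1:
        return f"error: unknown or ambiguous command {parts[0]!r}"
    return COMMANDS[matches[0]]
```

An expert who knows the command set can type `del` and be understood, while a novice who guesses a wrong keyword gets only an error, which is exactly the usability trade-off described above.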

8.3.2. Menus
Menus are defined as a set of on-screen options for choosing an action or entering data,
i.e. “a set of options displayed on the screen where the selection and execution of
one (or more) of the options results in a change in the state of the interface”. There are three
types of menus:
1. Pull-down menus
2. Pop-up menus
3. Hierarchical menus

Unlike command-driven systems, menus have the advantage that users do not have to
remember the item they want, they only need to recognize it” The advantage of using menus
is that user needs to recognize rather than recall objects. The menu options need to be
grouped logically and meaningful, so the user could easily recognize the
needed option.
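The recognition-over-recall advantage can be sketched as follows: the system displays the options, and the user only supplies a number. The option labels are hypothetical:

```python
# Sketch of a numbered menu: the user recognizes an option on screen
# rather than recalling a command name. Option labels are hypothetical.
OPTIONS = ["Open file", "Save file", "Print", "Quit"]

def render_menu(options):
    """Display every option so the user only has to recognize it."""
    return "\n".join(f"{i + 1}. {label}" for i, label in enumerate(options))

def select(options, choice):
    """Return the chosen option, or None for an out-of-range selection."""
    if 1 <= choice <= len(options):
        return options[choice - 1]
    return None
```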

8.3.3. Direct manipulation


Direct manipulation interfaces are very popular and successful, especially with new users,
because they employ manipulations analogous to everyday human skills (pointing, grabbing,
moving objects in space) rather than trained behaviors, and users have great control over the
display: as they select items, the details appear in adjacent windows.

Direct manipulation interfaces “present a set of objects on a screen and provide the user a
repertoire of manipulations that can be performed on any of them”. Each operation on the
interface is performed directly and graphically. This reduces syntax errors (for example, you
cannot compile non-existent code, since it is not on the screen when you click the compile
icon) and speeds up task performance. Advantages of direct manipulation of objects:
a) Novices can learn basic functionality quickly, usually through a demonstration by a
more experienced user.
b) Experts can work extremely rapidly to carry out a wide range of tasks.
c) Knowledgeable intermittent users can retain operational concepts.
d) Error messages are rarely needed.
e) Users can see immediately if their actions are furthering their goals, and if not, they
can simply change the direction of their activity.

8.3.4. Form fill-in


Form-filling interfaces are used primarily for data entry but can also be useful in data
retrieval applications. The user is presented with a display resembling a paper form, with
slots to fill in. It is “the simplest style of interaction that consists of the user being required to
answer questions or fill in numbers in a fixed format rather like filling out a form”. In this
style, the only kind of user interaction is the provision of information, which makes it useful
for data entry into applications. Spreadsheets can also be considered a sophisticated variation
of form filling.

Most form-filling interfaces allow easy movement around the form and allow some fields to
be left blank. They also require correction facilities, as users may change their minds or make
a mistake about the value that belongs in each field. The dialog style is useful primarily for
data entry applications and, as it is easy to learn and use, for novice users. However,
assuming a design that allows flexible entry, form filling is also appropriate for expert users.
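The behaviors described above (easy movement around the form, blank fields, and correction of earlier entries) can be sketched as a small form state model. The field names are hypothetical:

```python
# Sketch of form fill-in state: fields may be left blank, revisited,
# and corrected before submission. Field names are hypothetical.
class Form:
    def __init__(self, fields):
        self.values = {f: "" for f in fields}   # all slots start blank

    def fill(self, field, value):
        """Any field may be set, or corrected later, in any order."""
        if field not in self.values:
            raise KeyError(field)
        self.values[field] = value

    def blank_fields(self):
        """Fields the user has chosen (so far) to leave empty."""
        return [f for f, v in self.values.items() if v == ""]
```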

8.3.5. Natural Language


Perhaps the most attractive means of communicating with computers, at least at first glance,
is by natural language. Users, unable to remember a command or lost in a hierarchy of
menus, may long for the computer that is able to understand instructions expressed in
everyday words! Natural language understanding, both of speech and written input, is the
subject of much interest and research. Unfortunately, however, the ambiguity of natural
language makes it very difficult for a machine to understand. Language is ambiguous at a
number of levels.

This interaction style is advantageous for users who do not have access to keyboards or who
have limited computing experience. However, the ambiguity of natural language may cause
unexpected effects and makes it very difficult for a computer to understand the user's intent.

8.3.6. Question/Answer and Query Dialogue


Question and answer dialog is a simple mechanism for providing input to an application in a
specific domain. The user is asked a series of questions (mainly with yes/no responses,
multiple choice, or codes) and so is led through the interaction step by step. An example of
this would be web questionnaires. These interfaces are easy to learn and use, but are limited
in functionality and power. As such, they are appropriate for restricted domains (particularly
information systems) and for novice or casual users.
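The step-by-step, fixed-response character of this style can be sketched as follows. The questions and allowed answers are hypothetical examples of a simple yes/no questionnaire:

```python
# Sketch of a question-and-answer dialog: the user is led through a
# fixed sequence of questions with restricted responses (hypothetical).
QUESTIONS = [
    ("Are you over 18?", {"yes", "no"}),
    ("Do you agree to the terms?", {"yes", "no"}),
]

def ask_all(answers):
    """Validate answers against the fixed question sequence, in order."""
    results = {}
    for (question, allowed), answer in zip(QUESTIONS, answers):
        if answer not in allowed:
            # Restricted responses keep the dialog simple but limit power.
            raise ValueError(f"invalid answer to: {question}")
        results[question] = answer
    return results
```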

Query languages, on the other hand, are used to construct queries to retrieve information from
a database. They use natural-language-style phrases, but in fact require specific syntax, as
well as knowledge of the database structure. Queries usually require the user to specify an
attribute or attributes for which to search the database, as well as the attributes of interest to
be displayed. This is straightforward where there is a single attribute, but becomes complex
when multiple attributes are involved. Most query languages do not provide direct
confirmation of what was requested, so that the only validation the user has is the result of the
search. The effective use of query languages therefore requires some experience. A
specialized example is the web search engine.
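The two parts of a query, the attribute to search on and the attributes of interest to display, can be sketched over an in-memory table. The records and field names are hypothetical:

```python
# Sketch of a query: the user names one attribute to search on and the
# attributes to display. The record structure is a hypothetical example.
BOOKS = [
    {"title": "HCI", "author": "Dix", "year": 1998},
    {"title": "Design", "author": "Norman", "year": 1988},
]

def query(records, where, show):
    """Return only the requested attributes of the matching records."""
    field, value = where
    return [{k: r[k] for k in show} for r in records if r[field] == value]
```

Note that the only confirmation the user gets is the result itself, which is why experience with the data structure matters.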

8.3.7. WIMP interface


WIMP stands for Windows, Icons, Menus, and Pointers (sometimes windows, icons, mice,
and pull-down menus). These interfaces are probably the most popular and influential for
interactive environments. Windows are areas of the screen that behave as if they were
independent terminals in their own right. An icon is a small picture used to represent a closed
window, file, or any other object. The pointer is an important component of a WIMP interface,
since it supports the pointing, clicking, pressing, dragging and selection of objects on the
screen, which can then be moved, edited, explored and executed as best suits the user's
purpose.

Other tools of computer interface design include menus, dialog boxes, check boxes, radio
buttons and so on. These make use of visualization methods and computer graphics to
provide a more accessible interface than command-line-based displays. The fundamental goal
of WIMP designs is to give the user a meaningful working metaphor, for example an office
or ‘desktop’ representation, as opposed to the command-line interfaces. Its advantages are
general applicability, making functions explicit, and providing immediate feedback.

Humans are highly attuned to images and visual information, which can communicate some
kinds of information much more rapidly and effectively than any other method; as the saying
goes, “a picture is worth a thousand words”.

8.3.8. Point-and-click interfaces


In most multimedia systems and in web browsers, virtually all actions take only a single click
of the mouse button. You may point at a city on a map and when you click a window opens,
showing you tourist information about the city. You may point at a word in some text and
when you click you see a definition of the word. You may point at a recognizable iconic
button and when you click some action is performed.

This point-and-click interface style is obviously closely related to the WIMP style. It clearly
overlaps in the use of buttons, but may also include other WIMP elements. However, the
philosophy is simpler and more closely tied to ideas of hypertext.

8.3.9. Buttons
Buttons are individual and isolated regions within a display that can be selected by the user to
invoke specific operations. These regions are referred to as buttons because they are
purposely made to resemble the push buttons you would find on a control panel. ‘Pushing’
the button invokes a command, the meaning of which is usually indicated by a textual label
or a small icon. Buttons can also be used to toggle between two states, displaying status
information such as whether the current font is italicized or not in a word processor, or
selecting options on a web form. Such toggle buttons can be grouped together to allow a user
to select one feature from a set of mutually exclusive options, such as the size in points of the
current font. These are called radio buttons, since the collection functions much like the old-
fashioned mechanical control buttons on car radios. If a set of options is not mutually
exclusive, such as font characteristics like bold, italics and underlining, then a set of toggle
buttons can be used to indicate the on/off status of the options. This type of collection of
buttons is sometimes referred to as check boxes.

8.3.10. Toolbars
Many systems have a collection of small buttons, each with icons, placed at the top or side of
the window and offering commonly used functions. The function of this toolbar is similar to
a menu bar, but as the icons are smaller than the equivalent text more functions can be
simultaneously displayed. Sometimes the content of the toolbar is fixed, but often users can
customize it, either changing which functions are made available, or choosing which of
several predefined toolbars is displayed.

8.3.11. Palettes
In many application programs, interaction can enter one of several modes. The defining
characteristic of modes is that the interpretation of actions, such as keystrokes or gestures
with the mouse, changes as the mode changes. Palettes are a mechanism for making the set of
possible modes and the active mode visible to the user. A palette is usually a collection of
icons that are reminiscent of the purpose of the various modes. An example in a drawing
package would be a collection of icons to indicate the pixel color or pattern that is used to fill
in objects, much like an artist’s palette for paint.

Some systems allow the user to create palettes from menus or toolbars. In the case of pull-
down menus, the user may be able to ‘tear off’ the menu, turning it into a palette showing the
menu items. In the case of toolbars, the user may be able to drag the toolbar away from its
normal position and place it anywhere on the screen. Tear-off menus are usually those that
are heavily graphical anyway, for example line-style or color selection in a drawing package.
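The defining property of modes, that the same user action is interpreted differently depending on the active mode, can be sketched as a mode-dispatch table. The modes and messages are hypothetical:

```python
# Sketch of modes in a drawing package: the same click gesture is
# interpreted differently per active mode. Modes are hypothetical.
MODES = {
    "pen": lambda x, y: f"draw point at ({x}, {y})",
    "erase": lambda x, y: f"erase point at ({x}, {y})",
}

class Canvas:
    def __init__(self):
        self.mode = "pen"     # the palette makes the active mode visible

    def click(self, x, y):
        """Interpret the same gesture according to the current mode."""
        return MODES[self.mode](x, y)
```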

8.3.12. Virtual Reality


“Virtual environments and virtual realities typically offer a sense of direct physical presence,
sensory cues in three dimensions, and a natural form of interaction (for example via natural
gestures)”.

8.3.13. Dialog boxes


Dialog boxes are information windows used by the system to bring the user’s attention to
some important information, possibly an error or a warning used to prevent a possible error.
Alternatively, they are used to invoke a sub-dialog between user and system for a very
specific task that will normally be embedded within some larger task.

8.4. Interaction Design


This is the design of computers, appliances, machines, mobile communication devices,
software applications, and websites with the focus on the user’s experience and interaction.
The goal of user interface design is to make the user's interaction as simple and efficient as
possible in terms of accomplishing user goals (an approach often called user-centered
design). Good user interface design facilitates finishing the task at hand without drawing
unnecessary attention to itself.

The design process requires a solid understanding of the theory behind successful human-
computer interaction, as well as an awareness of established procedures for good user
interface design, including the 'usability engineering' process. Design requires understanding
the specific task domain, users and requirements.

8.5. Interaction Design versus Interface Design
Interface design deals with the process of developing a method for two (or more) modules in
a system to connect and communicate. These modules can apply to hardware, software or the
interface between a user and a machine. Interface design suggests an interface between code
on one side and users on the other side, passing messages between them. It gives
programmers a license to code as they please, because the interface will be slapped on when
they are done.

Interaction design is heavily focused on satisfying the needs and desires of the people who
will use the product. It is about shaping digital things for people's use, i.e., it addresses
function, behavior, and final presentation.

8.6. Why Good HCI Design


Good interface design requires diverse knowledge of systems design processes and user
characteristics, including:
a) Users' physical characteristics, limitations, and disabilities: Interface designers need to
understand the characteristics of their users. For example, an Automatic Teller Machine
(ATM) interface must be accessible by elderly, young, and disabled bank customers.
b) Speed and efficiency needs: Many interfaces need to be quickly accessible and effective.
For example, military pilots must have cockpit interfaces that allow quick and efficient
interaction.
c) Reliability issues: Interfaces that affect human lives need to provide reliable and readable
information. For example, if an interface provides information in a nuclear power plant
system or a hospital operating room, the data quality and presentation need to be
accurate and reliable.
d) Security concerns: Interfaces must have effective security and access mechanisms as
required by an organization. For example, a bank Automatic Teller Machine (ATM) must
allow bank customers to securely access their accounts and also keep out potential
hackers.
e) Level of usability and functionality required: Interfaces for users with little computing
experience are more simply structured than interfaces designed for expert level users. For
example, many interfaces offer advanced options and features for more expert users.
f) Frequency of product use - Interfaces in high use computer systems need to be more
reliable and effective to cater for fast interaction and a variety of users. For example, a
bank Automatic Teller Machine (ATM) is used by hundreds of customers every day. The
interface must allow for quick and effective interaction.
g) Users' past experience with same or similar product - Many interfaces and systems
provide similar features. For example, many bank Automatic Teller Machines (ATMs)
provide identical functions and use similar banking terminology. The concepts of
"withdrawal" and "deposit" that appear on ATM interfaces are familiar to bank users.
h) Level of cognitive or mental effort required from the user - Many complex computer
packages require a high level of financial or accounting knowledge. For example, the
interface to the Quicken software requires knowledge of financial practices and
accounting methods.
i) Users' tolerance for error - Some interfaces support actions that have serious
consequences if errors occur. For example, in a hospital emergency room, the medical
computer interfaces need to be accurate, reliable, and without error, or patients may die.
j) Users' patience and motivation for learning - Many interfaces are designed to allow users
effective interaction with little learning required. For example, bank Automatic Teller
Machines (ATMs) are simple menu systems that are designed to allow quick and easy
learning.
k) Cultural and language aspects - Interface designers must take account of users' cultural
and language differences. For example, many interfaces that are designed for users in the
multicultural United States society provide interaction in English, Spanish, Chinese, or
other languages.

Good human computer interaction work and design is important for obtaining these
measurable outcomes:
 Increasing worker productivity
 Increasing worker satisfaction and commitment
 Reducing training costs
 Reducing errors during interface interaction
 Reducing production costs

8.6.1. Increase in Worker Productivity, Satisfaction, and Commitment


Good human computer interaction work and design is important for increasing worker
productivity. If workers have problems using computer interfaces, due to poor design, their
work effectiveness can be reduced. Effectively designed interfaces that offer customization
for users can increase user's work satisfaction. Good human computer interaction work and
design is important for increasing worker satisfaction. Improved interfaces design can lead to
increased worker satisfaction and allow users to achieve their work goals.

Good human computer interaction work and design is important for increasing job
commitment by reducing worker turnover. Poor quality interfaces can lead to stress and strain
on users both mentally and physically. Users may experience sore muscles or eye strain due
to poor HCI interfaces and computer design. Workers may leave their jobs if they are
dissatisfied with their HCI experience.

8.6.2. Reduction in Training Costs, Errors, and Production Costs


Good human computer interaction work and design is important for reducing training costs.
Poor HCI interfaces may require extensive and expensive user training. Good interfaces with
effective online or manual training documents and user system guides can help users to
master their system interaction quickly. Good human computer interaction work and design
is important for reducing errors during interface interaction. Effective interfaces and user
training can reduce errors in system use. For example, an effective retail interface can reduce
the time taken to complete a sale.

Good human computer interaction work and design is important for reducing production
costs. Effective interfaces allow workers to produce better quality products and services. For
example, an effective Website can assist users to view products and services offered by
companies, including better customer services.

In order to design good human computer interaction, we have to choose the type of interface
and interaction style appropriately, so that it fits the class of users the system is designed for,
while taking human factors into consideration. The following is recommended:
a) to investigate the advantages and disadvantages of interaction styles and interface
types that best support the activities and styles of learning of users the system is
aimed at;
b) to choose the type of interface and interaction styles that best supports the system
goals;
c) to choose the interaction styles that are compatible with user attributes and that support
the users' needs, which means choosing the styles that are more advantageous for the
intended users (for example, in a system for learning and practicing programming, the
direct manipulation style is more advantageous); and
d) to define the user class (experts, intermediate or novices) that the system is designed
for, where the human factors must be taken in consideration.

8.7. Design Principles


A number of design principles have been promoted. The best known are concerned with how
to determine what users should see and do when carrying out their tasks using an interactive
product. By incorporating HCI design principles, we can ensure better design guidance for
screen layout, menu organization, or color usage according to users' attributes. Here we
briefly describe the most common ones:

1. Visibility: The more visible functions are, the more likely users will be able to know what
to do next. In contrast, when functions are “out of sight,” it is more difficult to find them
and to know how to use them.
2. Affordance: Affordance is a term used to refer to an attribute of an object that allows
people to know how to use it. For example, a mouse button invites pushing by the way it
is physically constrained in its plastic shell. At a very simple level, to afford means “to
give a clue.” When the affordances of a physical object are perceptually obvious it is easy
to know how to interact with it.
3. Constraints: The design concept of constraining refers to determining ways of restricting
the kind of user interaction that can take place at a given moment. There are various ways
this can be achieved. A common design practice in graphical user interfaces is to
deactivate certain menu options by shading them, thereby restricting the user to only
actions permissible at that stage of the activity. One of the advantages of this form of
constraining is that it prevents the user from selecting incorrect options and thereby reduces
the chance of making a mistake. The use of different kinds of graphical representations
can also constrain a person’s interpretation of a problem or information space. There are
three types of constraints:
a) Physical constraints: Physical constraints refer to the way physical objects restrict
the movement of things

b) Logical constraints: Logical constraints rely on people’s understanding of the way
the world works. They rely on people’s common-sense reasoning about actions and
their consequences.
c) Cultural constraints: Cultural constraints rely on learned conventions, like the use of
red for warning, the use of certain kinds of signals for danger, and the use of the
smiley face to represent happy emotions. Most cultural constraints are arbitrary in the
sense that their relationship with what is being represented is abstract, and could have
equally evolved to be represented in another form (e.g., the use of yellow instead of
red for warning). Accordingly, they have to be learned. Once learned and accepted by
a cultural group, they become universally accepted conventions. Two universally
accepted interface conventions are the use of windowing for displaying information
and the use of icons on the desktop to represent operations and documents.
4. Mapping: This refers to the relationship between controls and their effects in the world.
Nearly all artifacts need some kind of mapping between controls and effects, whether it is
a flashlight, car, power plant, or cockpit. The mapping of the relative position of controls
and their effects is also important.
5. Consistency: This refers to designing interfaces to have similar operations and use similar
elements for achieving similar tasks. In particular, a consistent interface is one that
follows rules, such as using the same operation to select all objects.
6. Feedback: Related to the concept of visibility is feedback. Feedback is about sending
back information about what action has been done and what has been accomplished,
allowing the person to continue with the activity. Various kinds of feedback are available
for interaction design-audio, tactile, verbal, visual, and combinations of these. Deciding
which combinations are appropriate for different kinds of activities and interactivities is
central. Using feedback in the right way can also provide the necessary visibility for user
interaction.
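The constraint principle (item 3 above) can be sketched concretely: options not permissible in the current state are disabled (greyed out), so the user simply cannot select them. The document states and menu options below are hypothetical:

```python
# Sketch of the constraint principle: menu options not permissible in
# the current state are disabled. States and options are hypothetical.
def available_options(document_open):
    options = {"Open": True, "Save": document_open, "Close": document_open}
    # Only enabled options can be invoked, preventing invalid actions.
    return [name for name, enabled in options.items() if enabled]
```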

8.8. Lecture Summary

This lecture examined the main interaction styles (command line, menus, direct manipulation,
form fill-in, natural language, question/answer and query dialog, and WIMP and point-and-click
interfaces), distinguished interface design from interaction design, discussed why good HCI
design matters, and introduced six core design principles: visibility, affordance, constraints,
mapping, consistency, and feedback.
8.9. Lecture Review Questions


1. Describe (in words as well as graphically) the interaction framework introduced. Show
how it can be used to explain problems in the dialog between a user and a computer.

2. Describe briefly four different interaction styles used to accommodate the dialog between
user and computer.

8.10. Further Reading


1. G. Salvendy, Handbook of Human Factors and Ergonomics, John Wiley, 1997.
2. Preece, J., et al., Human-Computer Interaction, Addison-Wesley, 1994.
3. R. W. Bailey, Human Performance Engineering: A Guide for System Designers, Prentice
Hall, 1982.
4. D. A. Norman, The Psychology of Everyday Things, Basic Books, 1988.

LECTURE 9: HCI DESIGN PROCESS

Lecture Objectives
The aim of this lecture is to introduce the HCI design process. After studying it you will be
able to:
i. Describe the HCI Design Process.
ii. Explain the HCI Design Approaches
iii. Describe Web Interface Design Considerations

9.1. The Design Process


Design is a goal-directed problem-solving activity informed by intended use, target domain,
materials, cost, and feasibility.

The HCI design process can include the following steps:
a) Analyzing the users and determining their needs.
 Users can be categorized as Novice and first time users, Knowledgeable intermittent
uses or Expert frequent users.
 Choose the interaction style by weighing its advantages and disadvantages against the
users' task specifications/requirements.

b) Drafting an initial design based on the users' needs analysis.
Create a design using:
 Principles of interaction,
 Graphics design principles,
 Following guidelines.
c) Testing the initial design with users in an HCI testing laboratory or in a real user work
environment.
d) Developing a prototype system based on the initial design and users' feedback.
e) Testing the prototype system with users in an HCI testing laboratory or real user work
environment.
f) Designing and refining each specific interface and screen.
g) Testing the interface with users in an HCI testing laboratory or a real user work
environment.
h) Refining the interface based on users' feedback.
i) Implementing the interface.

9.2. HCI Design Approaches


There are four Human-Computer Interaction (HCI) design approaches that may be applied to
user interface designs to develop user-friendly, efficient, and intuitive user experiences: the
Anthropomorphic Approach, the Cognitive Approach, the Predictive Modeling Approach,
and the Empirical Approach. One or more of these approaches may be used in a single user
interface design.
Anthropomorphic Approach

The anthropomorphic approach to human-computer interaction involves designing a user
interface to possess human-like qualities. For instance, an interface may be designed to
communicate with users in a human-to-human manner, as if the computer empathizes with
the user. This approach makes use of:
Affordances: Human affordances are perceivable potential actions that a person can do with
an object. In terms of HCI, icons, folders, and buttons afford mouse-clicking, scrollbars
afford sliding a button to view information off-screen, and drop-down menus show the user a
list of options from which to choose.  Similarly, pleasant sounds are used to indicate when a
task has completed, signaling that the user may continue with the next step in a process.
Examples of this are notifications of calendar events, new emails, and the completion of a file
transfer.
Constraints: Constraints complement affordances by indicating the limitations of user actions.
A grayed-out menu option and an unpleasant sound (sometimes followed by an error
message) indicate that the user cannot carry out a particular action. Affordances and
constraints can be designed to non-verbally guide user behaviors through an interface and
prevent user errors in a complex interface.
Cognitive Approach
The cognitive approach to human-computer interaction considers the abilities of the human
brain and sensory-perception in order to develop a user interface that will support the end
user.
Metaphoric Design: Using metaphors can be an effective way to communicate an abstract
concept or procedure to users, as long as the metaphor is used accurately. Computers use a
“desktop” metaphor to represent data as document files, folders, and applications. Metaphors
rely on a user’s familiarity with another concept, as well as human affordances, to help users
understand the actions they can perform with their data based on the form it takes. For
instance, a user can move a file or folder into the “trashcan” to delete it.

A benefit of using metaphors in design is that users who can relate to the metaphor are able to
learn to use a new system very quickly. A potential problem can ensue, however, when users
expect a metaphor to be fully represented in a design, and in reality, only part of the metaphor
has been implemented.

Attention and Workload Models: When designing an interface to provide good usability, it is
important to consider the user’s attention span, which may be based on the environment of
use, and the perceived mental workload involved in completing a task. Typically, users can
focus well on one-task-at-a-time. For example, when designing a web-based form to collect
information from a user, it is best to contextually collect information separately from other
information. The form may be divided into “Contact Information” and “Billing Information”,
rather than mixing the two and confusing users.

By “chunking” this data into individual sections or even separate pages when there is a lot of
information being collected, the perceived workload is also reduced. If all the data were
collected on a single form that makes the user scroll the page to complete, the user may
become overwhelmed by the amount of work that needs to be done to complete the form, and
he may abandon the website. Workload can be measured by the amount of information being
communicated to each sensory system (visual, auditory, etc.) at a given moment. Some
websites incorporate Adobe Flash in an attempt to impress the user. If a Flash presentation
does not directly support a user’s task, the user’s attention may become distracted by too
much auditory and visual information.

Human Information Processing Model: Human Information Processing (HIP) theory
describes the flow of information from the world, into the human mind, and back into the
world. When a human pays attention to something, the information first gets encoded based
on the sensory system that channeled the information (visual, auditory, haptic, etc.). Next, the
information moves into Working Memory, formerly known as Short-Term memory. Working
Memory can hold a limited amount of information for up to approximately 30 seconds.
Repeating or rehearsing information may increase this duration. After Working Memory, the
information may go into Long-Term Memory or simply be forgotten. Long-Term Memory is
believed to be unlimited, relatively permanent memory storage. After information has been
stored in long-term memory, humans can retrieve that information via recall or recognition.
The accuracy of information recall is based on the environmental conditions and the way that
information was initially encoded by the senses. If a human is in a similar sensory experience
at the time of memory recall as he was during the encoding of a prior experience, his recall of
that experience will be more accurate and complete.
Empirical Approach
The empirical approach to HCI is useful for examining and comparing the usability of
multiple conceptual designs. This testing may be done during pre-production by
counterbalancing design concepts and conducting usability testing on each design concept.
Often, users will appreciate specific elements of each design concept, which may lead to the
development of a composite conceptual design to test.
Human Task Performance Measures: In addition to a qualitative assessment of user
preferences for a conceptual design, measuring users’ task performance is important for
determining how intuitive and user-friendly a web page is. A researcher who is familiar with
the tasks the web page has been designed to support will develop a set of test tasks that relate
to the task goals associated with the page. Users may be given one or more conceptual
designs to test in a lab setting to determine which is more user-friendly and intuitive. User
performance can be assessed absolutely, i.e., the user accomplishes or fails to complete a
task, as well as relatively, based on pre-established criteria.

A/B Testing: If two of the three design concepts were rated highly during user testing, it may
advantageous to conduct an A/B Test during post-production. One way to do this is to set up
a Google Analytics account, which allows a researcher to set up multiple variations of a web
page to test. When a user visits the website, Google will display one variation of the web
page according to the end user’s IP address. As the user navigates the website, Google tracks
the user’s clicks to see if one version of the web page produces more sales than another
version. Other “conversion” goals may be tracked as well, such as registering a user account
or signing up for a newsletter.
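The core mechanism behind A/B testing, deterministically assigning each visitor to one variation so the same user always sees the same page, can be sketched as follows. The hashing scheme is a hypothetical illustration of bucketing, not Google Analytics' actual method:

```python
# Sketch of A/B variant assignment: each visitor is deterministically
# bucketed, so repeat visits show the same variation. Hypothetical scheme.
import hashlib

def assign_variant(user_id, variants=("A", "B")):
    """Hash a stable user identifier into one of the variants."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Conversion events (a sale, a registration, a newsletter sign-up) would then be counted per variant to compare the designs.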

Predictive Modeling Approach


a). Goal-oriented design
Goal-oriented design (or Goal-Directed design) is concerned most significantly with
satisfying the needs and desires of the people who will interact with a product or service. It is
based on the concepts of GOMS.
GOMS is a method for examining the individual components of a user experience in terms of
the time it takes a user to most efficiently complete a goal. GOMS is an acronym that stands
for Goals, Operators, Methods, and Selection Rules.
• Goals are defined as what the user desires to accomplish on the website.
• Operators are the atomic-level actions that the user performs to reach a goal, such as
motor actions, perceptions, and cognitive processes.
• Methods are procedures that include a series of operators and sub-goals that the user
employs to accomplish a goal.
• Selection Rules refer to a user's personal decision about which method will work best
in a particular situation in order to reach a goal.
The GOMS model is based on human information processing theory, and certain
measurements of human performance are used to calculate the time it takes to complete a
goal.
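As a rough illustration, the keystroke-level model (KLM), a simplified variant of GOMS, assigns an average time to each operator and sums them over a method. The operator times below are the commonly cited Card, Moran and Newell estimates, and the two competing methods are hypothetical.

```python
# Commonly cited keystroke-level operator times (rough planning figures,
# not measurements of any particular interface).
OPERATOR_TIMES = {
    "K": 0.20,  # press a key or button
    "P": 1.10,  # point with a mouse to a target
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation for an action
}

def predict_seconds(method):
    """Sum operator times for a method written as a sequence of operators."""
    return sum(OPERATOR_TIMES[op] for op in method)

# Two hypothetical methods for the same goal (deleting a file):
menu_method = ["M", "P", "K", "M", "P", "K"]      # point-and-click via menus
shortcut_method = ["M", "K"]                      # a single keyboard shortcut

print(round(predict_seconds(menu_method), 2))     # 5.3
print(round(predict_seconds(shortcut_method), 2)) # 1.55
```

Comparing the predicted times suggests why Shneiderman's advice to offer shortcuts to frequent users pays off: the shortcut method eliminates pointing and extra mental preparation.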
b) Personas
Goal-oriented design advocates the use of personas, which are created after interviewing a
significant number of users.
The aim of a persona is to "develop a precise description of our user and what he wishes to
accomplish." The method is to fabricate users, with names and back stories, who represent
real users of a given product. These users are not so much a fabrication as a product of the
investigation process. The reason for constructing back stories for a persona is to make them
believable, such that they can be treated as real people and their needs can be argued for.
9.3. HCI Design Rules, Principles, Standards and Guidelines
One of the central problems that must be solved in a user-centered design process is how to
provide designers with the ability to determine the usability consequences of their design
decisions. We require design rules, which are rules a designer can follow in order to increase
the usability of the eventual software product. Principles are abstract design rules, with high
generality and low authority. Standards are specific design rules, high in authority and limited
in application, whereas guidelines tend to be lower in authority and more general in
application.
9.3.1. HCI Design Rules
Design rules are rules a designer can follow in order to increase the usability of the eventual
software product. These rules can be classified along two dimensions: the rule's authority and
its generality. Authority indicates whether the rule must be followed in design or is merely
suggested. Generality indicates whether the rule can be applied to many design situations or
is focused on a more limited application situation. Rules also vary in their level of
abstraction, with some abstracting away from the detail of the design solution and others
being quite specific.
Design rules for interactive systems can be supported by psychological, cognitive,
ergonomic, sociological, economic or computational theory. Shneiderman’s eight golden
rules provide a convenient and succinct summary of the key principles of interface design.
They are intended to be used during design but can also be applied, like Nielsen’s heuristics,
to the evaluation of systems.
1. Strive for consistency in action sequences, layout, terminology, command use and so on.
2. Enable frequent users to use shortcuts, such as abbreviations, special key sequences and
macros, to perform regular, familiar actions more quickly.
3. Offer informative feedback for every user action, at a level appropriate to the magnitude
of the action.
4. Design dialogs to yield closure so that the user knows when they have completed a task.
5. Offer error prevention and simple error handling so that, ideally, users are prevented
from making mistakes and, if they do, they are offered clear and informative instructions
to enable them to recover.
6. Permit easy reversal of actions in order to relieve anxiety and encourage exploration,
since the user knows that he can always return to the previous state.
7. Support internal locus of control so that the user is in control of the system, which
responds to his actions.
8. Reduce short-term memory load by keeping displays simple, consolidating multiple page
displays and providing time for learning action sequences.
9.3.2. Principles of Human-Computer Interface Design:
Principles are derived from knowledge of the psychological, computational and sociological
aspects of the problem domains and are largely independent of the technology; they depend
to a much greater extent on a deeper understanding of the human element in the interaction.
They can therefore be applied widely but are not so useful for specific design advice.
The HCI usability principles proposed by Dix, Finlay, Abowd and Beale, in the "Usability
Paradigms and Principles" chapter of their book Human-Computer Interaction, include:
a) Learnability: The ease with which new users can learn to use the system. How easy is it
for users to accomplish basic tasks the first time they encounter the design? This includes:
• Predictability:
Does the application produce results that are in accord with previous commands and
states? Although the computer is ultimately deterministic, the user's perspective of the
application is as a black box. The user knows the system only by the sequence of
states resulting from the user's interaction. Predictability might require knowing only
the current state, or all previous states and their order; the latter places a high demand
on the user. The point is: can the user predict the results of the system from the
system's history? For example, consider the sequence 0, 1, 4, ... what comes next?
• Synthesizability:
Does the user construct the proper model of the system? Or does the system display
the correct clues for constructing a proper model? While learning the system, the
user constructs a picture of the innards of the black box.
• Familiarity:
Do new users get good clues to use the system properly? Owing to the prolific use of
metaphors and WIMP interfaces, naive users expect to use applications without study.
The clues for how to use the system must come from visual organization, names and
icons that are common to the user's domain.
• Generalizability:
Can the user guess the functionality of new commands? Even advanced users rarely
read manuals; they learn by generalizing from known commands and associating
commands from one application with another.
• Consistency:
Does the operation perform similarly for similar inputs? More specific examples
include consistent naming of commands, or consistent syntax of commands and arguments.
b) Flexibility: The multiplicity of ways in which the user can interact with the system.
• Dialog initiative:
Do dialog boxes hold the user prisoner? Old dialog boxes were modal and prevented
the user from interacting with any other part of the system (system pre-emptive).
Modern dialog boxes are user pre-emptive: the system may request information from
a dialog box, but should not prevent the user from performing auxiliary activity, even
in the same application.
• Multi-threading:
Can the user perform simultaneous tasks? Tasks represent threads, and multi-threading
allows the user to perform several tasks at once.
• Task migratability:
Can the user perform the task, or have the computer perform it? Can the user
control computer-automated tasks? At a minimum, the user ought to be able to
interrupt an application task.
• Substitutivity:
Can arguments to commands assume different forms for equivalent values? Can the
output of a command be represented differently? For example, the user ought to be
able to enter inches or centimetres, or perhaps an expression, and should be able to
see the results in the units of the user's choice.
• Customizability:
Can the user modify the interface in order to improve efficiency? Are the
customizing features easily accessible? This does not mean merely changing the colour
of the GUI; rather, can the user add commands, or change the font size for better visibility?
c) Robustness: The level of support for error handling.
• Observability:
Can the user evaluate the state of the application? At a minimum there ought to be an
accurate progress bar; better is for the user to be able to browse the system state.
Persistence is how long system states or user inputs remain visible to the user.
• Recoverability:
If the user makes a mistake or the application fails, can the user recover the work?
Can the user recover from a mistake in a command by escaping? At a minimum, the
user ought to be able to correct the output (forward recovery). Users have grown
accustomed to the undo command (backward recovery). How far back can the user
recover his work? Does the system save snapshots?
• Responsiveness:
Does the system respond in suitable time? Some applications require millisecond
response, others seconds, and some can run overnight.
• Task conformance:
Does the system perform all the tasks that the user needs or wants? At a minimum,
the application should cover all the tasks of the domain; better is for the system to
allow new tasks to be added.
Other HCI usability principles include Norman's seven principles for transforming difficult
tasks:
1) Use both knowledge in the world and knowledge in the head. People work better when
the knowledge they need to do a task is available externally – either explicitly or through
the constraints imposed by the environment. But experts also need to be able to
internalize regular tasks to increase their efficiency. So systems should provide the
necessary knowledge within the environment and their operation should be transparent to
support the user in building an appropriate mental model of what is going on.
2) Simplify the structure of tasks. Tasks need to be simple in order to avoid complex
problem solving and excessive memory load. There are a number of ways to simplify the
structure of tasks. One is to provide mental aids to help the user keep track of stages in a
more complex task. Another is to use technology to provide the user with more
information about the task and better feedback. A third approach is to automate the task
or part of it, as long as this does not detract from the user's experience. The final
approach to simplification is to change the nature of the task so that it becomes simpler.
In all of this, it is important not to take control away from the user.
3) Make things visible: bridge the gulfs of execution and evaluation. The interface should
make clear what the system can do and how this is achieved, and should enable the user
to see clearly the effect of their actions on the system.
4) Get the mappings right. User intentions should map clearly onto system controls. User
actions should map clearly onto system events. So it should be clear what does what and
by how much. Controls, sliders and dials should reflect the task so a small movement has
a small effect and a large movement a large effect.
5) Exploit the power of constraints, both natural and artificial. Constraints are things in the
world that make it impossible to do anything but the correct action in the correct way. A
simple example is a jigsaw puzzle, where the pieces only fit together in one way. Here the
physical constraints of the design guide the user to complete the task.
6) Design for error. To err is human, so anticipate the errors the user could make and design
recovery into the system.
7) When all else fails, standardize. If there are no natural mappings then arbitrary mappings
should be standardized so that users only have to learn them once. It is this
standardization principle that enables drivers to get into a new car and drive it with very
little difficulty – key controls are standardized. Occasionally one might switch on the
indicator lights instead of the windscreen wipers, but the critical controls (accelerator,
brake, clutch, steering) are always the same.
9.3.3. HCI Design Standards
Standards for interactive system design are usually set by national or international bodies to
ensure compliance with a set of design rules by a large community. Such organizations
include the British Standards Institution (BSI) and the International Organization for
Standardization (ISO). Standards can apply specifically to either the hardware or the
software used to build the interactive system.
9.3.4. Guidelines
Guidelines are best practice, based on practical experiences or empirical studies. They are
software development documents which offer application developers a set of
recommendations. Their aim is to improve the experience for the users by making application
interfaces more intuitive, learnable, and consistent.
Written guidelines help to develop a "shared language" and thereby promote consistency
among multiple designers and designs/products. Examples of guidelines:
a) How to provide ease of interface navigation
Five ways to enhance navigation:
• Standardize task sequences
• Ensure that embedded links are descriptive
• Use unique and descriptive headings
• Use checkboxes for binary choices
• Use thumbnails to preview larger images
b) How to organize the display
Five high-level goals of display organization:
• Ensure consistency of data display
• Promote efficient information assimilation by the user
• Put minimal memory load on the user
• Ensure compatibility of data display with data entry
• Allow for flexibility in data display (user controlled)
c) How to draw the user's attention
Seven techniques that can be used to draw the user's attention:
• Intensity: Use high intensity to draw attention
• Marking: e.g. underlining, arrows, borders, etc.
• Font size: Use large fonts to draw attention
• Font style: Use exceptional font styles to draw attention
• Blinking/Animation: Use blinking to draw attention (careful!)
• Color: Use exceptional colors to draw attention
• Audio: Use harsh sounds for exceptional conditions
d) How to best facilitate data entry
Five high-level objectives should be pursued in order to best facilitate data entry. If applied
meaningfully, productivity will increase, time-to-learn will be reduced and the error
probability should be lower.
• Ensure consistency of data entry transactions
• Require minimal input by the user (e.g., a button rather than a typed command)
• Impose minimal memory load on the user (e.g., forms rather than a command line)
• Ensure compatibility of data entry with data display
• Enable flexibility of data entry (user controlled)
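Two of these objectives, minimal input (defaults and constrained choices instead of free typing) and consistency checks before data is accepted, can be sketched as follows. The field names and pick list are invented for illustration.

```python
COUNTRIES = ["Kenya", "Uganda", "Tanzania"]   # a pick list, not free text

def validate_entry(form):
    """Return a list of error messages; an empty list means the entry is valid."""
    errors = []
    form.setdefault("newsletter", False)      # binary choice -> checkbox with a default
    if not form.get("name", "").strip():
        errors.append("Name is required.")
    if form.get("country") not in COUNTRIES:
        errors.append("Country must be chosen from the list.")
    return errors

print(validate_entry({"name": "Wanjiku", "country": "Kenya"}))   # []
```

Because the country is chosen from a list and the checkbox has a default, the user types almost nothing, and the remaining errors can be reported all at once rather than one modal dialog at a time.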
9.4. Web Interface Design Considerations
Information Architecture and Web Navigation
The Web interface is becoming increasingly important as Web sites become a larger part of
everyday life, and as many computer systems and devices use a Web interface to take
advantage of the ubiquity of the Web browser.

The HCI principles involved in user interface design were developed in the context of
graphical user interface (GUI) design, but they apply equally well to Web interface design.
There are, however, several features which are unique to Web interfaces.

Differences between Web and GUI interfaces:
a) The Web is less secure and less private
b) The Web is platform independent
c) The Web has more dynamic content
d) The Web typically has a broader audience
e) Web browsers have compatibility issues
f) Users expect to learn GUIs but to just use the Web
g) Web sites have more than one entry point
h) Web navigation is user controlled
The Web was originally created to provide access to information, and this is still a key
function of many web sites. Alongside this, applications have emerged for interactive web
sites. Online shopping sites are probably the most familiar example of this kind of
interactivity. Many sites rely on both information and interaction. For example, Amazon has
information about the products it sells and requires interaction to allow a customer to make a
purchase.
Successful web sites therefore should make use of both interaction design, and information
architecture. Information Architecture is closely related to navigation – navigation systems
provide the way for the user to find the site content which has been organised by the
information architect.
9.5. Information Architecture
Information architecture is concerned with creating organisational and navigational schemes
to allow the user to move through site content effectively. Usually, the architecture of a site
involves organising and creating a structure for content.
9.5.1. Organisation Schemes
We navigate through organisation schemes every day. Telephone books, supermarkets and
TV programme guides all use organisation schemes to facilitate access. Some, such as the
white pages of the phone book, are easy to use. Some are more difficult – finding particular
items in an unfamiliar supermarket can be very frustrating. One reason for the difference is
that the phone book uses an exact scheme – each item is in a single, well-defined category.
The supermarket has to use an ambiguous scheme – for example, muesli bars could be on
the shelf beside either biscuits OR cereals. The categories in this scheme are not always clear-
cut or mutually exclusive.

Exact organization schemes include:
a) Alphabetical: Often used for lists of names or an index of site content.
b) Chronological: Widely used for news items, most recent first.
c) Geographical: Often used by companies with a multinational presence to provide specific
content for customers in different countries or regions.
Ambiguous organization schemes include:
a) Topic: This is one of the most obvious and useful approaches. Most web sites provide
some sort of topical access to content. Examples include organization by subject type in a
news site, by department or course in a college site or by product type in a shopping site.
b) Task: Task-oriented schemes organize content into a collection of processes, functions or
tasks. These schemes are often found in interactive, ecommerce sites
c) Audience: Audience-oriented schemes break a site into smaller, audience-specific mini-
sites, allowing pages which present options of interest only to that audience.
d) Metaphor: We have seen examples previously of the use of metaphors in web sites.
Metaphor-driven “sitemaps” were popular in the early days of the web but are now rare.
e) Hybrid: Pure organization schemes suggest a simple mental model that users can quickly
understand. Users can easily recognize an audience-specific or topical organization.
However, when you start blending elements of multiple schemes, confusion often
follows. Nevertheless, it is sometimes possible to use multiple organization schemes and
to present them on one page; this can be successful if the schemes are presented
separately on the page.
f) Metadata: Metadata literally means “data about data”. It can be used in information
architecture to attach information such as keywords (topic, date, author, etc.) and
descriptions of content which are stored with the information. This assists access to
information by searching or querying in a similar way to querying a relational database.
In fact, many sites use content management systems which do store information in a
database.
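The metadata approach above can be sketched as a simple keyword query over page records, much like querying a database table. The page records and field names here are invented for illustration.

```python
# Each page carries metadata: a topic and a set of keywords.
PAGES = [
    {"url": "/news/budget", "topic": "news", "keywords": {"budget", "finance"}},
    {"url": "/courses/hci", "topic": "courses", "keywords": {"hci", "usability"}},
]

def find_pages(keyword, topic=None):
    """Return URLs of pages whose metadata matches the keyword (and topic, if given)."""
    return [
        page["url"]
        for page in PAGES
        if keyword in page["keywords"] and (topic is None or page["topic"] == topic)
    ]

print(find_pages("usability"))            # ['/courses/hci']
```

A content management system does essentially this at scale, storing the metadata in a database and indexing it for search.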
9.5.2. Organisation Structures
The most common site structure is a hierarchical structure, or tree structure. Each page can
have sub-pages which represent narrower concepts, e.g. Books -> Computing Books ->
Computing Books about HCI.
Organic structures have no consistent pattern or sections. They can encourage free-form
exploration, as on some education or entertainment sites, but can make it difficult to find
information reliably.

Sequential structures are familiar in offline media (e.g. a book). They are often used in web
sites for small sections, such as articles or tutorials, or in task-based aspects.
9.6. Web Sites Navigation Systems
Most sites, other than very small ones, provide multiple navigation systems to allow the user
to navigate through the content in a variety of circumstances.
9.6.1. Global navigation
Allows the user to navigate to the key set of access points and to get from one end of the site
to the other. The user can (eventually) get anywhere from the global navigation. A link to the
home page is an example of global navigation.
9.6.2. Local navigation
Provides access to ‘nearby’ content. Local navigation is generally the system used most
often.
9.6.3. Supplementary navigation
Provides shortcuts to related content, which is not easily accessible through the global and
local navigation.
9.6.4. Contextual navigation
Contextual navigation, or inline navigation, is embedded as hyperlinks in the content of the
page, and gives users immediate access to something relevant to what they are reading
without having to look for the correct navigation element.
Supplementary and contextual navigation provide similar paths through the site structure, but
differ in the way they appear on the page.
9.6.5. Courtesy navigation
Provides access to items which the user does not need on a regular basis but which are
commonly provided as a convenience, such as links to contact information, privacy
statements, legal disclaimers, etc.
Like the previous two systems, courtesy navigation provides links which are not easily
accessible through the global and local navigation.
9.6.6. Personalisation and Social Navigation
Some sites track some aspect of a user to provide easy navigation to content which is of
specific interest. For example some shopping sites recommend products based on past
purchases.
Social navigation is based on the idea that value for the individual can be derived from
observing the actions of others.
9.7. Labelling Systems
Labels are the most obvious way to show users your organisation and navigation systems. No
matter how good your organisation or navigation design is, if the labelling is unclear or
inappropriate it will be difficult for users to achieve their goals when using your site.
An example of a label is “Contact Us” which is found on many web sites. This label
represents a chunk of information, such as telephone, fax and email information. The label
works as a shortcut which triggers the right association in the user’s mind. Successful
labelling triggers such associations and guides users to the appropriate part of a web site to
find the information they require. Good labelling systems are:
• Consistent in:
a) style and presentation
b) syntax - do not mix verb-based, noun-based or question-based labels in a single
system (“Grooming your dog”, “Diets for dogs”, “How do you train your dog?”)
c) audience - do not mix technical and non-technical names for topics in a single system
(“lymphoma”, “tummy-ache”)
• Representative and clear
• User-centric (for example, meaningful to the user, not using corporate jargon or other
terms which might only be meaningful to the site owner)
Many labels in navigation systems are familiar to most web users and can be used quite
safely. Some common variants are listed below:
• Main, Main Page, Home
• Search, Find, Browse
• Site Map, Contents, Table of Contents, Index
• Contact, Contact Us
• Help, FAQ, Frequently Asked Questions
• News, News & Events, Announcements
• About, About Us, About <company name>, Who We Are
9.8. Navigation Aids
Navigation aids in web sites are used to provide the surface implementations of the
navigation systems. They include:
a) Links
b) Buttons, menus, navigation bars and icons
c) Drop-down lists
d) Site maps
e) History trails
f) Search engines
• Links: Links can be within blocks of text, headlines, or groups of links organised to allow
access to a large number of locations in a small space.
• Buttons, menus, navigation bars and icons: Menus and navigation bars are usually placed
at the left or top of a page. Menus on the left are often used for local navigation systems
alongside global navigation at the top of the page. Pop-up menus can be used to include a
larger number of menu options in a small space.
• Drop-down (or pop-up) lists: These provide a very compact way of providing access to a
list of topics.
• Site maps: Central collections of links to all areas of a web site. These support a user's
mental model of using a map.
• History trails: Show users the route they have followed, which can help their route
knowledge. These are sometimes known as breadcrumbs.
• Search engines: Allow keyword searches, which can be a very efficient way of finding
specific information or products.
9.9. Diagramming Information Architecture and Navigation
9.9.1. Blueprints
Blueprints show the relationships between pages and other components. They are sometimes
referred to as sitemaps. They display the “shape” of the information space in overview. High
level blueprints can be used in the initial design of a site as a basis for discussion. More
detailed blueprints can be drawn later in the process to assist the precise organisation of
information.
9.9.2. Wireframes
Wireframes depict how individual pages should look from an architectural perspective. They
are related to both the information architecture and visual design of the site. The wireframe
forces the architect to consider such issues as where the navigation systems might be located
on a page. It translates the navigation systems from blueprint into a “page”. Ideas can be tried
out at this stage, and if the navigation does not seem to work well, this can lead to the
blueprint being revised.
• The wireframe also helps the information architect to decide how to group content on the
page. Items near the top left of the page tend to be scanned first by the user, so
information can be prioritized by its position in the "page".
• Wireframes are typically created for the site's most important pages (main page, major
category pages, etc.) and can describe consistent templates that can be applied to many
pages.
• A wireframe is not a finished web page. The final product should also involve graphic
designers to define the aesthetic nature of the site. Where the wireframe represents an
interactive page, such as a form, it may be appropriate to involve specialist interaction
designers and programmers in creating the final product.
9.10. Lecture Summary
9.11. Lecture Review Questions
a) Discuss the HCI Design Process
b) Explain the various design approaches that can be used in the HCI design process.
c) Explain using example what HCI Design Rules are.
d) Explain the following HCI Usability Principles
i. Learnability
ii. Flexibility
e) Discuss information architecture in the context of Web Interface design
f) Explain why labeling systems are important in Web interfaces design.
9.12. Further Reading
1. H. Sharp, Y. Rogers & J. Preece (2007) Interaction Design. John Wiley & Sons,
Chichester.
2. Shneiderman (1987) Designing the user interface: strategies for effective human
computer interaction. Addison-Wesley, Reading MA.
3. D.A. Norman (1988) The Design of Everyday Things. Doubleday, New York.
4. Preece, Y. Rogers and H. Sharp, Interaction Design: Beyond Human–Computer
Interaction, John Wiley, 2002.
5. J. Carroll, editor, Interacting with Computers, Vol. 13, No. 1, special issue on
‘Scenario-based system development’, 2000.