
Prototype Evaluation

Lecture Objectives

Expert evaluation

Evaluation with self reports
Basic idea

Let's start with a basic question:
 How do we evaluate a user interface?
 Answer: by evaluating its usability
 Question: how?
Basic idea

Used for quick and cheap evaluation

(typically done early in the design)

We need two things:
 At least a low-fidelity prototype
 An evaluation team

The evaluation team may include skilled designers

Optionally, a few end users may be included

Should have at least 3-5 members
Basic idea

Each team member evaluates individually and produces a
report
 The report contains a list of usability issues

All these reports are combined to produce a final list (see the sketch below)
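
As an illustration (not from the lecture), combining the individual reports can be as simple as counting how many evaluators raised each issue; the issue texts below are hypothetical:

```python
# A minimal sketch of combining individual evaluators' usability
# reports into one final list. Issue texts and the dedup-by-identical-
# description rule are illustrative assumptions.
from collections import Counter

reports = [
    ["month labels too small", "no back button from day view"],
    ["no back button from day view", "note field hard to find"],
    ["month labels too small", "no back button from day view"],
]

# Count how many evaluators reported each issue, then sort so the
# most frequently reported issues appear first in the final list.
counts = Counter(issue for report in reports for issue in report)
final_list = sorted(counts.items(), key=lambda kv: -kv[1])

for issue, n in final_list:
    print(f"{n}/{len(reports)} evaluators: {issue}")
```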

Two expert evaluation methods:
 Cognitive walkthrough
 Heuristic evaluation
Cognitive walkthrough

A usability inspection method

A scenario-based method

Purpose:
 Identify the problems a user is likely to face when using the
interface

Requirements:
 At least a prototype with support for several tasks
 An evaluation team

Should have at least 3-5 members
Example: Prototype Building

Let's consider a simple calendar app. In our app, we show
only the months on the first screen. Once the user selects a
month, another screen appears showing the dates in that
month. A user can select any date and add a note. What are
the tasks that a user can do with this interface?
Example: Prototype Building

Some of the tasks:
 Select a month
 Select a day of the month
 Add note
 Go back to month or day view
Example: Prototype Building

To perform a cognitive walkthrough, we should make a
prototype that supports more than one of these tasks

We can make simple prototypes for these tasks
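
For illustration only, the calendar app's screens and tasks can be modelled as a tiny state machine; the screen names and transitions below are assumptions based on the description above:

```python
# A minimal sketch of the calendar app's task flow as a state machine.
# Screen names and transitions are illustrative assumptions.
transitions = {
    ("month_view", "select_month"): "day_view",
    ("day_view", "select_day"): "note_view",
    ("note_view", "add_note"): "note_view",
    ("day_view", "go_back"): "month_view",
    ("note_view", "go_back"): "day_view",
}

def perform(screen, action):
    """Return the next screen, or raise if the task isn't supported."""
    try:
        return transitions[(screen, action)]
    except KeyError:
        raise ValueError(f"{action!r} is not available on {screen!r}")

# Walk through one user task: select a month, pick a day, add a note.
screen = "month_view"
for action in ["select_month", "select_day", "add_note"]:
    screen = perform(screen, action)
    print(f"after {action}: {screen}")
```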
Example: Prototype Building cont’d

Additional requirements:

You are planning to deliver a lecture on Human Computer
Interaction, and want to schedule the class on the first
Monday of the next month. The students are available only
on that specific Monday; on the other days, they do not
have a free slot. You want to find out the date so that you
can inform the students.
Additional requirements – Task description

Task: Identify the date of the first Monday of the next
month

Interface level tasks
 Select the next month
 Select the first Monday
 Mark the date
Procedure

The scenario is given to the evaluators

They are asked to find out the date (by performing the
interface-level tasks with the prototype)
Additional requirements – Questions

We also need to frame usability questions beforehand

Evaluators report on these issues while they perform their
tasks

Example questions:
 Are you able to locate the month you are looking for easily?
 Is the interaction required to change from month view to day
view apparent?
 Did you find it difficult to locate the first Monday?
 Was the date clearly visible along with the day?
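
A minimal sketch of how the prepared questions might be paired with one evaluator's free-text observations; the responses shown are hypothetical:

```python
# A minimal sketch pairing the prepared walkthrough questions with one
# evaluator's free-text observations. The responses are hypothetical.
questions = [
    "Are you able to locate the month you are looking for easily?",
    "Is the interaction to change from month view to day view apparent?",
    "Did you find it difficult to locate the first Monday?",
    "Was the date clearly visible along with the day?",
]

# Evaluators write detailed notes rather than yes/no answers, so each
# entry is free text describing the problem they observed (if any).
responses = {
    questions[1]: "Tapped the month name twice before noticing the arrow.",
    questions[2]: "Week rows start on Sunday; had to count cells to find Monday.",
}

for q in questions:
    print(q)
    print("  ->", responses.get(q, "No issue reported."))
```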
Additional requirements – Questions

Purpose
 To identify problems users are likely to face

Evaluators are not to answer “yes” or “no”
 Should be a detailed report

Feedback is analyzed to identify broader usability issues
Heuristic evaluation

Cognitive walkthrough is helpful in early stages of design

Problem: the method is scenario-based
 Evaluation is performed with respect to specific usage scenarios
 Complex systems have many scenarios and we can't evaluate them all.
 Two reasons:

We may not have much time

We may not be able to enumerate all the possible scenarios
Basic evaluation

We may instead go for a comprehensive evaluation of the whole
system
 No need to create scenarios and ask evaluators to perform
tasks
 We ask evaluators to tick items on a checklist covering the
system as a whole
 That is heuristic evaluation – evaluate a system with a checklist
 Items on the checklist are called heuristics
Basic evaluation cont’d

Many checklists are available

A popular one is the “10 heuristics” by Nielsen
Ten Heuristics by Nielsen
 Heuristic 1: Visibility of system status
 Heuristic 2: Match between system and the real world
 Heuristic 3: User control and freedom
 Heuristic 4: Consistency and standards
 Heuristic 5: Error prevention
 Heuristic 6: Recognition rather than recall
 Heuristic 7: Flexibility and efficiency of use
 Heuristic 8: Aesthetic and minimalist design
 Heuristic 9: Help users recognise, diagnose and recover from errors
 Heuristic 10: Help and documentation

Setup and procedure

Low-fidelity prototypes – Horizontal prototypes are okay
 Walkthrough requires vertical prototypes

Team of evaluators required (3-5)

Could be designers or usability experts
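
A minimal sketch of a heuristic-evaluation checklist; the heuristic names are Nielsen's, while the recorded findings are hypothetical:

```python
# A minimal sketch of a heuristic evaluation checklist. The heuristic
# names are Nielsen's ten; the recorded findings are hypothetical.
HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognise, diagnose and recover from errors",
    "Help and documentation",
]

# Each evaluator ticks the heuristics the interface violates and notes
# where: a dict from heuristic to a list of problem locations.
findings = {
    "User control and freedom": ["no way back from the note screen"],
    "Visibility of system status": ["no feedback after saving a note"],
}

for i, h in enumerate(HEURISTICS, start=1):
    mark = "x" if h in findings else " "
    print(f"[{mark}] Heuristic {i}: {h}")
    for problem in findings.get(h, []):
        print(f"      - {problem}")
```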
Evaluation by users – Self reports

With expert evaluation, we rely on the data provided by
those who are not users.
 This is okay in early phases of design

We are likely to get more insight from the data collected
from the users themselves.

One way to do this is to conduct empirical studies.

Involves elaborate setup and expertise to collect and
analyze data
There is another, less complex method – get feedback from users by
asking them questions directly: this is called evaluation with self reports
Evaluation by users – Self reports

Data collected through this method is called subjective (or preference) data.

Many ways to collect feedback

 Rating on a scale (e.g., a Likert scale)


 Ask them to choose from the list of system attributes

Ask direct open-ended questions (e.g., what do you feel about the
system?)
 Problem with open-ended questions: feedback may not be reliable
 Feedback on rating scales is generally more reliable
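
As a sketch, the three feedback formats above could be captured in one record per response; the field names and attribute list are illustrative assumptions:

```python
# A minimal sketch of the three self-report formats: a Likert rating,
# a choice among system attributes, and an open-ended answer. Field
# names and the attribute list are illustrative assumptions.
from dataclasses import dataclass, field

ATTRIBUTES = ["easy to learn", "fast", "cluttered", "confusing"]

@dataclass
class SelfReport:
    rating: int                  # Likert scale, 1 (worst) to 5 (best)
    attributes: list[str] = field(default_factory=list)
    open_ended: str = ""         # free text; least reliable of the three

    def __post_init__(self):
        if not 1 <= self.rating <= 5:
            raise ValueError("rating must be on the 1-5 scale")
        unknown = set(self.attributes) - set(ATTRIBUTES)
        if unknown:
            raise ValueError(f"unknown attributes: {unknown}")

r = SelfReport(rating=4, attributes=["easy to learn"],
               open_ended="The month view felt crowded.")
print(r)
```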
Evaluation by users – Self reports

Collection of feedback using questionnaires

Two points for data collection:

 After every task

 At the end of the session (after all tasks)

Desirable to collect at both data points

The questionnaires will not be the same
Evaluation by users – Self reports

Post-task vs post-session feedback

Post-task feedback is collected after each task, with respect to that
specific task and the parts of the interface used

 Users are likely to remember problems right after each task

With post-session feedback, we are more likely to learn the user's
perception of the system as a whole

 It may not be possible to relate that perception to a specific task
or user interface.
Evaluation by users – Self reports

Post-task questionnaire

Three statements, each rated on a scale:
 Are you satisfied with the ease of completing the task?
 Are you satisfied with the time it took to complete the task?
 Are you satisfied with the support information for completing the tasks?
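
A minimal sketch of scoring this post-task questionnaire, assuming a 1-5 rating scale and a simple average (both assumptions, not from the lecture):

```python
# A minimal sketch of the three-item post-task questionnaire. The 1-5
# scale and the simple averaging rule are illustrative assumptions.
STATEMENTS = [
    "Are you satisfied with the ease of completing the task?",
    "Are you satisfied with the time it took to complete the task?",
    "Are you satisfied with the support information for completing the tasks?",
]

def post_task_score(ratings):
    """Average the three 1-5 ratings into one satisfaction score."""
    assert len(ratings) == len(STATEMENTS), "one rating per statement"
    assert all(1 <= r <= 5 for r in ratings), "ratings are on a 1-5 scale"
    return sum(ratings) / len(ratings)

# One participant's (hypothetical) ratings after finishing a task.
print(post_task_score([4, 3, 5]))  # -> 4.0
```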
The End
