
The Essentials of Usability Testing

Rachell Hayes

Overview
Practice of usability testing
Importance of assessment
Unique cultural interpretations
Utilizing the results

Practice of Usability Testing


"All of the time I've spent watching people use the Web has led me to the opposite conclusion: all Web users are unique, and all Web use is basically idiosyncratic" (Krug 136).

Practice of Usability Testing


"Get It" Testing

Introduce the site to users. Ask them a series of questions to determine whether they understood the purpose of the site, whether it was appealing, and whether the organization was easy to navigate, and invite any other comments or concerns (Krug 153).

Feedback is an essential component of usability testing and should be valued. Users are your audience. If they don't get it, your site may as well not exist.

Key Task Testing

Provide users with a task to complete and observe how well they perform. It's best to let users choose a task they have an interest in carrying out within your site. "When people are doing a made-up task, they have no emotional investment in it, and they can't use as much of their personal knowledge" (Krug 153).
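
A minimal sketch of how a moderator might log such a session; the prompts, fields, and task wording here are hypothetical, not drawn from Krug:

import time

def run_key_task(participant_id, task_description):
    """Time one key task and record the observer's verdict."""
    input(f"Task for {participant_id}: {task_description}\nPress Enter when the participant starts...")
    start = time.time()
    input("Press Enter when the participant finishes or gives up...")
    elapsed = time.time() - start
    succeeded = input("Completed successfully? (y/n) ").strip().lower() == "y"
    notes = input("Observer notes: ")
    return {"participant": participant_id, "task": task_description,
            "seconds": round(elapsed, 1), "success": succeeded, "notes": notes}

# Let the participant pick a task they genuinely care about, per Krug.
print(run_key_task("P01", "Find a product you would actually buy"))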

Practice of Usability Testing


"Usability work encompasses a broad range of activities including field studies of work practice, cooperative prototyping, user workshops, and post-release tests" (Madsen 2).

Practice of Usability Testing


In the software industry, teams often consist of developers, designers, writers, and other specialists. They create usability tests based on the functions and designs of their products. Then they decide how the results of these tests will be communicated to their supervisors and stakeholders.

End-user software usability testing relies heavily on communicating with the target audience. "If the usability group is involved throughout the development process, the collaboration is closer, as the usability group performs more activities with the developers" (Borgholm and Madsen 95).

Practice of Usability Testing


"Usability testing is also essential in the toolkit of assessment methods and is part of that iterative process of working directly with our users to provide the support and resources they need for their work. It can help designers and others not only to identify what doesn't work well from the user perspective, but also to provide input on what would be the most useful and important to potential users" (Ward and Hiller 170).

It's not only important to diversify the users in usability testing, but also the evaluators. The "evaluator effect," that is, the finding that "evaluators in similar conditions construct substantially different sets of usability problems," raises concerns about matching problems across evaluators. Practitioners are advised to use teams of evaluators to analyze usability problems (Hornbaek and Frokjaer 25).
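
One way to quantify how much evaluators' problem sets overlap is the any-two agreement measure from the broader evaluator-effect literature; this sketch and its problem labels are illustrative, not taken from Hornbaek and Frokjaer:

from itertools import combinations

def any_two_agreement(problem_sets):
    """Average Jaccard overlap between every pair of evaluators' problem sets."""
    pairs = list(combinations(problem_sets, 2))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

# Hypothetical problem sets reported by three evaluators of the same site.
evaluators = [
    {"unclear labels", "slow search", "broken back button"},
    {"unclear labels", "tiny fonts"},
    {"slow search", "unclear labels", "tiny fonts"},
]
print(f"Any-two agreement: {any_two_agreement(evaluators):.2f}")  # 0.47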

Importance of Assessment
Testing helped DDC staff work efficiently by

pinpointing the most critical problems: the three line items and the terminology

pointing out that taxpayers made the most errors on line 7, allowing us to make changes that had the most impact

showing us why people made the errors they did, which directed the ways in which we attacked these problems.

Testing also ensured that the revised form did indeed meet the needs of the audience and provided measurable proof to the IRS that it worked significantly better than the original. Without testing, the IRS would have continued to believe that taxpayers understood the items on the form and were making no errors when, in fact, 95% of the people we tested filled out the original form incorrectly (Wright 48).

Importance of Assessment
Similarly, surveys and logs from sales and support staff may offer clues as to the frustrations site visitors face. They can also reveal the language and terms prospects use when thinking about offerings, and that may differ from what they see on the site itself.

Then there are walkthroughs of the site, both by conversion experts and by the marketers themselves. Marketers can also observe panels of site users navigating their sites remotely, through shared-site software. The problem is that while marketers can track a tester's journey through the site, and record typed-in answers to questions, they lose facial and other non-verbal responses, which can be key, especially among technically oriented b-to-b buyers who may not be as comfortable expressing themselves verbally (Levey 1).

Levey's Things to Consider


First impressions, in which subjects look at a web page for five seconds, then look away and are asked what they remember (a rough harness for this check is sketched after this list). Is the information the marketer wants to convey memorable? Is the progression to the next part of the sales cycle clear?
Ease of search, which can be influenced by the various ways marketers can listen to customers. Is the site
optimized to provide the information searchers are looking for (as opposed to that which marketers want to
tout)? And does it use the terms searchers are looking for?

Ease of navigation. Can a searcher get to product information, or relevance of offerings to specific verticals,
without having to use the site's search box?
Is it easy for b-to-b prospects to obtain what they need, such as promised information downloads or product
specs, and is their next step clear? Once they have the information they want, can they ask for an additional
contact or follow-up? Are you presenting information more clearly than your competitors? This is equivalent to
the checkout process on a consumer site, but in many ways it is more critical, because on a b-to-b site the
transaction won't necessarily be completed online.
Does the site work on mobile devices? This is especially important for b-to-b products which might be
researched and ordered while in the field, such as construction or maintenance supplies.
Do referring pages and ads jump right to promised information, or do they require prospects to further search
for desired topics? If the latter, and a b-to-b marketer is using pay-per-click services, is that marketer paying for a
lot of prospects who are landing on the site and then abandoning their quests?
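
A rough sketch of how the five-second first-impression check might be run at a desk; the URL, recall questions, and moderation prompts are improvised here, not specified by Levey:

import time
import webbrowser

def five_second_test(url, recall_questions):
    """Show a page briefly, then capture what the participant remembers."""
    input("Press Enter to show the page for five seconds...")
    webbrowser.open(url)  # the participant looks at the page
    time.sleep(5)
    input("Ask the participant to look away, then press Enter...")
    return {q: input(q + " ") for q in recall_questions}

# Hypothetical session for a b-to-b landing page.
responses = five_second_test(
    "https://example.com/landing",
    ["What does this company sell?", "What would you click next?"],
)
print(responses)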

Importance of Assessment
You shouldn't care about small issues in usability. When redesigning a website for usability, the average improvement in key performance indicators is 83%. Clearly, most websites still contain horrible usability problems. Intranets and mobile sites/apps are often worse.

Your focus should thus be on the really big design problems, where your user experience is failing to meet customer needs. Better to invest heavily in those crucial improvements than mess around with changes that'll gain you only a percent or two. Wasting your budget on overly precise measurements can easily sidetrack you from the important issues; for sure, you'd have less budget left over to work on them.

Maybe in 20 years, user interfaces will be good enough that our only remaining goal will be to fine-tune them for the last few percent of quality gain. That's definitely not the case today (Nielsen).

Importance of Assessment
Usability testing provides feedback on a designer's work. We've been drilled with the notion that a script can only benefit from having input from its intended audience. IVR scriptwriting, like any other type of writing, gains infinitely from the writer seeing firsthand how words land on other people. In fact, in teaching students to write, cycles of feedback and rewriting make up the bulk of instruction, which ensures the writer's intent and the reader's experience are in sync (Polkosky).

Quesenbery's Five Es

Effectiveness
Definition: How successfully a user completes a task.
Evaluation: Measured by how accurately users complete a task without errors.

Efficiency
Definition: How quickly a user fulfills a task, effectively.
Evaluation: Measured by the time or number of clicks needed to complete a task.

Engagement
Definition: Design choices in font readability, color choices for headings, and legible placement of text and graphics.
Evaluation: Surveys and interviews assess the site's appearance.

Error Tolerance
Definition: Developers prepare error messages, allow for process reversal, and provide steps to carry out the task correctly.
Evaluation: Prepare tasks expected to produce errors to measure how well the interface tolerates them.

Ease of Learning
Definition: Flexible interfaces capable of adapting to the user's needs through consistency and predictability.
Evaluation: Recruit users with different levels of experience to test how easy the interface is to learn.
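
A brief sketch of how the effectiveness and efficiency measures above might be computed from logged sessions; the records and field names are hypothetical:

# Hypothetical logs: one record per participant for the same key task.
sessions = [
    {"participant": "P01", "success": True, "seconds": 74, "clicks": 9},
    {"participant": "P02", "success": False, "seconds": 180, "clicks": 22},
    {"participant": "P03", "success": True, "seconds": 55, "clicks": 7},
]

# Effectiveness: share of participants who completed the task.
effectiveness = sum(s["success"] for s in sessions) / len(sessions)

# Efficiency: average time and clicks among successful completions only.
done = [s for s in sessions if s["success"]]
avg_seconds = sum(s["seconds"] for s in done) / len(done)
avg_clicks = sum(s["clicks"] for s in done) / len(done)

print(f"Effectiveness: {effectiveness:.0%}")  # 67%
print(f"Efficiency: {avg_seconds:.1f} s, {avg_clicks:.1f} clicks on average")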

Unique Cultural Interpretations


Before beginning his research, Torkil Clemmensen hypothesized: "More work is needed to develop the practice of usability problem identification in culturally diverse settings. One suggestion is to build a comparative informatics approach that can help researchers and practitioners to become more sensitive towards culturally diverse settings for usability and user experience. More theoretical work can help us to discuss how and if we should support national and ethnic cultural diversity in interactive and collaborative computing. The overall goal of such studies should be to enable a satisfying IT experience for a much broader range of users" ("Usability Problem Identification in Culturally Diverse Settings" 171).

Unique Cultural Interpretations


After performing his research, these are his conclusions:

"Across countries there is agreement as to what senior usability professionals consider to be part of a usability test. This agreement can be described as a template with parts that are constants and parts that vary depending on the context of the usability test."

"A culturally specific template for usability testing can be created that shows what specific parts to add to the test and what additional contexts to consider for each part of the usability test" ("Templates for Cross-Cultural and Culturally Specific Usability Testing" 665).

Clemmensen's Usability Template

Usability Test Prescription
  is associated with
Participants' Cultural Diversity
  is associated with
Application to be Tested
  is cause of
Experienced Usability Problems
  is part of
Usability Problem Report

Utilizing the Results


"Two major methodologies that have been applied to usability testing of mobile applications are laboratory experiments and field studies. In a laboratory experiment, human participants are required to accomplish specific tasks using a mobile application in a controlled laboratory setting, whereas a field study allows users to use mobile applications in the real environment. Both methodologies have pros and cons. Therefore, selection of an appropriate methodology for a usability study depends on its objectives and usability attributes" (Zhang and Adipat 298).

Utilizing the Results


"Quick and clean" usability testing methods are needed, and such methods should offer more valid and reliable data. Usability testing should be conducted in more naturalistic environments such as simulated homes or classrooms. Both usability problem reports and testing methods specifically tailored for industry should receive more research due to their importance, as well as the testing of mobile phones and handheld devices. In this regard, there is a growing demand for conducting usability tests in a short time with few resources and on a low budget (Alshamari and Mayhew 405).

Wilson's Usability Report Feedback

Usability reports are also important for showing what a usability specialist has done. They can also be used to determine some metrics, such as the number of problems addressed by development or the number of problems that occurred during successive prototypes or versions of a product. The content, format, and wording that you use for your reports should be evaluated for usability. You might want to create a template of your report format, verify that it will be useful to the intended readers when you first start working with them, then get feedback on subsequent reports (a sketch of such a template follows the list below). Asking the development team to critique your report can provide a small political benefit, assuming that you are willing to make changes to your next report. Some questions you might ask are:

Is the report too long or too short?
Is there enough detail for managers or developers to understand the problem?
Do you want me to set priorities or is that the development team's responsibility?
Do you want me to recommend solutions?
How much detail do you want on the methods that I used?
Does the inclusion of screen shots make it easier for developers to understand the problem?
Is the language clear and tactful?
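
A minimal sketch of a report template along these lines; the fields, severity labels, and findings are invented for illustration, not taken from Wilson:

# Hypothetical findings collected during one round of testing.
findings = [
    {"id": 1, "severity": "high", "problem": "Checkout button sits below the fold",
     "evidence": "4 of 5 participants scrolled past it",
     "recommendation": "Move the button above the fold"},
    {"id": 2, "severity": "low", "problem": "Footer links use internal jargon",
     "evidence": "2 participants misread 'Solutions'",
     "recommendation": "Use plain-language labels"},
]

def render_report(product, findings):
    """Produce a short plain-text usability report, highest severity first."""
    lines = [f"Usability Report: {product}", "=" * 40]
    for f in sorted(findings, key=lambda f: f["severity"] != "high"):
        lines += [f"#{f['id']} [{f['severity'].upper()}] {f['problem']}",
                  f"  Evidence: {f['evidence']}",
                  f"  Recommendation: {f['recommendation']}", ""]
    return "\n".join(lines)

print(render_report("Acme Store prototype", findings))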

Works Cited
Alshamari, Majed, and Pam Mayhew. "Technical Review: Current Issues of Usability Testing." IETE Technical Review 26.6 (2009): 402-406. Print.
Borgholm, Thea, and Kim Madsen. "Cooperative Usability Practices." Communications of the ACM 42.5 (1999): 91-97. Print.
Clemmensen, Torkil. "Templates for Cross-Cultural and Culturally Specific Usability Testing: Results from Field Studies and Ethnographic Interviewing in Three Countries." International Journal of Human-Computer Interaction 27.7 (2011): 634-669. Print.
---. "Usability Problem Identification in Culturally Diverse Settings." Information Systems Journal 22 (2011): 151-175. Print.
Hornbaek, Kasper, and Erik Frokjaer. "A Study of the Evaluator Effect in Usability Testing." Human-Computer Interaction 23 (2008): 251-277. Print.
Krug, Steve. Don't Make Me Think! A Common Sense Approach to Web Usability. Berkeley: New Riders, 2000. Print.
Levey, Richard. "Website Usability Testing Boosts Profitability for B-to-Bs." Multichannel Merchant Exclusive Insight (2012): 1. Print.
Madsen, Kim. "The Diversity of Usability Practices." Communications of the ACM 42.5 (1999): 60-63. Print.
Nielsen, Jakob. "Accuracy vs. Insights in Quantitative Usability." Nielsen Norman Group: Evidence-Based User Experience Research, Training, and Consulting, Nov. 2011. Web. <http://www.nngroup.com/articles/accuracy-vs-insights-quantitative-ux/>
Polkosky, Melanie. "To Usability Test or Not?" Speech Technology Magazine. Information Today Inc., Sept./Oct. 2010. Web. 1 May 2013.
Quesenbery, Whitney. "What Does Usability Mean: Looking Beyond Ease of Use." WQusability.com. Web. <http://www.wqusability.com/articles/more-than-ease-of-use.html>
Ward, Jennifer, and Steve Hiller. "Usability Testing, Interface Design, and Portals." Journal of Library Administration 43.1/2 (2005): 155-171. Print.
Wilson, Chauncey. "Usability Techniques: Analyzing and Reporting Usability Data." Usability Interface. Society for Technical Communication, Oct. 1997. Web. <http://www.stcsig.org/usability/newsletter/9710-analyzing-data.html>
Wright, Anita. "The Value of Usability Testing in Document Design." Bulletin of the Association for Business Communication Dec. 1994: 48-51. Print.
Zhang, Dongsong, and Boonlit Adipat. "Challenges, Methodologies, and Issues in the Usability Testing of Mobile Applications." International Journal of Human-Computer Interaction 18.3 (2005): 293-308. Print.
