
MODULE-2

Managing Design Processes

Introduction, Organizational Design to support Usability, The Four Pillars of Design, Development
Methodologies, Ethnographic Observation, Participatory Design, Scenario Development, Social
Impact Statement for Early Design Review, Legal Issues

3.1 Introduction

 In the first decades of computer-software development, technically oriented programmers
designed text editors, programming languages, and applications for themselves and their
peers.
 The substantial experience and motivation of these users meant that complex interfaces
were accepted and even appreciated.
 Designs based on careful observation of current users, refined by thoughtful analysis of
task frequencies and sequences, and validated through early usability and thorough
acceptance tests are likely to produce high-quality interfaces.
 Designers seek direct interaction with users during the design phase, the development
process, and throughout the system lifecycle.

3.2 Organizational Design to support Usability


 Corporate marketing and customer-assistance departments are becoming more aware of
the importance of usability and are a source of constructive encouragement.
 When competitive products provide similar functionality, usability engineering is vital
for product acceptance.
 Organizational awareness can be stimulated by "Usability Day" presentations, internal
seminars, newsletters, and awards. However, resistance to new techniques and a changing
role for software engineers can cause problems in organizations.
 “Usability engineering” has evolved into a recognized discipline with maturing practices
and a growing set of standards.
 Usability engineers and user-interface architects, sometimes called the user experience
(UX) team, are gaining experience in organizational change.
 There are numerous papers and reports addressing return on investment (ROI) for
usability testing.
 The Usability Professionals’ Association (UPA) holds annual meetings and promotes the
annual “World Usability Day”.
 Design is inherently creative and unpredictable. Interactive system designers must blend
knowledge of technical feasibility with a mystical esthetic sense of what attracts users.
 Characterization of design:
o Design is a process; it is not a state and it cannot be adequately represented
statically.
o The design process is nonhierarchical; it is neither strictly bottom-up nor strictly
top-down.
o The process is radically transformational; it involves the development of partial
and interim solutions that may ultimately play no role in the final design.
o Design intrinsically involves the discovery of new goals.



3.3 The Four Pillars of Design
The four pillars described in this section can help user-interface architects to turn good ideas into
successful systems (Fig. 3.1). They are not guaranteed to work, but experience has shown that
each pillar can produce an order-of-magnitude speedup in the process and can facilitate the
creation of excellent systems.

Fig 3.1: The Four Pillars of Design
3.3.1 User Interface requirements
 Soliciting and clearly specifying user requirements is a major key to success in any
development activity
 Laying out the user-interface requirements is part of the overall requirements
development and management process
 User interface requirements describe system behavior

Examples of user-interface requirements regarding system behavior.

 Performance requirements:

“The website shall give users the ability to update their user profiles, e.g., name,
mailing address, e-mail, phone.”

“The system shall permit the ATM customer 15 seconds to make a selection. The
customer shall be warned that the session will be ended if no selection is made”

“The mobile devices shall be able to save draft text messages when out of the
service area.”

 Functional requirements

“The system shall ensure that the PIN entered matches the one on file.”

“The web site shall provide other, related purchase options based on past visits to
the web site.”



“The credit card transaction must be approved prior to displaying a confirmation
number”

 Interface Requirements

“The web site shall permit ordering stamps online.”

“The mobile device shall permit downloading of ring tones.”

 Ethnographic observation (identifying and observing the user community in action) is one way to
gather these requirements.

3.3.2 Guidelines documents and processes

Each project has different needs, but guidelines should be considered for:
 Words, icons, and graphics
 Terminology (objects and actions), abbreviations, and capitalization
 Character set, fonts, font sizes, and styles (bold, italic, underline)
 Icons, graphics, and line thickness
 Use of color, backgrounds, highlighting, and blinking

 Screen-layout issues
 Menu selection, form fill-in, and dialog-box formats
 Wording of prompts, feedback, and error messages
 Justification, white space, and margins
 Data entry and display formats for items and lists
 Use and contents of headers and footers

 Input and output devices


 Keyboard, display, cursor control, and pointing devices
 Audible sounds, voice feedback, touch input, and other special devices
 Response time for a variety of tasks

 Action sequences
 Direct-manipulation clicking, dragging, dropping, and gestures
 Command syntax, semantics, and sequences
 Programmed function keys
 Error handling and recovery procedures

 Training
 Online help and tutorials
 Training and reference materials

 When preparing/working with guidelines, always consider the 4 Es:

o Education: Provide training opportunities and give chances to discuss the guidelines.
o Enforcement: Establish procedures that facilitate distribution of and communication
about the guidelines among stakeholders, and establish procedures that ensure they
are followed.



o Exemption: Allow for exemptions, install a process that allows for rapid
adaptation if necessary.
o Enhancement: Constantly improve/refine guidelines (where appropriate and
possible without affecting the progress of the project too much)

3.3.3 User Interface Software Tools


 One difficulty in designing interactive systems is that customers and users may not have
any idea of what the system will look like when it is done.
 Serious design decisions may be difficult to change later because of cost, time, and complexity.
 These problems have no complete solution, but some of the serious difficulties can be avoided
at an early stage.
 The customers and users can be given a realistic impression of what the final system will look
like.
 The prototype of a menu system may have one or two paths active, instead of the
thousands of paths of the final system.
 Prototypes have been developed with simple drawing or word-processing tools, or even with
PowerPoint presentations of screen drawings.
 Storyboards
o Can use simple drawing tools
 Paint, Word, PowerPoint
o Can be animated
 Flash, Ajax
 Limited-functionality simulations
o Some part of the system functionality provided by designers (see the sketch below)
o Visual Studio, Java Lightweight UI Toolkit
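A limited-functionality simulation can also be sketched directly in code. The following minimal
sketch is purely illustrative (Python with the standard Tkinter toolkit, which is not one of the
tools named above): a menu-system prototype in which only one or two paths are active, while every
other menu item simply reports that it is not yet implemented.

    # Menu-system prototype: only "Open..." and "Exit" are wired up;
    # every other menu path shows a "not implemented" placeholder.
    import tkinter as tk
    from tkinter import filedialog, messagebox

    def not_implemented(name):
        # Stand-in for the thousands of paths the final system would have.
        return lambda: messagebox.showinfo("Prototype", name + " is not implemented yet.")

    root = tk.Tk()
    root.title("Menu prototype")

    menubar = tk.Menu(root)

    file_menu = tk.Menu(menubar, tearoff=0)
    file_menu.add_command(label="Open...", command=filedialog.askopenfilename)
    file_menu.add_command(label="Save", command=not_implemented("Save"))
    file_menu.add_separator()
    file_menu.add_command(label="Exit", command=root.destroy)
    menubar.add_cascade(label="File", menu=file_menu)

    edit_menu = tk.Menu(menubar, tearoff=0)
    for item in ("Cut", "Copy", "Paste"):
        edit_menu.add_command(label=item, command=not_implemented(item))
    menubar.add_cascade(label="Edit", menu=edit_menu)

    root.config(menu=menubar)
    root.mainloop()

Showing such a skeleton to customers and users gives a realistic impression of the final menu
structure long before the underlying functionality exists.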

3.3.4 Expert reviews and usability testing


 Necessary in order to ensure usability and high quality UIs
 Examples: Early pilot testing (possibly several components individually), expert review
methods, tests with the intended users, surveys, or automated analysis tools
 Again, which procedures to choose and how they should be shaped individually greatly
depends on the nature of the project (e.g., number of expected users).

3.4 Development Methodologies


 Many software projects fail due to communication deficits between developers and
clients/users (gap between business/user and IT/developer)
 These problems may be due to poor communication between developers and clients or
between developers and their users.
 Successful developers use user-centered design which leads to systems that generate
fewer problems during development and have lower maintenance costs.
 They are easier to learn, produce faster performance, reduce user errors substantially, and
encourage users to explore features that go beyond the minimum required to get by.
 Software developers have learned that consistently following established development
methodologies can help them meet budgets and schedules.



 While software-engineering methodologies are effective in facilitating the software
development process, they have not provided clear processes for studying the users,
understanding their needs, and creating usable interfaces.
 Agile technologies and methodologies provide the room to be responsive to user-
interface development and usability needs.
 These business-oriented approaches specify detailed deliverables for the various stages of
design and incorporate cost/benefit and return-on-investment analyses to facilitate
decision making.
 There are dozens of advertised development methods, but the focus here is on rapid
contextual design, as described in the steps below.

1. Contextual inquiry
Plan for, prepare and then conduct field interviews to observe and
understand the work tasks being performed. Review business practices.
2. Interpretation Sessions and work modeling
Hold team discussions to draw conclusions based on the contextual
inquiry, including gaining an understanding of the workflow processes in
the organization as well as cultural and policy impacts.
3. Model Consolidation and affinity diagram building
Present the data gathered to date from users and the interpretation and
work modeling to a larger, targeted population to gain insight and
concurrence.
4. Persona development
Develop personas to represent the different user types within a targeted
demographic that might use a site or product
5. Visioning
Review and “walk” the consolidated data, sharing the personas created.
The visioning session helps define how the system will streamline and
transform the work of the users.
6. Storyboarding
The vision guides the detailed redesign of user tasks using pictures and
graphs to describe the initial user-interface concepts, business rules, and
automation assumptions. Storyboarding defines and illustrates the “to
be built” assumptions.
7. User environment design
The single, coherent representation of the users and the work to be
performed is expressed in the user environment design.
8. Interview and evaluations with paper prototypes and mock-ups
Conduct interview and tests with actual users, beginning with paper
prototypes and then moving onto higher-fidelity prototypes. Capturing the
results of the interview aids in ensuring that the system will meet end-user
requirements.

3.5 Ethnographic Observation


 The early stages of most methodologies include observation of users. Since interface
users form a unique culture, ethnographic methods for observing them in the workplace
are becoming increasingly important.
 Ethnographers visit users’ work or home environments to listen and observe carefully,
sometimes stepping forward to ask questions and participate in activities.



 As ethnographers, user-interface designers gain insight into individual behavior and the
organizational context.
 The goal of this observation is to obtain the necessary data to influence interface
redesign.
 Unfortunately, it is easy to misinterpret observations, to disrupt normal practice, and to
overlook important information.
 Following a validated ethnographic process reduces the likelihood of these problems
 Guidelines for preparing for the evaluation, performing field study, analyzing the data,
and reporting the findings might include the following:
I. Preparation
 Understand organization policies and work culture.
 Familiarize yourself with the system and its history.
 Set initial goals and prepare questions.
 Gain access and permission to observe or interview.
II. Field Study
 Establish rapport with managers and users.
 Observe or interview users in their workplace, and collect subjective and
objective quantitative and qualitative data.
 Follow any leads that emerge from the visits.
 Record your visits.
III. Analysis
 Compile the collected data in numerical, textual, and multimedia
databases.
 Quantify data and compile statistics.
 Reduce and interpret the data.
 Refine the goals and the process used.
IV. Reporting
 Consider multiple audiences and goals.
 Prepare a report and present the findings.

3.6 Participatory Design


 This is direct involvement of people in the collaborative design of the things and
technologies they use.
 The arguments in favor suggest that more user involvement brings more accurate
information about the tasks and an opportunity for users to influence design decisions.
 On the other hand, extensive user involvement may be costly and may lengthen the
implementation period.
 It may also generate antagonism from people who are not involved or whose suggestions
are rejected, and potentially force designers to compromise their redesigns to satisfy
incompetent participants.
 Participatory design experiences are usually positive
 Controversial
o On the positive side, more user involvement brings:
 more accurate information about tasks
 more opportunity for users to influence design decisions
 a sense of participation that builds users’ ego investment in successful
implementation
 potential for increased user acceptance of the final system
o On the negative side, extensive user involvement may:
 be more costly
 lengthen the implementation period
 build antagonism with people not involved or whose suggestions are rejected
 force designers to compromise their design to satisfy incompetent
participants
 build opposition to implementation
 create personality conflicts between design-team members and users
 show that organizational politics and the preferences of certain individuals are
more important than technical issues

Fig: Druin’s model of the four levels of user participation. The blue areas (Informant and Design
partner) represent stages of participatory design.

 Users: The taxonomy describes the roles that children or older adults can play in developing
interfaces whose typical users will be children or older adults; at this first level, they
simply use the technology.
 Testers: are merely observed as they try out novel designs.
 Informants: comment to designers through interviews and focus groups.
 Design Partners: are active members of the design team, which in the case of children’s
software will naturally involve participants of many ages (an intergenerational team).

3.7 Scenario Development

 When a current interface is being redesigned or a well-polished manual system is being
automated, reliable data about the distribution of task frequencies and sequences is an
enormous asset.
 If current data do not exist, then usage logs can quickly provide insight (a sketch of this
appears at the end of this section). A table with user communities listed across the top
and tasks listed down the side is helpful.
 Day-in-the-life scenarios:
o characterize what happens when users perform typical tasks
o can be acted out as a form of walkthrough



o may be used as the basis for a videotape
o other useful tools include:
 a table of user communities across the top, tasks listed down the side
 a table of task sequences
 a flowchart or transition diagram
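As noted above, if current data do not exist, usage logs can supply the raw material for a table of
user communities and tasks. The following small sketch is illustrative only (Python; the log
format, community names, and task names are invented for the example):

    # Tally task frequencies per user community from a simple usage log.
    # Each record is assumed to be "community,task" on one line.
    from collections import Counter, defaultdict

    log_lines = [
        "novice,search", "novice,search", "novice,print",
        "expert,search", "expert,macro", "expert,macro",
    ]

    table = defaultdict(Counter)              # community -> Counter of tasks
    for line in log_lines:
        community, task = line.strip().split(",")
        table[community][task] += 1

    communities = sorted(table)
    tasks = sorted({t for counts in table.values() for t in counts})

    # Print the table: user communities across the top, tasks down the side.
    print("task".ljust(10) + "".join(c.ljust(10) for c in communities))
    for task in tasks:
        row = "".join(str(table[c][task]).ljust(10) for c in communities)
        print(task.ljust(10) + row)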

3.8 Social Impact Statement for Early Design Review


 Interactive systems often have dramatic impact on large numbers of users.
 To minimize risks, a thoughtful statement of anticipated impacts circulated among the
stakeholders can be a helpful process for eliciting productive suggestions early in the
development, when changes are easiest.
 Describe the new system and its benefits
o Convey the high level goals of the new system.
o Identify the stakeholders.
o Identify specific benefits
 Address concerns and potential barriers
o Anticipate changes in job functions and potential layoffs.
o Address security and privacy issues.
o Discuss accountability and responsibility for system misuse and failure.
o Avoid potential biases.
o Weigh individual rights vs. societal benefits.
o Assess trade-offs between centralization and decentralization.
o Preserve democratic principles.
o Ensure diverse access.
o Promote simplicity and preserve what works.
 Outline the development process
o Present an estimated project schedule.
o Propose process for making decisions.
o Discuss expectations of how stakeholders will be involved.
o Recognize needs for more staff, training, and hardware.
o Propose plan for backups of data and equipment.
o Outline plan for migrating to the new system.

3.9 Legal issues


 Potential Controversies
o What material is eligible for copyright?
o Are copyrights or patents more appropriate for user interfaces?
o What constitutes copyright infringement?
o Should user interfaces be copyrighted?
o Evolving public policies related to:
 Privacy
 Liability related to system safety/reliability
 Freedom of speech



Evaluating Interface Design
Introduction, Expert Reviews, Usability Testing and Laboratories, Survey Instruments,
Acceptance tests, Evaluation during Active Use, Controlled Psychologically Oriented
Experiments

4.1 Introduction

 Designers can become so entranced with their creations that they may fail to evaluate
them adequately.
 Experienced designers have attained the wisdom and humility to know that extensive
testing is a necessity.
 The determinants of the evaluation plan include:
o Stage of design (early, middle, late)
o Novelty of project (well defined vs. exploratory)
o Number of expected users
o Criticality of the interface (life-critical medical system vs. museum exhibit
support)
o Costs of product and finances allocated for testing
o Time available
o Experience of the design and evaluation team
 The range of evaluation plans might be from an ambitious two-year test to a few days’
test.
 The range of costs might be from 10% of a project down to 1%.

4.2 Expert Reviews


 While informal demos to colleagues or customers can provide some useful feedback,
more formal expert reviews have proven to be effective.
 Expert reviews entail one-half day to one week of effort, although a lengthy training
period may sometimes be required to explain the task domain or operational
procedures.
 There are a variety of expert review methods to choose from:

o Heuristic evaluation - Review the UI to determine compliance with a short list of
design heuristics (e.g., the “Eight Golden Rules of Interface Design”).

o Guidelines review - Review the UI for conformance with the guidelines
document. Because guidelines documents may contain a thousand items or
more, it may take expert reviewers some time to absorb them and days or
weeks to review a large interface.

o Consistency inspection - Verify consistency across several UIs, within a UI,
or within a tutorial.

o Cognitive walkthrough - Experts simulate users walking through the
interface to carry out typical tasks. An expert may try the walkthrough
privately and explore the system, but there should also be a group meeting with
designers, users, or managers to conduct a walkthrough and provoke
discussion.

o Metaphors of human thinking - Experts conduct an inspection that focuses on
how the user thinks when interacting with an interface. They consider five aspects
of human thinking: habit, the stream of thought, awareness and associations,
the relation between utterances and thought, and knowing.

o Formal usability inspection - Experts participate in a meeting/discussion with
a moderator who presents the interface and asks specific questions.

 Expert reviews can be scheduled at several points in the development process, when
experts are available and when the design team is ready for feedback.
 The number of expert reviews will depend on the magnitude of the project and on
the amount of resources allocated.
 After choosing a review method, you need to select the right approach/means and/or
reporting style to implement it. Some examples:
o Ranked Recommendation (assign priorities)
o Birds-Eye View (study printed screens from a distance)
o Use of Software Tools (speed up the review process)
o General challenge: Experts may lack an understanding of the task domain
and/or user community, or may be biased. Hence it is crucial to choose
knowledgeable experts who are familiar with the project and organization.

4.3 Usability Testing and Laboratories

 Actual potential users test the UI, commonly in a lab environment.


 The emergence of usability testing and laboratories since the early 1980s is an
indicator of the profound shift in attention to user needs.
 The remarkable surprise was that usability testing not only sped up many projects but
that it produced dramatic cost savings.
 The movement towards usability testing stimulated the construction of usability
laboratories.
 Participants should be chosen to represent the intended user communities, with
attention to background in computing, experience with the task, motivation,
education, and ability with the natural language used in the interface.
 Participation should always be voluntary, and informed consent should be obtained.
Professional practice is to ask all subjects to read and sign a statement like this one:
o I have freely volunteered to participate in this experiment.
o I have been informed in advance what my task(s) will be and what procedures
will be followed.
o I have been given the opportunity to ask questions, and have had my questions
answered to my satisfaction.
o I am aware that I have the right to withdraw consent and to discontinue
participation at any time, without prejudice to my future treatment.



o My signature below may be taken as affirmation of all the above statements; it
was given prior to my participation in this study.
 Videotaping participants performing tasks is often valuable for later review and for
showing designers or managers the problems that users encounter.
 Field tests attempt to put new interfaces to work in realistic environments for a fixed
trial period. Field tests can be made more fruitful if logging software is used to
capture error, command, and help frequencies plus productivity measures (a logging
sketch appears after this list).
 Game designers pioneered the can-you-break-this approach to usability testing by
providing energetic teenagers with the challenge of trying to beat new games. This
destructive testing approach, in which the users try to find fatal flaws in the system, or
otherwise to destroy it, has been used in other projects and should be considered
seriously.
 For all its success, usability testing does have at least two serious limitations: it
emphasizes first-time usage and has limited coverage of the interface features.
 These and other concerns have led design teams to supplement usability testing with
the varied forms of expert reviews.
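The logging software mentioned for field tests can be quite simple. The sketch below is
illustrative only (Python; the event categories and log-file name are assumptions, not part of any
particular product): it records command, error, and help events during a trial period and
summarizes their frequencies afterwards.

    # Record and summarize command, error, and help-request frequencies.
    import json, time
    from collections import Counter

    LOG_FILE = "field_test_log.jsonl"          # hypothetical log location

    def log_event(kind, detail=""):
        # kind is one of "command", "error", "help"
        record = {"t": time.time(), "kind": kind, "detail": detail}
        with open(LOG_FILE, "a") as f:
            f.write(json.dumps(record) + "\n")

    def summarize():
        counts = Counter()
        with open(LOG_FILE) as f:
            for line in f:
                counts[json.loads(line)["kind"]] += 1
        return counts

    log_event("command", "save")
    log_event("help", "how do I print?")
    log_event("error", "invalid date format")
    print(summarize())                          # frequencies per event kind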

4.3.1 Usability labs


 A typical modest usability lab would have two 10 by 10 foot areas, one for the
participants to do their work and another, separated by a half-silvered mirror, for the
testers and observers (designers, managers, and customers).
 The number, types, and sources of participants are identified.
 Develop a test plan
o The test plan contains a list of tasks and subjective-satisfaction and debriefing
questions, and identifies the number, types, and sources of participants
o Developed in collaboration with the design team, also consider deadlines and
budgets
 Conduct pilot test
o Usually a couple of weeks before the actual test
o Very limited number of participants
o Purpose: Validate procedure, tasks, questions, adjust if problems occur
 Choose participants
o Selection Criteria: background, experiences with the tasks, motivation,
education, natural language ability, physical abilities
o Also consider issues such as time, date, noise, etc.
 Recording participants performing tasks is often valuable for later review and for
showing designers or managers the problems that users encounter.
 Another relatively new technique available to the usability-evaluation professional is
eye-tracking software. The eye-tracking data can show where participants gazed at the
screen and for how long.

4.3.2 Handling participants and the Institutional Review Board (IRB)



 Participants should always be treated with respect and should be informed that it is
not they who are being tested; rather, it is the software and user interface that are under
study.
 They should be told about what they will be doing and how long they will be expected
to stay
 An Institutional Review Board governs any research performed with human subjects.
There are different levels of review and precise procedures that must be followed.

4.3.3 Think aloud and related techniques

 An effective technique during usability testing is to invite users to think aloud about what
they are doing as they are performing the task.
 The designer or tester should be supportive of the participants, not taking over or
giving instructions, but prompting and listening for clues about how they are using the
interface
 For example, they may hear comments such as “This web page text is too small… so
I’m looking for something on the menus to make the text bigger…”

4.3.4 The spectrum of usability testing

 Usability testing comes in many different flavors and formats.


 The purpose of the test and the type of the data that is needed are important
considerations
 The following is a list of the various types of usability testing. Testing can be
performed using combinations of these methods as well

o Paper mockups. Early usability studies can be conducted using paper mock-
ups of screen displays to assess user reactions to wording, layout, and
sequencing. A test administrator plays the role of the computer by flipping the
pages while asking a participant user to carry out typical tasks. This informal
testing is inexpensive, rapid, and usually productive.

o Discount usability testing. This quick-and-dirty approach to task analysis,


prototype development, and testing has been widely influential because it
lowered the barriers to newcomers. A controversial aspect is the
recommendation to use only three to six test participants. Advocates point out
that most serious problems are found with a few participants, enabling prompt
revision and repeated testing, while critics hold that a broader subject pool is
required to thoroughly test more complex systems. The formative evaluation
identifies problems that guide redesign, while the summative evaluation
provides evidence for product announcements (“94% of our 120 testers
completed their shopping tasks without assistance”) and clarifies training needs
(“with 4 minutes of instruction, every participant successfully programmed the
videorecorder”).



o Competitive usability testing. Competitive testing compares a new interface to
previous versions or to similar products from competitors. This approach is
close to a controlled experimental study, and staff must be careful to construct
parallel sets of tasks and to counterbalance the order of presentation of the
interfaces (a counterbalancing sketch appears after this list). Within-subjects
designs seem the most powerful, because participants can make comparisons
between the competing interfaces; fewer participants are needed, although each
is needed for a longer time period.

o Universal usability testing. This approach tests interfaces with highly diverse
users, hardware, software platforms, and networks. When a wide range of
international users is anticipated, such as for consumer electronics products,
web-based information services, or e-government services, ambitious testing is
necessary to clean up problems and thereby help ensure success. Trials with
small and large displays, slow and fast networks, and a range of operating
systems or Internet browsers will do much to raise the rate of customer
success.

o Field tests and portable labs. This testing method puts new interfaces to work
in realistic environments for a fixed trial period. Field tests can be made more
fruitful if logging software is used to capture error, command, and help
frequencies, as well as productivity measures. Portable usability laboratories
with videotaping and logging facilities have been developed to support more
thorough field testing.

o Remote usability testing. Since web-based applications are available


internationally, it is tempting to conduct usability tests online, without
incurring the complexity and cost of bringing participants to a lab. This makes
it possible to have larger numbers of participants with more diverse
backgrounds, and may add to the realism since participants do their tests in
their own environments, using their own equipment.

o Can-you-break-this tests. Game designers pioneered the can-you-break-this


approach to usability testing by providing energetic teenagers with the
challenge of trying to beat new games. This destructive testing approach, in
which the users try to find fatal flaws in the system or otherwise destroy it, has
been used in other projects and should be considered seriously.
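For competitive usability testing (described above), counterbalancing the order in which
participants see the competing interfaces prevents practice and fatigue effects from favoring one
design. The sketch below is illustrative only (Python; the interface names and the number of
participants are hypothetical): it cycles through all possible presentation orders and assigns one
to each participant.

    # Counterbalance presentation order across participants.
    from itertools import permutations, cycle

    interfaces = ["New design", "Old version", "Competitor"]   # hypothetical
    orders = cycle(permutations(interfaces))                   # 6 possible orders

    participants = ["P1", "P2", "P3", "P4", "P5", "P6"]
    assignment = {p: order for p, order in zip(participants, orders)}

    for participant, order in assignment.items():
        print(participant, "->", " then ".join(order))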

4.4 Survey Instruments

 Written user surveys are a familiar, inexpensive and generally acceptable companion
for usability tests and expert reviews.
 The keys to successful surveys are clear goals in advance and then development of
focused items that help attain the goals.

4.4.1 Preparing and designing survey questions



 A survey should be prepared, reviewed by colleagues, and tested with a small sample
of users before a large-scale survey is conducted.
 Survey goals can be tied to the components of the Objects and Action Interface model
of interface design. Users could be asked for their subjective impressions about
specific aspects of the interface such as the representation of:
o task-domain objects and actions
o interface-domain metaphors and actions
o syntax of inputs and design of displays.

 Other goals would be to ascertain


o Users’ background (age, gender, origins, education, income)
o Experience with computers (specific applications or software packages, length
of time, depth of knowledge)
o Job responsibilities (decision-making influence, managerial roles, motivation)
o Personality style (introvert vs. extrovert, risk taking vs. risk aversive, early vs.
late adopter, systematic vs. opportunistic)
o Reasons for not using an interface (inadequate services, too complex, too
slow)
o Familiarity with features (printing, macros, shortcuts, tutorials)
o Feeling state after using an interface (confused vs. clear, frustrated vs. in-
control, bored vs. excited).
 Online surveys avoid the cost of printing and the extra effort needed for distribution
and collection of paper forms.
 Many people prefer to answer a brief survey displayed on a screen, instead of filling
in and returning a printed form, although there is a potential bias in the sample.
 A survey example is the Questionnaire for User Interaction Satisfaction (QUIS).
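As a purely illustrative aid, QUIS-style ratings on bipolar 1-to-9 scales can be summarized per
question with means and spreads, as in the following sketch (Python; the question wordings and
responses are invented and are not taken from QUIS itself):

    # Summarize subjective ratings (1 = negative anchor, 9 = positive anchor).
    from statistics import mean, stdev

    responses = {
        "overall reaction: terrible .. wonderful": [7, 8, 6, 9, 7],
        "learning: difficult .. easy":             [5, 6, 4, 7, 6],
        "screen layout: confusing .. clear":       [8, 8, 7, 9, 8],
    }

    for question, ratings in responses.items():
        print(question)
        print("  mean =", round(mean(ratings), 1),
              " sd =", round(stdev(ratings), 1),
              " n =", len(ratings))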

4.5 Acceptance Test


 For large implementation projects, the customer or manager usually sets objective and
measurable goals for hardware and software performance.
 If the completed product fails to meet these acceptance criteria, the system must be
reworked until success is demonstrated.
 Rather than the vague and misleading criterion of "user friendly," measurable criteria
for the user interface can be established for the following:
 Time to learn specific functions
 Speed of task performance
 Rate of errors by users
 Human retention of commands over time
 Subjective user satisfaction
 In a large system, there may be eight or 10 such tests to carry out on different
components of the interface and with different user communities.
 Once acceptance testing has been successful, there may be a period of field testing
before national or international distribution.
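The measurable criteria listed above lend themselves to a simple automated comparison of measured
results against the agreed thresholds. The following sketch is illustrative only (Python; every
threshold and measured value is a hypothetical example, not a figure from the text):

    # Compare measured usability results against acceptance criteria.
    criteria = {   # name: (measured value, required threshold, comparison)
        "time to learn (minutes)":     (28.0, 30.0, "<="),
        "task time (seconds)":         (75.0, 80.0, "<="),
        "error rate (%)":              (4.5,  5.0,  "<="),
        "retention after a week (%)":  (82.0, 80.0, ">="),
        "satisfaction (1-9)":          (6.8,  7.0,  ">="),
    }

    def passes(measured, threshold, op):
        return measured <= threshold if op == "<=" else measured >= threshold

    for name, (measured, threshold, op) in criteria.items():
        status = "PASS" if passes(measured, threshold, op) else "REWORK"
        print(name, ":", measured, op, threshold, "->", status)

If any criterion reports REWORK, the system is reworked until success is demonstrated, as described
above.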

4.6 Evaluation During Active Use



 A carefully designed and thoroughly tested system is a wonderful asset, but successful
active use requires constant attention from dedicated managers, user-services
personnel, and maintenance staff.
 Perfection is not attainable, but percentage improvements are possible and are worth
pursuing.
 Interviews and focus group discussions
o Interviews with individual users can be productive because the interviewer can
pursue specific issues of concern.
o After a series of individual discussions, group discussions are valuable to
ascertain the universality of comments.
 Continuous user-performance data logging
o The software architecture should make it easy for system managers to collect
data about the patterns of system usage, speed of user performance, rate of
errors, or frequency of request for online assistance.
o A major benefit of usage-frequency data is the guidance they provide to
system maintainers in optimizing performance and reducing costs for all
participants.
 Online or telephone consultants, email, and online suggestion boxes
o Online or telephone consultants are an extremely effective and personal way
to provide assistance to users who are experiencing difficulties.
o Many users feel reassured if they know there is a human being to whom they
can turn when problems arise.
o On some network systems, the consultants can monitor the user's computer
and see the same displays that the user sees while maintaining telephone voice
contact.
o This service can be extremely reassuring; the users know that someone can
walk them through the correct sequence of screens to complete their tasks.
o Electronic mail can be employed to allow users to send messages to the
maintainers or designers.
o Such an online suggestion box encourages some users to make productive
comments, since writing a letter may be seen as requiring too much effort.
 Discussion groups, wikis, and newsgroups
o Many interface designers offer users an electronic bulletin board or
newsgroups to permit posting of open messages and questions.
o Bulletin-board software systems usually offer a list of item headlines, allowing
users the opportunity to select items for display.
o New items can be added by anyone, but usually someone monitors the bulletin
board to ensure that offensive, useless, or repetitious items are removed.
o Newsletters that provide information about novel interface facilities,
suggestions for improved productivity, requests for assistance, case studies of
successful applications, or stories about individual users can promote user
satisfaction and greater knowledge.
o Printed newsletters are more traditional and have the advantage that they can
be carried away from the workstation.



o Online newsletters are less expensive and more rapidly disseminated
o Discussion groups allow workers to exchange experiences with colleagues,
promote novel approaches, stimulate greater dedication, encourage higher
productivity, and develop a deeper relationship of trust.
 Tools for automated evaluation
o Software can be effective in evaluating user interfaces for desktop
applications, websites, and mobile devices.
o Straightforward tools to check spelling or terms benefit interface designers.
o Simple metrics that report numbers of displays, widgets, or links between
displays capture the size of a user-interface project, but inclusion of more
sophisticated evaluation procedures can allow interface designers to assess
whether a menu tree is too deep or contains redundancies, and whether widget
labels have been used consistently (see the sketch below).
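A minimal sketch of such an automated check (Python; the menu structure and the depth limit are
hypothetical) might report items that are nested too deeply and labels that appear more than once:

    # Check a menu tree for excessive depth and duplicate item labels.
    from collections import Counter

    # Hypothetical menu tree: (label, list of child nodes)
    menu = ("Root", [
        ("File", [("New", []), ("Open", []), ("Export", [("PDF", []), ("HTML", [])])]),
        ("Edit", [("Copy", []), ("Paste", []), ("Copy", [])]),        # duplicate label
        ("Help", [("Tutorial", [("Basics", [("Step 1", [("Detail", [])])])])]),
    ])

    MAX_DEPTH = 4   # assumed guideline limit

    def walk(node, depth=0):
        label, children = node
        yield label, depth
        for child in children:
            yield from walk(child, depth + 1)

    labels = Counter()
    too_deep = []
    for label, depth in walk(menu):
        labels[label] += 1
        if depth > MAX_DEPTH:
            too_deep.append((label, depth))

    duplicates = [name for name, count in labels.items() if count > 1]
    print("Items deeper than", MAX_DEPTH, ":", too_deep)
    print("Redundant labels:", duplicates)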

4.7 Controlled Psychologically-oriented Experiments


 Scientific and engineering progress is often stimulated by improved techniques for
precise measurement.
 Rapid progress in the designs of interfaces will be stimulated as researchers and
practitioners evolve suitable human-performance measures and techniques.
 The outline of the scientific method as applied to human-computer interaction might
comprise these tasks:
o Deal with a practical problem and consider the theoretical framework
o State a lucid and testable hypothesis
o Identify a small number of independent variables that are to be manipulated
o Carefully choose the dependent variables that will be measured
o Judiciously select subjects and carefully or randomly assign subjects to groups
o Control for biasing factors (non-representative sample of subjects or selection
of tasks, inconsistent testing procedures)
o Apply statistical methods to data analysis
o Resolve the practical problem, refine the theory, and give advice to future
researchers
 Managers of actively used systems are coming to recognize the power of controlled
experiments in fine-tuning the human-computer interface.
 For example, a proposed interface change could be used by a group of users for a
limited time, and then performance could be compared with the control group.
Dependent measures could include performance times, user-subjective satisfaction,
error rates, and user retention over time.
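As an illustration of the “Apply statistical methods to data analysis” step, task-completion times
for a group using a redesigned interface could be compared with those of the control group. The
sketch below is illustrative only (Python standard library; the timing data are invented, and
Welch's t statistic is computed by hand where a statistics package such as scipy would normally be
used):

    # Compare task-completion times (seconds) for two independent groups.
    from statistics import mean, variance
    from math import sqrt

    redesign = [48, 52, 45, 50, 47, 49, 44, 51]   # invented data
    control  = [60, 58, 63, 55, 61, 59, 64, 57]

    def welch_t(a, b):
        # Welch's t statistic for two samples with possibly unequal variances.
        va, vb = variance(a) / len(a), variance(b) / len(b)
        return (mean(a) - mean(b)) / sqrt(va + vb)

    t = welch_t(redesign, control)
    print("mean redesign =", mean(redesign), "mean control =", mean(control), "t =", round(t, 2))
    # A large |t|, judged against the t distribution, suggests the difference in
    # mean completion times is unlikely to be due to chance alone.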

