How To Integrate Software Development Lifecycle & Human Centered Design



User Centered Design (UCD) is an important factor in the success of any product in a market where there is an abundance of products with similar functionality. Software products are moving towards a phase where end users can take blocks of software, customize them, and build their own products, which places even more emphasis on the role of UCD in product development. Often, the process of providing a good user experience is treated as separate from the process of developing the product or software, but there is increasing awareness that the two processes need to be integrated and proceed hand in hand for the product to succeed. UCD is a design process focusing on user research, user interface design, and usability evaluation. In UCD communities, however, a full understanding of user requirements is often seen as incompatible with early and quick development iterations. We performed a literature review aiming to identify usability evaluation methods.

Usability Inspection Methods

Usability inspection is the generic name for a set of methods that are all based on having evaluators inspect a user interface. Typically, usability inspection is aimed at finding usability problems in the design. This is in contrast to usability testing, where the usability of the interface is evaluated by testing it on real users. Usability inspections can generally be used early in the development process by evaluating prototypes or specifications for the system that can't be tested on users. Usability inspection methods are generally considered to be cheaper to implement than testing on users. [1]
Usability inspection methods include:

• Cognitive walkthrough (task-specific)
• Heuristic evaluation (holistic)
• Pluralistic walkthrough
Usability inspection [14] is an important approach to achieving usability. It asks human inspectors to
detect usability problems in a user interface design so that they can be corrected to improve usability. It
usually requires multiple inspectors, who can either work individually or as a team.
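Since inspection usually involves several inspectors working individually, their findings are typically merged into one prioritized problem list. A minimal sketch of that merge step (the inspector names and problem descriptions are hypothetical, and real studies match duplicates against a shared heuristic set rather than exact strings):

```python
from collections import Counter

# Hypothetical findings: each inspector's list of usability problems,
# phrased consistently so duplicates can be matched by string.
findings = {
    "inspector_a": ["no undo on delete", "low-contrast labels", "jargon in error text"],
    "inspector_b": ["no undo on delete", "jargon in error text"],
    "inspector_c": ["low-contrast labels", "no undo on delete"],
}

# Count how many inspectors reported each problem; problems found by
# more inspectors are usually prioritized for fixing first.
counts = Counter(p for problems in findings.values() for p in problems)
for problem, n in counts.most_common():
    print(f"{n}/{len(findings)} inspectors: {problem}")
```

This also illustrates why multiple inspectors are recommended: no single inspector in the sketch found all three problems.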
Usability Testing Method
Usability testing is a technique used in user-centered interaction design to evaluate a product by
testing it on users.

In a usability-testing session, a researcher (called a “facilitator” or a “moderator”) asks a participant to perform tasks, usually using one or more specific user interfaces. The term “usability testing” is often used interchangeably with “user testing.”

Why Usability Test?
The goals of usability testing vary by study, but they usually include:

• Identifying problems in the design of the product or service
• Uncovering opportunities to improve
• Learning about the target user’s behavior and preferences

Elements of Usability Testing
There are many different types of usability testing, but the core elements
in most usability tests are the facilitator, the tasks, and the participant.

In a usability test, the facilitator gives instructions and task scenarios to the participant. The participant provides behavioral and verbal feedback about the interface while performing those tasks.
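The behavioral feedback from such sessions is often reduced to simple per-task metrics, such as success rate and mean time on task. A small illustrative sketch (the participants, tasks, and timings are made up):

```python
# Hypothetical session log: (participant, task, completed?, seconds taken).
sessions = [
    ("p1", "find pricing page", True, 42.0),
    ("p2", "find pricing page", True, 55.5),
    ("p3", "find pricing page", False, 120.0),
    ("p1", "update email address", True, 30.0),
    ("p2", "update email address", False, 95.0),
]

def task_metrics(log, task):
    """Success rate and mean time on task for one task scenario."""
    rows = [r for r in log if r[1] == task]
    success_rate = sum(r[2] for r in rows) / len(rows)  # True counts as 1
    mean_time = sum(r[3] for r in rows) / len(rows)
    return success_rate, mean_time

rate, secs = task_metrics(sessions, "find pricing page")
print(f"success rate {rate:.0%}, mean time {secs:.1f}s")
```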


Top 7 Usability Testing Methods
Usability testing is a powerful tool for evaluating a website's functionality
and making sure people can navigate it efficiently. Each usability testing
method answers different research questions; the method you choose
will depend on both your resources and your objectives.

1. Moderated + in-person usability testing


Tests that are moderated and conducted in-person offer the most control.
They are resource-heavy but excellent for collecting in-depth information.

# Lab usability testing

# Guerrilla testing

2. Moderated + remote

Moderated and remote usability tests are performed via a computer or phone
and require a trained moderator. They’re good for picking from a wide range
of testers while still taking advantage of a moderator's skills and ability to dive
deep.

# Phone interviews

# Card sorting

3. Unmoderated + remote

Relying mostly on computer programs, these passive testing methods provide
insight into how users interact with a website in their ‘natural environment.’
# Session recordings

# Online testing tools and platforms

4. Unmoderated + in-person

Unmoderated in-person tests are conducted in a controlled, physical setting
but don't require a person to administer the test. This gives you many of the
benefits of testing in a controlled atmosphere and reduces the possibility that
a moderator could lead or influence participants with their questions.

# Observation

# Eye-tracking

There are many types of user tests, from behavioral and attitudinal to
qualitative and quantitative, each with a set number of participants for
optimal results.

Attitudinal and behavioral testing is summed up as “what people say” vs.
“what people do.” Many times the two are very different.

Qualitative and quantitative testing is described as “direct observation” vs.
“indirect measurement.”
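Quantitative, indirect measurement often relies on standardized questionnaires. One widely used instrument, added here purely as an illustration since the text above does not name one, is the System Usability Scale (SUS), which turns ten 1-5 ratings into a single 0-100 score:

```python
def sus_score(responses):
    """Score a System Usability Scale questionnaire (0-100).

    `responses` is a list of ten ratings from 1 (strongly disagree)
    to 5 (strongly agree). Odd-numbered items are positively worded
    and even-numbered items negatively worded, hence the two formulas.
    """
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A hypothetical participant who rated every item as favorably as possible:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # prints 100.0
```

Because every participant answers the same items on the same scale, scores can be averaged and compared across studies, which is exactly the appeal of indirect measurement.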

EXPLORATORY EVALUATION METHOD
EXPLORATORY EVALUATION DEFINED

Conducting evaluations of programs that are useful to project managers, decision makers,
stakeholders, partners, and funders is the hallmark of successful evaluation. Exploratory
evaluation (EE) is the process of examining how a program has been conceived and
implemented. Exploratory evaluations clarify program goals and evaluation criteria, provide
evaluation findings that may be useful in the short term, and help in designing more definitive
evaluations to further inform managers, policymakers, and other evaluation stakeholders. All of
these approaches can be accomplished relatively quickly, and each approach enhances the
likelihood that further evaluation will prove to be feasible, accurate, and useful. Evaluation is
the systematic acquisition and assessment of information to provide useful feedback
about some object.

Predictive Evaluation Method

Predictive Evaluation (PE) uses a four-step process to predict results, then designs and evaluates
a training intervention accordingly.

Evaluation means gathering data about the usability of a design by a specified group of users, for a
particular activity, within a specified environment. Its goals are to:
1. Assess the extent of the system's functionality
2. Assess the effect of the interface on the user
3. Identify specific problems with the system

Predictive Evaluation
• Basis: observing users can be time-consuming and expensive, so predictive evaluation tries to
predict usage rather than observing it directly, conserving resources (quick and low cost).
• Approach: expert reviews (frequently used). HCI experts interact with the system, try to find
potential problems, and give prescriptive feedback. This works best if the experts haven't used an
earlier prototype, are familiar with the domain or task, and understand user perspectives.

Predictive evaluation enables you to effectively and accurately forecast training’s value to your
company, measure against these predictions, establish indicators to track your progress, make
midcourse corrections, and report the results in a language that business executives respond to
and understand.
At its heart, any predictive evaluation technique requires a model for how a user
interacts with an interface.
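One classic example of such a model, not named in the text above but widely cited, is the Keystroke-Level Model (KLM) of Card, Moran, and Newell, which predicts expert task time by summing standard operator times. A rough sketch using commonly cited operator values (the task sequence below is hypothetical):

```python
# Commonly cited Keystroke-Level Model operator times, in seconds.
# K: keystroke, P: point with mouse, B: press or release mouse button,
# H: home hands between keyboard and mouse, M: mental preparation.
KLM_TIMES = {"K": 0.2, "P": 1.1, "B": 0.1, "H": 0.4, "M": 1.35}

def predict_seconds(operators: str) -> float:
    """Predict expert execution time for a sequence of KLM operators."""
    return sum(KLM_TIMES[op] for op in operators)

# Hypothetical task: think, point at a button, click it (press + release).
print(predict_seconds("MPBB"))  # 1.35 + 1.1 + 0.1 + 0.1, about 2.65 seconds
```

This is the sense in which predictive evaluation conserves resources: the estimate comes from the model alone, with no participants observed.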

FORMATIVE EVALUATION METHOD

Formative evaluation (sometimes referred to as internal) is a method for judging the
worth of a program while the program activities are forming (in progress). Formative
evaluations can be conducted during any phase of the ADDIE process; this part of the
evaluation focuses on the process.

Thus, formative evaluations are basically done on the fly. They permit
the designers, learners, instructors, and managers to monitor how well
the instructional goals and objectives are being met. Their main purpose is
to catch deficiencies as soon as possible so that the proper learning interventions can
take place, allowing the learners to master the required skills and
knowledge.

Scriven (1967) first suggested a distinction between formative
evaluation and summative evaluation. Formative evaluation was intended to
foster development and improvement within an ongoing activity (or person,
product, program, etc.). Formative evaluations are conducted during
product development or as a product is still being formed, with the goal of
influencing design decisions as they are being made: formative
evaluation helps you form the design, and summative evaluation helps
you sum up the design. Formative evaluations are evaluations FOR learning. They
are often ungraded and informal. Their aim is to provide both the students and the instructor
with a gauge of their level of understanding at the current moment, and to enable the
instructor to adjust accordingly to meet the emerging needs of the class. Some
examples of formative evaluations:
One-Minute Paper
Muddiest Point Paper
Directed Paraphrase
Mid-Semester Evaluation

Formative evaluation is a type of usability evaluation that helps to "form" the design for a product or service.
Formative evaluations involve evaluating a product or service during development, often iteratively, with the goal of
detecting and eliminating usability problems.

Formative assessment explained

Formative assessment is more diagnostic than evaluative. It is used to
monitor pupil learning style and ability, to provide ongoing feedback, and to allow
educators to improve and adjust their teaching methods and students to
improve their learning.

Most formative assessment strategies are quick to use and fit seamlessly into
the instruction process. The information gathered is rarely marked or graded.
Descriptive feedback may accompany formative assessment to let students
know whether they have mastered an outcome or whether they require more
practice.

Formative assessment examples:

• Impromptu quizzes or anonymous voting

• Short comparative assessments to see how pupils are performing against their peers

• One-minute papers on a specific subject matter

• Lesson exit tickets to summarise what pupils have learnt

• Silent classroom polls

• Asking students to create a visualisation or doodle map of what they learnt


SUMMATIVE EVALUATION METHOD

Summative assessment, summative evaluation, or assessment of learning refers to the assessment
of participants where the focus is on the outcome of a program. This contrasts with formative
assessment, which summarizes the participants' development at a particular
time. Methods of summative assessment aim to summarize overall learning at the completion of
the course or unit. Summative evaluations are intended to provide a
package of results used to assess whether a program works or not.
Summative assessment aims to evaluate student learning and academic
achievement at the end of a term, year, or semester by comparing it against a
universal standard or school benchmark. Summative assessments often have
a high point value, take place under controlled conditions, and therefore have
more visibility.

Summative assessment examples:

• End-of-term or midterm exams

• Cumulative work over an extended period, such as a final project or creative portfolio

• End-of-unit or chapter tests

• Standardised tests that demonstrate school accountability and are used for pupil admissions: SATs, GCSEs and A-Levels




Summative evaluations describe how well a design performs. Summative
evaluation looks at the impact of an intervention on the target group. This type of evaluation is
arguably what is most often considered 'evaluation' by project staff and funding bodies: that is,
finding out what the project achieved. The goal of formative assessment, by contrast, is to monitor
student learning to provide ongoing feedback that can be used by instructors to improve
their teaching and by students to improve their learning. More specifically, formative
assessments:
o help students identify their strengths and weaknesses and target areas that need
work
o help faculty recognize where students are struggling and address problems
immediately
Formative assessments are generally low stakes, which means that they have low or no
point value. Examples of formative assessments include asking students to:
o draw a concept map in class to represent their understanding of a topic
o submit one or two sentences identifying the main point of a lecture
o turn in a research proposal for early feedback
