HCI Assignment Group 2

DEBRE TABOR UNIVERSITY

GAFAT INSTITUTE OF TECHNOLOGY


DEPARTMENT OF COMPUTER SCIENCE
Human-Computer Interaction Assignment
Group 2
GROUP MEMBERS
NAME ID NO
1. TEWODROS ABERE------------------------------ 1637
2. BALEWGIZE ALMAW ---------------------------0253
3. SAMUEL KASSAHUN ----------------------------1450
4. ABEL MEKONNEN--------------------------------3004

Submitted to: Mr. Wondifraw M.

Submission date: 03/10/2016 E.C


Table of Contents
A. What problems do users with cognitive impairments and learning difficulties face, and what could be taken into consideration to ensure that your design supports users with those impairments?
   Problems Faced by Users with Cognitive Impairments and Learning Difficulties
   Design Considerations to Support These Users
B. Questionnaires play an important role in the evaluation process. Describe in detail the possible structure for questionnaires along with the pros and cons of this type of evaluation method.
   Structure for Questionnaires
   Pros and Cons of Questionnaires
C. Define formative and summative evaluation. When would each be used in the design process?
   Formative Evaluation
   Summative Evaluation
   Comparison
A. What problems do users with cognitive impairments and learning
difficulties face, and what could be taken into consideration to
ensure that your design supports users with those impairments?
Problems Faced by Users with Cognitive Impairments and Learning Difficulties

1. Memory Limitations:
o Issue: Users with cognitive impairments often have short-term memory
challenges, making it difficult to remember instructions, sequences, or steps
required to complete tasks.
o Impact: They may forget what they are doing, leading to confusion and repeated
mistakes, especially in interfaces requiring multiple steps or navigation between
different parts of the system.
2. Attention Deficits:
o Issue: Users with attention deficits can find it hard to maintain focus on a task for
long periods or may be easily distracted by irrelevant information.
o Impact: Interfaces that are cluttered with too much information or lack a clear
focal point can be overwhelming, making it difficult for users to complete tasks
efficiently.
3. Problem-Solving and Processing Speed:
o Issue: Cognitive impairments can slow down the speed at which users process
information and solve problems, leading to frustration and errors.
o Impact: Users may struggle with interfaces that require quick decisions, complex
problem-solving, or understanding sophisticated interactions.
4. Reading and Comprehension Difficulties:
o Issue: Some users may have dyslexia or other reading impairments, making it
challenging to read and comprehend text, particularly if it is complex or lengthy.
o Impact: Long, jargon-filled instructions and dense text can be difficult to
understand, hindering the ability to use the interface effectively.
5. Difficulty with Abstract Concepts:
o Issue: Users might struggle with abstract thinking, which affects their ability to
understand icons, metaphors, and symbolic representations.
o Impact: Abstract or symbolic navigation structures and icons that are not
immediately clear can cause confusion and hinder navigation.

Design Considerations to Support These Users

1. Simplified Navigation:
o Description: Create clear, consistent, and straightforward paths for users to
follow.
o Implementation: Minimize the number of steps required to complete a task, use
familiar layout patterns, and provide clear navigation cues like breadcrumbs or
step-by-step progress indicators.
2. Clear and Concise Instructions:

o Description: Provide easy-to-understand instructions and feedback.
o Implementation: Use plain language, avoid technical jargon, and break down
instructions into small, manageable steps. Utilize visual aids like icons and
diagrams to support text instructions.
3. Consistent Layouts:
o Description: Maintain a uniform layout across all parts of the application.
o Implementation: Use templates and design patterns that are consistent
throughout the interface. Keep navigation elements, buttons, and other interactive
elements in the same place on each screen.
4. Accessible Content:
o Description: Ensure that all content is easily readable and understandable.
o Implementation: Use large fonts, high-contrast text, and simple language.
Provide alternative text descriptions for images and multimedia content, and use
bullet points to make information easier to scan.
5. Error Prevention and Recovery:
o Description: Design the system to help users avoid errors and recover from them
easily.
o Implementation: Include confirmation dialogs for critical actions, provide undo
and redo options, and give clear, actionable feedback on errors with suggestions
for correction.
6. Multi-Sensory Feedback:
o Description: Use various sensory modalities to convey information.
o Implementation: Combine visual feedback (like color changes or animations)
with auditory signals (such as beeps or spoken messages) and tactile feedback
(like vibrations) to reinforce actions and information.
7. Progress Indicators:
o Description: Show users their progress through tasks.
o Implementation: Use progress bars, step indicators, or other visual cues to
indicate where users are in a process and how much is left to complete. This helps
users understand their progress and reduces anxiety.
8. Customization and Flexibility:
o Description: Allow users to tailor the interface to their needs.
o Implementation: Provide options to adjust text size, color schemes, and input
methods (e.g., keyboard vs. mouse vs. touch). Offer simplified modes or
advanced settings depending on user preference and capability.
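The "high-contrast text" recommendation in point 4 can be checked programmatically. The sketch below implements the WCAG 2.x contrast-ratio formula; the function names are our own, but the constants come from the standard:

```python
def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel to linear light (WCAG 2.x formula)."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance of an sRGB color per WCAG 2.x."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """Contrast ratio between two colors; ranges from 1:1 to 21:1."""
    l1, l2 = relative_luminance(fg), relative_luminance(bg)
    return (max(l1, l2) + 0.05) / (min(l1, l2) + 0.05)

# WCAG level AA requires at least 4.5:1 for normal body text.
print(round(contrast_ratio((255, 255, 255), (0, 0, 0)), 1))  # 21.0 (black on white)
```

A designer could run such a check over a color palette at design time, catching low-contrast combinations before user testing.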

B. Questionnaires play an important role in the evaluation process.
Describe in detail the possible structure for questionnaires along
with the pros and cons of this type of evaluation method.
Structure for Questionnaires

1. Introduction:
o Purpose: Explain the purpose of the questionnaire and how the responses will be
used.
o Content: Include a brief introduction, confidentiality assurance, and clear
instructions on how to complete the questionnaire.
o Details: Specify the estimated time to complete the questionnaire and contact
information for any questions or concerns.
2. Demographic Questions:
o Purpose: Gather background information about the participants to understand the
context of their responses.
o Content: Questions about age, gender, occupation, level of education, and
experience with similar technology.
o Examples: "What is your age?", "What is your highest level of education?",
"How frequently do you use technology similar to this?"
3. Closed-Ended Questions:
o Purpose: Obtain specific, quantifiable information from participants.
o Content: Multiple-choice questions, Likert scale questions (e.g., rating scales
from 1 to 5), and yes/no questions.
o Examples: "How often do you use this application?" (Daily, Weekly, Monthly,
Rarely), "Rate your overall satisfaction with the user interface on a scale from 1 to
5."
4. Open-Ended Questions:
o Purpose: Gather detailed, qualitative insights into participants' thoughts and
experiences.
o Content: Questions that allow participants to provide free-form answers.
o Examples: "What features do you find most useful and why?", "Describe any
difficulties you encountered while using the interface."
5. Rating Scales:
o Purpose: Measure attitudes, perceptions, and satisfaction levels in a structured
manner.
o Content: Scales (e.g., 1-5, 1-7) for rating various aspects of the interface, such as
usability, aesthetic appeal, and functionality.
o Examples: "Rate the ease of navigation on a scale from 1 to 5, where 1 is very
difficult and 5 is very easy."
6. Behavioral Questions:
o Purpose: Understand how users interact with the system and their usage patterns.
o Content: Questions about specific actions or behaviors related to the system.
o Examples: "How frequently do you use the search feature?", "Which sections of
the application do you visit most often?"
7. Final Remarks:
o Purpose: Provide space for any additional comments or suggestions from
participants.
o Content: An open text box for any other feedback.
o Examples: "Is there anything else you would like to add?", "Do you have any
suggestions for improving the interface?"
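One way to model the section types above in code is a small record per item. This is only an illustrative sketch; the `Question` class and its field names are our own invention, not a standard library:

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    """One questionnaire item; `kind` distinguishes the section types above."""
    text: str
    kind: str                                         # "demographic", "closed", "likert", "open"
    options: list[str] = field(default_factory=list)  # empty for open-ended items
    scale_max: int = 0                                # e.g. 5 for a 1-5 rating scale

questionnaire = [
    Question("What is your age?", "demographic"),
    Question("How often do you use this application?", "closed",
             options=["Daily", "Weekly", "Monthly", "Rarely"]),
    Question("Rate the ease of navigation.", "likert", scale_max=5),
    Question("Describe any difficulties you encountered.", "open"),
]

# Only closed-ended and rating-scale items yield directly quantifiable data.
closed_items = [q for q in questionnaire if q.kind in ("closed", "likert")]
print(len(closed_items))  # 2
```

Separating the item types this way makes the later analysis step explicit: closed items feed statistics, while open items require qualitative coding.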

Pros and Cons of Questionnaires

Pros:

 Scalability: Questionnaires can be distributed to a large number of participants easily and
quickly, making them ideal for gathering data from diverse user groups.
 Quantifiable Data: Closed-ended questions and rating scales provide measurable data
that can be analyzed statistically to identify trends and patterns.
 Anonymity: Participants may be more honest and open in their responses when they are
anonymous, leading to more accurate data.
 Cost-Effective: Online questionnaires are generally inexpensive to create and distribute,
and can reach a wide audience with minimal effort.

Cons:

 Limited Depth: Questionnaires may not provide deep insights into user behaviors,
motivations, and experiences, particularly if they rely heavily on closed-ended questions.
 Response Bias: Participants may provide socially desirable answers rather than their true
opinions, or they may rush through the questionnaire without careful consideration.
 Interpretation Issues: Questions may be interpreted differently by different participants,
leading to inconsistencies and potential misunderstandings.
 Low Response Rates: Questionnaires often suffer from low response rates, which can
impact the validity and representativeness of the results.
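The "quantifiable data" and "low response rates" points above reduce to simple statistics. A minimal sketch, assuming hypothetical 1-5 Likert scores from the questionnaires that were returned:

```python
from statistics import mean, stdev

sent = 10              # questionnaires distributed (hypothetical figure)
scores = [5, 4, 3, 4]  # 1-5 satisfaction ratings from those returned

# A low response rate like this one is exactly the validity risk noted above.
response_rate = len(scores) / sent
print(f"{response_rate:.0%} returned, mean {mean(scores):.2f}, sd {stdev(scores):.2f}")
```

Reporting the response rate alongside the mean makes it visible how representative the sample actually is.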

C. Define formative and summative evaluation. When would each be
used in the design process?
Formative Evaluation

Definition: Formative evaluation is conducted during the development process with the goal of
improving the design. It involves iterative testing and feedback to refine and enhance the system
before it is finalized.

Purpose:

 Identify Usability Issues: Detect and address usability problems early in the design
process to prevent costly fixes later.
 Gather Feedback for Improvement: Collect user feedback to make incremental
improvements and ensure the design meets user needs and expectations.
 Support Iterative Design: Use insights from formative evaluations to inform and guide
the iterative design process, refining prototypes and concepts.

Characteristics:

 Iterative: Conducted multiple times throughout the development process.
 Diagnostic: Focuses on identifying problems and areas for improvement.
 Qualitative and Quantitative: Can involve both types of data (e.g., usability testing, user
interviews, surveys).
 User-Centered: Often involves direct interaction with users to gather their insights and
feedback.

When to Use:

 Early and Middle Stages of Design: During initial concept development, wireframing,
and prototyping.
 Development of Prototypes: When creating low-fidelity or high-fidelity prototypes to
test and validate design ideas.
 Continuously: As new features or changes are introduced, formative evaluation should
be ongoing to ensure the design remains user-centered.

Methods:

 Usability Testing with Prototypes: Conduct tests with low-fidelity (paper prototypes) or
high-fidelity prototypes to observe how users interact with the design.
 Heuristic Evaluation: Engage usability experts to evaluate the design against established
usability principles and identify potential issues.
 Cognitive Walkthroughs: Have designers and usability experts walk through tasks step-
by-step to identify potential usability problems.
 User Interviews and Focus Groups: Gather qualitative feedback from users through
interviews and group discussions to understand their experiences and needs.
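Findings from a heuristic evaluation are often aggregated by heuristic and severity to prioritize the next design iteration. A sketch, using hypothetical findings and the 0-4 severity scale commonly attributed to Nielsen:

```python
from collections import Counter

# Hypothetical findings: (violated heuristic, severity 0-4).
findings = [
    ("visibility of system status", 3),
    ("error prevention", 4),
    ("consistency and standards", 2),
    ("error prevention", 3),
]

# Which heuristics are violated most often, and which issues block release?
by_heuristic = Counter(h for h, _ in findings)
critical = [h for h, sev in findings if sev >= 3]  # fix before the next iteration

print(by_heuristic.most_common(1))  # [('error prevention', 2)]
```

Because formative evaluation is iterative, this tally would be recomputed after each round to confirm that the high-severity issues are shrinking.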

Summative Evaluation

Definition: Summative evaluation is conducted after the system has been developed and is ready
for deployment. It aims to assess the effectiveness, efficiency, and satisfaction of the final
product.

Purpose:

 Evaluate Overall Success: Measure the success of the final product against its intended
goals and user requirements.
 Validate Design Decisions: Confirm that the design meets the needs and expectations of
users and stakeholders.
 Provide Evidence of Performance: Gather data to demonstrate the system’s
performance, usability, and effectiveness.

Characteristics:

 Finality: Conducted at the end of a development cycle or after implementation.
 Assessment: Focuses on the outcomes and overall performance.
 Quantitative: Often relies on metrics and statistical analysis (e.g., user satisfaction surveys,
performance metrics).

 Objective-Oriented: Measures success against the original goals and objectives.

When to Use:

 End of Development Process: After the system has been fully developed and is ready
for final testing or release.
 Before or After Release: Conduct summative evaluations to validate the system before
launch or to assess its performance after it has been deployed to users.
 Comparison Against Benchmarks: Compare the final product against usability
benchmarks, industry standards, or competing products to evaluate its relative
performance.

Methods:

 Formal Usability Testing: Conduct structured usability tests with the final product to
measure task completion rates, error rates, and user satisfaction.
 Surveys and Questionnaires: Use post-implementation surveys to gather user feedback
on their experience with the system.
 Field Studies: Observe users interacting with the system in their natural environment to
gather insights into real-world usage and identify any remaining issues.
 Performance Metrics Analysis: Analyze quantitative data such as task completion time,
error rates, and user satisfaction scores to assess the system’s effectiveness and
efficiency.
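The performance metrics named above (task completion rate, time on task, error rate) can be computed directly from test logs. A minimal sketch over hypothetical session records:

```python
from statistics import mean

# Hypothetical summative test log: (task_completed, seconds, error_count).
sessions = [(True, 42.0, 0), (True, 55.5, 1), (False, 90.0, 3), (True, 38.2, 0)]

completion_rate = sum(done for done, _, _ in sessions) / len(sessions)
mean_time = mean(t for done, t, _ in sessions if done)  # completed tasks only
error_rate = sum(e for _, _, e in sessions) / len(sessions)

print(f"completion {completion_rate:.0%}, mean time {mean_time:.1f}s, "
      f"errors/session {error_rate:.2f}")
```

Restricting mean time to completed tasks is a deliberate choice here; abandoned attempts would otherwise inflate the average, and either convention should be stated when reporting results against benchmarks.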

Comparison

Aspect   | Formative Evaluation                                            | Summative Evaluation
Timing   | During development                                              | After development
Purpose  | Improve design                                                  | Assess final product
Focus    | Identifying issues, refining design                             | Measuring success, validation
Methods  | Usability testing, heuristic evaluation, cognitive walkthroughs | Formal usability testing, surveys, field studies
Outcome  | Recommendations for improvement                                 | Overall performance assessment

By using both formative and summative evaluations, designers can ensure a comprehensive
assessment of the system throughout its lifecycle, leading to a more user-friendly and effective
final product.

