INF4820
2020

Assignment number: 2
Assignment unique code: 797334
Student number: 67685862
Last name & initials: SS GUMEDE

Statement of Originality
This is to certify that, to the best of my knowledge, the content of this assignment is my own work.

I certify that the intellectual content of this assignment is the product of my own work and that all assistance received in preparing this assignment, and all sources used, have been acknowledged.

Question 1
Which of the following examples presents an "unmistakable usability issue"? Answer either Yes, No, or Not necessarily.

a) Yes

b) No

c) No

d) No

e) Yes

Question 2
a) When doing the usability study, it is best to choose one usability specialist for the usability ratings because:

• The rating system needs to be applied consistently, and there are a few different ones. Having multiple experts might delay the study and would not make the usability issues easier to comprehend, because the experts would have to reconcile their differing judgements and reach agreement.

• It would be difficult to get comparable ratings, as each expert would rate the usability issues differently. The ratings would have to be aggregated, discussed or averaged before they could be reported.

b) 20

c) I would use the analyses below:

• Issues by Category: Presents the frequency of usability issues grouped by category and by version/design iteration. It focuses design improvements, from a tactical perspective, on certain areas of the system.

• Issues by Task: Analyses issues at task level and identifies the tasks that should be focused on for the next version/design iteration of the system.

• Frequency of Unique Issues: Analyses the frequency of unique usability issues. This is most useful in an iterative design process, when you want informative data about the issues that occur with each new version/design iteration.

• Frequency of Issues per Participant: Examines the non-unique issues experienced by each participant over the different versions/design iterations. It is best for analysing the average number of issues participants experienced across, for example, two versions of a web-based system.

• Frequency of Participants: Analyses the important usability issues and identifies the consistency between participants and designs. It is a useful metric when you need to focus on whether certain elements of the design have improved.

Question 3

Participant T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 Proportion


1 x x x x x x x 0.7
2 x x x x x 0.5
3 x x x x x x 0.6
4 x x x x 0.7
5 x x x x x x x 0.7
6 x x x x x x x 0.7
7 x x x x 0.4
8 x x x x x x x x x 0.9
9 x 0.1
10 x x x x x x x 0.7
Proportion 0.4 0.4 0.4 0.6 0.8 0.6 0.6 0.7 0.6 0.6 0.585

Sample 3

P = 1 − (1 − 0.585)^3
  = 0.93, so about 93% of the issues would be identified with a sample of 3 users from this study.

Sample 4

P = 1 − (1 − 0.585)^4
  = 0.97, so about 97% of the issues would be identified with a sample of 4 users from this study.

Sample 5

P = 1 − (1 − 0.585)^5
  = 0.99, so about 99% of the issues would be identified with a sample of 5 users from this study.
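As an illustration of the calculation above, a minimal sketch (assuming the overall discovery rate p = 0.585 from the table) that computes the expected proportion of issues found for several sample sizes:

```python
# Expected proportion of usability issues discovered with n participants,
# using the standard discovery formula P(n) = 1 - (1 - p)^n,
# where p is the average probability of a single participant finding an issue.

def proportion_discovered(p: float, n: int) -> float:
    """Return the expected proportion of issues found by n participants."""
    return 1 - (1 - p) ** n

if __name__ == "__main__":
    p = 0.585  # overall proportion from the participant/task table above
    for n in (3, 4, 5):
        print(f"n = {n}: {proportion_discovered(p, n):.2%}")
    # Prints roughly 93%, 97% and 99%, matching the hand calculations.
```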

Question 4
Consider the following conditions and then state whether you would use 5-10 or more than 10 users in a usability test.

a) More than 10 users

b) 5-10 users

c) 5-10 users

Question 5
I would capture the self-reported data at the emotional/task level and use Likert (rating) scales.

The following statement would be put to the user: "You will definitely download the application once it is released." The user answers the statement using the five-point rating scale below:

1 = Strongly disagree

2 = Disagree

3 = Neither agree nor disagree

4 = Agree

5 = Strongly agree


Question 6
Desktop system name: Iress Pro – Market data and Trading System

Responses are on a five-point scale from 1 (Strongly disagree) to 5 (Strongly agree); the score per statement is derived from the response as shown after the table.

 #  Statement                                                                   Response  Score
 1  I think that I would like to use Iress Pro frequently                           5       4
 2  I found Iress Pro unnecessarily complex                                         1       4
 3  I thought Iress Pro was easy to use                                             3       2
 4  I think that I would need the support of a technical person to be able
    to use Iress Pro                                                                3       2
 5  I found the various functions in Iress Pro were well integrated                 5       4
 6  I thought there was too much inconsistency in Iress Pro                         4       1
 7  I would imagine that most people would learn to use Iress Pro quickly           2       1
 8  I found Iress Pro very cumbersome to use                                        1       4
 9  I felt very confident using Iress Pro                                           4       3
10  I needed to learn a lot of things before I could get going with Iress Pro       4       1

The scores were calculated as follows (for odd-numbered statements the score is the response minus 1; for even-numbered statements it is 5 minus the response):

1. 5 - 1 = 4
2. 5 - 1 = 4
3. 3 - 1 = 2
4. 5 - 3 = 2
5. 5 - 1 = 4
6. 5 - 4 = 1
7. 2 - 1 = 1
8. 5 - 1 = 4
9. 4 - 1 = 3
10. 5 - 4 = 1

Total score = 26

SUS score = 26 × 2.5
          = 65

According to Bangor and colleagues, this SUS score falls within the 50-70 range and is regarded as a marginal score.
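For illustration, a minimal sketch of the standard SUS scoring rule described above, applied to the ten Iress Pro responses; the function name and structure are illustrative assumptions, not part of the questionnaire itself:

```python
# Standard SUS scoring: odd-numbered items contribute (response - 1),
# even-numbered items contribute (5 - response); the sum is multiplied by 2.5.

def sus_score(responses: list[int]) -> float:
    """Compute the SUS score (0-100) from ten 1-5 responses."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

if __name__ == "__main__":
    # Responses for Iress Pro, taken from the score calculations above.
    iress_pro = [5, 1, 3, 3, 5, 4, 2, 1, 4, 4]
    print(sus_score(iress_pro))  # 65.0
```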

Question 7
a) The following are TWO suggested guidelines for preventing bias in self-reported usability testing data collection:

• The UX expert should collect the post-test data in such a way that the moderator/facilitator does not see the participant's scores until the participant has left. One way to do this is to leave the room while the participant completes the survey.

• Ask the participants to complete the post-test survey once they are back home or at the office. One way to do this is to send an email with a URL that redirects them to an online survey.

b) Starting the numbering of a rating scale at 0 (zero) gives UX experts a stable anchor: the lowest point of the scale corresponds to 0% when responses are converted to percentages. If the numbering starts at 1 (for example a 1-5 scale), the lowest point corresponds to 20% rather than 0%.
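A small numerical illustration of this point; the two scale ranges are examples, not data from the study:

```python
# Dividing a response by the scale maximum gives 0% for the lowest point only
# when the scale starts at 0. The two scale ranges below are illustrative.
for scale_min, scale_max in [(0, 4), (1, 5)]:
    lowest_pct = scale_min / scale_max * 100
    print(f"Scale {scale_min}-{scale_max}: lowest possible point = {lowest_pct:.0f}%")
# Scale 0-4: lowest possible point = 0%
# Scale 1-5: lowest possible point = 20%
```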

Question 8

a)

b)
• DWELL TIME: Looks at the total amount of time participants spent looking at a certain AOI.

A longer dwell time on a certain AOI can indicate a high level of interest and that the information is well received, while a shorter duration can indicate the opposite; it conveys the level of interest in one AOI of the homepage relative to the others.

The AOI below will provide useful data as it is the centre of the homepage. The following data can be presented: entry time, dwell time, first fixation and fixation count.

• NUMBER OF FIXATIONS: The total count of fixations within a certain AOI.

Like dwell time, the number of fixations on a certain AOI is recorded. The more an AOI appeals to participants, the higher the number of fixations is expected to be, compared with an area of less interest.

All AOIs can provide useful data for this metric, but the AOI below is the team logo on the homepage, where a higher number of fixations is expected. The following data can be presented: dwell time and number of fixations.

• FIXATION DURATION: Presents the average time that fixations lasted for each participant.

Comparing AOIs allows UX experts to confirm which AOI received more focus than the others and to consider possible reasons. The presented data can be the average fixation duration combined with dwell time and number of fixations. This metric can provide useful data on the AOI below.

• SEQUENCE: Presents the sequence in which each AOI is first fixated.

The order of attention in eye tracking reflects the participants' interest and how each element of the website homepage stands out, for example in terms of colour. Using the fixation sequence on the homepage screen will provide useful data about which AOI attracts interest first and which one attracts it last, as in the example below:

• TIME TO FIRST FIXATION: Presents the amount of time it takes users to first notice a certain element.

This metric can be used on the AOI below to indicate the amount of time it takes a participant (or all participants on average) to find an element; in this case, on the homepage, we can find out how long it takes a participant to find the football club logo and the search button.

• REVISITS: Presents the number of times that the eye fixates within an AOI after having left it.

This metric can be used on the AOI below to find the number of revisits. It provides useful data about how many times a participant leaves the AOI and comes back to look again, or whether the participant fixates once and never returns. The following can be presented: ratio, re-visitors, revisits and fixations.


• HIT RATIO: Presents the percentage of participants who had at least one fixation within the AOI.

The hit ratio gives the percentage of participants who looked at a specific AOI (such as the football club logo on the homepage, the adverts or an interesting story). It provides the actual percentage of people who looked at that specific AOI. The following data can be presented: time spent, ratio and fixations.

This metric can be used on the AOI below. (A short computational sketch of these AOI metrics follows after this list.)

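As a supplement to the metric definitions above, a minimal sketch showing how several of these AOI metrics (dwell time, number of fixations, time to first fixation and revisits) could be computed from a fixation log; the log format and values are hypothetical, not data from an actual eye-tracking session:

```python
# Hypothetical fixation log for one participant: (AOI name, start time in
# seconds, fixation duration in seconds). Illustrative values only.
fixations = [
    ("logo", 0.2, 0.25), ("menu", 0.6, 0.25), ("logo", 1.0, 0.50),
    ("story", 1.5, 0.55), ("logo", 2.2, 0.25),
]

def aoi_metrics(log, aoi):
    """Compute dwell time, fixation count, time to first fixation and revisits for one AOI."""
    dwell_time = sum(dur for name, _, dur in log if name == aoi)   # total time in the AOI
    fixation_count = sum(1 for name, _, _ in log if name == aoi)   # number of fixations
    first_fixation = next((start for name, start, _ in log if name == aoi), None)
    # Count entries into the AOI; a revisit is any entry after the first one.
    entries, previous = 0, None
    for name, _, _ in log:
        if name == aoi and previous != aoi:
            entries += 1
        previous = name
    return {"dwell_time": dwell_time, "fixations": fixation_count,
            "time_to_first_fixation": first_fixation, "revisits": max(entries - 1, 0)}

print(aoi_metrics(fixations, "logo"))
# {'dwell_time': 1.0, 'fixations': 3, 'time_to_first_fixation': 0.2, 'revisits': 2}
```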
Question 9
Three ways are presented below for deriving new usability metrics from the product I have chosen and its data. I will derive new usability metrics from my data and thereafter combine metrics to measure usability.

1. Combining Metrics Based on Target Goals

I will combine data from different metrics and check whether, when combined, they meet a target goal. In this case the metrics that will be measured are task completion and task time.

The target goal considers the data points of the two metrics: the goal is to measure whether each of the eight participants managed to complete at least 85% of their tasks without issues, with an average task time of at most 70 seconds. A participant gets a 1 if the criteria are met and a 0 if not.

Participant   Task completion   Task time (seconds)   Target met?
1             81%               50                    0
2             97%               70                    1
3             68%               80                    0
4             75%               90                    0
5             86%               69                    1
6             98%               50                    1
7             80%               78                    1
8             95%               57                    1
Average       85%               68                    63%

The results show that 5 out of 8 participants met the criteria of at least 85% task completion and an average task time of at most 70 seconds. The average task completion is 85% and the average task time is 68 s, with 62.5% (about 63%) of participants meeting the target. All participants took part in the test, which makes the results reasonable.
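A minimal, generic sketch of how a binary "target met" metric can be derived from the two measures; the thresholds follow the target goal described above, but the sample numbers in the example calls are hypothetical, not the study data:

```python
# Derive a binary "target met" metric from task completion and task time:
# a participant meets the target when completion is at least 85% and the
# average task time is at most 70 seconds.

def target_met(completion: float, time_s: float,
               min_completion: float = 0.85, max_time_s: float = 70.0) -> int:
    """Return 1 if both target criteria are met, otherwise 0."""
    return int(completion >= min_completion and time_s <= max_time_s)

# Hypothetical examples: one participant who meets the target, one who does not.
print(target_met(0.90, 65))  # 1
print(target_met(0.70, 80))  # 0
```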

2. Combining Metrics Based on Percentages

This technique combines scores on different scales by converting each score to a percentage and then averaging the percentages. It allows different metrics to be combined when you do not have any target goals.

The columns in the table are: time per task, which is the average time to complete a task, presented in seconds; tasks completed, which is the number of tasks completed out of a total of 10; and the rating column, in which I rate each task from zero to four, where four is the best.

Participant #   Time per task (seconds)   Tasks completed (out of 10)   Rating (0-4)
1               55                        9                             3.2
2               60                        10                            3.8
3               49                        7                             2.2
4               78                        5                             1.3
5               59                        9                             2.6
6               80                        6                             3.3
7               75                        8                             3.1
8               89                        7                             2.7
9               73                        7                             1.2
10              28                        4                             1.1

The second table below is calculated from the first table as follows: for task completion, each score was divided by 10 to obtain a percentage; for the rating, each score was divided by 4; and for the time column, the difference between the longest time (89 seconds) and the participant's time was divided by the difference between the longest (89 seconds) and shortest (28 seconds) times.

Participant #   Time per task   Tasks completed   Rating   Average
1               56%             90%               80%      75%
2               48%             100%              95%      81%
3               65%             70%               55%      63%
4               18%             50%               33%      33%
5               49%             90%               65%      68%
6               80%             60%               83%      74%
7               15%             80%               78%      58%
8               0%              70%               68%      46%
9               26%             70%               30%      42%
10              100%            40%               28%      56%

To conclude the combination of metrics based on percentages, the third table gives weights to the data: it combines the two performance metrics (task time and task completion) with the rating, which is self-reported data.

The table below shows how the weighted averages are calculated: each percentage score is multiplied by its associated weight (time is multiplied by 1, task completion by 1 and rating by 2), the products are summed, and the sum is divided by the total of the weights (4).

Time and task completion are given equal weight, while the rating is given double weight so that the self-reported rating contributes more to the combined score. (A short computational sketch of this calculation is given after the table.)

Participant #   Time   Weight   Task   Weight   Rating   Weight   Weighted average
1               56%    1        90%    1        80%      2        77%
2               48%    1        100%   1        95%      2        85%
3               65%    1        70%    1        55%      2        61%
4               18%    1        50%    1        33%      2        34%
5               49%    1        90%    1        65%      2        67%
6               80%    1        60%    1        83%      2        77%
7               15%    1        80%    1        78%      2        63%
8               0%     1        70%    1        68%      2        52%
9               26%    1        70%    1        30%      2        39%
10              100%   1        40%    1        28%      2        49%
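The sketch below illustrates the percentage-based combination described above: the task count is divided by the number of tasks, the rating by its maximum, the time is rescaled between the slowest and fastest observed times, and the three percentages are combined with weights 1, 1 and 2. The function names and the numbers in the example call are illustrative assumptions, not the study data:

```python
# Combine three metrics on different scales by converting each to a 0-1
# proportion and taking a weighted average (rating weighted twice).

def to_percentages(time_s: float, tasks: int, rating: float,
                   slowest: float, fastest: float,
                   n_tasks: int = 10, max_rating: float = 4.0):
    """Convert the three raw scores to 0-1 proportions."""
    time_pct = (slowest - time_s) / (slowest - fastest)  # slowest -> 0, fastest -> 1
    return time_pct, tasks / n_tasks, rating / max_rating

def weighted_average(time_pct: float, task_pct: float, rating_pct: float) -> float:
    """Combine the proportions with weights 1, 1 and 2 (self-reported rating counts double)."""
    return (1 * time_pct + 1 * task_pct + 2 * rating_pct) / (1 + 1 + 2)

# Hypothetical participant: 60 s per task, 8 of 10 tasks completed, rating 3.0,
# with observed times ranging from 30 s (fastest) to 90 s (slowest).
pcts = to_percentages(60, 8, 3.0, slowest=90, fastest=30)
print(f"{weighted_average(*pcts):.0%}")  # 70%
```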

3. Combining Metrics Based on Z-Scores

This technique is based on the normal distribution: a z-score indicates how many standard deviations a value lies above or below the mean of its distribution. Each raw score is therefore translated into its corresponding z-score.

After the conversion, the distribution of z-scores has a mean of 0 and a standard deviation of 1. I will use the formula below to convert any score to its associated z-score:

z = (x − µ) / σ

where x is the score that must be transformed, µ is the mean and σ is the standard deviation of the score distribution.

Participant #   Time per task (s)   Tasks completed (out of 10)   Rating (0-4)   z Time   z Time (×-1)   z Tasks   z Rating
1               55                  9                             3.2            -0.53    0.53           0.96      0.32
2               60                  10                            3.8            -0.26    0.26           1.49      0.50
3               49                  7                             2.2            -0.87    0.87           -0.11     -0.03
4               78                  5                             1.3            0.74     -0.74          -1.17     -0.39
5               59                  9                             2.6            -0.31    0.31           0.96      0.32
6               80                  6                             3.3            0.86     -0.86          -0.64     -0.22
7               75                  8                             3.1            0.58     -0.58          0.43      0.14
8               89                  7                             2.7            1.35     -1.35          -0.11     -0.03
9               73                  7                             1.2            0.47     -0.47          -0.11     -0.04
10              28                  4                             1.1            -2.03    2.03           -1.71     -0.57
Mean            64.6                7.2                           2.45           0.0      0.0            0.0       0.0
Std deviation   18.00740588         1.87379591                    0.96522882     1.0      1.0            1.0       1.0

The results are shown in the table above, and each z-score was calculated as follows:
• For each initial score, the z-score was calculated by subtracting the mean of the score distribution and then dividing by the standard deviation of that distribution.

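A minimal sketch of the z-score conversion described above, using the data from the first table; statistics.stdev computes the sample standard deviation, which matches the values reported in the table (18.007, 1.874, 0.965). Combining the z-scores by averaging them (with the time z-score reversed) is one possible way to form a single score, and is an illustrative assumption rather than necessarily the combination used in the table above:

```python
# Convert each metric to z-scores and combine them into one score per participant.
from statistics import mean, stdev

times   = [55, 60, 49, 78, 59, 80, 75, 89, 73, 28]   # seconds per task
tasks   = [9, 10, 7, 5, 9, 6, 8, 7, 7, 4]            # completed out of 10
ratings = [3.2, 3.8, 2.2, 1.3, 2.6, 3.3, 3.1, 2.7, 1.2, 1.1]

def z_scores(values):
    """Convert raw scores to z-scores: z = (x - mean) / standard deviation."""
    mu, sigma = mean(values), stdev(values)
    return [(x - mu) / sigma for x in values]

z_time, z_tasks, z_rating = z_scores(times), z_scores(tasks), z_scores(ratings)

# Time z-scores are multiplied by -1 so that a faster (lower) time becomes a
# higher score; the three z-scores are then averaged into a single score.
combined = [(-t + k + r) / 3 for t, k, r in zip(z_time, z_tasks, z_rating)]
for i, c in enumerate(combined, start=1):
    print(f"Participant {i}: {c:+.2f}")
```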

RESULTS
30 = 15

No Elements = 50

------------------------------

Total = 65 / 100 (65%)


COMMENTS
1 What useful data will it offer?
2 You don't know - you are testing to see if it is so
3 Why?
4 Why?
5 Why?
6 Why do we want to know that?
7 You are not explaining why?
8 And why would it be useful?
9 63% is good?
10 You rate?
11 What does all of these mean? Why is it useful?
