Tobi Adelaja, Refining A Flight Booking Portal For Accessibility and Customer Retention (UX Research)


ONE-PAGE SUMMARY FOR NON-UX RESEARCHERS
Wakanow.com is an online travel concierge that provides international travelers with
a one-stop portal for flight booking, hotel reservations, vacation packages, travel
cards, and advisory services. It was founded in 2008 by Obinna Ekezie and has its head
office at Plot 8, Elegushi Beach Road, Ikate Roundabout, Lekki, Lagos, Nigeria.

PROBLEM: A very useful portal that provided exceptional service, but whose users had
problems navigating key aspects of the site.

OBJECTIVE: To comb the system for flaws, errors and problems and see whether travelers
can use the portal to plan their trips; to find ways to enhance the user experience and
make it addictive; and to recommend changes to increase the ROI.

PROCEDURE: To start with, I sought to assess the ease of use and efficiency of the
portal in booking flights, according to the tasks specified in the task description
(more on this in the report).

PROJECT TIMELINE: 30 Days. (July 2022)

METHOD USED: Usability testing framework along with task completion rate, SUS
scores, heuristic evaluation and others.

TEAM: User Experience Researcher (Tohbie Adelaja)

CONTEXT: This exercise grew out of my quest to study design systems and see how they
can be improved.

SUMMARY OF FINDINGS: (Visit here)

Up next: Full report

1.1 LIST OF TABLES
Table 1: Demographics of test participants
Table 2: Errors, mean time on task and task completion rate per task
Table 3: Task completion rate
Table 4: Time taken for users to complete each task
Table 5: Number of non-critical errors
Table 6: Number of critical errors per task
Table 7: Aggregate errors (critical and non-critical) per task
Table 8: SUS grading system
Table 9: SUS score calculation for participant 1
Table 10: SUS score calculation for participant 2
Table 11: SUS score calculation for participant 3

1.2 LIST OF FIGURES

Figure 1: Task options on the home page
Figure 2: Search results after task query
Figure 3: Hotel listings on the Find Hotel page
Figure 4: Menu dummy button on the home page
Figure 5: Wakanow menu button on the home page
Figure 6: System busy display after task input
Figure 7: The pay-in-part feature

1.3 EXECUTIVE SUMMARY
A usability test was conducted on wakanow.com, Nigeria's leading flight booking portal,
to examine its interface and evaluate its performance against specified booking
scenarios. Three participants with prior experience of similar portals were recruited
for the study.

The test was based on six task scenarios, ranked from easy to difficult according to
flight booking goals. Volunteers were monitored while they worked through the test.
Afterwards, the time on task, task completion rate, error-free rate and SUS scores
were computed. The findings reveal a low SUS score of 67.5; a mean time on task of
388.22 seconds, suggesting problems with task completion; a mean task completion rate
of 44.44%, low against a benchmark of 80%; and an error-free rate of 16.67%.

The portal does a great job and was designed quite well; however, to improve the user
experience, the following problems discovered during the usability test have to be
addressed:

- First, the default settings on the site were not applicable to some users.
- Second, the total number of search results was not aggregated and summarized for
  the user to see.
- Third, there was no rating on most of the hotels advertised for booking.
- Fourth, the menu button was not functional.
- Fifth, there was no clear indication of how long the user had to wait to obtain
  search results.
- Sixth, there were no matching offers for an innovative feature on the site, the
  "pay in part" initiative.
- Seventh, distracting adverts appeared on the wrong parts of the website, which
  hurt the user experience.
- Eighth, the link to the terms of service page was not functioning properly.
- Lastly, the contrast, fonts and colours of the site were quite basic compared
  with competitor sites.

Pages 5 to 9 discuss the methods and procedures, pages 10 to 34 discuss the metrics
and findings, and pages 40 to 53 contain copies of the test scripts, questionnaires,
task scenarios and other materials used to conduct the test. The last three pages of
this report show the prioritized list.

2.0 INTRODUCTION
Wakanow.com, as noted earlier, is an online travel concierge that provides
international travelers with a one-stop portal for flight booking, hotel reservations,
vacation packages, travel cards, and advisory services. It was founded in 2008 by
Obinna Ekezie and has its head office at Plot 8, Elegushi Beach Road, Ikate
Roundabout, Lekki, Lagos, Nigeria.

The goal of this usability test was to see whether frequent travelers can use the
portal to plan their trips, to find possible problems in the interface, and to
recommend useful ways to enhance the user experience and increase customer retention.
To answer this question, I sought to assess the ease of use and efficiency of the
portal in booking flights, according to the tasks specified shortly.

2.1 METHODOLOGY
Three users were assigned six moderated tasks over the remote screen-sharing app
TeamViewer, and were observed and monitored following best practices. A screen
recorder captured their activity as they performed the test, so that each
participant's interaction with the website could be analyzed afterwards.

The participants were seated at their own workstations, and communication was
supported via phone call.

All three participants were in their thirties: one a master's student, the second a
telecom engineer and the third an IT consultant.

Table 1: Demographics of test participants

S/N  Participant code  Age  Gender  Occupation        Travel frequency (last year)
1    UX 001            31   Female  Master's student  3
2    UX 002            35   Female  Telecom engineer  2
3    UX 003            39   Male    IT consultant     4

The test sessions occurred in the late evening, on three separate days, between 4 pm
and 7 pm.

The criteria for selection were that:

i. Participants must have bought a plane ticket online in the past year.
ii. Participants must not have used the site before.

It was explained to each participant that the goal was to test the website, with the
overall aim of improving its functionality, and that they were not in any way being
tested themselves. The user-test logging sheet (refer to Appendix VIII) was used to
note when each participant began a task and when it ended.

Unusual interactions and behaviours on the website were also noted, and an attempt
was made to interpret these patterns.

All participants were urged to attempt every task. All three read and signed the
consent form (see Appendix VI), which acknowledged the voluntary nature of their
participation and that they could stop the session at any time, without consequence.

They were also urged to give a running commentary on what they were doing as they
worked through the tasks. I explained that I wouldn't interfere when they got stuck,
as I wanted to see what they would normally do if they were using the website on
their own.

When the tasks were done, I debriefed the participants, asked for their honest
opinions on the usability of the website, and then moved on to the analysis.

The analysis assessed how easy the site was to use, through the following measures:

I. How fast participants could find the information they sought.
II. Whether or not they could identify which page contained the information they
needed.
III. Their relative ease of navigating the site to accomplish their goals.
IV. Other metrics in the debriefing guidelines.

For the analysis, the critical error rates were computed, the task completion rate
and the time taken to complete each task were measured, and finally a subjective
evaluation based on the SUS calculation was performed. The next section provides the
results.

2.2 SUMMARY OF RESULTS
The table below shows the aggregate errors, mean time on task, task completion rate
and error-free rate obtained in the study.

Task 1 had the highest level of completion. Tasks 2 and 3 had the same completion
rate. Task 4 ranked poorly, as only one test participant completed it. For tasks 5
and 6, every attempt recorded a critical error and participants could not reach
closure.

Table 2: Errors, mean time on task, task completion rate and error-free rate per task

Task   Aggregate errors  Mean time on task (s)  Task completion rate  Error-free rate
1      1                 324.67                 100%                  66.67%
2      3                 269.67                 66.67%                33.33%
3      3                 371.67                 66.67%                0%
4      4                 375.33                 33.33%                0%
5      3                 466.67                 0%                    0%
6      3                 521.33                 0%                    0%
TOTAL  17                2329.34                -                     -
MEAN   2.83              388.22                 44.44%                16.67%

Page | 8
The 33% completion rate for task 4 and the zero completion rates for tasks 5 and 6
are pointers to problems on the website. The relatively higher error counts in Table
2 show points of struggle as participants carried out the assigned tasks.

The higher mean completion times for tasks 5 and 6, combined with their zero
completion rates, show how users probed the tools and tried various ways to
accomplish the tasks but could not.

Participants found wakanow.com a very useful portal but struggled with the interface
while carrying out the moderated tasks. The tasks were set to test the interface, and
the results indicate that the motive behind the service is right, but there are gaps
for the design team to fill.
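The summary figures above can be reproduced directly from the per-task data in this
report. Below is a minimal sketch in Python; the report itself used no code, so this
is purely illustrative. The time values are transcribed from Table 4, and the success
counts are chosen to match the completion and error-free rates reported in Tables 2
and 7.

```python
# Illustrative reconstruction of the Table 2 summary metrics.
# Data transcribed from this report; all helper names are our own.

N_PARTICIPANTS = 3

# Time on task in seconds, per participant (Table 4).
times = {
    1: [296, 356, 322],
    2: [225, 280, 304],
    3: [410, 327, 378],
    4: [337, 388, 401],
    5: [412, 455, 533],
    6: [592, 541, 431],
}

# Participants who finished each task without a critical error,
# matching the completion rates reported in Table 2.
successes = {1: 3, 2: 2, 3: 2, 4: 1, 5: 0, 6: 0}

# Participants who finished with no errors at all (Table 7).
error_free = {1: 2, 2: 1, 3: 0, 4: 0, 5: 0, 6: 0}

# Mean time on task, per task and overall.
mean_time_per_task = {t: round(sum(v) / len(v), 2) for t, v in times.items()}
mean_time = round(sum(mean_time_per_task.values()) / len(mean_time_per_task), 2)

# Mean completion rate and mean error-free rate, in percent.
completion = {t: 100 * s / N_PARTICIPANTS for t, s in successes.items()}
mean_completion = round(sum(completion.values()) / len(completion), 2)

error_free_rate = {t: 100 * s / N_PARTICIPANTS for t, s in error_free.items()}
mean_error_free = round(sum(error_free_rate.values()) / len(error_free_rate), 2)

print(mean_time, mean_completion, mean_error_free)  # 388.22 44.44 16.67
```

Running this reproduces the report's headline figures: a mean time on task of 388.22
seconds, a mean completion rate of 44.44% and an error-free rate of 16.67%.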

3.0 USABILITY METRICS
The usability metrics in this study were measured against specific performance goals
necessary to satisfy usability requirements. The four metrics used were task
completion rate, time on task (TOT), i.e. the time taken to complete a task,
error-free rate, and finally a subjective evaluation. The following sections provide
a breakdown.

3.1 TASK COMPLETION RATE

The completion rate indicated at the end of the table is the percentage of test
participants who successfully completed a task without critical errors. Each task was
closed-ended and required the participants to obtain specific information from the
interface; each had clear success criteria, which established whether or not the task
was completed successfully.

The table below shows that task 1 had the highest completion rate, tasks 2 and 3 had
the second-highest, task 4 had a 33.33% completion rate, and tasks 5 and 6 both had
zero completion.

Table 3: Task completion rate

PARTICIPANT      TASK 1  TASK 2  TASK 3  TASK 4  TASK 5  TASK 6
UX 001           DONE    X       DONE    X       X       X
UX 002           DONE    DONE    DONE    X       X       X
UX 003           DONE    DONE    DONE    DONE    X       X
SUCCESS          3       2       3       1       0       0
COMPLETION RATE  100%    66.67%  66.67%  33.33%  0%      0%

3.2 TIME ON TASK
The time to complete a scenario, also known as "time on task", was measured from the
moment the participant began a task to the moment they signaled completion. The table
below shows the time taken by each user to complete each task, expressed in seconds.

The task with the lowest completion time was task 2, which asked the user to find the
cheapest total price for a trip of four people from Chicago to New York.

Table 4: Time taken (seconds) for users to complete each task

PARTICIPANT           TASK 1  TASK 2  TASK 3  TASK 4  TASK 5  TASK 6
UX 001                296     225     410     337     412     592
UX 002                356     280     327     388     455     541
UX 003                322     304     378     401     533     431
Average time on task  324.67  269.67  371.67  375.33  466.67  521.33

3.3 NUMBER OF ERRORS

Assistance was withheld during the test; however, whenever a participant got
completely stuck and required assistance in order to achieve a correct output or
proceed with the test, the task was scored as a critical error and the overall
completion rate for that task was affected accordingly.

NON-CRITICAL ERRORS

Table 5 below shows the number of non-critical errors participants made while trying
to complete the tasks.

Table 5: Number of non-critical errors

PARTICIPANT  TASK 1  TASK 2  TASK 3  TASK 4  TASK 5  TASK 6
UX 001       -       1       1       0       0       0
UX 002       1       -       1       0       0       0
UX 003       -       1       1       2       0       0
Errors       1       2       3       2       0       0

These errors are defined as non-critical because they did not reduce the user's
likelihood of accomplishing the task. Tasks without errors are marked with a dash,
while tasks with critical errors are marked as zero and excluded from this list. See
Table 7 for a total summation of errors.

The task with the lowest number of non-critical errors was task 1, which asked
participants to plan a round trip from Detroit to Atlanta for under $250.00 (the
pricing of this task was adjusted for inflation to match current rates). Task 2 had
the next-lowest non-critical error count. Tasks 4, 5 and 6 were ticked as zero
because they scored as critical-error tasks; see Table 6 for critical error counts.

Whenever participants managed to complete a task even though the recordings showed
they made errors along the way, the interface problems that led to those errors were
given a lower severity rating. (These problems are discussed in a later section of
this report.)

CRITICAL ERRORS
Table 6 shows the number of critical errors made. A critical error meant the task was
not completed and the goals set out for it could not be accomplished.

Given that an error-free rate of 80% was the goal for each task in this usability
test, the problems which led to these errors were judged to have catastrophic impact.
For instance, in task 5, when participants attempted to go outside the portal to look
for information on which hotels had Wi-Fi access, because they could not obtain this
information within the interface itself, it was scored as a critical error.

Table 6: Number of critical errors per task

PARTICIPANT            TASK 1  TASK 2  TASK 3  TASK 4  TASK 5  TASK 6
UX 001                 0       1       0       1       1       1
UX 002                 0       0       0       1       1       1
UX 003                 0       0       0       0       1       1
Total critical errors  0       1       1       2       3       3
Task completion rate   100%    66.67%  66.67%  33.33%  0%      0%

Additionally, any time a participant failed to obtain the right result for a task,
whether or not they were aware that their output was incorrect or incomplete, it was
ticked as a critical error.

Table 7: Aggregate errors (critical and non-critical) per task

PARTICIPANT           TASK 1  TASK 2  TASK 3  TASK 4  TASK 5  TASK 6
UX 001                0       2       1       1       1       1
UX 002                1       0       1       1       1       1
UX 003                0       1       1       2       1       1
Total errors          1       3       3       4       3       3
Task completion rate  100%    66.67%  66.67%  33.33%  0%      0%
Error-free rate       66.67%  33.33%  0%      0%      0%      0%

According to this table, if a task could not be carried out successfully on the
portal it was scored as a critical error; hence a higher error count for a task does
not literally mean it was harder to accomplish. Task 4, for instance, has more errors
than tasks 5 and 6; this simply means users struggled with it, yet it could still be
accomplished on the interface with the current design. For tasks 5 and 6 the
completion rate is zero, and each attempt was scored as a single critical error,
because it was impossible to accomplish the task with the interface as currently
designed.

Higher error counts are therefore not to be taken as judgement calls on how easy or
hard it was to accomplish a task.

The last row in the table above shows the error-free rate, defined as the percentage
of test participants who completed a task without any errors (critical or
non-critical). Only task 1 had a passable rating; considering the current results, we
would define these tasks as difficult to accomplish on the interface.

For a more detailed interpretation of the errors encountered, please refer to
section 4: Key Findings and Recommendations.

3.4 SYSTEM USABILITY SCALE

A post-test questionnaire asked the participants to rate a set of statements on a
scale of 1 to 5, where 1 is strongly disagree and 5 is strongly agree, following the
System Usability Scale. UX 001 scored 75, UX 002 scored 67.5 and UX 003 scored 60
(see Tables 9, 10 and 11 below).

To calculate a score, the questions were first grouped into odd- and even-numbered
items. For odd-numbered questions the formula is Score = [User Rating] - 1; for
even-numbered questions it is Score = 5 - [User Rating]. Each score was then
multiplied by 2.5 and the results summed, which translates to a scale of 100. The
interpretation of this SUS rating is given in the next table.
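The scoring rule above can be expressed compactly. The sketch below is illustrative
(the report used no code); the sample ratings are back-calculated from the formula
and score columns of Table 9.

```python
# SUS scoring as described above: odd-numbered items contribute
# (rating - 1), even-numbered items contribute (5 - rating); each
# contribution is worth 2.5 points, giving a 0-100 scale.

def sus_score(ratings):
    """ratings: the ten 1-5 questionnaire responses, in order."""
    if len(ratings) != 10:
        raise ValueError("SUS needs exactly 10 item ratings")
    total = 0
    for i, r in enumerate(ratings, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# UX 001's responses (back-calculated from Table 9) score 75:
ux001 = [5, 3, 4, 2, 3, 2, 4, 1, 4, 2]
print(sus_score(ux001))  # 75.0
```

The same function reproduces the other two participants' scores of 67.5 and 60.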

Table 8: SUS grading system

This is, in essence, a perceived usability test and a subjective one. Although it is
a popular test, according to the experts people can perform poorly on a task and
still think a system is usable, or perform well on a task and still think a system is
not usable. This subjective evaluation is nonetheless a measure of the user's
perception of the interface; it gives clues to the satisfaction level and the quality
of the user experience derived.

The next table shows the answers provided by User 1 (UX 001), the scores derived and
the weighted totals.

Table 9: SUS score calculation for participant UX 001
(Ratings: 1 = strongly disagree, 5 = strongly agree)

Q    Statement                                                  Rating  Formula  Score  Score x 2.5
1    I think that I would like to use this system frequently    5       R-1      4      10
2    I found the system unnecessarily complex                   3       5-R      2      5
3    I thought the system was easy to use                       4       R-1      3      7.5
4    I think that I would need the support of a technical
     person to be able to use this system                       2       5-R      3      7.5
5    I found the various functions in this system were
     well integrated                                            3       R-1      2      5
6    I thought there was too much inconsistency in this system  2       5-R      3      7.5
7    I would imagine that most people would learn to use
     this system very quickly                                   4       R-1      3      7.5
8    I found the system very cumbersome to use                  1       5-R      4      10
9    I felt confident using the system                          4       R-1      3      7.5
10   I needed to learn a lot of things before I could get
     going with this system                                     2       5-R      3      7.5

TOTAL                                                                                  75

Table 10: SUS score calculation for participant UX 002
(Ratings: 1 = strongly disagree, 5 = strongly agree)

Q    Statement                                                  Rating  Formula  Score  Score x 2.5
1    I think that I would like to use this system frequently    3       R-1      2      5
2    I found the system unnecessarily complex                   2       5-R      3      7.5
3    I thought the system was easy to use                       4       R-1      3      7.5
4    I think that I would need the support of a technical
     person to be able to use this system                       2       5-R      3      7.5
5    I found the various functions in this system were
     well integrated                                            4       R-1      3      7.5
6    I thought there was too much inconsistency in this system  3       5-R      2      5
7    I would imagine that most people would learn to use
     this system very quickly                                   3       R-1      2      5
8    I found the system very cumbersome to use                  3       5-R      2      5
9    I felt confident using the system                          4       R-1      3      7.5
10   I needed to learn a lot of things before I could get
     going with this system                                     1       5-R      4      10

TOTAL                                                                                  67.5

Table 11: SUS score calculation for participant UX 003
(Ratings: 1 = strongly disagree, 5 = strongly agree)

Q    Statement                                                  Rating  Formula  Score  Score x 2.5
1    I think that I would like to use this system frequently    3       R-1      2      5
2    I found the system unnecessarily complex                   3       5-R      2      5
3    I thought the system was easy to use                       4       R-1      3      7.5
4    I think that I would need the support of a technical
     person to be able to use this system                       3       5-R      2      5
5    I found the various functions in this system were
     well integrated                                            4       R-1      3      7.5
6    I thought there was too much inconsistency in this system  3       5-R      2      5
7    I would imagine that most people would learn to use
     this system very quickly                                   4       R-1      3      7.5
8    I found the system very cumbersome to use                  3       5-R      2      5
9    I felt confident using the system                          3       R-1      2      5
10   I needed to learn a lot of things before I could get
     going with this system                                     2       5-R      3      7.5

TOTAL                                                                                  60

Given that the SUS scores for participants 1, 2 and 3 were 75, 67.5 and 60
respectively, the mean SUS score is 67.5. By the general guidelines in Table 8 for
interpreting SUS scores, a grade of 67.5 is poor; the findings and recommendations
discussed below should therefore be considered.

4.0 KEY FINDINGS AND RECOMMENDATIONS
On task one, all three participants were able to complete the task; the average task
time was 324.67 seconds.

On task two, there was a critical error: one participant failed to send in the
accurate figure, and was not aware of her error. The average task time was 269.67
seconds. This failure led to one important finding.

MAJOR FINDING ONE: DEFAULT SETTINGS ON SITE NOT APPLICABLE FOR SOME USERS

Heuristics violated (2 cases detected):
#7 Flexibility and efficiency of use
#3 User control and freedom

Severity: 4
Her mistake arose because the default value left by the developers was for flights to
include a return trip. This could easily be changed, but as she was new to the portal
she failed to discover this important distinction. She exclaimed at the price of the
trip, unaware of the error. The think-aloud procedure helped uncover that the
exorbitant price had made her lose interest and start thinking of an alternative
booking service. The figure below shows the default interface that appears when the
website is opened on a new device.

Figure 1: Task options on the home page

Hence, even though this default value was good design in the sense that it helped
customers include return trips, it led her to think the portal was more expensive.
Had competing firms offering similar services used this default as well, it would not
have surprised her.

RECOMMENDATION

I held a brainstorming session and came up with a few workarounds to this problem:

1. The design team can leave the field empty or suggestive, so that users have the
   option to click and choose what works for them. This could, however, affect the
   speed and ease of use.
2. Default settings are important because they shape the overall user experience, so
   it would be more appropriate if guests and visitors to the site were asked a few
   basic questions before some settings took effect. For instance, the app could show
   a pop-up for first-time users with questions like "Where would you love to go
   today?" and "How many people are travelling with you?", then use a cookie to save
   some of the preferences.
3. It would also help if the site offered quick tips on the salient details to
   observe while using it.
4. If the round trip remains the default value, a pop-up can suggest alternatives
   and hint that the quote is for a double ticket and that the one-way price can be
   much lower.
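The cookie-based suggestion can be sketched as a simple preference lookup. This is
purely hypothetical code: none of the names correspond to Wakanow's actual
implementation, and the cookie key is invented for illustration.

```python
# Hypothetical sketch: resolve the trip-type default from a saved
# preference (e.g. a cookie set after a first-visit questionnaire)
# instead of silently defaulting every new visitor to a round trip.

VALID_TRIP_TYPES = ("one_way", "round_trip")

def default_trip_type(cookies):
    """Return the saved trip type, or None to prompt the visitor."""
    saved = cookies.get("trip_type")
    if saved in VALID_TRIP_TYPES:
        return saved  # returning visitor: reuse their earlier choice
    return None       # first visit (or bad value): ask, don't assume

# A returning visitor keeps their choice; a new visitor gets None,
# signalling the UI to show the questionnaire pop-up instead.
print(default_trip_type({"trip_type": "one_way"}))  # one_way
print(default_trip_type({}))                        # None
```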

The upside of the existing design is that it possibly brings more profit, as clients
use the service for both single and return tickets; it is ultimately the project
manager's prerogative, after examining bounce rates and other analytics, to decide
whether this design should be altered.

On task three, all participants interpreted the question correctly, completed the
task and sent in the accurate price. The average task completion time was 371.67
seconds.

Each participant, however, struggled to filter through their search results, though
they were able to discover the right answers; this problem is discussed in more
detail under finding two.

On task four, two participants had a critical error: although they were able to see
the results, they failed to give an accurate figure. The reason for this is explained
below. The average task completion time was 375.33 seconds.

MAJOR FINDING TWO: TOTAL NUMBER OF SEARCH RESULTS NOT AGGREGATED AND SUMMARISED FOR
THE USER

Heuristics violated (2 cases detected):
#7 Flexibility and efficiency of use
#1 Visibility of system status

Severity: 4

Even though the test participants saw results, they had no idea how many results had
been generated, or what the possibilities were. The figure below shows the portal at
wakanow.com listing several dozen flight options with no summary of results.

Figure 2: Search results after task query

The reason for this critical error was that even though the portal generated many
results for the queries, it showed no intuitive total. When the participants scrolled
past the first ten, the interface was busy loading more options, which they were
either unaware of or too impatient to wait for.

A better way to display the results would have been, for example, "showing 10 out of
518 flight options".

RECOMMENDATIONS

The design team should display a total count of results and separate the results into
pages, so users can see all the results grouped and decide whether to proceed with
more searches or settle for the options displayed on the first page.
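The "showing 10 out of 518" idea amounts to slicing results into pages and reporting
the total up front. A hypothetical sketch (the function and variable names are
illustrative, not Wakanow's code):

```python
# Hypothetical sketch of the recommended result summary: report
# "Showing X-Y of N" and slice the result list into fixed-size pages.

def paginate(results, page, per_page=10):
    total = len(results)
    start = (page - 1) * per_page
    page_items = results[start:start + per_page]
    header = f"Showing {start + 1}-{start + len(page_items)} of {total} flight options"
    return header, page_items

# With 518 matching flights, the first page would read:
flights = [f"flight-{i}" for i in range(518)]
header, first_page = paginate(flights, page=1)
print(header)  # Showing 1-10 of 518 flight options
```

The header gives users immediate visibility of system status, while paging keeps the
interface from silently loading dozens of extra options below the fold.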

On task five, none of the participants successfully completed the task. They did find
a hotel using the portal, but could not ascertain whether the hotel had Wi-Fi, nor
could they establish the hotel's ranking without going outside the website, which
they were not allowed to do. This leads us to the third discovery.

MAJOR FINDING THREE: INSUFFICIENT DETAIL DISPLAYED ON HOTELS

Heuristic violated: #7 Flexibility and efficiency of use

Severity: 4

This attracts a high severity rating because it affects the goals for which the site
was created. Hotel reservations are evidently one of the site's selling points;
however, judging by the participants' actions, the information displayed for hotels
was not convincing enough for users to close the booking in the app, so they were
tempted to look elsewhere for the information they needed.

In essence, the participants were unable to complete this task not because the
interface showed too few hotel listings, but because it failed to provide the
finishing touches: Wi-Fi information, alongside other very important details
(discussed in the next finding).

RECOMMENDATIONS

Wi-Fi information should be added to the hotel listings. This serves as an extra
incentive for users and helps them decide on a hotel without leaving the app.

A user who drifts away will likely book with whichever interface provides the missing
information they seek; the design team may therefore see more engagement with users
once these details are added.

MAJOR FINDING FOUR: NO RATING ON MOST OF THE HOTELS

Heuristics violated (2 cases detected):
#7 Flexibility and efficiency of use
#6 Recognition rather than recall

Severity: 4

Most users are convinced to use a service when they see that other people have vetted
it and given it a fair rating; this is why testimonials are so effective in social
media marketing. The rating on a hotel works in a similar way, so as typified in the
task scenario, most users will want to know what kind of hotel is displayed, to
ascertain that the quality justifies the price.

Below is a picture of wakanow.com showing zero reviews, zero rating and zero
information on Wi-Fi and other important deciding factors.

Figure 3: Hotel listings on the Find Hotel page

Also, banking on the idea that users have heard of a popular hotel and should
remember its ranking violates the principle of recognition rather than recall. If a
hotel is popular, its rating should still be presented for the user to see. A user
who has no idea what the hotel is, or what ranking it holds in the outside world,
will be tempted to check elsewhere, as visible in the logging sheet (see Appendix
VIII), which compromises the design principle of flexibility and efficiency of use.

While analyzing the results of task five, it was evident why each user had problems
with this task: the price alone is rarely sufficient to make a booking, the rating
was absent, and this resulted in a critical error.

On closer examination, a few ratings were discovered, presumably left by former
clients of the website, but this data is minute compared with the extensive
real-world data that could be aggregated from an external source.

It was not surprising that users assumed better information could be found elsewhere.
The real problem is that a user who leaves the app to look for this information is
also less likely to come back to the website to book the hotel, unless, of course,
targeted ads are used to lure them back, which is an extra cost.

For the test participants it was a moment of agitation: they had no idea what type of
hotel was displayed, and two of them were intent on going outside the website to find
the rating, which I forbade. This is a critical error, and I reckon the impact on
real-world users will be catastrophic as well.

RECOMMENDATIONS

The design team should aggregate ratings from a third-party source into the app, so
that users do not have to visit a different site to find additional information about
the hotels they want to book. In fact, if the design team wants to maximize returns
and see commissions grow, this error needs fixing as soon as possible.

MAJOR FINDING FIVE

For task six, all three participants were clueless about how to set up a fare alert;
after several minutes of exasperation, and despite my repeated suggestions that they
give it another go, they threw in the towel.

RECOMMENDATIONS

A fare alert system should be looked into. The recommendation for this is discussed
in more detail later.

OTHER NOTABLES DISCOVERIES

The SUS scores for UX 001, UX 002 and UX 003 are 75, 67.5 and 60 respectively, an average
of 67.5. Hence, even though the interface handles the basic function of flight booking
well enough, and the service behind the site looks remarkable, the website itself could do
with a great dose of improvement.
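For readers unfamiliar with how these scores are derived: SUS is computed from the ten questionnaire items, where odd-numbered (positively worded) items contribute (response − 1), even-numbered (negatively worded) items contribute (5 − response), and the sum is multiplied by 2.5. A minimal sketch of the standard calculation:

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten 1-5 responses.

    Odd-numbered items are positively worded and score (response - 1);
    even-numbered items are negatively worded and score (5 - response).
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5
```

For example, a participant answering 4 on every positive item and 2 on every negative item scores 75, matching UX 001's result; the mean of 75, 67.5 and 60 is 67.5.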

MAJOR FINDINGS SIX


MENU BUTTON NOT FUNCTIONAL

Severity: 4
Heuristic Violated: #4 Consistency and standards.
The menu button on the site has the symbol shown below; however, it was not where it was
supposed to be.

Figure 4: Menu dummy button on the home page

It was positioned at the top right of the screen, and users intuitively took it for the
menu; however, it turned out to be a dummy button with no functionality. The test
participants went to this button when they wanted to switch the interface from flight
booking to hotel booking, and they also clicked on it when they wanted to edit, modify or
cancel their flight options. After clicking several parts of the app, they realized it was
not working. It was eventually discovered that the actual menu was the Wakanow button,
shown in the image below.

Figure 5: Wakanow menu button on the home page

This button acted as the menu, which was confusing to UX 001 and UX 003. Even though this
was deliberate, it is an inconsistent design that breaks conventional design practice.

RECOMMENDATION

To make the app easier to use, it is suggested that the design team place a visible,
functional menu button in the expected location and, if they intend to keep the existing
options, remove the dummy button, as it serves no purpose on the display screen. Perhaps
the design team used a heat map to see which areas of the screen were getting attention;
I discovered that by the time of compiling this report, the dummy button had been taken
out. This is a possible indication that the maintenance team are working well with their
analytics and doing a great job.

MAJOR FINDINGS SEVEN

NO CLEAR INDICATION OF HOW LONG THE USER HAS TO WAIT TO OBTAIN SEARCH RESULTS

Heuristic Violated: #1 Visibility of system status.
Severity: 2

When search queries are entered in the portal, it takes some time for the information to
surface. Even though the interface displays an indicator to show it is working, the
display is not intuitive enough to give the user closure; see the image below for the
pop-up.

Figure 6: System busy display after task input

The line indicates that the system is busy and shows the progress of the query being
processed internally, but it looks primitive, and the copywriting for this feature could
do with some improvement.

RECOMMENDATION
"Three or more seconds to go!" sounds like better, more concrete feedback, which can help
quench the frustration of the typical user who has run out of patience and is wondering
how long he has to wait.
It is recommended that this pop-up be redesigned: the elementary feedback, text styling
and font do not seem to match the world-class business at hand. I would recommend a subtle
redesign that looks a little more classy and exotic.
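One way to provide the concrete countdown suggested above is to estimate the remaining wait from the fraction of results already fetched. The sketch below assumes roughly linear progress, which real search back ends rarely follow exactly; the function names and message copy are my own illustration.

```python
def estimate_seconds_remaining(elapsed, fraction_complete):
    """Estimate remaining wait time, assuming roughly constant progress."""
    if not 0 < fraction_complete <= 1:
        raise ValueError("fraction_complete must be in (0, 1]")
    return elapsed * (1 - fraction_complete) / fraction_complete


def progress_message(elapsed, fraction_complete):
    """Turn the estimate into user-facing copy for the busy indicator."""
    remaining = estimate_seconds_remaining(elapsed, fraction_complete)
    # Never show "0 seconds" while the spinner is still visible.
    return f"About {max(1, round(remaining))} second(s) to go!"
```

For example, if half the airline feeds have responded after two seconds, the message reads "About 2 second(s) to go!".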

MAJOR FINDINGS EIGHT


NO MATCHING OFFERS WITH AN INNOVATIVE
FEATURE OFFERED ON THE SITE

Heuristic Violated: #7 Flexibility and efficiency of use.

Severity: 1
It was discovered during the tasks that the website has no fare alert system for users
looking to set one up or searching for more affordable flight options. This was the main
reason all the participants failed Task 6. The app does, however, have a unique feature
that lets users pay for flights in installments, represented by the "Pay Small Small"
option, which is very innovative. See the diagram below.

Figure 7: The pay-in-part feature

While this feature can help users planning holiday trips and cruises to exotic
destinations, it is left to the design team to do proper analytics and decide whether the
feature is profitable, given the fast and dynamic nature of the industry.

It was also observed that while the pay-in-part option is useful for those planning
holiday trips, the site at first glance mainly offers flight and hotel booking services.
To a first-time user, there is no concrete offer of a holiday package or a trip to an
exotic destination.

It can be argued that the pay-in-part package works better for the occasional traveler
planning a treat or a vacation somewhere exclusive than for the everyday business
professional.

Hence, offering the pay-in-part package without displaying the most appropriate options to
go with it may fail to appeal to those who would be interested in it. Upon close
evaluation, it was found that the prearranged vacation offers and packages were accessible
only further down the website, and only through an email subscription service. That is
fine in some respects. A competitor analysis, however, showed that some other sites use
this holiday feature to target specific customers to great advantage.

RECOMMENDATIONS

Each business has a unique selling point and must focus on what differentiates it from
competitors. If the returns are coming through regular business flights and wakanow.com
dominates the market share in this regard, it can continue to focus on that. But if it
isn't, and one of its unique selling points is catering to those planning holiday trips,
as the pay-in-part package suggests, then holiday reservation packages should also be
visible within the same field of view where the pay-in-part offers are displayed.

4.1 CONCLUSION
Although the site had a poor SUS score, it does a good job of listing various flight
options and can be used effectively to book a basic flight. It was, however, difficult to
decide on a hotel reservation or to manage other advanced tasks. Hence the site could
benefit from the design recommendations given earlier, such as:

 A visible rating for each listed hotel.
 A review of the default settings to better accommodate different preferences among
its users.
 A redesign of the menu button such that it is simple and intuitive.
 A clear grouping of flight results when the portal is queried.
 A reduction or total elimination of adverts.
 Subtle changes to its fonts and call-to-action buttons.

All these changes will go a long way toward making the site more user-friendly and a
popular choice among international travelers.

4.2 LIMITATIONS
This test was performed on a small sample size of three; extensive testing with as many as
ten, twelve or even twenty users may reveal more about how the ratings and metrics affect
the usability and functionality of the site. In this regard, this is a useful study, but
it is not all-encompassing.

If the measures suggested in this study are applied, a subsequent study can focus on other
aspects and may adopt other research methods and techniques in order to form a more
complete picture.

4.3 REFERENCES
1. Nielsen, J. (1994). Heuristic Evaluation. In J. Nielsen & R. L. Mack (Eds.),
Usability Inspection Methods. New York, NY: John Wiley & Sons.
2. Gough, D., & Phillips, H. (2005). Remote Online Usability Testing: Why, How, and
When to Use It.
3. Spool, J. M. (2001). Testing web sites: five users is nowhere near enough.
DOI: 10.1145/634067.634236.
https://www.researchgate.net/publication/200553186_Testing_web_sites_five_users_is_nowhere_near_enough
4. Levi, M. D., & Conrad, F. G. Usability testing of World Wide Web sites.
https://dl.acm.org/doi/10.1145/1120212.1120358
5. Whitehead, C. C. Evaluating web page and web site usability.
https://dl.acm.org/doi/10.1145/1185448.1185637
6. Video transcripts and research materials found in the appendix, provided in the
course guide "Evaluating Designs with Users", part of the UX Research and Design
specialization.

5.1 APPENDIX I: Pretest Checklist

● Acquire demographic details of the participant and establish suitability
● Establish the time and date of the test and set a Google reminder
● Ask the participant to download TeamViewer
● Download and install a screen recorder extension for the browser
● Go through the task scenarios, check the success criteria, and make sure prices
are adjusted to match the inflation rate

48 HOURS BEFORE THE TEST


● Log into email, print the test scripts and logging sheets, and remind the
participant of the usability test.

12 HOURS BEFORE THE TEST


● Double-check data and network accessibility
● Establish a standby backup internet connection
● Ensure TeamViewer is on the Pro plan and the 40-minute time restriction does
not apply
● Remind participants of the schedule

30 MINUTES BEFORE THE COMMENCEMENT OF THE TEST


● Establish remote communication with the participant through TeamViewer
● Send an email containing the edited PDFs to the participants; the PDFs to be sent are:
the consent form
the task instructions
the post-test questionnaires
● Start the screen recording and allow the participants to download the files
● Confirm the downloads and start the pretest script

5.2 APPENDIX II: Pre-test Questionnaire
1. Have you used wakanow.com before?
2. Tell me about the last trip you planned.
a. What do you usually use to plan your trip?
b. What is your primary purpose for travelling?
c. What is your primary concern?
d. What is your budget?
3. What information is the most important when you are planning your
trip?
4. How often do you travel?

5.3 APPENDIX III: Posttest Checklist
1. Stop the recording; save the audio and video to OneDrive.
2. Verify the survey forms are filled in.
3. Fill in the logging sheet.

5.4 APPENDIX IV: Moderator’s Test Script
Hello (said with a cheerful inflection): Thanks for participating in this usability test!
The goal of today's session is to test the website wakanow.com. I'm here to learn from
you, so I'll ask a lot of questions, but I'm not testing you. There are no right or wrong
answers.
I’ll start this session by asking some background questions. Then I’ll show you some
things we’re working on, and ask you to do some tasks. As you work on the tasks, please
think aloud.

This means that you should try to give a running commentary on what you're doing as you
work through the tasks. Tell me what you're trying to do and how you think you can do it.
If you get confused or don't understand something, please tell me. If you see things you
like, tell me that too. I want to emphasize that you won't hurt my feelings by telling me
what you think. In fact, frank, candid feedback is the most helpful.

If you do get stuck, I’m going to try not to answer your questions or tell you
what to do. I’m just trying to see what you would do if you were using it on your
own. But don’t worry. I’ll help you if you get completely stuck.

Do you have any questions before we begin?


I would like you to go over the consent form.

 Ask the participant to download the Consent form from their emails
 Summarize it and obtain their consent.

5.5 APPENDIX V: Post Test Questionnaire One
Answer the following questions on a scale of 1-5, where 1 is Strongly Disagree and 5 is
Strongly Agree.
1. I think that I would like to use this system frequently
1 2 3 4 5
2. I found the system unnecessarily complex
1 2 3 4 5
3. I thought the system was easy to use
1 2 3 4 5
4. I think that I would need the support of a technical person to be able to use
this system
1 2 3 4 5
5. I found the various functions in this system were well integrated
1 2 3 4 5
6. I thought there was too much inconsistency in this system
1 2 3 4 5
7. I would imagine that most people would learn to use this system very quickly
1 2 3 4 5
8. I found the system very cumbersome to use
1 2 3 4 5
9. I felt confident using the system
1 2 3 4 5
10. I needed to learn a lot of things before I could get going with this system.

1 2 3 4 5

5.6 Post Test Questionnaire 2:

Debriefing
1. What difficulties did you have with ____? I noticed you struggled with ____;
can you tell me what happened? You paused here; tell me more about that.

2. Preferences: What did you think of the site? What did you like/dislike?
Which parts of this page are most/least important to you?

3. Changes: If you had 3 wishes to make this better for you, what would
they be? Why?

4. Understanding: How would you describe this to a friend?

5. Use Cases: Under what circumstances would you use this? Why?

Wakanow.com was selected after ensuring it could be used to:

I. Book flights at different times of day.
II. Explore different trip options and destinations.
III. Allow filtering of flight search results based on various criteria, such as number of
passengers, booking class, and number of stops.

Conclusion
MODERATOR'S SCRIPT

This has been incredibly helpful. Today, you mentioned… [Moderator: briefly summarize some
key parts of the discussion or issues.] Your input is really valuable for me and the team
as we think about the next steps for these ideas. We really appreciate your taking the
time to come in and to answer all of my questions. Thanks so much!

5.7 APPENDIX VI: Consent Form
I agree to participate in refining the flight booking portal to improve accessibility and
customer retention and for the researcher to get hands on practice with test designs
and scenarios.

I consent to the recording of this test. This recording will be used for Research and
Product Improvements only.

I understand that participation in this usability study is voluntary, and I agree to
immediately raise any concerns or flag areas of discomfort during the session with Tobi,
the study administrator.

Please sign below to indicate that you have read and you understand the information on
this form and that any questions you might have about the session have been answered.
Name:

Date:
Thank you!
We appreciate your participation.

5.8 APPENDIX VII: Task Instructions
NB: Some changes were made to reflect the local currency.

Task 1

Your manager asks you to help her plan a few trips for the company. She has heard of a
website called "wakanow.com" that can help and encourages you to use it.
Plan a round trip from Detroit to Atlanta for under $250.00 (or the next cheapest price)
from January 16, 2023 to January 19, 2023. Email the itinerary to me.

Note - Unless otherwise specified, any arrival/departure time is okay

Task 2

4 people from the Chicago office want to attend a conference in New York from
January 8, 2023 to January 10, 2023. What is the cheapest total price of the trip?

Task 3

Your manager wants to join the Chicago team in New York (your office is in Detroit), but
then she wants to go to London for a week then return to Detroit. She plans to fly
business class for the entire trip. What is the cheapest price for her trip?

Task 4

The L.A. office manager has a meeting in New York on October 16, 2022 at noon. She
wants to leave on October 15 after 9am, and can arrive any time before 9am on the next
day. How many flight options do you have?

Task 5

Help your manager book a place to stay from October 16-18. Find the top rated hotel
that has Wi-Fi for under $350/night in New York City.

Task 6
You want to surprise your family with a visit over Christmas but money is tight. Set up a
fare alert for a trip from Detroit to Seattle from December 22, 2022 to December 26,
2022.

The above are the standard task metrics; adjustments were made to the prices to match the
inflation rate and the local currency.

5.9 APPENDIX VIII: User Test Logging Sheet

LOG SUMMARY FOR WAKANOW.COM
Dates: August 3, 4 and 5, 2022
Participants: UX 001, UX 002 and UX 003
Columns: Task | Start time | Time to complete | Completed? | Summary of critical incidents

August 3 — UX 001
Task 1 | 17:02 | 4 min 56 s | Yes | Roams with the mouse; appeared confused after
inputting the travel destination.
Task 2 | 17:07 | 3 min 45 s | Errors | Fails to spot the distinction between round trips
and single trips, but interpreted the task correctly and selected business class.
Exclaims; rants.
Task 3 | 17:11 | 6 min 50 s | Yes | Clicked several times on the menu bar; no response.
Task 4 | 17:18 | 5 min 37 s | Errors | Unnecessary movement on the pay-in-part feature;
clicks on it several times, appears fascinated; heads to the search results; laughs.
Counted the search results manually, failed to detect that more results were loading,
and provided the wrong input.
Task 5 | 17:24 | 6 min 52 s | No | Pauses indefinitely over search results; moved the
mouse to the edges and requested to open another browser; request was declined.
Task 6 | 17:31 | 9 min 52 s | No | Adjusted screen resolution (possible problems with
font size); explored the search functionality; headed back to the menu button; went to
flight booking; commented on the difficulty of the task; clicked on the menu button
repeatedly until time elapsed.
Ended the first test scenario at 18:41.

August 4 — UX 002
Task 1 | 18:00 | 5 min 56 s | Yes | Asked if he could use another browser; request was
declined. Font was adjusted. Inputted the travel destination wrongly but corrected the
error without any prompt.
Task 2 | 18:06 | 4 min 40 s | Yes | All steps done without errors.
Task 3 | 18:13 | 5 min 27 s | Yes | Interpreted the task correctly as consisting of 3
flights; went to the flight management option; clicked on the menu button and got no
response; computed the totals manually, one flight after the other, on a spreadsheet;
asked to use a calculator to crosscheck, request was declined (flagged as a non-critical
error); the right input was obtained.
Task 4 | 18:19 | 6 min 28 s | Errors | Tries to open another browser; cautioned. Inputted
the task correctly; clicked in error on the cheapest visible option, failing to notice
that other results were yet to emerge.
Task 5 | 18:26 | 7 min 35 s | No | Explored the support and customer line settings; asked
whether this was the right course of action. Response: "As I said earlier, I'm here to
watch you, not to guide you. Just keep trying; if at the end of ten minutes a task cannot
be done, it will be counted as unsuccessful." Successfully recognized the hotel booking
option; could not differentiate a hotel with Wi-Fi from one without.
Task 6 | 18:34 | 9 min 1 s | No | Went to booking options; clicked on the customer care
line; scrolled down to the email functionality; did nothing further; admitted the task
could not be done.
Ended the first phase of the task scenarios at 18:43.

August 5 — UX 003
Task 1 | 17:30 | 5 min 22 s | Yes | Commented on how cheap the test was.
Task 2 | 17:36 | 5 min 4 s | Yes | Clicked on the homepage options; inputted the fare
alert wrongly and corrected this without prompt; adjusted screen resolution and display
brightness (note: ask about this in the debriefing session).
Task 3 | 17:41 | 6 min 18 s | Yes | Task seems to have been interpreted successfully;
input accepted; hovers over the search results, seemingly unsure of the action; pauses,
apparently waiting for other search results to emerge; picked the wrong option, sighs;
seems to realize the error; chose another answer, the right one; computed the summation
of each leg correctly and provided the right answer.
Task 4 | 17:47 | 6 min 41 s | Yes | Hovers over search results; applied search metrics
correctly; discovered the dummy menu and clicked it repeatedly; recovered from these
errors; activity completed correctly.
Task 5 | 17:54 | 8 min 53 s | No | Main observation: made a random guess for a hotel with
Wi-Fi.
Task 6 | 18:03 | 7 min 13 s | No | Asked if this was similar to a Google alert; clicked on
the pay-in-part feature; task ended without success.
Task scenario ended at 18:11.

SUS TEST INPUT: UX 001

[SUS rating grid for UX 001: the ten standard SUS statements, each rated on a 1-5 scale
from Strongly Disagree to Strongly Agree. Overall SUS score: 75.]

SUS TEST INPUT: UX 002

[SUS rating grid for UX 002: the ten standard SUS statements, each rated on a 1-5 scale
from Strongly Disagree to Strongly Agree. Overall SUS score: 67.5.]

SUS TEST INPUT: UX 003

[SUS rating grid for UX 003: the ten standard SUS statements, each rated on a 1-5 scale
from Strongly Disagree to Strongly Agree. Overall SUS score: 60.]

LIST OF HEURISTICS

Findings on wakanow.com, with the heuristics violated and a severity ranking.

1. DEFAULT SETTINGS ON SITE NOT APPLICABLE FOR SOME USERS
The exorbitant default price made the participant lose interest and think of an
alternative booking service.
Heuristics violated: #7 Flexibility and efficiency of use; #3 User control and freedom.
Severity: 4

2. TOTAL NUMBER OF SEARCH RESULTS NOT AGGREGATED AND SUMMARISED FOR THE USER
Users could not see the full field of results, as it was not separated into page
clusters, and hence failed to find more affordable options.
Heuristics violated: #7 Flexibility and efficiency of use; #1 Visibility of system
status. Severity: 4

3. INSUFFICIENT DETAILS DISPLAYED ON HOTELS
The site returned hotel reservations upon query but did not provide Wi-Fi information
alongside other very important details; test participants felt compelled to go outside
the app to look for this information, which reduced the likelihood of their using the app
to book the service.
Heuristic violated: #7 Flexibility and efficiency of use. Severity: 4

4. NO RATING ON MOST OF THE HOTELS
Ratings were absent, resulting in a critical error where participants could not perform
the assigned task. The design team seems to bank on the idea that users who have heard of
a popular hotel should remember its ranking, a violation of the principle of recognition
rather than recall.
Heuristics violated: #7 Flexibility and efficiency of use; #6 Recognition rather than
recall. Severity: 4

5. MENU BUTTON NOT FUNCTIONAL
The menu button on the site was a dummy button with no functionality; clicking it gave
endless frustration as it produced no response. A Wakanow button was designed as the menu
instead, and users failed to recognize this early on.
Heuristic violated: #4 Consistency and standards. Severity: 3

6. NO CLEAR INDICATION OF HOW LONG THE USER HAS TO WAIT TO OBTAIN SEARCH RESULTS
When search queries are entered in the portal, it takes some time for the information to
surface; even though the interface displays an indicator to show it is working, the
display is not intuitive enough to give the user closure.
Heuristic violated: #1 Visibility of system status. Severity: 2

7. NO MATCHING OFFERS WITH AN INNOVATIVE FEATURE OFFERED ON THE SITE
The feature is represented by the Pay Small Small option; however, users planning holiday
trips and cruises to exotic places, who really needed it, could not find matching offers.
Heuristic violated: #7 Flexibility and efficiency of use. Severity: 1

8. UNNECESSARY ADS
Grossly distracting ads at the lower section of the app. This is more of a minor problem,
and the ads were tactfully inserted, but with good ROI they could be eliminated
completely.
Heuristics violated: #7 Flexibility and efficiency of use; #3 User control and freedom.
Severity: 1

9. SOME LINKS NOT ACTIVE
The terms-of-service consent link does not work properly: the statement "By proceeding, I
acknowledge that I have read and agree to wakanow.com's Flight booking terms &
conditions" has a link that does not open in a different window, so users have to lose
existing work on a page to read the terms.
Heuristic violated: #10 Help and documentation. Severity: 1

10. NO "SHARE THIS FLIGHT" OPTION
A feature like "Share this flight in your email" or "Save this flight till later" is not
visible, which may affect the virality of the site. Such a feature may be useful for
people who are excited about the app, want to share flight details with friends, or want
to urge a friend to book. It may raise privacy concerns in some respects and may need to
be checked against international laws. (Just another cosmetic issue; deliberate on this
if you have the time.)
Heuristic violated: #7 Flexibility and efficiency of use. Severity: 1

11. NO FARE ALERT FUNCTIONALITY
As shown in the task scenarios, this feature is absent and led to errors. It is rated as
having a lesser impact mainly because most users can still work on the site without it.
Heuristic violated: #7 Flexibility and efficiency of use. Severity: 1

12. NO LOGIN REQUIRED AND NO DETECTABLE COOKIE SETTINGS
While this is a welcome relief, the trade-offs should be considered: getting first-time
users to sign in may become very useful for a sustainable business. As it is now, the
portal is more of a free tool, and customer retention may be low; there is no prompt for
the user to add an email, nor any way for those at the back end to track users making
preliminary search queries until they book a flight.
Heuristics violated: #9 Help users recognize, diagnose, and recover from errors;
#7 Flexibility and efficiency of use. Severity: 1

13. MISSING INCENTIVES
No student rates or discount options are available to first-time clients; discounts are a
proven trade technique which can boost profit if properly designed and managed.
Heuristic violated: #7 Flexibility and efficiency of use. Severity: 1

14. NOT TOO EXCEPTIONAL COPY EDITING
The words on the page inducing first-time users to use the service could be better
captioned for clarity and impact.
Heuristic violated: #7 Flexibility and efficiency of use. Severity: 1

15. CONTRAST, FONTS AND COLOURS TOO BASIC
Not enough white space and contrast is used to draw focus to the key areas of the site;
although the site works quite well, compared to competitors it could do with a boost.
Additionally, fonts are not displayed while the system is busy making sense of search
queries.
Heuristic violated: #8 Aesthetic and minimalist design. Severity: 1

16. NOT ENOUGH TASK FILTERS ON SEARCH RESULTS
The design team can brainstorm more relevant filters for users to sort through the search
results.
Heuristics violated: #3 User control and freedom; #5 Error prevention. Severity: 1

17. REMIND ME OF THIS FLIGHT LATER
This feature is nonexistent. Reminders can go a long way in helping users fine-tune their
search options or correct their mistakes before engaging customer care agents or making
bookings that go to a real human at the back end; this can save time for both users and
staff.
Heuristics violated: #3 User control and freedom; #5 Error prevention. Severity: 1

THE END
