
Development of a framework for UX KPIs in Industry - a case study

Tina Øvad∗
Preely, Copenhagen, Denmark
tina@preely.com

Kashmiri Stec
Bang and Olufsen, Struer, Denmark
ksh@bang-olufsen.dk

Lars Bo Larsen
Aalborg University, Aalborg, Denmark and Bang & Olufsen, Struer, Denmark
lbl@es.aau.dk

Lucca Julie Nellemann†
Demant, Copenhagen, Denmark
LuccaJNellemann@outlook.dk

Jędrzej Czapla‡
Aalborg University, Denmark
czaplapapla@gmail.com

ABSTRACT
This work addresses the needs companies face when assessing and tracking the UX quality of their products, which is necessary to ensure that the desired level of quality is met, clarify potential areas of improvement, and compare competitor products via benchmarking. We present a framework for UX KPI assessment at Bang & Olufsen, a luxury audio product manufacturer. The framework is inspired by well-known UX scales such as the UEQ and AttrakDiff as well as more business-oriented measures such as NPS and CES scores. The resulting UX KPI framework comprises the test procedures and a scale containing these and ten unique features such as "Enjoyable", "Exciting", and "Sound experience". The paper presents and discusses the UX KPI framework and the rationales behind it. Results of more than 200 user tests are presented and discussed. Factor analysis has been employed to analyse the assumptions behind the UX KPI and identify where improvements can be made.

CCS CONCEPTS
• Human-centered computing; • Human computer interaction (HCI); • HCI design and evaluation methods; • Usability testing;

KEYWORDS
UX, User experience evaluation framework, KPI, Key Performance Indicator, UX benchmarking for industry

ACM Reference Format:
Tina Øvad, Kashmiri Stec, Lars Bo Larsen, Lucca Julie Nellemann, and Jędrzej Czapla. 2020. Development of a framework for UX KPIs in Industry - a case study. In 32nd Australian Conference on Human-Computer Interaction (OzCHI '20), December 02–04, 2020, Sydney, NSW, Australia. ACM, New York, NY, USA, 7 pages. https://doi.org/10.1145/3441000.3441042

∗At Bang and Olufsen at the time the work was carried out
†At Bang and Olufsen at the time the work was carried out
‡At Bang and Olufsen at the time the work was carried out

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
OzCHI '20, December 02–04, 2020, Sydney, NSW, Australia
© 2020 Association for Computing Machinery.
ACM ISBN 978-1-4503-8975-4/20/12. . . $15.00
https://doi.org/10.1145/3441000.3441042

1 INTRODUCTION
Every company has a strong need to assess and track the quality of its products. This is necessary to ensure that the desired level of quality is met, to clarify potential areas of improvement, as well as to compare products via competitor benchmarking. Quality is a multi-dimensional concept: it can refer to materials, tolerances, durability, aesthetics, taste, usability, sustainability, etc. Each dimension requires different measures and standards to be assessed. Product quality may in some cases be subjective and qualitative, and thus difficult to describe and measure both generally and quantitatively. Furthermore, it may consist of many individually dependent components. A way to make the quality concept more manageable and operational in day-to-day work is to break it down into sets of individual, one-dimensional entities. These can in turn be described and evaluated, and later combined into an overall estimate of product quality. In the present study we address the problem of assessing the quality of user experience (UX) from an industrial point of view.
We have exemplified this via a case study performed at Bang & Olufsen (B&O) [1], where we developed our framework for assessing the quality of the UX of B&O's audio products, such as headphones, earbuds and loudspeakers. The framework also supports benchmarking the UX of competitors' products, enabling us to put tangible numbers on UX for comparisons. With this framework, we can quantitatively measure on-product and in-app UX and map strong and weak points of product experiences while also getting qualitative information on how products can be improved. This tells us where to focus our UX resources to bring most value to the product. By benchmarking competitor products, we have obtained a competitor overview including "hard numbers", which makes it easy for us to know how our products and concepts perform in the competitor UX landscape.
During the development of the framework, we also set out to strengthen stakeholder communication regarding UX. We found it desirable to be able to share information about UX at different levels of granularity, depending on project needs.


Our goal was that the framework should be truly transparent. We aim to produce one overall tangible number for the UX of a product, the UX KPI (Key Performance Indicator), comprised of a multidimensional set of underlying features with more detailed information.
The KPI framework is developed within a company producing audio products. However, the applied principles and methods are context independent, and the framework can easily be adapted to other product domains, as only a few items in the framework reflect the specific product categories of the company. These can be replaced or extended to reflect other application domains. Thus, we believe our work can be extended to any physical or digital product or a combination thereof. We have noticed especially high interest from companies working with physical experiences, where this kind of tool seems to have been missing.

1.1 The Foundation
Leading up to this work, B&O had a testing scheme in place. This focused on "industry standards", which only concern the technical protocols a product must adhere to; utility aspects; access to help when needed; and which feedback channels are used. The industry standards form a product-category-dependent checklist, which is run through by the test facilitator and is not presented to test participants or otherwise part of a test. It will not be of further concern in this study. Consistent user test protocols were not in place, and exit interviews depended highly on the concept or product being tested. The aim of the UX KPI is to make this streamlined and systematic, making it easier to track improvements and make comparisons between products.

2 THE COMPONENTS OF THE UX KPI & BENCHMARKING FRAMEWORK

2.1 Concept and Product Experience Components
During our initial discussions about our (then future) UX KPI and benchmarking framework, we defined that a B&O concept and product experience consists of four components: Industry Standard Experience, Differentiating (Hero) Experience, Comfortableness, and Polish.
These components reflect B&O's brand and company values and were therefore a subjective choice not based on empirical data. Differentiating (Hero) Experience refers to features that make a product unique. Comfortableness refers to how comfortable the user is while using and/or wearing the product, and Polish refers to a design that delights the senses and whether the product lives up to the expectations of the brand. In order to measure this, we needed to be able to measure interaction with the product, and we therefore include user testing as a basis for the UX KPI. Although these components are particular to B&O, they can easily be adapted to fit the needs of other companies.

2.1.1 User Tests & Post Task Questions. Each user test comprises a number of different tasks to be carried out by the participant. These may vary slightly, depending on the nature and capabilities of the product, but in most cases include all on-device interaction as well as the in-app interactions which are shared across product apps. More specifically: connecting to a mobile phone using Bluetooth or Wi-Fi; using a companion app to set up the product (which may include Google Voice Assistant [2] or Alexa Voice Assistant [3], if supported); and basic player controls such as next, previous, play, pause, and volume up or down. These functions are carried out on the product and via B&O's smartphone apps [4, 5] where relevant. For products supporting phone calls, answering and making calls are also included. The user is asked to "think aloud" during these tasks and is closely observed by the facilitator. The facilitator takes note of any relevant observations and fills out a template with interaction measures using a ten-point scale. The template contains scores for the tasks mentioned above. The mean is computed and included in the UX KPI as the feature "Interaction", see Figure 4 below. The individual interaction measures are not reported directly as part of the UX KPI but are used to inform the development and product teams of potential problems. For different product categories these tasks and interaction measures may be changed to reflect the relevant aspects of the product being tested. However, they will always represent the usability of the products. For each user task the participant is asked three post-task questions: "How easy was the task?"; "How confident did you feel doing the task?"; and "How was your experience?". The user answers each question on a ten-point scale, with 10 indicating a very good experience and 1 indicating a very poor experience. Further, the facilitator notes down any relevant observations the user makes while trying to complete a task, and qualitative information about the issues encountered.

2.1.2 Post Test Questions. After the user test, it is important to capture the participant's experiences and attitudes about the tested product. We approached this in two ways:
We employ an exit questionnaire to capture the user's experience. For this we developed the B&O UX KPI scale, inspired by three well-known and widely used scales in academia and industry: the UEQ [6–8], the SUS (System Usability Scale) [9] and the AttrakDiff [10, 11] scales.
We first considered the System Usability Scale (SUS). However, it does not capture the UX attributes which were important to our team, such as attractiveness. We therefore turned to the User Experience Questionnaire (UEQ) for inspiration. The UEQ is a widely used and reliable questionnaire for measuring the UX of interactive products, and it captures the different areas of the user experience we desire [8]. Attractiveness is measured as an overall value of the tested item, comprising pragmatic qualities and hedonic qualities. The pragmatic qualities are perspicuity, efficiency, and dependability. The hedonic qualities are stimulation and novelty. The UEQ measures attractiveness with six questions, and the other five qualities are measured with four questions each, thus 26 questions (items) in total.
Some of the questions, such as how easy it is to navigate the product or the user's confidence using a product, are part of the interaction table and are therefore omitted. Due to the audio domain we include the item "Sound Experience". Other items inspired by the three scales were divided under the three previously mentioned components or subscales: Differentiating (Hero) Experience, Comfortableness, and Polish. Figure 1 below shows the aspects identified for each of the three components.
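The per-task scoring described in section 2.1.1 — one ten-point facilitator score per task per participant, averaged into the single "Interaction" feature — can be sketched as follows. This is an illustrative Python sketch: the participant IDs, task names and scores are hypothetical, not B&O's actual template.

```python
from statistics import mean

# Hypothetical facilitator template: one ten-point interaction score
# (1 = very poor, 10 = very good) per task, per participant.
interaction_scores = {
    "p1": {"bluetooth_pairing": 8, "app_setup": 7, "player_controls": 9},
    "p2": {"bluetooth_pairing": 6, "app_setup": 8, "player_controls": 8},
}

def interaction_feature(scores: dict) -> float:
    """Mean of all task scores across participants -> the 'Interaction' feature."""
    all_scores = [s for tasks in scores.values() for s in tasks.values()]
    return round(mean(all_scores), 1)

print(interaction_feature(interaction_scores))  # -> 7.7
```

The individual task scores would stay in the test report for the development team; only this mean enters the UX KPI template.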


Figure 1: UX KPI categories and items

The choice of these specific aspects was made based on internal discussions in the UX team and with other stakeholders at B&O, and reflects the issues considered most relevant to the company's product portfolio and brand identity. The aspects were represented as items on a semantic differential scale with ten points, see Figure 2 below.

2.1.3 Market Measures. Some of the aspects we want to include in the UX KPI are not covered by the UX scales mentioned above. As we are employing the KPI in a business environment, we include the following scales in the UX KPI framework: CES (Customer Effort Score), asking: "How easy was it to do the task?" [18], and CSAT (Customer Satisfaction), asking: "How would you rate your overall satisfaction of doing the tasks?" [13, 19]. The CES and CSAT scores correspond closely to "ease of use" and "user satisfaction", which are also part of the ISO 9241 definition of product usability [12]. Finally, the NPS (Net Promoter Score), asking: "How likely are you to recommend the product to a friend or colleague?", is included, as this has been found to be an important predictor for net commerce in particular [14]. A discussion of these metrics can be found in [20].
A further argument for these tools is that the CES, CSAT and NPS questions were already used by B&O's customer experience team to measure customer satisfaction after market release. Thus, their inclusion makes it possible to bridge the gap from pre- to post-launch of a product, and has the added benefit of already being known throughout the organisation. Initially, we asked the NPS, CSAT and CES questions in the traditional way, but we found it was confusing for users to switch between 10- and 7-point scales, so we converted everything to ten-point scales to make it easier for users, as well as to make the responses easier to compare and interpret. However, this comes at the cost of less compatibility with the original scales.

3 PUTTING THE UX KPI & BENCHMARKING SCALE TOGETHER
The UX and market measures are integrated into a single 10-point semantic differentials questionnaire. The anchor points were chosen to be identical to the aspects from Table 1, preceded by "Not" and "Very". This was done because the terms were considered clear and understandable, and to avoid the extra step of formulating and validating keywords and phrases for the anchors.
This completes the scale. Data is captured simply by converting the participant's selection to a corresponding number between 1 and 10, from left to right. The following section describes how the UX KPI scale is used in connection with user testing.

3.1 Aggregating the scores
The scores are aggregated and inserted in a colour-coded template to gain a quick overview and facilitate interpretation of the results. The aggregation is done by calculating the average of the individual item scores across the test participants for a given product test. These are in turn combined within the three UX components (Hero, Polish and Comfortableness) by calculating the mean for each. Finally, a colour coding scheme is applied to make potential problems and areas for improvement clearer. The colour scheme and its interpretation are shown in Figure 3 and are inspired by the NPS [14].

3.2 Using the KPI & benchmarking framework
The framework is used in connection with a user test, as mentioned above, and the UX KPI is calculated and the scheme filled out after the completion of a series of tests of a product. Typically, we conduct 5 user test sessions with naive users per test, following [16, 17]. In addition to the UX KPI, a test report is made including the facilitator's observations and the test users' comments. A post-test interview is also carried out and included in the test report, but that is not the focus of the present study.

4 RESULTS
By April 2020 the UX KPI & benchmarking framework had been used in 228 tests of 36 products. Typically, five tests are carried out for a product, which we assume on average will uncover about 80% of the usability problems of a product (although with a large margin), as demonstrated in [16, 17]. However, some products are tested by up to 15 participants. The products fall into several categories, all of which are sound reproduction equipment: flexible (movable, but still with a power cord and Wi-Fi; 10 products); mobile (rechargeable Bluetooth loudspeakers; 10 products); and wearable (headphones and earbuds; 16 products). Test participants were recruited outside of the company, except in some cases, such as unreleased products, where they may be internal. The age distribution is (Avg. = 28.6, St. Dev. = 10.8) and the gender distribution (Male = 61%, Female = 39%); all participants were living in Denmark at the time they took part in user testing.
Figure 4 shows an example of a UX KPI scoring template, using the measures described in Figure 2 and the colour coding scheme shown in Figure 3. The template provides a compact, easy-to-overview presentation of the UX KPIs, which can be used for quick impressions, benchmarking, comparisons, tracking of improvements, discussions, etc. among the UX team and stakeholders such as product managers, concept managers and technical teams. It is accompanied by a more in-depth test report, as mentioned above.
Testing has taken place continuously since the second half of 2018, and certain adjustments have been made over time. For example, an item "Meets expectations" was originally included in the "Polish" category, but was removed as it is a relative term, depending on the user's (unknown) prior expectations of, e.g., the brand, and is thus better captured in a post-test interview. The item "Comfortable wearing" is only relevant for wearable products, such as headphones and earbuds.
The boxplot in Figure 5 shows that the items are scored quite uniformly, and the ten-point scale is not fully utilized. Indeed, except for "Creative" and "Materials", the lower and upper quartiles for all items lie between 6 and 9, and the median is 8 for 7 of the 10 items.


Figure 2: UX-KPI semantic differentials scale including CES, CSAT and NPS

Figure 3: Interpretation of the aggregated KPI scores. Inspired by NPS [14, 15]

4.1 Factor Analysis
It is relevant to investigate whether the assumed grouping of the items into the Hero, Polish and Comfort themes can be identified in the data. Factor Analysis (FA) is used for this. Initially, a Confirmatory FA (CFA) is performed across all tests, using the Hero, Polish and Comfort model. However, this model cannot be confirmed. The recommended number of data points for a CFA is 300 [21], so the failure may be due to lack of data, to some parameters being highly correlated, or simply to the model not matching the data. To investigate the relationships further, an Exploratory FA (EFA) is carried out using R [22]. A scree plot indicates that 3 factors will capture about 70% of the shared variance, and that a 4- or 5-factor model will not significantly improve this. The 3- and 4-factor models yield significant χ² values. Based on this, a 3-factor model is chosen. [21] recommends that an oblique rotation be used if factor correlations exceed 0.3. In the current case this is the situation, so the Oblimin rotation is chosen [21, 23].
Figure 6 shows the resulting EFA 3-factor model with Oblimin rotation when applied to all data. For clarity, a cut-off has been applied for loadings below 0.3. The resulting model accounts for 71% of the common variance in the data, which is acceptable. Three items are cross-loading (Creative, Convenient and Enjoyable).
F1 accounts for 51% of the common variance and includes Stylish, Presentable, Inviting and Materials, as well as Exciting and Creative (shared with F3). A common heading for F1 could be "Attractiveness". F2 accounts for 21% of the common variance and includes Comfortable Control and Enjoyable (shared with F3), and could be labelled "Control". F3 accounts for 27% of the common variance and includes Sound Experience and Convenience, i.e. issues related to "Performance".

4.2 Correlations
The question about "comfortable with the amount of control" can be expected to show some correlation with the "Interaction" feature,


Figure 4: Example of a UX KPI scoring template for a random product

Figure 5: Boxplot showing the KPI items

as they address similar control issues. A test shows that the correlation is significant (0.58 at p < 0.00001, with a 95% confidence interval of [0.48..0.66]). This indicates that the two concepts overlap, which is indeed what could be expected. However, it also means we should be careful when including both features in the UX KPI calculation.

5 DISCUSSION
While the UX KPI has not been formally validated, it is built on standardised, widely used scales and measures. Therefore, a certain level of construct validity can be assumed. Furthermore, the application of the UX KPIs over time – on both products under development and market releases by competitors – allows comparisons across numerous products, thus facilitating the application of the UX KPIs as a relative measure instead of an absolute one. For ease of comparison, all numerical scales (except the NPS) have been set to ten points. This has the advantage of making internal comparisons between items and the Hero, Polish and Comfortableness sub-scales very easy, and removing any need for re-scaling. However, it also makes external comparisons to the original scales harder, and may tempt unwary readers of the KPI test result reports to make too direct 1:1 comparisons between e.g. sound quality and


CES, just because the scales "look similar". Indeed, the computation of the "Overall KPI" as an average of the individual items is an example of this and should be used with some caution.
The boxplots shown in Figure 5 above indicate that the 10-point scale is only partly utilised, with 7 of 10 items having the same median. This is a common issue with numerical scales, where the granularity becomes less than intended and responses tend to cluster towards one end of the scale. In the present case we chose a 10-point scale, which produces a better resolution than e.g. a 5- or 7-point scale. However, the issue may be due to the fact that all tested products are high-end audio products, and thus we could expect the ratings to cluster towards the upper end of the scale. Increasing the granularity to e.g. 20 points (or using a VAS scale [24]) will most likely only result in extra noise, unless we stick to recruiting expert test participants or change the test scheme to direct A/B comparisons. A regular untrained person will not be able to distinguish between 20 levels.
The EFA analysis indicates no statistical evidence for the Polish, Hero and Comfort categories we introduced at the starting point of our framework. Instead, the items group themselves into categories pertaining to Attractiveness, Control and Performance, with Attractiveness representing the majority of the items. From a purely statistical viewpoint the identified categories should thus replace the original ones. However, the categories must also be viewed in the context of the organization using them. Thus, if it e.g. makes more sense to B&O's design goals and visions to retain the Polish, Hero and Comfort categories, one should be careful replacing them. The large body of tests already done using the original categories must also be taken into consideration. If they are replaced, backward compatibility is lost and all stakeholders must learn and understand the new ones.
Currently, the overall UX KPI quotient is computed as an unweighted average of the categories, thus assuming an equal importance of all items. If an overall KPI quotient is still to be calculated in an updated model, it could be considered to introduce weights, e.g. based on the proportional loadings of the factors.
The foundations, methods and analyses applied in the development of the KPI framework are not dependent on the particular domain or organisation where the framework is applied, but are universally accepted business and UX measures and statistical methods. Thus, the framework can easily be modified or extended to domains beyond the audio context of B&O's products. Indeed, only the item "Sound Experience" is directly related to audio products, while "Materials" and "Comfortable Wearing" are only relevant for physical products. These can be omitted or replaced, and new ones added, depending on the particular domain in which the framework is to be applied. Likewise, the three categories or subscales can be updated to reflect the needs of the product being tested, or of the larger organizational context.

Figure 6: 3-Factor EFA with Oblimin Rotation for all data

6 CONCLUSION
Through inspiration from well-known and widely recognized scales from the UX and business domains, we have derived a UX KPI framework tailored for industrial use within a specific domain. The purpose has been to provide a consistent and easy-to-interpret tool for the UX team and stakeholders within the company to assess and track the UX of products as well as competitor products.
The framework has successfully been employed at B&O for close to three years. In this period numerous user tests have been carried out on a large variety of products, by different test facilitators and in different environments. By using the framework, the test results are all highly comparable and are presented and interpreted in a similar manner. Thus, it has provided the UX performance tracking and benchmarking capabilities we set out to achieve.
The framework has not been validated in a formal manner, and the statistical analyses indicate that a more accurate interpretation may be achieved by adjusting e.g. the underlying factors and the weights (when calculating the overall KPI). However, considering the intended usage, it can be argued that this is less important than it would be in a purely academic setting. Internal consistency, stakeholder support and acceptance of results are important issues in a company setting, all of which should be carefully considered when making decisions about changes.
We have demonstrated how the KPI framework can be modified or extended to suit other products than the present case of audio equipment.

6.1 Future work
The next steps will be to consider the results and analyses presented here and update the framework where appropriate. This could e.g. be to replace the Hero/Polish/Comfortableness subscales, or to introduce a weighting of the items in the UX KPI calculation. As mentioned above, it is also important to retain backwards compatibility for future comparisons, so it is not only a decision depending on the analyses, but also on usage within the company - to ensure the findings remain relevant and valuable to cross-org stakeholders.
Another consideration could be to introduce UX KPIs as design requirements in the future.
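As a sketch of the weighting idea discussed above, the EFA factor proportions reported in section 4.1 (F1 = 51%, F2 = 21%, F3 = 27% of the common variance) could serve as weights when combining category scores into an overall KPI. The category scores below are illustrative, not actual test results:

```python
# Weights derived from the EFA factor proportions reported in the paper.
factor_weights = {"Attractiveness": 0.51, "Control": 0.21, "Performance": 0.27}

# Illustrative per-category mean scores on the ten-point scale (not real data).
factor_scores = {"Attractiveness": 8.2, "Control": 7.4, "Performance": 8.0}

def weighted_kpi(scores: dict, weights: dict) -> float:
    """Weighted mean of category scores; normalised so weights need not sum to 1."""
    total_w = sum(weights.values())
    return sum(scores[f] * w for f, w in weights.items()) / total_w

print(round(weighted_kpi(factor_scores, factor_weights), 2))  # -> 7.98
```

Compared with the current unweighted average, this lets a dominant factor such as Attractiveness pull the overall KPI proportionally more, at the cost of backward comparability with earlier test results.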


ACKNOWLEDGMENTS
We thank Bang and Olufsen and Aalborg University for their support of this work and, most importantly, all our test participants for their time.

REFERENCES
[1] Bang and Olufsen: https://www.bang-olufsen.com (retrieved August 2020)
[2] Google Voice Assistant: https://assistant.google.com/ (retrieved August 2020)
[3] Alexa Voice Assistant: https://developer.amazon.com/en-US/alexa/alexa-voice-service (retrieved August 2020)
[4] B&O App, Apple iOS version: https://apps.apple.com/us/app/bang-olufsen/id1203736217 (retrieved August 2020)
[5] B&O App, Android version: https://play.google.com/store/apps/details?id=com.bang_olufsen.OneApp&hl=en (retrieved August 2020)
[6] UEQ website: https://www.ueq-online.org/ (retrieved August 2020)
[7] Laugwitz, B., Held, T., Schrepp, M. (2008). Construction and Evaluation of a User Experience Questionnaire. In: Holzinger, A. (ed.) HCI and Usability for Education and Work. USAB 2008. Lecture Notes in Computer Science, vol 5298. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-89350-9_6
[8] Schrepp, M., Hinderks, A., and Thomaschewski, J., 2017. Construction of a Benchmark for the User Experience Questionnaire (UEQ). International Journal of Interactive Multimedia and Artificial Intelligence, 4(4), pp. 40–44. ISSN 1989-1660. doi: 10.9781/ijimai.2017.445
[9] Brooke, J., 1996. SUS - A quick and dirty usability scale. Usability Evaluation in Industry, 189(194), pp. 4–7.
[10] AttrakDiff website: http://www.attrakdiff.de/index-en.html (retrieved August 2020)
[11] Hassenzahl, M., Burmester, M., and Koller, F., 2008. On the track of user experience. Available at: https://www.uid.com/en/publications/attrakdiff (retrieved August 2020)
[12] ISO 9241-11:2018. Ergonomics of human-system interaction — Part 11: Usability: Definitions and concepts. Available at: https://www.iso.org/standard/63500.html (retrieved August 2020)
[13] Reichheld, Frederick F. "The One Number You Need to Grow". Harvard Business Review, December 2003. Available at: https://hbr.org/2003/12/the-one-number-you-need-to-grow (retrieved August 2020)
[14] Reichheld, Fred; Markey, Rob (2011). The Ultimate Question 2.0: How Net Promoter Companies Thrive in a Customer-Driven World. Boston, Mass.: Harvard Business Review Press. p. 52. ISBN 978-1-4221-7335-0. Available at: https://archive.org/details/ultimatequestion00reic_0/page/52 (retrieved August 2020)
[15] Net Promoter Score: https://www.netpromoter.com (retrieved August 2020)
[16] Faulkner, L. Beyond the five-user assumption: Benefits of increased sample sizes in usability testing. Behavior Research Methods, Instruments, & Computers 35, 379–383 (2003). https://doi.org/10.3758/BF03195514
[17] Nielsen, J., Landauer, T. K. "A mathematical model of the finding of usability problems". In CHI '93: Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems, May 1993, pp. 206–213. https://doi.org/10.1145/169059.169166
[18] Dixon, Matthew; Freeman, Karen; Toman, Nicholas. "Stop Trying to Delight Your Customers". Harvard Business Review, July–August 2010. Available at: https://hbr.org/2010/07/stop-trying-to-delight-your-customers (retrieved August 2020)
[19] Farris, Paul W.; Neil T. Bendle; Phillip E. Pfeifer; David J. Reibstein (2010). Marketing Metrics: The Definitive Guide to Measuring Marketing Performance. Upper Saddle River, New Jersey: Pearson Education, Inc. ISBN 0-13-705829-2.
[20] Mittal, Vikas and Frennea, Carly. Customer Satisfaction: A Strategic Review and Guidelines for Managers (2010). MSI Fast Forward Series, Marketing Science Institute, Cambridge, MA, 2010. Available at SSRN: https://ssrn.com/abstract=2345469 (retrieved August 2020)
[21] Tabachnick, Barbara and Fidell, Linda. Using Multivariate Statistics. 4th edition. Allyn and Bacon, 2001. ISBN 0-321-05677-9
[22] R Core Team (2014). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL http://www.R-project.org/ (retrieved August 2020)
[23] Brown, James Dean. "Choosing the Right Type of Rotation in PCA and EFA". In Shiken: JALT Testing & Evaluation SIG Newsletter, 13(3), November 2009, pp. 20–25
[24] Grant, S.; Aitchison, T.; Henderson, E.; Christie, J.; Zare, S.; McMurray, J.; Dargie, H. (1999). "A comparison of the reproducibility and the sensitivity to change of visual analogue scales, Borg scales, and Likert scales in normal subjects during submaximal exercise". Chest, 116(5): 1208–17. https://doi.org/10.1378/chest.116.5.1208

