
DEGREE PROJECT IN COMPUTER SCIENCE AND ENGINEERING, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2022

Performance and feature support of
Progressive Web Applications
A performance and available feature comparison between
Progressive Web Applications, React Native applications and
native iOS applications.

ANDERS NILSSON

Stockholm, Sweden 2022


Performance and feature
support of Progressive Web
Applications

A performance and available feature
comparison between Progressive Web
Applications, React Native applications
and native iOS applications.

ANDERS NILSSON

Degree Programme in Computer Science and Engineering


Date: March 10, 2022

Supervisors: Cyrille Artho, Kim Aarnseth, Daniela Attalla


Examiner: Benoit Baudry
School of Electrical Engineering and Computer Science
Host company: Northvolt AB
Swedish title: Prestanda och funktionsstöd för Progressiva
Webbapplikationer
Swedish subtitle: En prestanda och tillgänglig funktionsjämförelse
mellan progressiva webbapplikationer, React Native applikationer
och Native iOS.
© 2022 Anders Nilsson
Abstract | i

Abstract
Mobile platform fragmentation is one of the main challenges of mobile
development today, forcing developers to develop one application for each
targeted platform, which significantly impacts time and cost for application
development and maintenance. The fragmentation has given rise to cross-
platform application development tools and frameworks, making it possible
to develop one single application compatible with several platforms. This
thesis focuses on the web-based approach Progressive Web Applications
(PWAs), which, in contrast to previous approaches, targets both mobile
and desktop devices. We aim to identify the supported features, evaluate
PWAs’ suitability for QR code scanning, and compare their performance to
alternative approaches on iOS. Specifically, we cover a set of 33 features
and measure response times, CPU and memory utilization, geolocation
accuracy, and QR code scanning correctness. We developed three benchmark
artifacts for the performance analysis: a PWA, a React Native application, and
a native iOS application, and conducted automated run-time experiments using
the tools Xcode and XCUITest.
The performance evaluation shows that native applications performed
best in memory and CPU utilization, whereas React Native achieved the
shortest response times. The feature evaluation shows that the majority of the
features are supported or partially supported for PWAs, and that the support
continues to grow. Still, PWAs lack support for crucial mobile features such as
push notifications and background synchronization, making PWAs insufficient
for advanced mobile application development on iOS. Nevertheless, this
study shows that PWAs are well worth considering for applications with low
requirements.

Keywords
Mobile application development, Cross-platform tools, Progressive Web
Applications, Performance analysis, iOS, QR code scanning
Sammanfattning | iii

Sammanfattning
Fragmentering av mobilplattformar är en av de största utmaningarna inom
mobilutveckling, vilket tvingar utvecklare att utveckla en applikation för
varje specifik plattform, vilket avsevärt påverkar tid och kostnad för
applikationsutveckling och underhåll. Fragmenteringen har gett upphov
till plattformsoberoende applikationsutvecklingsverktyg och ramverk, vilka
möjliggör utveckling av en enda applikation kompatibel med flertalet
plattformar. Det här examensarbetet fokuserar på det webbaserade
tillvägagångssättet Progressiva Webbapplikationer (PWAs), som till skillnad
från tidigare tillvägagångssätt, riktar sig till både mobila och stationära
enheter. Den här studien syftar till att reda ut vilka funktioner som stöds av
PWAs, utvärdera PWAs lämplighet för QR-kodskanning och deras prestanda
jämfört med alternativa tillvägagångssätt på iOS. Mer specifikt täcker
den här studien en utvärdering av 33 essentiella mobilfunktioner samt en
prestandaanalys genom mätning av svarstid, CPU- och minnesanvändning,
geolokaliseringsnoggrannhet och QR-kodsskanningens korrekthet. Vi utvecklade
tre benchmark-artefakter för prestandaanalysen: en PWA, en React Native-
applikation och en inbyggd iOS-applikation, och genomförde automatiserade
experiment med verktygen Xcode och XCUITest.
Prestandautvärderingen visar att inbyggda applikationer presterade bäst
i minne och CPU-användning, medan React Native uppnådde de kortaste
svarstiderna. Funktionsutvärderingen visar att majoriteten av funktionerna
stöds eller delvis stöds för PWAs, och att stödet fortsätter att växa. Ändå
saknar PWAs stöd för viktiga mobila funktioner som push-meddelanden och
bakgrundssynkronisering, vilket gör PWAs otillräckliga för utveckling av
avancerade mobilapplikationer på iOS. Däremot är PWAs väl värda att överväga
för applikationer med lägre krav.

Nyckelord
Mobilappsutveckling, plattformsoberoende verktyg, progressiva webbappli-
kationer, prestandaanalys, iOS, QR-skanning
Acknowledgments | v

Acknowledgments
First, I would like to thank all the people who helped me conduct my thesis.
I want to thank my examiner Benoit Baudry, who helped me define my
thesis’s focus points and scope. Then, I want to thank Northvolt and my
supervisors, Kim Aarnseth and Daniela Attalla, for being supportive and
genuinely interested in my work. Finally, I want to thank my academic
supervisor, Cyrille Artho, who has given me great support and valuable
guidance throughout the whole project.

Stockholm, March 2022


Anders Nilsson
Contents | vii

Contents

1 Introduction 1
1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Hypothesis . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Purpose and sustainability . . . . . . . . . . . . . . . . . . . 5
1.5 Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.6 Research Methodology . . . . . . . . . . . . . . . . . . . . . 6
1.7 Delimitations . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.8 Structure of the thesis . . . . . . . . . . . . . . . . . . . . . . 7

2 Background 9
2.1 Mobile application development . . . . . . . . . . . . . . . . 9
2.1.1 Native applications . . . . . . . . . . . . . . . . . . . 10
2.1.2 Web applications . . . . . . . . . . . . . . . . . . . . 11
2.1.3 Hybrid applications . . . . . . . . . . . . . . . . . . . 12
2.1.4 Interpreted applications . . . . . . . . . . . . . . . . . 12
2.2 Progressive Web Applications . . . . . . . . . . . . . . . . . 13
2.2.1 Service Workers . . . . . . . . . . . . . . . . . . . . 14
2.2.2 Web app manifest . . . . . . . . . . . . . . . . . . . . 16
2.2.3 Application shell . . . . . . . . . . . . . . . . . . . . 16
2.3 Frameworks . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3.1 Swift & SwiftUI . . . . . . . . . . . . . . . . . . . . 17
2.3.2 React JS . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3.3 React Native . . . . . . . . . . . . . . . . . . . . . . 17
2.4 PWA feature detection tools . . . . . . . . . . . . . . . . . . 18
2.4.1 Steiner’s PWA feature detector . . . . . . . . . . . . . 19
2.4.2 What Web can do today . . . . . . . . . . . . . . . . 19
2.5 Xcode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.5.1 XCTest . . . . . . . . . . . . . . . . . . . . . . . . . 19

2.5.2 Xcode Instruments . . . . . . . . . . . . . . . . . . 20


2.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

3 State of the art 21


3.1 Feature evaluation . . . . . . . . . . . . . . . . . . . . . . . . 21
3.2 Feature support . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.3 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.3.1 Response times and application size . . . . . . . . . . 22
3.3.2 CPU and memory consumption . . . . . . . . . . . . 23
3.3.3 Energy consumption . . . . . . . . . . . . . . . . . . 24
3.4 User experience . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.5 Code quality . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

4 Method 27
4.1 Research questions . . . . . . . . . . . . . . . . . . . . . . . 27
4.2 RQ1: Feature support and detection . . . . . . . . . . . . . . 28
4.2.1 Feature support . . . . . . . . . . . . . . . . . . . . . 28
4.2.1.1 Accessibility . . . . . . . . . . . . . . . . . 28
4.2.1.2 Installation and Storage . . . . . . . . . . . 29
4.2.1.3 Display and Screen control . . . . . . . . . 29
4.2.1.4 Background tasks and Notifications . . . . . 30
4.2.1.5 Device hardware and Surroundings . . . . . 30
4.2.2 Feature detection . . . . . . . . . . . . . . . . . . . . 31
4.3 RQ2: QR code scanning . . . . . . . . . . . . . . . . . . . . 32
4.3.1 Requirements . . . . . . . . . . . . . . . . . . . . . . 32
4.3.2 QR code scanning experiment . . . . . . . . . . . . . 32
4.4 RQ3: Performance . . . . . . . . . . . . . . . . . . . . . . . 33
4.4.1 Geolocation experiment . . . . . . . . . . . . . . . . 34
4.4.2 Navigation experiment . . . . . . . . . . . . . . . . . 34
4.4.3 Scrolling experiment . . . . . . . . . . . . . . . . . . 34
4.5 Experimental setup . . . . . . . . . . . . . . . . . . . . . . . 35
4.6 Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.6.1 Scanning correctness . . . . . . . . . . . . . . . . . . 36
4.6.2 Clock monotonic time . . . . . . . . . . . . . . . . . 36
4.6.3 CPU time . . . . . . . . . . . . . . . . . . . . . . . . 36
4.6.4 Memory consumption . . . . . . . . . . . . . . . . . 37
4.6.5 Geolocation accuracy . . . . . . . . . . . . . . . . . . 37
4.7 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

4.7.1 Analysis of Variance . . . . . . . . . . . . . . . . . . 38


4.7.2 Post-hoc analysis . . . . . . . . . . . . . . . . . . . . 38
4.8 Benchmark artifacts . . . . . . . . . . . . . . . . . . . . . . . 39
4.8.1 QR code scanning . . . . . . . . . . . . . . . . . . . 40
4.8.2 Geolocation . . . . . . . . . . . . . . . . . . . . . . . 41
4.8.3 Navigation . . . . . . . . . . . . . . . . . . . . . . . 41
4.8.4 Scroll View . . . . . . . . . . . . . . . . . . . . . . . 42

5 Results 43
5.1 RQ1: Feature support . . . . . . . . . . . . . . . . . . . . . . 43
5.1.1 Accessibility . . . . . . . . . . . . . . . . . . . . . . 45
5.1.2 Installation and Storage . . . . . . . . . . . . . . . . . 45
5.1.3 Display and Screen control . . . . . . . . . . . . . . . 46
5.1.4 Background tasks and Notifications . . . . . . . . . . 47
5.1.5 Device hardware and Surroundings . . . . . . . . . . 47
5.1.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . 48
5.2 ANOVA assumptions analysis . . . . . . . . . . . . . . . . . 49
5.3 RQ2: QR code scanning . . . . . . . . . . . . . . . . . . . . 50
5.3.1 Scanning requirements . . . . . . . . . . . . . . . . . 50
5.3.2 Scanning correctness . . . . . . . . . . . . . . . . . . 51
5.3.3 Response time . . . . . . . . . . . . . . . . . . . . . 51
5.3.4 CPU usage . . . . . . . . . . . . . . . . . . . . . . . 52
5.3.5 Memory consumption . . . . . . . . . . . . . . . . . 53
5.3.6 Statistical analysis . . . . . . . . . . . . . . . . . . . 54
5.3.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . 55
5.4 RQ3: Performance . . . . . . . . . . . . . . . . . . . . . . . 56
5.4.1 Response time . . . . . . . . . . . . . . . . . . . . . 56
5.4.2 CPU usage . . . . . . . . . . . . . . . . . . . . . . . 58
5.4.3 Memory consumption . . . . . . . . . . . . . . . . . 61
5.4.4 Geolocation accuracy . . . . . . . . . . . . . . . . . . 65
5.4.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . 67

6 Discussion 69
6.1 RQ1: Feature support . . . . . . . . . . . . . . . . . . . . . . 69
6.2 RQ2: QR code scanning . . . . . . . . . . . . . . . . . . . . 70
6.3 RQ3: Performance . . . . . . . . . . . . . . . . . . . . . . . 71
6.4 Threats to validity . . . . . . . . . . . . . . . . . . . . . . . . 72

7 Conclusions and Future work 75


7.1 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
7.2 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

References 79

A Common application features 91

B Bartlett’s test results 92


List of Figures | xi

List of Figures

2.1 Mobile development approaches. A categorization of different
mobile application development approaches (adapted from [9]). . . 10
2.2 This figure illustrates the service worker and how it handles
incoming user requests. . . . . . . . . . . . . . . . . . . . . . 15

4.1 The benchmark artifacts’ accessibility list view. . . . . . . . . 39

5.1 Example QQ plots from the normality assumption analysis.
Figure 5.1a passed the normality test and its data is assumed
to be normally distributed, whereas Figure 5.1b failed the test. 50
5.2 Box plot describing the clock monotonic time in seconds (s)
per device for the scanning experiment. . . . . . . . . . . . . 52
5.3 Box plot describing the CPU time in seconds (s) per device for
the scanning experiment. . . . . . . . . . . . . . . . . . . . . 52
5.4 Box plot describing the RAM in mebibytes (MiB) per device
for the scanning experiment. . . . . . . . . . . . . . . . . . . 53
5.5 Box plot describing the ComputedRAM in mebibytes (MiB)
per device for the scanning experiment. . . . . . . . . . . . . 54
5.6 Box plots describing the clock monotonic time in seconds (s)
per device and experiment. . . . . . . . . . . . . . . . . . . . 57
5.7 Box plots describing the CPU time in seconds (s) per device
and experiment. . . . . . . . . . . . . . . . . . . . . . . . . . 60
5.8 Box plots describing the RAM used in mebibytes (MiB) per
device and experiment. . . . . . . . . . . . . . . . . . . . . . 62
5.9 Box plots describing the ComputedRAM in mebibytes (MiB)
per device and experiment. . . . . . . . . . . . . . . . . . . . 64
5.10 Box plots describing the horizontal accuracy achieved in
meters (m) per device and location. . . . . . . . . . . . . . . . 66
List of Tables | xiii

List of Tables

4.1 Feature implementation summary for Apple App Store’s 25
top free applications in Sweden [68]. The use of navigation
and scroll views was manually reviewed by testing the
applications. Usage of geolocation was derived from each
application’s app integrity section on the Apple App Store. . . . 33
4.2 Overview of the physical devices and important specification
details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.3 Overview of the statistical methods used depending on the
violated assumption. . . . . . . . . . . . . . . . . . . . . . . 39
4.4 Web-based QR code scanning libraries. . . . . . . . . . . . . 40

5.1 Feature support for PWAs and React Native applications.
Supported features are notated with a check mark (✓),
unsupported with a cross (✗), and partly supported features
with a triangle (△). . . . . . . . . . . . . . . . . . . . . . . 44
5.2 Normality and variance analysis overview for the metrics. . . . 49
5.3 Classification of QR code scanning requirements. . . . . . . . 51
5.4 Percentage of correct scanning attempts per framework and
code. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.5 The scanning experiment results per metric and framework.
The Mean columns display the means for each application,
over 100 runs for the clock monotonic time metric and over
30 runs for the other metrics. The Welch’s ANOVA column
displays the p-value from the Welch’s ANOVA analysis, and the
Rank columns rank the frameworks from 1 (best) to 3 (worst). . . . 55
5.6 A summary of the frameworks’ run-time performance ranks in
scanning. Ranks are provided per metric, where 1 represents
the most performant framework and 3 the worst. . . . . . . . . 55

5.7 Clock monotonic time results per experiment and framework.
The Mean columns display the means for each application over
100 runs, the Welch’s ANOVA column displays the p-value from
the Welch’s ANOVA analysis, and the Rank columns rank the
frameworks from 1 (best) to 3 (worst). . . . . . . . . . . . . 58
5.8 CPU time results per experiment and framework. The Mean
columns display the means for each application over 30 runs,
the Welch’s ANOVA column displays the p-value from the Welch’s
ANOVA analysis, and the Rank columns rank the frameworks
from 1 (best) to 3 (worst). . . . . . . . . . . . . . . . . . . . 60
5.9 RAM results per experiment and framework. The Mean
columns display the means for each application over 30
runs, the Welch’s ANOVA column displays the p-value from the
Welch’s ANOVA analysis, and the Rank columns rank the
frameworks from 1 (best) to 3 (worst). . . . . . . . . . . . . 63
5.10 ComputedRAM results per experiment and framework. The
Mean columns display the means for each application over
30 runs, the Welch’s ANOVA column displays the p-value from
the Welch’s ANOVA analysis, and the Rank columns rank the
frameworks from 1 (best) to 3 (worst). . . . . . . . . . . . . 65
5.11 Horizontal min, mean and max accuracy achieved in meters (m). 67
5.12 Kruskal-Wallis analysis of the collected geolocation accuracy
data per device and location. . . . . . . . . . . . . . . . . . . 67
5.13 A summary of the frameworks’ run-time performance ranks.
The rank is given per metric, where 1 represents the most
performant framework and 3 the worst. . . . . . . . . . . . . . 68

A.1 A table describing the feature implementation status for
Apple App Store’s top 25 free applications in Sweden. Features
are notated with a check mark (✓) if the application
implements them and with a cross (✗) if not. . . . . . . . . . . 91

B.1 Bartlett’s p-values per device, experiment and metric. Entries
are notated with a hyphen (—) if the metric is not measured
in the experiment, or a cross (✗) if all collected values are
equal. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Listings | xv

Listings

2.1 PWA offline mode support check . . . . . . . . . . . . . . . . 18


List of acronyms and abbreviations | xvii

List of acronyms and abbreviations

ANOVA Analysis of Variance


API Application programming interface

CPU Central Processing Unit


CSR Client-side rendering
CSS Cascading Style Sheets

FPS Frames per second

GPS Global Positioning System


GUI Graphical User Interface

HCI Human-computer interaction


HTML HyperText Markup Language

IDE Integrated development environment

JSON JavaScript Object Notation

NFC Near-Field Communication

OS Operating system

PreRAM Idle-state memory consumption


PWA Progressiva Webbapplikationer
PWA Progressive Web Application

RAM Random access memory

SDK Software Development Kit


SLOC Source Lines of Code
SSR Server-side rendering

TTC Time to completion

UI User Interface
Introduction | 1

Chapter 1

Introduction

The growing popularity and rapid development of mobile device technology
have increased the demand for services’ accessibility, performance, and user
experience. This forces many companies to develop several platform-specific
applications for mobile and desktop devices for one single service, resulting
in a significant increase in time and cost for development, deployment, and
maintenance [1, 2, 3]. This has been identified as one of the biggest
challenges of mobile development today [4].
Fortunately, there have been important advancements in alternative techniques,
such as cross-platform application development tools and frameworks. The
primary purpose of cross-platform frameworks is to provide tools that let the
developer create one application targeting several platforms while retaining
the native performance [1]. One example of such a framework is React
Native, a widely used mobile application framework developed by Facebook.
With React Native, developers can create mobile applications compatible with
Android and iOS. A more recent approach is Progressive Web Applications
(PWAs), which are built on standard web technologies and extended with
additional features such as offline support, enabling native-like behavior on
mobile devices. PWAs are accessed through the browser, making them
available not only on mobile devices but on any device with a browser.
Furthermore, in contrast to native development, which requires applications
to be distributed once per platform, a PWA is distributed only once.
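The offline support mentioned above rests on service workers, which let a PWA intercept requests and serve cached responses. As a minimal sketch of how a page can check for this capability before relying on it (the function name and the navigator-like parameter are illustrative, not taken from the thesis artifacts):

```javascript
// Minimal sketch: detect whether an environment can support a PWA's
// offline mode. Service workers are the mechanism that lets a PWA serve
// cached responses while offline, so their presence is used as a proxy.
function supportsOfflineMode(nav) {
  return nav != null && typeof nav === "object" && "serviceWorker" in nav;
}

// In a real page one would guard the registration call:
//   if (supportsOfflineMode(navigator)) {
//     navigator.serviceWorker.register("/sw.js");
//   }

// Demonstration with mock objects, since no browser is available here:
console.log(supportsOfflineMode({ serviceWorker: {} })); // true
console.log(supportsOfflineMode({}));                    // false
```

Passing the navigator-like object as a parameter keeps the check testable outside a browser; in production code one would simply pass the global `navigator`.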
This chapter describes the specific problem that this thesis addresses, the
context of the problem, and the goals of this thesis project. Finally, it outlines
the structure of the thesis.

1.1 Background
When smartphones started to emerge and mobile applications became popular,
there was mainly one technique used for mobile development: the native
approach.
applications at the time, they often failed due to inadequate hardware access,
poor user experience, and extensive load times [5]. For instance, web
applications were not capable of accessing the device’s gyroscope or camera,
providing offline support, or sending push notifications [5].
With the increased popularity and accessibility demands on mobile
applications and the increased fragmentation across platforms, platform-
specific development started to become one of the greatest challenges of
mobile application development [4]. This gave rise to cross-platform
application development tools and frameworks, making it possible to develop
one single application compatible with several platforms. An early approach
was hybrid applications, combining native functionality and web technologies
to enable development with web technologies while still having access to
hardware functionality [6]. Another was the interpreted approach, where
source code gets interpreted into native components in a separate run-time
environment [7]. Although these approaches have been popular in mobile
application development recently, especially the interpreted approach, they
still suffer from limitations [1, 6].
Although native-specific features such as offline mode, push notifications,
and background synchronization were long considered impossible on the web,
great advancements in web technology have started to make this a reality [5].
With more widespread support for device-specific hardware
and software Application programming interfaces (APIs), a new kind of
web application has started to emerge: the PWA. PWAs are an extension of
web applications, with additional features such as offline mode, background
synchronization, and push notifications. Like web applications, PWAs can
be used on any device with a browser, adding another layer to recent cross-
platform approaches. PWAs strive to overcome the weaknesses of former
approaches by providing access to both web and native application features to
get the best of both worlds [8]. Another advantage of PWA development
is that it requires developers with knowledge of only one programming
language, making it easier to find, hire, and manage the development team.
This is particularly important for companies such as Northvolt, where all cost
reductions are crucial since they enable faster development and maintenance
with the same resources and more spending in areas where it matters.

Several companies have started to embrace this new technology, and
several successful applications have been developed, such as Twitter Lite,
OLA, Expedia, Tinder, Financial Times, and Forbes [9]. Notably, these
companies operate in different markets, suggesting that PWAs can be
suitable for various types of
applications and markets. However, since PWAs run in the browser, access
to the device’s hardware and software APIs is dependent on what the browser
supports, making browser support crucial for the advancement of PWAs.
Thus, the question arises: how suitable are PWAs for mobile application
development, and can they be the future approach to cross-platform
development?

1.2 Problem
To investigate the suitability of PWAs for mobile application development, it
is appropriate to consider them in the context of the requirements for a successful
cross-platform framework. For a cross-platform development framework to
be regarded as a valid substitute for native applications, it must have sufficient
support of built-in native features [1, 10, 11]. The importance of this was
shown as early as a decade and a half ago, when developers tried to
build web-based applications as an alternative to native applications [5].
Another crucial factor is resource consumption [1, 10]. Mobile devices
are limited in Central Processing Unit (CPU), memory, and battery capacity,
making efficiency in these respects indispensable. CPU
and memory usage directly impact the application’s energy consumption and
the device’s ability to run separate processes simultaneously. Long battery
life and efficient applications are essential for the user
experience of the device [3, 12] and for environmental sustainability. Another
critical metric for the user experience is response times [3].
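Response times are typically measured with a monotonic clock, which cannot jump backwards when the wall clock is adjusted. A small JavaScript sketch of the idea (illustrative only; the thesis’s actual measurements are collected with Xcode and XCUITest):

```javascript
// Sketch: measure an operation's response time with a monotonic clock.
// performance.now() is monotonic and available in browsers and Node.js,
// unlike Date.now(), which can jump when the system clock changes.
function measureResponseTime(action) {
  const start = performance.now();
  action();
  return performance.now() - start; // elapsed time in milliseconds
}

// Example: time a stand-in workload.
const elapsedMs = measureResponseTime(() => {
  let sum = 0;
  for (let i = 0; i < 1e6; i++) sum += i; // placeholder for the real operation
});
console.log(elapsedMs >= 0); // true
```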
For the most part, previous research concerning mobile applications and
PWAs only targets the Android platform. A reason for this could be that iOS
is lagging behind Android in PWA feature support. Another reason could be
that the Android profiling experience is better and provides more insightful
data [12]. Although Android accounts for the largest share of the Mobile
Operating System market shares, iOS is a major player reaching above 26 %
of mobile users worldwide [13] and 52 % of mobile users in Sweden [14],
making application support for the iOS platform important. As shown in
previous research, mobile application performance and feature availability
depend heavily on the evaluated framework and the targeted platform and
OS [3, 5, 12]. For this reason, assessing available features and performance on
both Android and iOS is essential for the ongoing academic discussion about
PWAs.
Nonetheless, there is a lack of knowledge about PWAs’ CPU and memory
consumption on iOS and about which native features are available. Only one
researcher, Steiner, has evaluated native feature support on iOS [5]. However,
this work only includes 15 features and was conducted in 2018. In addition, the
thesis by Fournier and the work by Biørn-Hansen et al. stand out as the only
studies that evaluate the CPU and memory performance of PWAs [15, 16].
Nevertheless, these research projects only cover evaluations on the Android
platform. Furthermore, work with characteristics similar to this research
project has also been suggested by Fournier [15], Johannsen [17], and
Biørn-Hansen [16].
Finally, Willocx et al. suggested the performance overhead of accessing
device-specific resources from cross-platform tools as another important topic
to investigate [3]. QR code scanning is an interesting feature in that sense since
it requires hardware access to the camera. QR code scanning is also a critical
feature for Northvolt, making it an excellent fit for this work. To gain a better
understanding of these areas and of how PWAs compare to other
approaches, this research aims to answer the following questions:

RQ1 What set of native iOS features are supported for PWAs and React Native
applications?

RQ2 How well are QR code scanning and recognition supported for PWAs, and
are the available tools as performant as those for native iOS applications?

RQ3 How does PWAs’ performance in memory consumption, CPU usage,
response times, and geolocation accuracy compare to that of React Native
and native iOS applications?

1.3 Hypothesis
The results of previous work indicate that web applications and PWAs
perform slightly better than cross-platform applications such as React Native
applications in terms of performance, but are limited in native feature support
and performance compared to native Android applications [3, 6, 7, 12, 16,
18, 19]. Although Android and iOS are different platforms and, as Biørn-
Hansen et al. concluded, differ significantly in memory consumption and CPU
usage [12], it is reasonable to believe that the native iOS application is the
most performant, followed by the PWA and the React Native application.
PWAs are probably the most restricted regarding native feature support,
followed by the React Native application. Fredrikson’s work showed that React
Native provides comprehensive support for most features on Android [7].
Thus, we believe that the same applies to iOS.
Fransson discovered that camera access was faster on the Android native
application than on the PWA [20]. A reasonable expectation is that we achieve
a similar result for native iOS and that the native iOS application is the most
performant in QR code scanning and recognition. In summary, this research
has the following hypotheses:

• PWAs are limited in native iOS feature support.

• There is wide support for web-based QR code scanners, but not as
extensive as for native iOS applications.

• PWAs are comparable with cross-platform applications in performance
but perform worse than native iOS applications.

1.4 Purpose and sustainability


The purpose of this study is to find out whether PWAs are sufficiently
supported to serve as a substitute for native mobile applications on
iOS, especially for applications with QR code scanning. The aim
is to create a basis for comparison, which could be used as guidance for
developers when choosing frameworks and technologies for future mobile
applications, but possibly even as a foundation for reconstructing existing
applications in different technologies. Furthermore, this study aims to answer
whether the performance of PWAs on iOS is similar to the results achieved on
Android.
As sustainability is at the heart of Northvolt’s mission and competitive
advantage, and the company is continuously working towards reducing the
environmental impact of the products, it is vital to consider the sustainability
aspects for all software developed at the company. Since resource usage,
such as CPU usage, significantly affects the product’s energy consumption in
production, this project could implicitly be used as a basis to make judgments
from an environmental sustainability perspective.

1.5 Goals
This section presents the goals we strive to achieve with this thesis. The goals
are as follows:

G1 Examine the current support of native features for PWAs and provide a
table describing it.

G2 Examine and compare available QR code scanning libraries for PWA
technologies.

G3 Collect quantitative performance data from experiments and statistically
evaluate it.

G4 Analyze and compare the achieved results and discuss them in the context
of related work.

1.6 Research Methodology


This study follows a positivist philosophy, where knowledge is derived from
experiments and comparative analysis [21]. The data for answering RQ1
is collected qualitatively by examining available APIs for PWAs and React
Native applications. The PWA features are further tested to verify correct
API behavior, mitigating the risk of classifying a feature as supported
when its API exists but behaves incorrectly, as discussed
by Steiner [5]. The data for RQ3 is collected quantitatively, while RQ2
combines a qualitative selection of QR code scanning libraries with
quantitative experiments. The combination of approaches gives this work a
balance between the experimental and the qualitative approach, as suggested
by Majchrzak et al. [9].
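The two-step feature check described above, first testing that an API exists and then that it behaves correctly, can be sketched as follows. The function name, the probe, and the three-way classification scheme are illustrative assumptions, not taken from the thesis artifacts:

```javascript
// Sketch of a two-step feature classification: a feature is "unsupported"
// if its API is absent, "supported" if a behavioural probe passes, and
// "partial" if the API exists but misbehaves. Names are illustrative.
function classifyFeature(globalLike, apiName, behaviourProbe) {
  if (!(apiName in globalLike)) return "unsupported";
  try {
    return behaviourProbe(globalLike[apiName]) ? "supported" : "partial";
  } catch (_err) {
    // An API that throws on a valid call exists but behaves incorrectly.
    return "partial";
  }
}

// Demonstration with a mocked navigator, since no browser is available here:
const mockNavigator = { vibrate: (ms) => Number.isFinite(ms) };
console.log(classifyFeature(mockNavigator, "vibrate", (f) => f(100) === true)); // "supported"
console.log(classifyFeature({}, "vibrate", (f) => f(100) === true));            // "unsupported"
```

This separation matters because, as noted above, an API can be present in a browser yet return incorrect results; a pure existence check would misclassify such a feature as supported.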
The hypothetico-deductive method is applied for the analysis, combining
inductive and deductive reasoning [21]. It consists of five parts: problem
identification, inductive hypothesis development, charting the hypothesis’s
implications by deduction, testing the hypothesis, and rejecting or refining
it in light of the results [21].
The pre-study aims to give an understanding of the current state of
the art. This constitutes a basis for problem identification, hypothesis
development, and implication deduction. The pre-study also incorporates
finding the most adequate QR code scanner library for PWAs, native iOS, and
cross-platform applications. Three comparable applications are implemented
and constitute the experiment benchmark used to test the hypothesis. We
implement the QR code libraries found in the literature study as part of
the benchmark applications, one for each framework: a React PWA, a
React Native application, and a native iOS application. Finally, we perform
the experiments and statistically analyze the results to reject or refine the
hypothesis for RQ3.

1.7 Delimitations
Since much work has been done evaluating mobile applications on Android,
this project will not include experiments on the Android platform. This work
will also only include a subset of the native features available, selected after
importance to stay within the scope of a Master’s thesis. Furthermore, while
examining native features, a feature will be classified as not supported if
there is no well-known API that provides access to it. This implies that
workarounds might exist that enable a feature even if it is classified as not
supported in this study.
Moreover, the benchmark and the experimental setup are limited in
several ways. A more extensive benchmark would have created more
reliable and comparable results, especially for the performance comparison.
Due to time constraints, we only evaluate one cross-platform approach and
one framework for the PWA. Including more frameworks and a deeper
analysis would deliver more reliable results.
Furthermore, we only examined two different devices. More extensive
experimentation, including various devices of different kinds, might impact
the outcome.

1.8 Structure of the thesis


Chapter 2 presents relevant background information about mobile application
development. Chapter 3 presents academic research related to PWAs and
cross-platform frameworks. Chapter 4 presents the methods used to solve the
problem. Chapter 5 presents and analyzes the results. Chapter 6 discusses the
results in a greater context. Chapter 7 summarizes the conclusions and
proposes potential future work.

Chapter 2

Background

This chapter introduces the concepts and technologies used in this study. It
covers basic background information about mobile application development,
including native development and different approaches for cross-platform
applications and their differences. Additionally, we discuss important
concepts and cornerstones of PWAs, the frameworks for the benchmark
applications, and tools for experiment automation and profiling on iOS.

2.1 Mobile application development


The development of mobile applications is very similar to the development
of other embedded applications but comes with a few additional technical
requirements. Wasserman listed eight technical requirements crucial for
mobile application development: potential interaction with other applications,
sensor handling, native and web-specific features, families of hardware and
software platforms, security, user interfaces, the complexity of testing, and
power consumption [22].
There exist several different strategies in mobile application development,
broadly divided into three approaches: native, run-time environments,
and generative. The native approach is used to develop platform-specific
applications, whereas the others target multiple platforms using the same code
base, typically referred to as cross-platform applications.
Applications using the run-time environments approach run inside a run-
time environment that interprets the source at run-time [6]. In this approach,
application source is platform-independent while the run-time environment
could differ among mobile platforms [6]. The run-time environment approach
can further be divided into web applications, PWAs, hybrid applications, and
interpreted applications. Web applications and PWAs run in the Web browser,
a hybrid application in a combination of web and native components, and
interpreted applications in a self-contained environment [6].

Figure 2.1: Mobile development approaches. A categorization of different
mobile application development approaches (adapted from [9]).

The generative paradigm consists of the model-driven approach and
the cross-compiled approach [6]. The model-driven approach is based on
Model-Driven Architecture, where the key concept is to provide a framework
enabling developers to describe an application on a high level, without the
demand of dealing with low-level technical issues [1]. In the cross-compiled
approach, the source code is written in common programming languages and
transformed into native code by cross compilers [1]. The most popular ones
are Flutter and Xamarin [23]. The generative approach is appealing but will
not be covered more in this work due to the limited timeframe for application
development. Figure 2.1 gives an overview of the key differences between the
different approaches.

2.1.1 Native applications


Native development was the starting point for mobile application development
and is used to develop platform-specific applications [9]. The source code is
typically developed using the platforms’ respective Software Development Kits
(SDKs) and frameworks. The source code is compiled into executable artifacts
and distributed to platform-specific application stores for users to download [6,
24]. This enables complete access to device-specific hardware and software
such as the camera, the microphone, and the Global Positioning System (GPS)
of the device [24, 25].
The closeness to the device hardware results in great performance, and
the usage of native components yields a pleasant user experience [6, 17, 24, 26].
However, since the source code is not shareable between platforms, developers
will have to repeat coding, testing, maintenance, and distribution using
different SDKs, frameworks, tools, and programming languages for each
specific platform, leading to high development and maintenance costs [9, 24,
26, 27]. In addition, the wide range of programming languages, tools, SDKs,
and frameworks used sets higher demands on the development team, which
must have platform-specific knowledge [6].
Native Android applications are typically developed in Java or Kotlin using
Android SDK, and native iOS applications are written in Swift or Objective-C
using Xcode.

2.1.2 Web applications


Web applications are developed using standard web technologies and utilize
the browser as their run-time environment [6]. Applications are developed
with responsive mobile-adapted design, making them usable on various
devices with different screen sizes. Web applications behave like regular
websites adapted for mobile use and therefore differ in appearance and
behavior compared to native applications [6].
The main advantages of this approach are the wide compatibility and
the reduced tech stack [6, 24, 25]. Web applications can be used on any
device with a browser. Furthermore, since the web server that hosts the
application code provides the latest version, the user will always access the
most recent version. This facilitates maintenance [24] and ensures that all
users are provided a similar experience across platforms [27]. However, web
applications lack access to device-specific hardware and software APIs, cannot
be distributed through App stores, nor used offline [1, 6, 24, 25, 27]. Also,
browser artifacts such as the address bar cannot be hidden, which deteriorates
the user experience and the native feel [7].
Despite the flaws of this approach, performance and support for web
technologies are constantly getting better [5, 9, 25]. With new functionality
and APIs available, a new type of web-based application has emerged:
the PWA [25], presented in detail in Section 2.2.

2.1.3 Hybrid applications


The hybrid approach combines native functionality with web technologies.
This approach equips developers with an environment where the source code
is similar to web applications, with the addition of APIs for platform-specific
hardware and software access [6].
The run-time environment consists of two major components: the web
rendering engine and the native engine, where the web rendering engine
is wrapped inside the native engine [6]. Application content is displayed
inside a Web View, where calls to hardware and software APIs are forwarded
to the native wrapper [6]. Examples of such views are UIWebView for
iOS and WebView for Android. The native wrapper not only enables
hardware and software access on the device, but also facilitates packaging
and distribution to several platform-specific App stores [27].
The greatest advantages of the hybrid approach are the reuse of source code
across platforms, access to device hardware, and the native-like distribution
through application stores [6, 24, 27, 28]. Nevertheless, hybrid applications
suffer in user experience and performance due to additional calls to the native
wrapper [1, 6, 24, 26, 29]. Hybrid applications are also limited to the API
support provided by the native wrapper [17]. Popular hybrid development
frameworks are Apache Cordova, Ionic, and PhoneGap [1, 6, 24, 30, 31].

2.1.4 Interpreted applications


Interpreted applications do not depend on the Web View, as hybrid
applications do, but instead use a custom run-time environment that interprets
create the run-time layer for each targeted platform, which provides a common
API for the developers using the framework [16]. The custom run-time
environment increases the flexibility and freedom with the cost of increasing
workload for the developer [6]. Interpreted applications are typically written
in common languages like JavaScript, and utilize a bridge that translates the
source at run-time, enabling hardware and software access on the device [7].
The primary advantage of the interpreted applications is their look and feel,
mainly due to the usage of native components [1, 6]. However, interpreting
the source at run-time harms application performance [1, 6]. Another
disadvantage is that the development environment needs to be kept up-to-
date by the framework maintainer [1]. More specifically, when new platform-
specific features are released or old deprecated by the vendors, the interpreted
framework must be updated to reflect these changes [1]. Hence, native features
are only available if they are supported by the development environment [1].
Popular interpreted application frameworks are React Native, NativeScript,
and Titanium [7, 24, 31]. React Native is the most commonly used and has
significantly increased the interest in the interpreted approach in mobile
application development [23, 31].

2.2 Progressive Web Applications


PWAs constitute a new set of applications, aiming to overcome the weaknesses
of former cross-platform approaches [27]. PWAs are developed using
modern web technologies and could be considered an extension of web
applications, including additional features such as offline mode, background
synchronization, and push notifications. PWAs strive to provide access
to platform-specific APIs from the native paradigm, combined with web-
specific abilities such as allowing the user to test the application without
requiring installation [8]. PWAs run, like web applications, in the
browser environment, but with the ability to hide typical browser artifacts,
such as the address bar and menus, making the PWA look and feel more native-
like [9].
A key disadvantage of PWAs is their dependence on the API support
provided by the browsers. The standard browser for iOS devices, Safari,
is lagging behind the browsers available for the Android platform, making the
possibilities of PWAs much more prominent on Android [5, 30].
However, there has been a lot of progress in the area, and we are likely to see
broader support in newer browser versions [5, 30].
PWAs utilize service workers, manifests, and other web-platform features
combined with progressive enhancement to give users a good and more native-
like experience [8]. These key artifacts enable most of the new functionality
added with PWAs. Although there is no clear definition of PWAs [5], some
key principles apply [8, 32]:
• Progressively enhanced — Provided PWA features are strongly dependent
on the browser and its support of common web standards [26].
PWAs should be functional and accessible independent of browser
choice, even though modern browsers deliver a richer experience,
including more features.
• Installable — The applications should be installable on the home screen
and feel appropriately integrated with the OS by, for instance, hiding the
URL bar or shifting the browser’s rendering orientation.

• Responsively designed — The user experience should be of a high
standard independently of the device and browser used.

• Discoverable — Search engines should be able to find application
content, enabled by manifests and the service worker registration scope.

• Linkable — It should be easy to share the application via the URL
without requiring the receiver to perform any complex application
installation.

• Network independent — The application should be usable even if it
experiences poor or no network access.

• Re-engageable — It should be easy to notify users whenever new
content is available.

• Secure — PWAs must be served over HTTPS, preventing third parties
from accessing or modifying any data shared between the application,
users, and the remote server.

• Up-to-date — The application should stay up-to-date whenever a new
version is available [8, 32].

PWAs are accessible on all platforms with a browser, allowing
developers to develop and maintain only one codebase targeting all different
platforms and devices. This reduces time and costs for application
development, maintenance, and deployment [15].

2.2.1 Service Workers


The service worker is the most important artifact of a PWA. It is defined
in a JavaScript file and runs on a separate application thread. The service
worker acts like a proxy server between the application,
the browser, and the network [33]. Service workers are responsible for
intercepting and modifying navigation and resource requests and caching
resources, giving the developer complete control over the application’s
behavior [9].
Caching provided by the service worker can be divided into two parts:
pre-caching and dynamic caching. Pre-caching refers to caching of static
assets, providing offline and fast access to the application shell [34].
However, pre-caching alone does not provide any dynamic data to the user
while offline. This is the purpose of dynamic caching. Dynamic caching
provides a fallback for offline mode by caching complete request/response
pairs [34], facilitating application usage even when a network connection is
missing [33].

Figure 2.2: This figure illustrates the service worker and how it handles
incoming user requests.
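As a minimal sketch of the two caching strategies described above (the cache name and pre-cached files are illustrative assumptions, not the benchmark application's actual assets), a service worker can pre-cache the application shell at install time and fall back to dynamic caching on fetch:

```javascript
/* sw.js — runs in the service worker global scope ("self").
   Cache name and pre-cached files are illustrative assumptions. */
const CACHE_NAME = "app-shell-v1";
const PRECACHE_URLS = ["/", "/index.html", "/app.js", "/styles.css"];

function attachCaching(sw) {
  // Pre-caching: store the static application shell at install time.
  sw.addEventListener("install", (event) => {
    event.waitUntil(
      sw.caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE_URLS))
    );
  });

  // Dynamic caching: answer from the cache when possible, otherwise fetch
  // from the network and store the request/response pair for offline use.
  sw.addEventListener("fetch", (event) => {
    event.respondWith(
      sw.caches.match(event.request).then(
        (cached) =>
          cached ||
          fetch(event.request).then((response) => {
            const copy = response.clone();
            sw.caches
              .open(CACHE_NAME)
              .then((cache) => cache.put(event.request, copy));
            return response;
          })
      )
    );
  });
}

if (typeof self !== "undefined") attachCaching(self);
```

The `attachCaching` wrapper exists only to keep the sketch self-contained; in a real service worker the listeners are usually attached directly on `self`.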

Furthermore, service workers facilitate access to push notifications and
background synchronization, which increase the native-like experience. For
security reasons, service workers are only allowed to run over HTTPS [19],
so as not to leave modified network requests wide open to man-in-the-middle
attacks [33].
There exist several open-source libraries that provide handy APIs on top
of the Service Worker API, a widely used one being Workbox. Workbox is
an open-source project created by Google, providing a framework-independent
API enabling caching of application assets [35]. The developer can choose
to generate a complete service worker by specifying a configuration file or
to develop a custom one using the API Workbox provides. Workbox offers
the opportunity to specify the files to cache, which are efficiently maintained
and kept up to date.
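For illustration, a configuration file for Workbox's generate-a-complete-service-worker option could look like the following sketch. The directory and file patterns are assumptions for a typical React build output, not the thesis's actual configuration:

```javascript
// workbox-config.js — consumed by the Workbox CLI ("workbox generateSW").
// Directory, patterns, and destination below are illustrative assumptions.
module.exports = {
  globDirectory: "build/",                  // where the built assets live
  globPatterns: ["**/*.{html,js,css,png}"], // files to pre-cache
  swDest: "build/sw.js",                    // where the generated worker is written
};
```

Running the CLI against this configuration emits a complete service worker that pre-caches every matching file and keeps the cached copies up to date across deployments.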

2.2.2 Web app manifest


The web app manifest is a JavaScript Object Notation (JSON) file with
metadata about the web application, including application name, icons, launch
URL, author, version, description, and more [36]. This makes the application
installable and behave similarly to the native app on the home screen when
installed. The web app manifest also includes behavioral information such as
background color, launch screen, scope, orientation, and display, empowering
a richer user experience. For instance, this enables the developer to hide the
URL bar or force the application to launch in landscape mode. The complete
list of manifest properties can be found at MDN Web Docs [36].
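A small web app manifest might look as follows (all values are illustrative); the file is typically linked from the page head with `<link rel="manifest" href="/manifest.json">`:

```json
{
  "name": "Example PWA",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "orientation": "portrait",
  "background_color": "#ffffff",
  "theme_color": "#004225",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

Here `display: "standalone"` hides the browser's URL bar, and `orientation` fixes the launch orientation, illustrating the behavioral properties mentioned above.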

2.2.3 Application shell


A web application can either be generated on the client using the Client-
side rendering (CSR) technique, be rendered server-side using the Server-side
rendering (SSR) technique, or use a combination of both. The application
shell technique utilizes a mix of both and is the most used PWA rendering
technique [37]. The application shell is the minimal set of HyperText Markup
Language (HTML), Cascading Style Sheets (CSS) and JavaScript required
to power the user interface of the application [38]. Compared to native
applications, one could think of the application shell as the bundle of code
published to app stores [38].
The application shell architecture has three core features: fast loading
times, the ability to be cached, and the ability to dynamically display
content [38]. By pre-caching the static content on the user’s first visit,
only dynamic data needs to be fetched from the remote server for future
requests and application visits [39]. Pre-caching also enables a
fast first content rendering from the second time the application is visited [39].
Furthermore, caching of the application shell allows the user to get some
content rendered even when the device has no internet connection [39].

2.3 Frameworks
A native iOS application and an interpreted application in React Native will
serve as the baseline for comparison to the React PWA in this study. Swift
and SwiftUI together offer the most modern way of developing native iOS
applications, motivating their usage. React Native was chosen due to its
popularity, its proximity to PWAs in the use of web technologies, and its promising
results in previous studies [7, 12, 30, 31, 40, 41]. React JS was primarily
chosen due to its vast popularity and similarity with React Native [42]. This
section briefly introduces the frameworks for the benchmark applications.

2.3.1 Swift & SwiftUI


Swift is the most modern programming language for iOS development and
can be used to create applications for all different Apple devices. SwiftUI is
a user interface framework, making it easy for developers to design, develop,
and control the application’s user interface. Swift and SwiftUI are the most
modern ways to create Apple applications, compared to the older alternatives
Objective-C and UIKit. By using SwiftUI together with Swift, developers can
bring even better user experiences than before by only using one set of tools
and APIs [43].

2.3.2 React JS
React JS is an open-source user interface framework created by Facebook. It
provides developers with tooling to build front-end web applications. React
JS only provides state management and rendering functionality, meaning that
React developers usually need to utilize additional libraries when developing
applications. React JS is currently the most popular framework worldwide for
front-end development [42].

2.3.3 React Native


React Native is an interpreted mobile application framework built upon React
JS, allowing developers to create mobile applications compatible with both
Android and iOS. React Native combines natively rendered user interfaces
with a run-time for JavaScript-based logic. The latter is enabled through
on-device language interpreters that interpret the JavaScript-based code into
platform-specific interface components [16]. Examples of such on-device
language interpreters are JavaScriptCore and V8. React Native is, together
with Flutter, the most popular cross-platform framework for mobile application
development [23, 31].

2.4 PWA feature detection tools


Feature detection for web-based applications can be laborious since several
browsers and browser versions exist, with substantially different support for
web-based APIs. PWA feature detection tools automate the feature support
detection process and can be used to test any browser and version. Web-
based feature detection tools are typically built using web technologies and
provided as a standard web application. To check the feature support for a
specific browser version, one can visit the website using that particular browser
version. When the page has loaded, they check for the existence of the service
worker container object, a read-only property providing access to registration,
removal, upgrade, and communication with the service worker [44]. If the
property exists, they will typically register a new service worker using the
service worker container object. If the registration is successful, the tools
proceed by checking the existence of feature-specific APIs and API methods.
Finally, when all checks have been performed, the tools commonly provide
the list of features and their support status for the tested browser version in the
application’s Graphical User Interface (GUI).

Listing 2.1: PWA offline mode support check


window.addEventListener("load", () => {
  if (navigator.serviceWorker) {
    navigator.serviceWorker.register("sw.js")
      .then((swRegistration) => {
        updateUI({features: {
          "Offline mode": "caches" in window,
          // Other feature checks go here...
        }});
      });
  } else {
    updateUI({error: "Service Workers Are Not Supported"});
  }
});

For instance, to check the browser’s offline support, a detection tool
typically checks the service worker container object and registers a service
worker as described above. Then, it proceeds by checking whether the Cache
API is available by checking the existence of the window’s read-only property
caches. The Cache API provides a storage mechanism for request/response
pairs enabling the application to be used offline [45]. If the property exists, it
will be classified as supported, and as not supported if not. A code example
for this flow is provided in Listing 2.1.

2.4.1 Steiner’s PWA feature detector


Steiner’s PWA feature detector is an open-source application that provides an
easy way to test browser support of PWA features [46]. The tool first tries
to register a Service Worker. If it succeeds, it executes the included feature
detection tests. The result is finally presented in a table in the application’s
user interface. The tool performs method existence checks for desired feature
APIs. Steiner’s PWA feature detector was created as part of his research in
2018 and currently supports the existence testing of 18 features [5].

2.4.2 What Web can do today


What Web Can Do Today is another open-source application for PWA feature
detection [47]. This tool provides support checks of 49 different features. As
in Steiner’s PWA feature detector, this tool only checks for method existence
of the desired feature APIs. However, the existence check is more detailed
than in Steiner’s tool. For instance, when checking for push notification
support, it not only checks for the existence of window.PushManager, but also
checks for the existence of the non-standard proprietary solution available for
OS X, window.safari.pushNotification [48]. In addition, What Web Can Do Today
also implements some of the features, allowing the user to test their behavior.
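A check in that spirit might be sketched as follows. The helper name is ours, and the function takes a window-like object as a parameter purely to keep the sketch self-contained; it is not code from either detection tool:

```javascript
// Illustrative push-notification support check in the style described above:
// look for the standard PushManager API first, then for Safari's
// non-standard window.safari.pushNotification fallback.
function pushNotificationsSupported(win) {
  if ("PushManager" in win) return true;
  return "safari" in win && "pushNotification" in win.safari;
}
```

In a browser one would call `pushNotificationsSupported(window)` and report the boolean result in the tool's feature table.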

2.5 Xcode
Xcode is Apple’s integrated development environment (IDE) used to build
applications for Apple devices. Xcode provides various tools to manage the
entire development workflow, from developing to testing, optimizing, and
releasing to the App Store [49]. Xcode provides the testing framework
XCTest and the profiling tool Xcode instruments, used for experiment
automation and profiling in this study.

2.5.1 XCTest
XCTest is a testing framework provided by Apple that can be used to create
and run automated unit tests, performance tests, and User Interface (UI) tests
for Xcode projects [50]. XCUITest is the UI testing tool provided by XCTest.
It is a multiprocess testing tool that utilizes a separate test runner process to
drive interactions with the application. Whenever a UI action is performed
in XCUITest, the framework makes sure to synchronize with the application’s
state by checking that the application is idle before proceeding with the next
action [51].
XCTMetric is the protocol used in XCTest for performance measurements.
The protocol defines methods that objects must provide when gathering
metrics during performance tests [52]. XCTest supplies six different metrics
classes adopting the XCTMetric protocol, for which XCTClockMetric is used
in this study [53].

2.5.2 Xcode instruments


Xcode instruments is Xcode’s profiling tool and can be used to gather performance
data from running applications [54]. Xcode instruments supplies performance
data such as memory usage, CPU activity, disk activity, and graphics
operations. In contrast to XCTest, Xcode instruments can profile all
running processes on the device. This ability is vital for PWA performance
measurements on iOS since the functionality of its run-time environment
WebKit API is segregated between the UIProcess (MobileSafari) and the three
auxiliary processes: WebContent, Networking, and Storage [55]. In addition,
application interactions might impact other running processes, so getting the
complete picture of the performance footprint is crucial for fair comparisons.

2.6 Summary
As discussed in this chapter, there exist numerous approaches and frameworks
for mobile application development today. This study targets PWAs and the
native and the interpreted paradigms. Since we target the iOS platform in
this study, it is sufficient to develop a native iOS application as a reference
for comparison. Having a native application as a baseline is important
since performance-oriented comparison without a baseline is intrinsically
difficult [12]. The interpreted application is developed using React Native and
the PWA using React JS, because of their wide popularity and similarity. The
similarity between the frameworks eases benchmark development and results
in a better comparison [7, 15]. Another reason for choosing these frameworks is
that they are widely used within the company, where many web applications
are written in React JS and the current mobile factory application in React
Native. The open-source project Workbox is used to develop the service
worker. Workbox was selected due to its vast popularity and to save time,
since writing custom service workers can be time-consuming [17].

Chapter 3

State of the art

Although PWAs have created an interest in mobile and web development, the
academic involvement is still low [9]. Majchrzak et al. tried to increase the
academic interest with their work analyzing the foundations of PWAs in cross-
platform development [9]. They concluded that it is too early to determine
whether PWAs can replace existing cross-platform development approaches
but emphasized that PWAs can contribute to a richer development experience
and maybe even better apps. Therefore, they urge academia to perform more
work in the field, balancing experimental and qualitative research.
This chapter gives an overview of the most important work related to PWAs
and other cross-platform frameworks. It is divided into four main topics:
feature evaluation, performance, user experience, and code quality. It ends
with a summary, including the key takeaways from this chapter.

3.1 Feature evaluation


Heitkötter et al. created a set of criteria to compare mobile development
approaches and used it to evaluate different cross-platform frameworks [6].
The evaluation consists of 14 criteria categorized by infrastructure and
development. Rieger et al. extended this work with two successive papers,
providing a weighted evaluation framework including over 30 criteria divided
into four categories [56, 57].
Delía et al. conducted two successive works analyzing cross-platform
features divided into three different categories: non-functional features,
developer features, and software project management features [24, 58]. The
results altogether show that native applications are preferred over alternative
approaches, followed by PWAs and the interpreted approach.

3.2 Feature support


Steiner examined the feature support of PWAs in different Web Views,
which are native application views displaying web content [5]. The greatest
contribution of this work is the open-source application, PWA feature
detector [46], providing an easy way of testing feature support for in-app or
stand-alone browsers. Another contribution is the evaluation of the level of
PWA feature support on different devices, Web Views, and browser versions.
The study shows a significant difference between different Web Views and
browser engines, that feature support constantly increases for newer browser
versions, and that feature support on iOS is much lower than on Android. The
low PWA feature support on iOS was also discovered by Adetunji et al. [27],
and mentioned in a series of works [7, 20, 30, 31]. Steiner’s feature detector
tool was also used by Adetunji et al. and Tandel et al. [27, 59].
Fredrikson used another PWA feature detection tool, What Web Can Do
Today [60], and concluded that PWAs lack access to a few of the features
on Android, but emphasized that they still have access to the most common
ones. This conclusion is strengthened by the works of Fransson [20] and
Adetunji et al. [27], who also used What Web Can Do Today [60] to detect
PWA feature support. In addition, Fredrikson discovered that React Native
applications are widely supported on Android [7].

3.3 Performance
Several studies have been conducted evaluating the performance of cross-
platform frameworks. This section presents work related to response times
and application size, CPU and memory usage, and energy consumption.

3.3.1 Response times and application size


Majchrzak et al. conducted two subsequent experimental studies, where the
latter extends the first [30, 31]. The authors compared the performance of
five applications, a native Android application, a hybrid Ionic application, an
interpreted application in React Native, a cross-compiled Xamarin application,
and a React PWA. They collected three metrics: application launch time,
installation size, and time from app-icon tap to toolbar render, all measured
on the Android platform using a Google Nexus 5X device. The authors
showed that PWAs have the smallest application size, the shortest launch
time, and a shorter app-icon-to-toolbar render time if the browser is already
running in the background, but a slower one if not. Furthermore, they showed that
React Native applications perform well compared to the other cross-platform
applications [30, 31].
Several works strengthen the conclusion of PWAs’ small application
size [24, 58, 59, 61, 62], and their fast application launch time and short first
paint metric compared to native and other cross-platform applications [19, 24,
34, 58, 59, 61, 63]. However, Kerssen states that the launch time is slightly
faster for the native application than the PWA on the iOS platform [61].
In 2017, Fransson performed experiments to compare the response time of
camera and geolocation access for PWAs and native Android applications on
the Android platform [20]. Fransson performed experiments on two devices,
a Huawei Honor 8 and a OnePlus X, and statistically evaluated the result using
a t-test. Fransson concluded that native Android applications have a shorter
response time for accessing the camera than PWAs, whereas the opposite is
true for geolocation [20].
Biørn-Hansen et al. showed that React Native applications perform well
compared to other cross-platform approaches when it comes to Time to
completion (TTC) for access to software and hardware APIs on Android, but
not as well as native Android applications [16]. Willocx et al. measured
the response time of in-app navigation and showed that some cross-platform
frameworks, including the interpreted framework Titanium, perform slightly
worse than native applications [3].

3.3.2 CPU and memory consumption


Fournier evaluated the performance of three different mobile applications, a
React PWA, a React Native application, and a native Android application using
the metrics smoothness, CPU usage, and memory consumption [15]. Four
different experiments, changing a text, scrolling a text, changing an image, and
scrolling an image view, were performed on devices and emulators targeting
the Android platform. The author showed that native Android applications
perform better than PWAs in all criteria and that PWAs perform better in CPU
and memory usage than React Native applications.
Dorfer et al. performed geolocation and networking experiments to
compare the resource and energy efficiency of React Native and native Android
applications [18]. The authors showed that React Native applications have
substantially higher CPU and memory usage than Android applications [18].
Furthermore, the authors claim that the increased resource consumption
impacted the energy consumption, where React Native consumes between 6%
and 8% more energy than native Android [18].


Biørn-Hansen et al. performed experiments on native Android, native
iOS, and cross-platform applications using the frameworks React Native,
Ionic, and Xamarin [12]. The experiments were performed on high-end
mobile devices on both Android and iOS, focusing on device hardware impact
and penalties caused by transitions and animations. Multiple performance
profiling tools were used to measure CPU usage, device memory usage, GPU
memory usage, and Frames per second (FPS). The researchers performed three
different experiments: a complex animation consisting of multiple elements,
the opening sequence of a side menu navigation pattern, and a transition
animation during in-app page navigation. The authors concluded that React
Native is the most performant cross-platform framework of the ones tested.
The authors also uncovered essential differences in device hardware utilization
during animations among the cross-platform technologies and that Android
and iOS differ significantly in memory consumption and CPU usage where
Android devices have lower CPU consumption but use more memory than
iOS devices across all application types.
Applications' high resource consumption on Android is corroborated by
the two works by Willocx et al. [2, 3]. The authors also showed that all
cross-platform applications use more persistent memory than their native
counterparts and that JavaScript frameworks are the least performant in CPU
usage and memory consumption [3].
In addition, Biørn-Hansen et al. performed a study evaluating the
performance overhead of the bridge between the cross-platform code and the
underlying operating system and hardware APIs [16]. The experiments were
conducted on the Android platform using a benchmark of a native application
and five cross-platform development frameworks, including React Native. The
authors measured CPU usage, Idle-state memory consumption (PreRAM),
Random access memory (RAM), the difference between RAM and PreRAM,
and the TTC for accessing geolocation, contacts, the file system, and the
accelerometer. The results indicate that native Android applications perform
better than the cross-platform alternatives in general but that some cross-
platform tools can perform better than native Android applications in specific
metrics [16].

3.3.3 Energy consumption


Kerssen compared the performance and energy consumption of a PWA, two
native applications, and a web application [61]. Kerssen concluded that
PWAs perform slightly better in energy consumption than native applications.


In addition, two works assessed the impact of service workers and caching
on the energy consumption of PWAs [63, 64]. The authors found no
significant difference in energy consumption either when utilizing service
workers or caching. In contradiction, Ciman et al. concluded that cross-platform
applications always consume more energy than native applications [65].
However, Ciman et al.'s study is older and evaluated web applications rather
than PWAs, which might be a reason for the ambiguous result.

3.4 User experience


Two previous works examined the user experience of PWAs. Fredrikson
evaluated how closely a React PWA and a React Native application could
emulate the native experience on the Android platform and concluded that
PWAs and React Native applications can emulate and, in some respects,
even outperform the native experience [7]. Andrade Cardieri et al. utilized a
Human-computer interaction (HCI) specialist to compare the user experience
between PWAs, web applications, and native Android applications [66].
The authors concluded that the overall user experience is comparable, and
nothing indicates that any of the tested applications offer a more enjoyable
interaction [66].
In addition, a few other works examined the user experience of cross-
platform frameworks. Both Axelsson and Carlström, and Hansson and
Vidhall showed that React Native offers a similar user experience as native
applications [40, 41]. The only noticeable difference is in the animations,
which the authors had problems implementing [40]. Conversely, Charland
et al. claim that web applications offer a poorer user experience than native
applications [67], and Dalmasso et al. concluded that the user experience of
cross-platform tools is not as good as for native apps [10]. However, the
two latter studies were conducted in 2011 and 2013 and could be considered
outdated compared to the other studies.

3.5 Code quality


Johannsen measured the structural complexity of the code inferred by
upgrading an Angular web application to an Angular PWA [17]. The author
used different metrics such as the cyclomatic complexity, Halstead effort, and
Source Lines of Code (SLOC). He concluded that the
added complexity is low, especially when using frameworks with automated
PWA tooling such as Angular's. Delía et al. strengthen this
conclusion with their work showing that the maintainability and code reuse
of PWAs are better compared to the interpreted approach and much better
compared to native [24, 58].
Abrahamsson and Berntsen evaluated the modifiability between a React
Native application and two native applications, one for iOS and one for
Android [25]. The authors concluded that React Native seems to be more
stable and modifiable than native applications, even though they emphasize
the benchmark used as an important limitation. Consistently, the work by Hansson
and Vidhall shows that about 75% of the React Native code can be reused over
both the Android and iOS platforms while adding complementary platform-
specific code is easy [41].

3.6 Summary
The largest body of related work focuses on different kinds of performance
criteria, where the most recurring are application size, response times, and
resource consumption. Previous work indicates that PWAs are comparable
to, and in some criteria even more performant than, native applications [31,
61]. Nevertheless, native applications surpass PWAs in memory and CPU
performance on Android [15]. React Native applications perform well
compared to other cross-platform applications, but are often outperformed by
native applications [15, 16, 31]. For the most part, previous work only targets
the Android platform.
Several studies have evaluated what kinds of features are important
and how their support can be detected with tools. PWAs and React Native
applications have adequate feature support on Android, but the feature support
is more limited on iOS [5, 7].
Finally, both PWAs and React Native applications provide a similar user
experience as native applications [7, 66].

Chapter 4

Method

This chapter provides an overview of the research method used in this thesis.
Section 4.1 recalls the research questions and motivates their relevance with
respect to previous work. Section 4.2 introduces the evaluated feature set
and the methodology used for feature detection and support classification.
Section 4.3 focuses on the methodology used to answer RQ2, and Section 4.4
the methodology used to answer RQ3. Section 4.5 presents the experimental
setup. Section 4.6 explains the metrics recorded during the experiments.
Section 4.7 introduces the statistical methods used to analyze the result.
Section 4.8 presents the benchmark artifacts.

4.1 Research questions


As introduced in Section 1.2, this thesis aims to answer the following:

RQ1 What set of native iOS features are supported for PWAs and React Native
applications?

RQ2 How well is QR code scanning and recognition supported for PWAs and
are available tools as performant as for native iOS applications?

RQ3 How does PWAs' performance in memory consumption, CPU usage,
response times, and geolocation accuracy compare to React Native and
native iOS applications?

Although iOS reaches about one-quarter of the mobile users worldwide,
most previous work only targets Android, which has resulted in a knowledge
gap between iOS and Android [13]. To minimize this gap, we focus on iOS in
this study.

For a cross-platform development framework to be regarded as a valid
substitute for native applications, it must have sufficient support of built-in
native features [1, 10, 11]. However, only one researcher, Steiner, has
evaluated native feature support on iOS [5]. Steiner’s work only covered 15
features and was conducted several years ago. Due to mobile application
development’s rapid progress, these results should be considered obsolete.
Thus, a modern and more comprehensive iOS feature study is essential for
the ongoing PWAs discussion.
Mobile devices are limited in CPU, memory, and battery capacity,
and slow response times can drastically worsen the user experience,
making efficiency in these aspects indispensable [3].
Additionally, we evaluate geolocation accuracy, as it is crucial for Northvolt’s
in-factory localization, and QR code scanning as it is a critical feature for
Northvolt. Previous works by Fournier, Biørn-Hansen et al., and Fransson
are the only studies evaluating PWAs' CPU and memory performance and in-app
response times [15, 16, 20]. Nevertheless, these studies only cover Android.

4.2 RQ1: Feature support and detection


This section consists of two parts. The first part is dedicated to presenting the
features used to answer RQ1, and the second part focuses on the methodology
used for feature detection and support classification.

4.2.1 Feature support


The feature set consists of 33 features, divided into five categories. We started
with a feature set of frequently recurring features from previous work, most
targeting Android, and revised it during discussions with domain experts
at Northvolt. In this way, we created a feature set of interest for mobile
application developers in general and Northvolt in particular.

4.2.1.1 Accessibility
Apple provides several features for its users that customize the appearance
of UI elements on the screen. This thesis tests eight accessibility
features, evaluating the framework's ability to retrieve the user's accessibility
preferences on the device.

• Appearance — Sets the system-wide default appearance. Valid targets
are dark and light modes, where the latter is the default.

• Bold text — Enables the user to display fonts in bold style.

• Increased contrast — Used to present content with increased contrast.

• Inverted colors — Inverts all colors displayed on the screen.

• Preferred languages — A list of the user’s preferred languages.

• Reduce motion — Reduces non-essential motions and animations.

• Reduce transparency — Minimizes transparency across elements.

• Text size — Enables the user to customize the system's default text size
on the screen.

4.2.1.2 Installation and Storage


The installation and storage category consists of five features, evaluating the
framework’s installation, storage, and file access capabilities.

• App market availability — Whether the application can be uploaded
to and installed from Apple's App Store.

• File access — The ability to access files, documents, images, and videos
on the device.

• Installability — Whether the application can be installed, including
app-specific metadata.

• Offline mode — Whether the application can be used without internet
access.

• Persistent storage — The application's capability to persistently store
data with guarantees to not be deleted without the user's consent.

4.2.1.3 Display and Screen control


The display and screen control category evaluates the framework’s ability to
control the application’s appearance and detect changes in the application life
cycle state.

• Fullscreen mode — The run-time environment's ability to present the
application in fullscreen mode, without run-time environmental artifacts
such as the URL bar in WebKit.

• Life cycle detection — The framework's capability to detect system-related
life cycle transitions between the iOS application states unattached,
active and inactive foreground, background, and suspension.

• Launch screen — The capability to customize the launch screen.

• Orientation lock — Whether application rendering can be locked into
landscape or portrait mode.

• Theme color — The capability to control the application's theme color,
customizing the application's surrounding user interface such as the
status bar.

4.2.1.4 Background tasks and Notifications


This category is based on four features and evaluates the framework’s ability
to send notifications and perform tasks while running in the background.

• Background fetch — The capability to manage downloads of large files
and software when the application is not running in the foreground.

• Background synchronization — The capability to synchronize data
when the application is running in the background. For instance,
deferring a user request, such as sending data to a server, while the
user lacks internet access until connectivity is restored.

• Local notifications — The capability to send notifications to attract the
user's attention. Local notifications are notifications triggered locally
from the application.

• Push notifications — The capability to re-engage subscribing users
with push notifications. In contrast to local notifications, push
notifications refer to remotely sent messages.

4.2.1.5 Device hardware and Surroundings


The hardware and surroundings category consists of 11 features and evaluates
the framework’s ability to interact with the device’s hardware APIs and
neighboring devices.

• Accelerometer — The ability to access data from the device's
accelerometer.

• AR — The ability to display USDZ files of virtual objects in 3D or AR.

• Bluetooth — The ability to pair with Bluetooth peripheral devices
nearby.

• Camera — The ability to access and use the device’s camera.

• Geolocation — The ability to access the device’s geographical location.

• Gyroscope — The ability to access data from the device’s gyroscope.

• Magnetometer — The ability to access data from the device's
magnetometer.

• Microphone — The ability to access and record audio with the device’s
microphone.

• Near-Field Communication (NFC) — The ability to interact with
nearby NFC devices.

• QR code scanning — The ability to scan and interpret QR codes,
including the symbology types QR code and data matrix.

• Vibrations — The ability to access the device’s built-in vibration motor.

4.2.2 Feature detection


Feature detection for React Native was performed by checking the relevant
API documentation for each technology. Most native functionality is
achievable in React Native applications, but the core library only provides a
subset of the features. Therefore, besides finding out whether the features
are inaccessible or partly supported due to the nature of the application
framework, feature detection for the React Native application also incorporates
finding out whether the feature is provided by the core library or by a third
party.
Feature detection for the PWA is a bit more difficult since web APIs target
several different browsers, and the support among them differs substantially.
PWA existence checks could be performed with Steiner’s PWA feature
detector [46] and the tool What Web Can Do Today [47]. However, these
tools do not cover all features of this work. Furthermore, Steiner’s PWA
feature detector only checks for API methods’ existence and does not invoke
them [46]. This is problematic when examining web-based application
support since the tool might incorrectly classify features as supported.
Subsection 2.4.1 describes an example of an incorrect classification due to
this.
To address the limitations of previous approaches, we implemented a new
feature detection tool in React JS as part of the PWA benchmark application.
The application extends the detection functionality of previous approaches by
including additional features and feature implementations for features passing
the API existence check. The feature implementations let us test the actual
behavior of the APIs, mitigating the risk of classifying features as supported
when the API exists but with undesired behavior.
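This two-phase strategy, an existence check followed by an actual invocation, can be sketched as follows. The sketch is illustrative, not the thesis's actual tool: the feature descriptor shape (`exists`/`probe`) and the three-way classification labels are assumptions.

```javascript
// Sketch of two-phase feature detection: a feature is only classified
// as "supported" if the API both exists and behaves as expected when
// invoked. The descriptor shape (exists/probe) is a hypothetical example.

async function classifyFeature(feature) {
  // Phase 1: existence check, as in Steiner's detector.
  if (!feature.exists()) {
    return "unsupported";
  }
  // Phase 2: invoke the API and inspect its actual behavior, mitigating
  // false positives where the API exists but misbehaves.
  try {
    const behaved = await feature.probe();
    return behaved ? "supported" : "partial";
  } catch {
    return "partial";
  }
}

// Example descriptor for geolocation (browser globals are guarded so
// the sketch also parses outside a browser):
const geolocationFeature = {
  exists: () =>
    typeof navigator !== "undefined" && "geolocation" in navigator,
  probe: () =>
    new Promise((resolve) =>
      navigator.geolocation.getCurrentPosition(
        () => resolve(true),
        () => resolve(false)
      )
    ),
};
```

With this shape, a feature like geolocation passes phase 1 on any browser exposing `navigator.geolocation`, but is only classified as supported once an actual `getCurrentPosition` call behaves as expected.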

4.3 RQ2: QR code scanning


The methodology used to answer RQ2 is twofold. First, we assessed PWAs'
suitability as QR code scanning applications on iOS by evaluating the set of
requirements presented in Subsection 4.3.1. Then, we conducted a run-time
experiment to assess PWAs' scanning performance compared to React Native
and native iOS. The run-time experiment is described in Subsection 4.3.2.

4.3.1 Requirements
The set consists of four requirements, intended to give insight into
whether PWAs are sufficiently supported for iOS factory scanning applications.
Firstly, support for the symbology types data matrix and QR code is
crucial since they frequently occur in the factory. Secondly, we check for
support of advanced camera features such as the torch and zoom. The
advanced camera capabilities are not mandatory but preferred.

4.3.2 QR code scanning experiment


The QR code scanning experiment tests the frameworks’ ability to detect and
interpret QR codes and their performance meanwhile. We experimented with
two key symbology types, QR codes and data matrices, and measured the
metrics scanning correctness, clock monotonic time, CPU time, RAM, and
ComputedRAM. These metrics are described in Section 4.6. Two different
codes for each symbology type were created with varying levels of complexity,
resulting in four different setups. The codes were printed on white paper
with a size of 5 cm². The device was set up in a fixed position 15 cm
above the code, with the code in the center of the device’s recording area.
The experiment was performed with the same lighting. The fixed setup is
appropriate since it creates equal conditions for all applications to succeed
and increases the reproducibility of the experiments.
The experiment profiles the interval from when the application's "QR
code scanning" button is pressed in the Hardware view until the application
has detected the code, recognized the data, and printed a "Scan succeeded!"
message to the application's UI. The attempt is classified as failed if the
application does not succeed within five seconds.
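The pass/fail rule above can be summarized in a small helper. The function and field names are hypothetical, and comparing the decoded payload against the printed code's payload is an assumption about what "recognized the data" entails.

```javascript
// Hypothetical sketch of classifying a single scanning attempt: the
// attempt succeeds only if the code is decoded before the five-second
// cutoff. The payload comparison is an illustrative assumption.

const SCAN_TIMEOUT_MS = 5000; // five-second cutoff from the experiment

function classifyAttempt({ decodedPayload, expectedPayload, elapsedMs }) {
  if (elapsedMs >= SCAN_TIMEOUT_MS) {
    return { succeeded: false, reason: "timeout" };
  }
  if (decodedPayload !== expectedPayload) {
    return { succeeded: false, reason: "wrong-payload" };
  }
  return { succeeded: true, reason: null };
}
```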

4.4 RQ3: Performance


RQ3 was evaluated with experiments, where the frameworks’ run-time
performance was measured in different scenarios. Specifically, we conducted
experiments testing three different scenarios: locating the user’s position,
navigating between views, and scrolling a view, while collecting the perfor-
mance metrics: clock monotonic time, CPU time, RAM, ComputedRAM, and
geolocation accuracy. The following subsections describe the experiments,
and Section 4.6 provides details about the metrics.
Navigation and Scroll views are prevalent UI components in popular
mobile applications. These features are implemented in all of Apple App
Store’s top 25 applications in Sweden, while 20 implement geolocation.
Table 4.1 summarizes these numbers. Furthermore, all these components
have diverse behavior and characteristics. Thus, they should be considered
generic and diverse enough to cover general application behavior. Finally, all
scenarios have been evaluated in previous research [3, 16, 18, 20], making
comparisons possible, which further strengthens them as suitable components
to test. Together, they are intended to examine the possible penalties
introduced by the cross-platform frameworks.

Feature implemented    App fraction    App percentage
Geolocation            20 / 25         80 %
Navigation             25 / 25         100 %
Scroll view            25 / 25         100 %

Table 4.1: Feature implementation summary for Apple App Store's 25 top free
applications in Sweden [68]. The utilization of navigation and scroll views
was manually reviewed by testing the applications. Usage of geolocation was
derived from the applications' app integrity section at Apple App Store.

Appendix A provides the complete list of the evaluated applications,
including the component implementation status.

4.4.1 Geolocation experiment


The geolocation experiment evaluates the performance for accessing the
device’s geolocation. The experiment profiles the interval from when the
application’s ”Watch current location” button is pressed until the first position
is displayed in the application’s UI. This experiment was conducted at two
different locations for each application. For this experiment, we also collected
the horizontal accuracy.

4.4.2 Navigation experiment


Navigation was evaluated with one experiment consisting of a series of
navigation transitions between application views. The navigation experiment
includes transitions of the bottom navigation and the stack navigation
components. The navigation series consists of the following ten transitions:
1. Press Bold text in the Accessibility screen’s list view.
2. Press the back button of the Bold text screen’s view.
3. Press Text size in the Accessibility screen’s list view.
4. Press the back button of the Text size screen’s view.
5. Press Background Tasks in the bottom navigation component.
6. Press Hardware in the bottom navigation component.
7. Press Installation in the bottom navigation component.
8. Press App market availability in the Installation screen’s list view.
9. Press Screen Control in the bottom navigation component.
10. Press Accessibility in the bottom navigation component.

4.4.3 Scrolling experiment


The scrolling performance was tested with one experiment. This experiment
evaluates the scrolling of a view filled with images. The experiment measures
the interval from when the scrolling starts until the application idles. The
scrolling experiment is similar to the scrolling experiment conducted in
Fournier’s work [15].

4.5 Experimental setup


The experiments were automated with the testing framework XCTest and
profiled with Xcode instruments using the Activity monitor template. All
experiments were repeated 100 times. For the geolocation experiment, this
means 50 repetitions per location.
From a generalization perspective, it is essential to consider more than
one device when evaluating the performance [12]. Furthermore, conducting
performance experiments on physical devices rather than virtual emulators
is vital for a trustworthy result since performance metrics for virtual
and physical environments follow different distributions and have different
correlations [69]. Therefore, we conducted the experiments using two physical
iOS devices, a low-end iPhone 8 and a high-end iPhone 13. Important
specification details for these devices are listed in Table 4.2.

Model       Released   OS Version   Memory (RAM)   Processor (CPU)
iPhone 8    2017       iOS 15.2     2 GB           Hexa-Core, 2 processors:
                                                   2.39GHz Dual-Core Monsoon,
                                                   1.6GHz Quad-Core Mistral
iPhone 13   2021       iOS 15.2     8 GB           Hexa-Core, 2 processors:
                                                   3.23GHz Dual-Core Avalanche,
                                                   1.9GHz Quad-Core Blizzard

Table 4.2: Overview of the physical devices and important specification
details.

Before conducting the experiments, we turned off background app
activities on the devices and updated their Operating system (OS) to the latest
version available, iOS 15.2. Furthermore, to minimize the impact of previous
experiments, we terminated all applications and restarted the device before
each experiment run [12]. XCTest was set in relaunch mode, meaning that
the experiment setup and tear-down, including application termination and
relaunch, were performed between each repetition.

4.6 Metrics
The clock monotonic time metric of the XCTest class XCTClockMetric was
used to record the elapsed time during the experiments. CPU and memory
measurements were collected with the Activity monitor in Xcode instruments.
Xcode instruments is convenient for profiling but forces the user to extract
the results manually. Thus, CPU and memory data extraction was only
performed for 30 repetitions per device and experiment. In addition to
these metrics, we captured the horizontal geolocation accuracy during the
geolocation experiment and scanning correctness during the QR code scanning
experiment.
The functionality of the modern WebKit API is segregated between the
UIProcess (MobileSafari) and the auxiliary processes WebContent,
Networking, and Storage [55]. Out of these, only WebContent significantly impacts
the performance. Therefore, we include the app-specific WebContent process
for PWA CPU and memory measurements. For the scanning experiment,
we also incorporate the mediaserverd process for memory measurements and
mediaserverd plus applecamerad (iPhone 8) or appleh13camerad (iPhone 13)
for CPU measurements since these processes were highly impacted. Note that
only CPU measurements of the camera processes were included since their
memory impact was negligible.

4.6.1 Scanning correctness


The scanning correctness metric was collected during the scanning experiment
and refers to the number of succeeded scanning attempts divided by the
total number of scanning attempts. High scanning correctness is vital for a
reliable and performant scanning application. The formula for the correctness
metric is given below:

Scanning correctness = Scanning attempts succeeded / Scanning attempts
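As a sketch, the metric is a plain ratio; the helper name is illustrative, not from the thesis's tooling.

```javascript
// Scanning correctness = succeeded attempts / total attempts.
function scanningCorrectness(succeededAttempts, totalAttempts) {
  if (totalAttempts === 0) {
    throw new RangeError("at least one scanning attempt is required");
  }
  return succeededAttempts / totalAttempts;
}

// e.g. 97 successful scans out of 100 attempts yields a correctness of 0.97.
```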

4.6.2 Clock monotonic time


The clock monotonic time was measured during all experiments using the
XCTest metric class XCTClockMetric [53]. XCTest measures the time
monotonically, meaning that it considers the time spent for executing the
experiment, including the time when the CPU is idle or running instructions
belonging to another process or thread [53]. Response times are crucial since
slow responses can result in a negative user experience [3].

4.6.3 CPU time


The CPU time was measured during all experiments and refers to the sum
of time that the CPUs are active and execute instructions for the performed
experiment [70]. The CPU time metric does not include time for when the
CPU is idle or has switched context to execute instructions of a different
process or thread [70]. The formula for the CPU metric is given below, where
t_i refers to the time elapsed on one specific CPU i. High CPU usage results
in higher battery consumption and could negatively impact other processes
running on the platform and thus the experience of using them [3, 15].

CPU time = Σ_{i=1}^{CPUs} t_i
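The summation can be sketched as a one-line aggregation; the function name is illustrative.

```javascript
// CPU time as the sum of active execution time t_i over all CPUs,
// excluding idle time and other processes' slices (which are simply
// never included in the per-CPU inputs).
function cpuTime(perCpuTimes) {
  return perCpuTimes.reduce((total, t) => total + t, 0);
}

// e.g. per-core active seconds for a hexa-core device:
// cpuTime([1.8, 1.7, 0.4, 0.3, 0.35, 0.25]) sums the six values.
```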

4.6.4 Memory consumption


The memory consumption was collected during all experiments and refers to
the applications’ physical memory utilization. Similar to the work conducted
by Biørn-Hansen et al., we collect RAM and ComputedRAM [16]. RAM
refers to the memory peak during the experiment, and ComputedRAM
constitutes the difference between the memory peak and the memory occupied
by the application before experiment execution [16]. The memory footprint is
particularly essential for low-end devices since the platform’s performance can
be drastically affected if the majority of the available memory is allocated [3].
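The two memory metrics can be expressed as a pair of trivial helpers; the names and the sample representation are illustrative assumptions, not the thesis's tooling.

```javascript
// RAM is the observed peak over the experiment's memory samples;
// ComputedRAM subtracts the memory the application already occupied
// before the experiment started, as defined by Biørn-Hansen et al.
function ram(samplesMb) {
  return Math.max(...samplesMb);
}

function computedRam(samplesMb, preExperimentMb) {
  return ram(samplesMb) - preExperimentMb;
}
```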

4.6.5 Geolocation accuracy


The geolocation accuracy metric was collected during the geolocation
experiment and refers to the horizontal accuracy of the provided geolocation
data. The geolocation API provides the location accuracy each time the
location is updated, which is collected and compared during the geolocation
experiment. The accuracy is essential for reliable location tracking.
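For the web side, collecting this metric could look roughly as follows. The `coords.accuracy` field (in meters) is part of the standard Geolocation API position object; the accumulator itself is an illustrative assumption.

```javascript
// Sketch of collecting horizontal accuracy from Geolocation API
// position updates. Each position carries coords.accuracy in meters;
// the tracker records every sample and can report the best (smallest).
function makeAccuracyTracker() {
  const samples = [];
  return {
    // Intended as the success callback for watchPosition().
    onPosition(position) {
      samples.push(position.coords.accuracy);
    },
    best() {
      return samples.length ? Math.min(...samples) : null;
    },
    samples,
  };
}

// In a browser this would be wired up roughly as:
//   const tracker = makeAccuracyTracker();
//   navigator.geolocation.watchPosition((p) => tracker.onPosition(p));
```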

4.7 Analysis
The collected data were evaluated using statistical analysis, where
characteristics of the collected data constituted the basis for the choice of statistical
method. First, we used variance analysis to determine whether there was
a difference in mean or median between the collected data per application.
Then, we performed post-hoc analysis to determine the specific groups that
are significantly different.

4.7.1 Analysis of Variance


The statistical method Analysis of Variance (ANOVA) analyzes the variance
between data groups to determine if the means from three or more groups
are significantly different [71]. ANOVA comes with three vital assumptions
that must hold to achieve accurate models. If these do not hold, more relaxed
statistical methods exist that are more robust under the conditions. The three
primary assumptions behind the ANOVA model are as follows [72]:

1. Each group has a normal population distribution.

2. These distributions have a common variance.

3. All data samples are drawn independently.

The third assumption is fulfilled for all samples, but assumptions 1-2 must
be verified. The normality assumption was confirmed visually by plotting a
QQ-plot of the residuals and checking that the residuals follow the 45-degree
line [73]. If the normality assumption holds, we continued by checking the
common variance assumption with Bartlett's test [73].
If assumption 1 holds but assumption 2, the homogeneity of variances, is
violated, there is an alternative to ANOVA called Welch's ANOVA [74]. As
with ANOVA, Welch's ANOVA compares group means to see if they are
equal. Furthermore, we use the Kruskal-Wallis method if assumption 1, the
normality assumption, is violated. Kruskal-Wallis is a non-parametric
alternative to ANOVA, without the data normality requirement [75]. In
contrast to ANOVA and Welch's ANOVA, which test the difference between
means, Kruskal-Wallis evaluates the median value [74].

4.7.2 Post-hoc analysis


While variance analysis determines whether there is a significant difference
between groups, post-hoc analysis determines which groups are significantly
different. The Tukey-HSD post-hoc method is optimal together with ANOVA,
while the post-hoc method Games-Howell is optimal together with Welch's
ANOVA [76]. For Kruskal-Wallis, it is appropriate to use Dunn's test [77],
whose p-values we adjust with Bonferroni correction. Table 4.3 summarizes
the methods used depending on the violated assumption.
Violated assumption   Analysis method   Post-hoc method

—                     ANOVA             Tukey HSD
1                     Kruskal-Wallis    Dunn-Bonferroni
2                     Welch's ANOVA     Games-Howell

Table 4.3: Overview of the statistical methods used depending on the violated
assumption.
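The Bonferroni correction itself is a one-line adjustment: each of Dunn's p-values is multiplied by the number of comparisons and capped at 1. A minimal sketch:

```javascript
// Bonferroni correction: p_adj = min(1, m * p) for m pairwise comparisons.
// Conservative, but keeps the family-wise error rate at the chosen level.
function bonferroni(pValues) {
  const m = pValues.length;
  return pValues.map((p) => Math.min(1, m * p));
}
```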

4.8 Benchmark artifacts


Three benchmark artifacts were developed: a React Native application, a React
PWA and a native iOS application built using Swift and SwiftUI. The PWA
was used for feature detection and the run-time experiments, while the others
were primarily developed for the run-time experiments. Similar to the work
conducted by Biørn-Hansen et al., no optimizations were applied to any of the
applications [12]. The reason for not optimizing was to capture the performance
behavior developers can expect directly after starting work on a new
project [12].

(a) Progressive Web App (b) React Native App (c) Native iOS App

Figure 4.1: The benchmark artifacts’ accessibility list view.



The artifacts are feature exploration applications. They have a bottom
navigation component with five tabs, one for each feature category:
Accessibility, Background tasks, Hardware, Installation, and Screen control.
Each tab points to a separate view presenting a list of features belonging to that
specific feature category. Each element in the list is clickable and consists of
the feature's name and an icon visualizing its support status. A detailed view
pops up when the user presses one of the feature elements. The detailed view
contains a short description of the specific API used to test the feature support.
For some features, this view has a simple implementation of the feature. For
the PWA, this applies to all features that passed the detection, as motivated in
Subsection 4.2.2.

4.8.1 QR code scanning


A GitHub search was conducted to find a sufficient and well-used QR
code scanning library for the PWA and the React Native application. The
exploration was restricted to repositories with at least one release and more
than 500 stars. A complementary search was also performed on npm. The
following libraries were found:

Library                      Artifact      QR Code  Data Matrix  Size      Dependencies  Weekly downloads  Actively maintained

jsQR                         PWA           ✓        ✗            0.28 MB   13            100 389           ✓
ZXing                        PWA           ✓        ✓            10.80 MB  36            69 010            ✓
react-qr-reader              PWA           ✓        ✗            0.18 MB   936           36 935            ✗
Html5-QRCode                 PWA           ✓        ✓            2.57 MB   18            5 140             ✓
React Native Camera          React Native  ✓        ✓            1.19 MB   26            98 569            ✓
react-native-qrcode-scanner  React Native  ✓        ✓            0.04 MB   292           23 759            ✓

Table 4.4: Candidate QR code scanning libraries for the PWA and React Native
artifacts.

The jsQR1 library has the second smallest size, the smallest number
of dependencies, and the highest number of weekly downloads. It is also
actively maintained. However, jsQR and react-qr-reader2 do not support
data matrices, which is required. The only JS libraries supporting data
matrices are ZXing3 and Html5-QRCode4, where ZXing is the most popular
and well-used. Thus, we decided to implement the PWA scanner using the
ZXing library.
Only two sufficient React Native libraries were found in the GitHub search,
where react-native-qrcode-scanner5 extends React Native Camera6. The React
Native Camera library encompasses the required functionality, so we found no
reason to implement the scanner using the react-native-qrcode-scanner library.
The selected libraries were implemented as part of the benchmark artifacts.

1. https://github.com/cozmo/jsQR
2. https://github.com/JodusNodus/react-qr-reader
3. https://github.com/zxing-js/library
4. https://github.com/mebjas/html5-qrcode
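The selection logic above can be expressed as a filter over Table 4.4. The sketch below hard-codes a subset of the table's columns; the field names are illustrative and not part of any library:

```javascript
// PWA candidates from Table 4.4 (download counts as reported in the table).
const libraries = [
  { name: 'jsQR', dataMatrix: false, weeklyDownloads: 100389, maintained: true },
  { name: 'ZXing', dataMatrix: true, weeklyDownloads: 69010, maintained: true },
  { name: 'react-qr-reader', dataMatrix: false, weeklyDownloads: 36935, maintained: false },
  { name: 'Html5-QRCode', dataMatrix: true, weeklyDownloads: 5140, maintained: true },
];

// Mandatory criteria: Data Matrix support and active maintenance.
// Among the survivors, pick the most widely used library.
function pickScanner(libs) {
  return libs
    .filter((l) => l.dataMatrix && l.maintained)
    .sort((a, b) => b.weeklyDownloads - a.weeklyDownloads)[0].name;
}
```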

4.8.2 Geolocation
For the native application, we used the Standard Location service of the Core
Location API. The Standard Location service is a general-purpose solution
for tracking the user's location and is the most accurate and immediate
location provider of Core Location's three services [78]. The disadvantage
of this API is its high power consumption compared to the other location
services [78]. This is a trade-off between power consumption and accuracy,
and we deemed high accuracy most important.
For the PWA artifact, geolocation was implemented with the Geolocation
API. The third-party library React Native Geolocation, an extension of the
web-based Geolocation API, was used to retrieve the device's geolocation
in the React Native artifact [79]. For location tracking in the React Native
application, we set the flag enableHighAccuracy, meaning that the API
provides the location using GPS. If the flag is not set, the API only utilizes
Wi-Fi [79].
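The Geolocation API call used by the PWA can be sketched as a promise wrapper. The geo parameter is injectable so the helper can be exercised outside a browser; in a real PWA it defaults to navigator.geolocation. The timeout value is an illustrative choice, not taken from the artifact:

```javascript
// Wrap the callback-based Geolocation API in a promise.
function currentPosition(geo = navigator.geolocation) {
  return new Promise((resolve, reject) => {
    geo.getCurrentPosition(resolve, reject, {
      enableHighAccuracy: true, // prefer GPS over Wi-Fi positioning
      timeout: 10000,           // illustrative timeout in milliseconds
    });
  });
}
```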

4.8.3 Navigation
The artifacts utilize two different navigation components: bottom navigation
and stack navigation. For the native iOS artifact, bottom navigation was
implemented with the TabView component [80], and stack navigation with
the NavigationView component [81].
Navigation in the React Native artifact was implemented using the React
Navigation library [82]. Bottom navigation was implemented with the Bottom
Tabs Navigator component, and stack navigation with the
Stack Navigator component.
Although open-source libraries exist for native-like navigation in web
applications and PWAs, such components are simple to implement, which is why we
decided to build them from scratch. The custom-built components for
the PWA mimic the appearance of the native components.
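The core of a from-scratch stack-navigation component is just a view stack with push/pop semantics; a minimal sketch (the helper and view names are hypothetical, not taken from the artifact):

```javascript
// Minimal stack-navigation state: push a view, pop back, never pop the root.
function createStack(initial) {
  const stack = [initial];
  return {
    push: (view) => stack.push(view),
    pop: () => (stack.length > 1 ? stack.pop() : stack[0]),
    current: () => stack[stack.length - 1],
  };
}
```

A UI component then only needs to render `current()` and call `push`/`pop` on navigation events.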
5. https://github.com/moaazsidat/react-native-qrcode-scanner
6. https://github.com/react-native-camera/react-native-camera

4.8.4 Scroll View


The scrolling view was implemented with the ScrollView component for the
native iOS artifact [83] and the React Native ScrollView component for the
React Native artifact [84]. For the PWA, scrolling is already provided by the
nature of the Safari WebView, the iOS run-time environment for PWAs.
Thus, no extra wrapping component had to be implemented to achieve a scroll
view in the PWA.

Chapter 5

Results

In this chapter, we present the results and analyze them. Section 5.1 covers
RQ1 and contains the results of the feature evaluation. Section 5.2 provides
an overview of the ANOVA assumption analysis. Section 5.3 focuses on RQ2
and includes PWAs’ scanning suitability and performance compared to React
Native and native iOS. Section 5.4 presents the results and analysis of the run-
time experiments associated with RQ3.

5.1 RQ1: Feature support


This section presents the results of the feature evaluation for PWAs and React
Native applications on iOS. Table 5.1 presents the feature classification where
each feature is classified as either supported, supported by a third-party library,
partly supported, or not supported. This is followed by five subsections,
one for each category, providing the APIs evaluated and details about the
classification.
As can be seen in Table 5.1, it is clear that PWAs are more limited than
React Native applications. Of 33 features, only 13 are classified as supported,
one as supported by a third-party library, ten as partly supported, and nine as
not supported. In contrast, React Native offers support for all features but two,
which are classified as partly supported. Nevertheless, it should be emphasized
that 19 of the features are not provided by the React Native core API, but
rather by additional third-party libraries.

Feature                       PWA              React Native

Accessibility
Appearance                    ✓                ✓
Bold text                     ✓                ✓
Increased contrast            ✓                ✓
Inverted colors               ✓                ✓
Preferred languages           △                ✓ (3rd party)
Reduce motions                ✓                ✓
Reduce transparency           ✗                ✓
Text size                     ✓                ✓

Installation and Storage
App market availability       △                ✓
File access                   ✓                ✓ (3rd party)
Installability                △                ✓
Offline mode                  ✓                ✓
Persistent storage            ✗                ✓ (3rd party)

Display and Screen control
Fullscreen mode               △                ✓
Life cycle detection          △                △
Launch screen                 △                ✓ (3rd party)
Orientation lock              ✗                ✓ (3rd party)
Theme color                   ✓                △

Background tasks and Notifications
Background fetch              ✗                ✓ (3rd party)
Background synchronization    △                ✓ (3rd party)
Local notifications           ✗                ✓ (3rd party)
Push notifications            ✗                ✓ (3rd party)

Device hardware and Surroundings
Accelerometer                 △                ✓ (3rd party)
AR                            ✓                ✓ (3rd party)
Bluetooth                     ✗                ✓ (3rd party)
Camera                        ✓                ✓ (3rd party)
Geolocation                   ✓                ✓ (3rd party)
Gyroscope                     △                ✓ (3rd party)
Magnetometer                  △                ✓ (3rd party)
Microphone                    ✓                ✓ (3rd party)
NFC                           ✗                ✓ (3rd party)
QR code scanning              ✓ (3rd party)    ✓ (3rd party)
Vibrations                    ✗                ✓

Table 5.1: Feature support for PWAs and React Native applications. Supported
features are notated with a check mark (✓), unsupported with a cross (✗), and
partly supported features with a triangle (△).

5.1.1 Accessibility
PWAs can access the user's preferred language by calling the navigator's
language or languages properties.1 However, PWAs can only access the
primary language, even if several languages are specified. PWAs cannot
access the device's reduce transparency setting. The functionality is provided
by the CSS setting prefers-reduced-transparency2, but it is not compatible with
any browser yet. All other accessibility features: bold text and text size3,
contrast4, dark mode5, inverted colors6, and reduce motions7, are supported
and can be adapted with CSS rules.
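For illustration, the CSS-based settings above can also be read programmatically with matchMedia. The helper below takes the matchMedia function as a parameter so it can be tested with a stub; this is a sketch, not code from the artifact:

```javascript
// Read accessibility preferences via CSS media features.
// In a browser, pass window.matchMedia as `mq`.
function accessibilitySettings(mq) {
  const matches = (query) => mq(query).matches;
  return {
    darkMode: matches('(prefers-color-scheme: dark)'),
    reducedMotion: matches('(prefers-reduced-motion: reduce)'),
    invertedColors: matches('(inverted-colors: inverted)'),
    increasedContrast: matches('(prefers-contrast: more)'),
  };
}
```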
React Native applications access the device’s dark mode setting via
the Appearance module8 and the bold text setting, inverted colors setting,
reduce motions setting, and reduce transparency via the AccessibilityInfo
API9 . React Native utilizes the iOS platform-specific DynamicColorIOS API10
to customize colors when the user’s increased contrast setting is enabled.
Automatic font scaling is enabled by default on React Native Text elements
but can be turned off with the allowFontScaling property11 . React Native
applications can access the preferred languages via React Native Localize12 .

5.1.2 Installation and Storage


Apple’s App Store does not support PWAs. However, there are web-based
directories from where users can download PWAs and web applications to
the home screen. A popular application market is Appscope, which provides
hundreds of web-based applications13 . PWAs provide a feature called Add
to home screen. The desired behavior is a triggered modal popup when the
user first visits the application URL asking the user to install the application.
This functionality is not supported by iOS. However, the user can add the
application to the home screen by pressing the share button and then Add to
home screen, enabling installation of the application on the device.

1. https://developer.mozilla.org/docs/Web/API/Navigator
2. https://docs.w3cub.com/css/@media/prefers-reduced-transparency
3. https://webkit.org/blog/3709/using-the-system-font-in-web-content/
4. https://developer.mozilla.org/docs/Web/CSS/@media/prefers-contrast
5. https://developer.mozilla.org/docs/Web/CSS/@media/prefers-color-scheme
6. https://developer.mozilla.org/docs/Web/CSS/@media/inverted-colors
7. https://developer.mozilla.org/docs/Web/CSS/@media/prefers-reduced-motion
8. https://reactnative.dev/docs/appearance
9. https://reactnative.dev/docs/accessibilityinfo
10. https://reactnative.dev/docs/dynamiccolorios
11. https://reactnative.dev/docs/text#allowfontscaling
12. https://github.com/zoontek/react-native-localize
13. https://appsco.pe
PWAs can access files, documents, images, and videos stored on the device
or in the cloud through the File API14, and the Cache API15 provides offline
usage. The Cache API is browser-managed, meaning that the device's OS can
delete data without the user's consent, for instance, when the OS encounters
low device disk space. The Storage API16 deals with this issue by providing
fully reliable data storage, which does not delete data without the user's
consent.17 However, the Storage API is not supported on iOS.
React Native applications are, in contrast to PWAs, supported by Apple’s
App Store. Dynamic data can be stored persistently via third-party libraries
such as React Native Async Storage18 , or React Native MMKV Storage19 .
React Native applications can access images and videos via React Native
Image Picker20 , and documents via React Native Document Picker21 .

5.1.3 Display and Screen control


PWAs cannot programmatically request fullscreen mode via the Fullscreen
API22, but browser artifacts can be hidden for a native-like appearance by
setting the manifest file's display property to standalone. Worth mentioning
is that the display property can be set to fullscreen, but only standalone is
supported on iOS. Orientation lock is not supported in PWAs due to its
fullscreen feature dependency. The theme color is set with the HTML meta tag
theme-color23, where the preferred color is specified in the content property.
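For reference, a minimal web app manifest combining the display and color properties discussed above might look as follows (the name and color values are hypothetical placeholders):

```json
{
  "name": "Feature Explorer",
  "start_url": "/",
  "display": "standalone",
  "theme_color": "#1a73e8",
  "background_color": "#ffffff"
}
```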
Developers can add launch screen images by specifying iOS-specific
HTML link tags. However, specific launch images for all screen sizes must
be generated to support all devices.
Google Developers provides the library PageLifecycle.js24, which makes
it easier for developers to observe and handle application state changes
independently of browser choice. For installed PWAs on iOS, the library
can detect all life cycle state transitions, except for application start-up and
termination.

14. https://developer.mozilla.org/docs/Web/API/File
15. https://developer.mozilla.org/docs/Web/API/Cache
16. https://developer.mozilla.org/docs/Web/API/Storage_API
17. https://whatwebcando.today/storage.html
18. https://github.com/react-native-async-storage/async-storage
19. https://github.com/ammarahm-ed/react-native-mmkv-storage
20. https://github.com/react-native-image-picker/react-native-image-picker
21. https://github.com/rnmods/react-native-document-picker
22. https://developer.mozilla.org/docs/Web/API/Fullscreen_API
23. https://developer.mozilla.org/docs/Web/HTML/Element/meta/name/theme-color
24. https://github.com/GoogleChromeLabs/page-lifecycle
React Native applications can access the application state via the AppState
API25. Like PWAs, React Native applications cannot detect start-up and
termination transitions. The library React Native Orientation Locker26
provides orientation lock, and launch screens are, as for native iOS
applications, added in Xcode. Fullscreen mode is provided by the Status Bar
API27. By default, the theme color is translucent on iOS devices and cannot
be changed directly through the Status Bar API. Nevertheless, embedding the
status bar component inside another component results in the same appearance.

5.1.4 Background tasks and Notifications


Although PWAs cannot synchronize requests in the background, they can
cache and postpone failed requests and trigger them when the service worker is
started again. Workbox implements this behavior in the Workbox Background
Sync API28. PWAs do not support any of the other features in this category,
although the following APIs exist and are supported on Android: Background
Fetch API29, Background Sync API30, Periodic Background Sync API31,
Notifications API32, and Push API33.
Conversely, React Native supports all features. React Native Background
Fetch34 provides background fetch and synchronization, and OneSignal35
provides local notifications and push notifications.

5.1.5 Device hardware and Surroundings


PWAs can access the device’s camera and microphone, and QR code utilities
via Media Devices API36 . Apple’s AR Quick Look37 provides visibility of
25
https://reactnative.dev/docs/appstate
26
https://github.com/wonday/react-native-orientation-locker
27
https://reactnative.dev/docs/statusbar
28
https://developers.google.com/web/tools/workbox/modules/workbox-background-s
29
https://developer.mozilla.org/docs/Web/API/Background_Fetch_API
30
https://developer.mozilla.org/docs/Web/API/SyncManager
31
https://developer.mozilla.org/docs/Web/API/Web_Periodic_Background_Synchroniz
32
https://developer.mozilla.org/docs/Web/API/Notification
33
https://developer.mozilla.org/docs/Web/API/Push_API
34
https://github.com/transistorsoft/react-native-background-fetch
35
https://github.com/OneSignal/react-native-onesignal
36
https://developer.mozilla.org/docs/Web/API/MediaDevices
37
https://developer.apple.com/augmented-reality/quick-look/
48 | Results

USDZ files of virtual objects in 3D or AR.


PWAs retrieve accelerometer, gyroscope, and magnetometer data from the
Device Motion API38 and the Device Orientation API39. Geolocation
information is retrieved through the Geolocation API40. The Device Orientation
API provides the device's orientation, which is an aggregation of gyroscope
and magnetometer data. There exists a newer web-based API, the Sensor API41,
which provides access to raw accelerometer, gyroscope, and magnetometer
data. However, this API is not supported on iOS. PWAs cannot connect to
Bluetooth peripherals via the Bluetooth API42, exchange data over NFC via
the Web NFC API43, nor interact with vibrations via the Vibration API44.
React Native applications can exchange data over NFC via React Native
NFC Manager45 , connect to Bluetooth peripherals via React Native Ble
Plx46 , and trigger vibrations on iOS devices via the Vibration API47 . React
Native applications can access data from the accelerometer, the magnetometer,
and the gyroscope via React Native Sensors48 , and geolocation data via the
React Native Geolocation API49 . Camera access and QR code scanning are
provided via React Native Camera50 , the microphone via React Native Audio
Recorder51 , and AR via React Native Arkit52 and Viro53 .

5.1.6 Summary
In summary, PWAs support 24 of the 33 evaluated features, where 13 are
fully supported, one is supported via a third-party library, and ten are partly
supported; the remaining nine are unsupported. Three crucial native features
and key principles behind PWAs are installability, network independence, and
re-engageability. Of these, only network independence is fully supported, and
installability is partly supported by PWAs.
38. https://developer.mozilla.org/docs/Web/API/DeviceMotionEvent
39. https://developer.mozilla.org/docs/Web/API/DeviceOrientationEvent
40. https://developer.mozilla.org/docs/Web/API/Geolocation_API
41. https://developer.mozilla.org/docs/Web/API/Sensor_APIs
42. https://developer.mozilla.org/docs/Web/API/Web_Bluetooth_API
43. https://developer.mozilla.org/docs/Web/API/Web_NFC_API
44. https://developer.mozilla.org/docs/Web/API/Vibration_API
45. https://github.com/revtel/react-native-nfc-manager
46. https://github.com/dotintent/react-native-ble-plx
47. https://reactnative.dev/docs/vibration
48. https://github.com/react-native-sensors/react-native-sensors
49. https://github.com/react-native-geolocation/react-native-geolocation
50. https://github.com/react-native-camera/react-native-camera
51. https://github.com/hyochan/react-native-audio-recorder-player
52. https://github.com/react-native-ar/react-native-arkit
53. https://github.com/viromedia/viro
React Native fully supports all features but two, which are partly supported.
The core API provides 14 of these features, whereas third-party libraries
provide the remaining 19.

5.2 ANOVA assumptions analysis


This section gives an overview of the ANOVA assumption analysis and
presents the statistical methods used to analyze the results in the following
sections. As described in Section 4.7, we checked the normality assumption
with QQ plots and the common variance assumption with Bartlett's test.
Bartlett's test was performed at a significance level of 5 %.

Metric                 Normally distributed  Common variance  Analysis method  Post-hoc method

Geolocation accuracy   ✗                     —                Kruskal-Wallis   Dunn-Bonferroni
Scanning correctness   ✗                     —                —                —
Clock monotonic time   ✓                     ✗                Welch's ANOVA    Games-Howell
CPU time               ✓                     ✗                Welch's ANOVA    Games-Howell
ComputedRAM            ✓                     ✗                Welch's ANOVA    Games-Howell
RAM                    ✓                     ✗                Welch's ANOVA    Games-Howell

Table 5.2: Normality and variance analysis overview for the metrics.

Figure 5.1 illustrates two examples of when the collected data passed
and failed the normality test, and Table 5.2 summarizes the analysis results.
Bartlett's test results are located in Appendix B.

(a) Passed normality test.

(b) Failed normality test.

Figure 5.1: Example QQ plots from the normality assumption analysis.

Figure 5.1a passed the normality test and its data is assumed to be normally
distributed, whereas Figure 5.1b failed the test.

5.3 RQ2: QR code scanning


This section focuses on RQ2 and presents the classification of the PWA
scanning requirements and the results and analysis of the scanning run-time
performance experiment. The first part covers the requirements classification.
This is followed by metric-specific subsections, including the collected data
visualized with box plots. The last two parts are responsible for analyzing and
summarizing the results.

5.3.1 Scanning requirements


As pointed out in Section 5.1, PWAs support camera access via the Media
Devices API. In addition, the ZXing library provides support for scanning and
recognizing various symbology types, including data matrices and QR codes.
The ImageCapture API54 provides advanced camera features such as the torch and
zoom. However, iOS does not support the ImageCapture API. The QR code
scanning requirements and their classification are summarized in Table 5.3.
54. https://developer.mozilla.org/en-US/docs/Web/API/ImageCapture

Requirement               Type       Supported

Data matrices             Mandatory  ✓
QR Codes                  Mandatory  ✓
Advanced camera features  Preferred  ✗

Table 5.3: Classification of QR code scanning requirements.

5.3.2 Scanning correctness


There was no noticeable difference in scanning correctness between the
applications. All frameworks had 100 % scanning correctness on iPhone 13.
On iPhone 8, all but the PWA achieved 100 % in scanning correctness, whereas
the PWA had two failed attempts on Data Matrix 2 and one failed scanning
attempt on QR Code 1. All these values are summarized in Table 5.4. Thus,
for low-end devices, we can see that the PWA performed slightly worse than
the other applications in scanning correctness, although it only failed three
attempts out of 400 in total.

                PWA                    React Native           Native iOS
Code            iPhone 8   iPhone 13   iPhone 8   iPhone 13   iPhone 8   iPhone 13

Data Matrix 1   100 %      100 %       100 %      100 %       100 %      100 %
Data Matrix 2   98 %       100 %       100 %      100 %       100 %      100 %
QR Code 1       99 %       100 %       100 %      100 %       100 %      100 %
QR Code 2       100 %      100 %       100 %      100 %       100 %      100 %

Table 5.4: Percentage of correct scanning attempts per framework and code.

5.3.3 Response time


From the observations of Figure 5.2, it is noticeable that React Native
performed best in clock monotonic time. We can also identify a more
prominent fluctuation between measurements for the PWA, while the other
frameworks were more stable in general.

Figure 5.2: Box plot describing the clock monotonic time in seconds (s) per
device for the scanning experiment.

5.3.4 CPU usage

Figure 5.3: Box plot describing the CPU time in seconds (s) per device for the
scanning experiment.
From a visual assessment of Figure 5.3, we perceive a similar pattern of
fluctuating values for the PWA as in the response-time measurements. The
figure also shows that the PWA consumed the least CPU time for the scanning
experiment. We can also observe that React Native performed better than
native iOS on iPhone 8 but worse on iPhone 13.

5.3.5 Memory consumption


The RAM and ComputedRAM measurements are visualized in Figure 5.4 and
Figure 5.5, respectively. Especially remarkable is the PWA's low memory
utilization compared to the other frameworks. React Native performed
slightly better than native iOS, although the performance difference is not as
pronounced as for the PWA.

Figure 5.4: Box plot describing the RAM in mebibytes (MiB) per device for
the scanning experiment.

Figure 5.5: Box plot describing the ComputedRAM in mebibytes (MiB) per
device for the scanning experiment.

5.3.6 Statistical analysis


The dominance of the PWA becomes apparent when inspecting Table 5.5,
where it is classified as the most performant framework in all metrics but clock
monotonic time. Welch's ANOVA shows a difference between means at a
significance level of 1 % for all metrics. Moreover, Games-Howell results in
a p-value of 1.0 × 10^−3 for all metrics but RAM on iPhone 13, where the p-
value is 0.835 between the React Native and native iOS applications. Thus,
there is a significant difference between the frameworks in all metrics except
for RAM used between React Native and native iOS on iPhone 13.

Experiment Mean Welch’s ANOVA Rank


PWA React Native Native p-value PWA React Native Native
iPhone 8
Clock monotonic time (s) 2.2 1.3 2.4 5.9 × 10−223 2 1 3
CPU time (s) 2.2 2.9 3.3 3.0 × 10−25 1 2 3
RAM (MiB) 159.9 243.1 273.2 3.3 × 10−53 1 2 3
ComputedRAM (MiB) 48.3 177.2 229.3 3.3 × 10−63 1 2 3
Score 5 7 12
iPhone 13
Clock monotonic time (s) 1.8 1.0 1.5 5.4 × 10−150 3 1 2
CPU time (s) 1.4 2.8 2.0 1.3 × 10−37 1 3 2
RAM (MiB) 246.1 473.4 476.7 4.8 × 10−53 1 2 2
ComputedRAM (MiB) 61.5 376.4 400.9 6.5 × 10−66 1 2 3
Score 6 8 9
Total score 11 15 21

Table 5.5: The scanning experiment results per metric and framework. The
Mean columns display the means for each application, over 100 runs for the
clock monotonic time metric, and over 30 runs for the other metrics. The
Welch’s ANOVA column, the p-value from the Welch’s ANOVA analysis, and
the Rank columns ranks the frameworks from 1 (best) to 3 (worst).
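The Score rows follow directly from the ranks: each framework's score is the sum of its per-metric ranks, lower being better. As a sanity check, the iPhone 8 scores can be recomputed (ranks copied from Table 5.5):

```javascript
// Per-metric ranks for the iPhone 8 half of Table 5.5, in table order:
// clock monotonic time, CPU time, RAM, ComputedRAM.
const iphone8Ranks = {
  PWA: [2, 1, 1, 1],
  ReactNative: [1, 2, 2, 2],
  Native: [3, 3, 3, 3],
};

// A framework's score is simply the sum of its ranks.
const score = (ranks) => ranks.reduce((sum, r) => sum + r, 0);
```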

5.3.7 Summary
PWAs fulfill the mandatory requirements for QR code scanning, although
they lack access to advanced camera features on iOS. In addition, the
PWA outperformed the other frameworks in CPU and memory utilization
but performed worse in scanning correctness and clock monotonic time.
Table 5.6 summarizes the frameworks' performance ranks from the run-time
experiments.

Metric                 PWA  React Native  Native iOS

Scanning correctness   3    1             1
Clock monotonic time   2    1             2
CPU time               1    2             2
RAM                    1    2             3
ComputedRAM            1    2             3
Total                  8    8             11

Table 5.6: A summary of the frameworks' run-time performance ranks in
scanning. Ranks are provided per metric, where 1 represents the most
performant framework and 3 the worst.

5.4 RQ3: Performance


This section presents the results from the run-time experiments used to answer
RQ3 and the statistical analysis of the results. It consists of metric-specific
subsections, including the collected data and analysis. The collected data are
visualized with box plots and summarized in tables in each part.

5.4.1 Response time


From the observations of Figure 5.6, it is noticeable that the React Native
application performed better than expected and even outperformed both the
PWA and the native application in clock monotonic time. As for scanning, we
can see more significant fluctuations between measurements for the PWA.
The dominance of the React Native application becomes even more
apparent when inspecting Table 5.7, where it is classified as the most
performant application in all experiments except for navigation on iPhone 13.
Welch's ANOVA analysis shows a difference between the means of the groups
in all experiments at a significance level of 1 %. Furthermore, Games-Howell
results in a p-value of 1.0 × 10^−3 for all experiments but the geolocation
experiment on iPhone 13, where the p-value is 0.9 between the native iOS and
the React Native applications. Thus, there is a significant difference between
all applications for all experiments but the geolocation experiment on iPhone
13 for native iOS and React Native.

(a) Geolocation

(b) Navigation

(c) Scrolling

Figure 5.6: Box plots describing the clock monotonic time in seconds (s) per
device and experiment.

             Mean (s)                     Welch's ANOVA  Rank
Experiment   PWA   React Native  Native  p-value        PWA  React Native  Native

iPhone 8
Geolocation  1.38  0.39          0.46    8.7 × 10^−82   3    1             2
Navigation   6.04  5.26          8.12    2.3 × 10^−145  2    1             3
Scanning     2.15  1.25          2.43    5.9 × 10^−223  2    1             3
Scrolling    3.01  2.88          3.13    2.0 × 10^−170  2    1             3
Score                                                   9    4             11

iPhone 13
Geolocation  1.21  0.43          0.43    2.2 × 10^−54   3    1             1
Navigation   5.00  5.46          7.15    1.3 × 10^−187  1    2             3
Scanning     1.76  1.04          1.45    5.4 × 10^−150  3    1             2
Scrolling    2.82  2.76          3.16    3.0 × 10^−259  2    1             3
Score                                                   9    5             9

Total score                                             18   9             20

Table 5.7: Clock monotonic time results per experiment and framework. The
Mean columns display the means for each application over 100 runs, the
Welch's ANOVA column shows the p-value from the Welch's ANOVA analysis,
and the Rank columns rank the frameworks from 1 (best) to 3 (worst).

5.4.2 CPU usage


From a visual assessment of Figure 5.7, we can identify that the native iOS
application performed best in the geolocation and navigation experiments, while
React Native performed best in scrolling. As for the clock monotonic time
measurements, we perceive a similar pattern of fluctuating values for
the PWA.

(a) Geolocation

(b) Navigation

(c) Scrolling

Figure 5.7: Box plots describing the CPU time in seconds (s) per device and
experiment.

             Mean (s)                     Welch's ANOVA  Rank
Experiment   PWA   React Native  Native  p-value        PWA  React Native  Native

iPhone 8
Geolocation  0.78  0.28          0.19    2.2 × 10^−29   3    2             1
Navigation   2.13  2.15          1.62    6.9 × 10^−41   2    2             1
Scrolling    1.27  0.39          0.71    4.4 × 10^−54   3    1             2
Score                                                   8    5             4

iPhone 13
Geolocation  0.57  0.15          0.11    1.2 × 10^−30   3    2             1
Navigation   1.59  1.78          1.15    5.6 × 10^−45   2    3             1
Scrolling    1.06  0.28          0.75    2.2 × 10^−54   3    1             2
Score                                                   8    6             4

Total score                                             16   11            8

Table 5.8: CPU time results per experiment and framework. The Mean
columns display the means for each application over 30 runs, the Welch's
ANOVA column shows the p-value from the Welch's ANOVA analysis, and the
Rank columns rank the frameworks from 1 (best) to 3 (worst).
An even clearer picture of the CPU performance is given in
Table 5.8. Welch's ANOVA results in a difference between means at a
significance level of 1 % for all experiments. Moreover, the Games-Howell
analysis results in a p-value of 1.0 × 10^−3 for all experiments but navigation
on iPhone 8, where the p-value is 0.491 between the PWA and the React Native
application. Thus, there is a significant difference between all applications and
experiments except between the PWA and React Native for navigation on
iPhone 8.

5.4.3 Memory consumption


The RAM measurements are visualized in Figure 5.8 and ComputedRAM in
Figure 5.9. The visualizations indicate a large difference in RAM usage
depending on the experiment and framework measured, where especially scrolling
resulted in high memory consumption.
The RAM used by the React Native application was not particularly
affected by any of the experiments. The native application's RAM
usage follows a similar pattern in the geolocation and navigation
experiments, but it consumes significantly more RAM during the scrolling
experiment. Considering the PWA, we can identify a high RAM utilization in
all experiments. Table 5.9 summarizes these values and ranks the frameworks
by achieved performance.

(a) Geolocation

(b) Navigation

(c) Scrolling

Figure 5.8: Box plots describing the RAM used in mebibytes (MiB) per device
and experiment.

As can be seen in Table 5.9, Welch's ANOVA results in a difference
between means at a significance level of 1 % for all experiments. Moreover,
Games-Howell results in a p-value of 1.0 × 10^−3 for all experiments. Thus,
there is a significant difference between all applications and experiments.

             Mean (MiB)                   Welch's ANOVA  Rank
Experiment   PWA    React Native  Native  p-value       PWA  React Native  Native

iPhone 8
Geolocation  113.1  39.5          18.1    5.6 × 10^−71  3    2             1
Navigation   132.8  37.8          20.6    1.6 × 10^−64  3    2             1
Scrolling    167.2  41.5          230.7   3.0 × 10^−85  2    1             3
Score                                                   8    5             5

iPhone 13
Geolocation  146.6  40.1          18.8    3.0 × 10^−61  3    2             1
Navigation   202.8  38.4          21.6    3.6 × 10^−57  3    2             1
Scrolling    266.1  42.3          232.0   1.3 × 10^−97  3    2             1
Score                                                   9    6             3

Total score                                             17   11            8

Table 5.9: RAM results per experiment and framework. The Mean columns
display the means for each application over 30 runs, the Welch's ANOVA
column shows the p-value from the Welch's ANOVA analysis, and the Rank
columns rank the frameworks from 1 (best) to 3 (worst).

(a) Geolocation

(b) Navigation

(c) Scrolling

Figure 5.9: Box plots describing the ComputedRAM in mebibytes (MiB) per
device and experiment.

We can observe that the React Native application has a very low
ComputedRAM for all experiments. The same applies to native iOS in
geolocation and navigation, but the results are worse for scrolling. The PWA
performed worst in ComputedRAM.


For ComputedRAM, Welch's ANOVA results in a difference between means
at a significance level of 1 % and the Games-Howell test in a p-value of
1.0×10⁻³ for all experiments. Thus, there is a significant difference in means
between all applications and experiments.

Experiment     Mean (MiB)                        Welch's ANOVA  Rank
               PWA     React Native  Native      p-value        PWA  React Native  Native
iPhone 8
Geolocation    4.36    1.13          0.35        1.9×10⁻²⁰      3    2             1
Navigation     37.97   1.60          5.29        6.2×10⁻³⁸      3    1             2
Scrolling      17.36   0.16          82.17       1.2×10⁻⁸⁶      2    1             3
Score                                                           8    4             6
iPhone 13
Geolocation    15.86   1.35          0.49        7.6×10⁻²⁹      3    2             1
Navigation     75.96   1.15          5.59        2.4×10⁻⁴⁴      3    1             2
Scrolling      56.55   0.00          12.86       7.1×10⁻⁵⁸      3    1             2
Score                                                           9    4             5
Total score                                                     17   8             11

Table 5.10: ComputedRAM results per experiment and framework. The Mean
columns display the mean for each application over 30 runs, the Welch's
ANOVA column displays the p-value from the Welch's ANOVA analysis, and
the Rank columns rank the frameworks from 1 (best) to 3 (worst).

5.4.4 Geolocation accuracy


As can be seen in Figure 5.10 and Table 5.11, the iPhone 8 device achieved
a horizontal accuracy of 35 m independently of location and application. A
similar pattern applies to iPhone 13 at Location 2, where the accuracy was
10.7 m for all but four collected measurements. It is noticeable that all
frameworks had a much wider spread between the measurements at Location 1
compared to Location 2. Since there are only a few deviating measurements
for the React Native application on iPhone 13 at Location 2, it is reasonable
to believe that they were due to a temporary interference that could have
impacted any of the applications.

Figure 5.10: Box plots describing the horizontal accuracy achieved in meters
(m) per device and location. Panels: (a) Location 1, (b) Location 2.

Device     Framework     Location  Min (m)  Mean (m)  Max (m)
iPhone 8   PWA           1         35.0     35.0      35.0
                         2         35.0     35.0      35.0
           React Native  1         35.0     35.0      35.0
                         2         35.0     35.0      35.0
           Native iOS    1         35.0     35.0      35.0
                         2         35.0     35.0      35.0
iPhone 13  PWA           1         10.7     28.4      93.9
                         2         10.7     10.7      10.7
           React Native  1         8.4      17.4      70.1
                         2         10.7     14.3      179.3
           Native iOS    1         10.7     19.7      44.5
                         2         10.7     10.7      10.7

Table 5.11: Horizontal min, mean, and max accuracy achieved in meters (m).

A Kruskal-Wallis test was used to determine whether the medians of the
accuracy achieved by all frameworks are equal. The test was performed per
device and location, but only for iPhone 13, since iPhone 8 showed no
differences in horizontal accuracy. The resulting p-values of the Kruskal-Wallis
test are presented in Table 5.12. Since the p-values are greater than 0.01, we
cannot reject that the medians of all groups are equal at a significance level
of 1 %, meaning that we cannot conclude that there is a significant difference
in horizontal geolocation accuracy between the frameworks.
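As an illustration of the test used here, the Kruskal-Wallis H statistic can be computed directly from the pooled ranks of the accuracy samples. The Python sketch below uses hypothetical sample values, not the collected measurements; the p-value would then come from a chi-squared distribution with k − 1 degrees of freedom.

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic (with tie correction) for k
    independent samples."""
    pooled = sorted(x for g in groups for x in g)
    n = len(pooled)
    # Assign each distinct value its average rank (handles ties).
    rank = {}
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    rank_sums = [sum(rank[x] for x in g) for g in groups]
    h = 12 / (n * (n + 1)) * sum(
        rs ** 2 / len(g) for rs, g in zip(rank_sums, groups)
    ) - 3 * (n + 1)
    # Tie correction factor.
    counts = {}
    for x in pooled:
        counts[x] = counts.get(x, 0) + 1
    c = 1 - sum(t ** 3 - t for t in counts.values()) / (n ** 3 - n)
    return h / c

# Illustrative accuracy samples, one list per framework.
H = kruskal_wallis_h([[1, 2], [3, 4], [5, 6]])
```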

Device     Location  p-value  p < 0.01?
iPhone 13  1         0.758    No
           2         0.017    No

Table 5.12: Kruskal-Wallis analysis of the collected geolocation accuracy data
per device and location.

5.4.5 Summary
Surprisingly, React Native was the most performant framework, closely
followed by native iOS. The PWA performed worst overall in all metrics but
clock monotonic time and geolocation accuracy. Table 5.13 summarizes the
frameworks' performance ranks from the run-time experiments.

Metric                 PWA  React Native  Native iOS
Clock monotonic time   2    1             3
CPU time               3    2             1
RAM                    3    2             1
ComputedRAM            3    1             2
Geolocation accuracy   1    1             1
Total                  12   7             8

Table 5.13: A summary of the frameworks' run-time performance ranks. The
rank is given per metric, where 1 represents the most performant framework
and 3 the least performant.

Chapter 6

Discussion

This chapter is divided into four parts and discusses the results in a broader
context. The first part concerns the results of RQ1 regarding feature support,
the second covers RQ2, the suitability of using PWAs as factory scanning
applications, and the third considers the run-time performance related to RQ3.
Finally, the fourth part discusses the validity of the results.

6.1 RQ1: Feature support


Web-based APIs exist for most native features, at least as experimental APIs.
Thus, feature examination is more a question of whether Apple supports them
or not. As presented in Section 5.1, PWAs are limited in native feature support
on iOS, where only 24 of the 33 evaluated features were classified as supported
or partially supported. This is in line with the hypothesis and results from
previous work [5, 27]. Nevertheless, the feature support for PWAs continues
to increase with new browser releases. In 2018, Steiner showed that only three
of the 15 features evaluated in his study were supported on iOS Safari version
11.1. However, in addition to the features supported in version 11.1, version
15.2 provides support for media capabilities, media sessions, and web share,
enabling support for the camera, the microphone, and QR code scanning.
Despite the increased support, its pace is slow, and several critical native
features are still not supported on iOS. Furthermore, it is not clear whether
they ever will be. For instance, developers have since early 2018 requested
push notifications, a critical re-engagement feature, without Apple taking
action [85]. Thus, it is not clear whether Apple strives to support web-based
APIs and PWAs. We can probably expect the feature support for PWAs to
continue to increase over time. However, we cannot tell whether Apple will
add support for all the most crucial native features, enabling PWAs to be
classified as a valid substitute for native development. One reason for Apple
to refuse wider PWA support is that Apple's application ecosystem may suffer
if there are alternative ways to publish and download iOS applications [9].
React Native provides support for most features and is more reliable than
PWAs in that sense. However, one significant problem with React Native is
the dependency on third-party libraries. The maintenance of these libraries
might stop, or they might be substituted by new ones. For instance, React
Native Camera was the go-to library for camera implementations in React
Native at the beginning of this study. As of today, the library is deprecated
in favor of React Native Vision Camera [86]. Substituting libraries is, in
one sense, beneficial, since it brings improvements such as a better API,
better performance and stability, and new features adapted to modern devices.
However, it forces project and application maintainers to migrate to new APIs,
which, depending on the differences, can be both costly and time-consuming.
Thus, the strength of the large React Native community providing wide
feature support is also a great weakness, since maintenance over time is not
guaranteed.

6.2 RQ2: QR code scanning


This work's hypothesis was that there is extensive support for web-based QR
code scanners, but that they are not as feature-rich as the tools available for
native iOS applications. Considering the required symbology types, PWAs are
as suitable for QR code scanning as React Native and native iOS. Regarding
performance, it is interesting that the PWA was the most performant in
CPU and memory usage compared to the other frameworks. A possible
explanation is that the advanced features of the other frameworks add
overhead to the resource utilization. For instance, the native application has
the automatic zoom feature, which probably adds overhead to the performance
measurements.
Considering scanning correctness, we spot a slightly worse performance
by the PWA, which failed in three scanning attempts out of 400 in total.
Although this difference is considerably low, it might indicate a limitation
under more challenging conditions. The scanning experiment in this study was
conducted in perfect conditions. The QR codes were printed on white paper,
the camera was placed close by, and the room had good lighting. The
advantage of this approach is that it creates a controlled environment, in which
all frameworks have the same possibilities to succeed. This enforces a fair
comparison and a reliable result. It also improves the reproducibility of the
experiments.
Nevertheless, when used in factories, QR codes are often printed on
products of different materials. For instance, the products on which the code
is printed can have rounded, glossy surfaces, and the code can be small and
have varying colors and contrast. These conditions make it much harder for
the applications to recognize and interpret the codes. Furthermore, although
a PWA can access the camera, it cannot, in contrast to React Native and native
iOS, access advanced camera features such as the torch and zoom. So, a
more demanding environment might make scanning much harder for the PWA
than for the other frameworks. Thus, the scanning correctness results of this
study might have differed if conducted in a more challenging and more realistic
environment.
In conclusion, PWAs' QR code scanning support and performance on iOS
are good enough, although more research is required to determine whether
these conclusions hold in more challenging environments.

6.3 RQ3: Performance


The hypothesis for this work was that PWAs are comparable with cross-
platform applications in performance but perform worse than native iOS
applications. However, the results in Section 5.4 show that the PWA was the
least performant and far behind the React Native and native iOS applications.
The performance overhead for the PWA compared to native iOS was expected,
but the React Native application performed better than expected, especially
in response times. Nevertheless, the results show that the CPU and RAM
measurements, in general, are lower for the native application than for React
Native. This was expected, as the React Native bridge that enables hardware
access by translating the source at run-time is expected to add performance
overhead to the application. Although there might be essential differences
in device hardware utilization between Android and iOS [12], the CPU and
memory performance results are in line with previous works on Android
devices [15, 16, 18].
Low resource utilization combined with high response times could be
explained by other simultaneously running processes with high resource
demands. This is not particularly likely, though. When examining the
overall CPU usage in Xcode Instruments for the experiments, it is noticeable
that the total CPU use is far from reaching its maximum capacity during
the experiments, so complete resource utilization is not the reason. A more
likely explanation for slow native response times could be derived from the
components' default animations, for instance, the transition animations of
the native navigation components. These animations could have an impact on
application idling time, which directly delays succeeding UI interactions in
XCUITest. That would also explain why the navigation experiment, with the
largest number of animation transitions, had the most significant difference in
response times.
The high ComputedRAM measurements for the native iOS application are
remarkable, especially for the scrolling experiment. The high scrolling
ComputedRAM is probably due to the use of the native LazyGrid
component, which seems to have a significant memory impact. The images
have a total size of 39 MB, so a mean application memory peak above 230
MiB is substantial, indicating that native applications and components are not
always the most performant. The lazy loading is probably also the reason for
the high native CPU utilization in the scrolling experiment. Due to the high
ComputedRAM and relatively low RAM measurements for the native iOS
application, a reasonable explanation is that the native run-time environment
does not load the complete application into memory on start-up but rather
lazily loads components while interacting with the application.

6.4 Threats to validity


As discussed in Section 1.7, the benchmark and experimental setup used is
limited in numerous ways. It would have been interesting to experiment
with fully-fledged applications developed by domain experts and maintained
for a long time. An example of a work conducting performance comparisons
on fully functional applications is Willocx et al. [3]. They used an application
called PropertyCross [87], available in various native and cross-platform
technologies. However, no similar open-source database including the
application technologies used in this study was found. In addition, the time
constraints of a Master's thesis are too limited for developing three production-
ready applications. Thus, the approach used should be considered reasonable,
but with the remark that the benchmark and experimental setup might impact
the validity of the results.
We planned to use the XCTMetric classes XCTClockMetric, XCTCPUMetric,
and XCTMemoryMetric to record performance during the experiments.
These classes provide the metrics CPU time, memory consumption, and
clock monotonic time. XCTest's measure method provides the possibility
to measure the performance of a block of code [88]. However, the measure
method does not measure the correct interval for CPU and memory
measurements: the measured interval begins at application start-up rather
than when the block starts. This implies that the physical memory
measurement reports the memory utilization at the end of the block rather
than the ComputedRAM as stated in the documentation [89]. It also implies
that experiment setup overhead, like starting the application and navigating
to the correct view, is included in the CPU measurements. Another problem
with XCTest is that the memory peak measurement does not work correctly
and returns 0 independently of the experiment or application tested.
Another limitation of XCTest is the inability to profile processes other
than the primary application process. This is problematic since the
applications in some scenarios significantly impact other processes' resource
consumption. For instance, the scanning experiment had a significant impact
on the mediaserverd process. Most importantly, mediaserverd was affected
differently depending on the application tested, so including the overhead
from significantly impacted processes is crucial for a trustworthy result.
Fortunately, both of these issues can be solved by using Xcode Instruments
instead. Xcode Instruments has been successfully used in isolation
in previous research profiling cross-platform and native applications on iOS
devices [3, 12]. However, one limitation of using Xcode Instruments is that the
user must manually extract the values from the timeline, which is very time-
consuming and limits the number of feasible repetitions. Another is that the
measurements are given in one-second intervals, meaning that the captured
period could be shorter than the interval itself. We solved this problem by
padding the experiments with a couple of seconds in the idle state while
measuring CPU and memory usage, resulting in a much more accurate result.
Nevertheless, it is vital to be aware of this in future studies, especially when
profiling short and isolated experiments.

Chapter 7

Conclusions and Future work

This chapter concludes our study about Progressive Web Applications (PWAs)
on the iOS platform. Firstly, we summarize the conclusions drawn in this
study. Then, we suggest possible future research related to PWAs.

7.1 Conclusions
This thesis provides an overview of PWAs' feature support, suitability for QR
code scanning, and performance by evaluating 33 features and measuring
response times, CPU usage, and memory utilization during four different
experiments. The feature evaluation covers application features crucial
for mobile applications in general and for Northvolt in particular. The
performance evaluation includes important metrics that impact the user
experience and the device's energy consumption.
Many native features are supported for PWAs, and the support continues to
grow. However, the missing support for crucial native features makes PWAs
insufficient for more advanced applications with higher demands, for instance,
applications requiring push notifications, background tasks, or integration
with specific hardware. In addition, it is uncertain whether iOS will ever
support these features. Although the requirements for an application may be
relatively low in the beginning, choosing the PWA approach could prove
devastating, since it is generally unknown whether more advanced features
will be required later on. Nonetheless, PWAs are robust enough for simple iOS
use cases, and their substantial advantages make them well worth considering
when the requirements are met.
This work shows that PWAs are suitable for QR code scanning. PWAs
offer comprehensive support for different symbology types and perform better
in CPU and memory utilization than native iOS and React Native applications.
The downsides of PWAs are the slow response times, the slightly worse
scanning correctness, and the missing support for advanced camera features.
In general, native iOS performed best in memory and CPU utilization,
while React Native performed best in response times. Note that the response
time results should be interpreted with care, as discussed in Section 6.4.

7.2 Future work


Most previous work about PWAs has focused on the Android platform,
probably due to the lacking PWA feature support on iOS. Nonetheless, due
to the extensive mobile platform fragmentation and iOS's large market share,
it is vital to include iOS in future research.
This study does not tell us whether the frameworks' variation in
performance is significant enough for users to notice a difference. We
recommend that future work focus on the correlation between performance
and user experience, including the frameworks used in this work and possibly
others, such as the upcoming cross-platform framework Flutter. In addition,
we advise inspecting possible differences in users' perception of quality and
in developing high-quality user interfaces.
Studies focusing on maintainability and application reliability would
provide great value. As argued in this thesis, PWAs require, in contrast
to native applications, only one deployment workflow to target all platforms. We propose
investigating the impact this has on production workflows and application
testing. In addition, we suggest evaluating differences in available application
testing tools and frameworks. For instance, what tools and frameworks exist
for automated user interface testing, and are the tools available for PWAs
reliable enough for testing across several devices and platforms?
Another crucial area in mobile application development is security, but
existing security research on PWAs is minimal. For instance, application
distribution platforms such as Apple’s App Store scan applications for
malicious code, creating a safe application market. On the web, however, no
such controls exist. We propose investigating how application content can be
controlled on the web, the consequences of a free application market, and how
possible negative consequences can be mitigated.
As accounted for in Section 6.2, we performed the scanning experiment
in this thesis in a controlled environment with perfect conditions. We
suggest a study reconstructing the scanning experiment in a more challenging
environment with authentic products and more demanding conditions.
Moreover, if more severe conditions affect the accuracy of the scan, we
suggest investigating whether advanced camera features such as the torch and
zoom can maintain the performance.
Finally, we suggest extending the geolocation experiment. This thesis
performs geolocation accuracy testing on a very limited scale. More extensive
experimentation, including updating frequency, movement, and accuracy over
a more comprehensive set of times, would amplify the conclusions drawn in
this study.

References

[1] M. Latif, Y. Lakhrissi, E. H. Nfaoui, and N. Es-Sbai, “Cross platform approach for mobile application development: A survey,” in 2016 International Conference on Information Technology for Organizations Development (IT4OD), 2016. doi: 10.1109/IT4OD.2016.7479278 pp. 1–5.

[2] M. Willocx, J. Vossaert, and V. Naessens, “A Quantitative Assessment of Performance in Mobile App Development Tools,” in 2015 IEEE International Conference on Mobile Services, 2015. doi: 10.1109/MobServ.2015.68 pp. 454–461. [Online]. Available: https://ieeexplore.ieee.org/document/7226724

[3] ——, “Comparing Performance Parameters of Mobile App Development Strategies,” in Proceedings of the International Conference on Mobile Software Engineering and Systems, ser. MOBILESoft ’16. New York, NY, USA: Association for Computing Machinery, 2016. doi: 10.1145/2897073.2897092. ISBN 9781450341783 pp. 38–47. [Online]. Available: https://doi.org/10.1145/2897073.2897092

[4] M. E. Joorabchi, A. Mesbah, and P. Kruchten, “Real Challenges in Mobile App Development,” in 2013 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, 2013. doi: 10.1109/ESEM.2013.9 pp. 15–24.

[5] T. Steiner, “What is in a Web View: An Analysis of Progressive Web App Features When the Means of Web Access is Not a Web Browser,” in Companion Proceedings of The Web Conference 2018, ser. WWW ’18. Republic and Canton of Geneva, CHE: International World Wide Web Conferences Steering Committee, 2018. doi: 10.1145/3184558.3188742. ISBN 9781450356404 pp. 789–796. [Online]. Available: https://doi.org/10.1145/3184558.3188742

[6] H. Heitkötter, S. Hanschke, and T. A. Majchrzak, “Evaluating Cross-Platform Development Approaches for Mobile Applications,” in Web Information Systems and Technologies, J. Cordeiro and K.-H. Krempels, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. ISBN 978-3-642-36608-6 pp. 120–138.

[7] R. Fredrikson, “Emulating a Native Mobile Experience with Cross-platform Applications,” Master’s thesis, KTH Royal Institute of Technology, 2018. [Online]. Available: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-234312

[8] MDN Web Docs. (2021) Introduction to progressive web apps. Accessed: 2021-09-17. [Online]. Available: https://developer.mozilla.org/en-US/docs/Web/Progressive_web_apps/Introduction

[9] T. A. Majchrzak, A. Biørn-Hansen, and T.-M. Grønli, “Progressive Web Apps: the Definite Approach to Cross-Platform Development?” in HICSS, 01 2018. doi: 10.24251/HICSS.2018.718. [Online]. Available: https://www.researchgate.net/publication/323380596_Progressive_Web_Apps_the_Definite_Approach_to_Cross-Platform_Development

[10] I. Dalmasso, S. K. Datta, C. Bonnet, and N. Nikaein, “Survey, comparison and evaluation of cross platform mobile application development tools,” in 2013 9th International Wireless Communications and Mobile Computing Conference (IWCMC), 2013. doi: 10.1109/IWCMC.2013.6583580 pp. 323–328.

[11] A. Ahmad, K. Li, C. Feng, S. M. Asim, A. Yousif, and S. Ge, “An Empirical Study of Investigating Mobile Applications Development Challenges,” IEEE Access, vol. 6, pp. 17711–17728, 2018. doi: 10.1109/ACCESS.2018.2818724

[12] A. Biørn-Hansen, T.-M. Grønli, and G. Ghinea, “Animations in Cross-Platform Mobile Applications: An Evaluation of Tools, Metrics and Performance,” Sensors, vol. 19, no. 9, 2019. doi: 10.3390/s19092081. [Online]. Available: https://www.mdpi.com/1424-8220/19/9/2081

[13] Statcounter. Statcounter Global Stats - Mobile Operating System Market Share Worldwide. Accessed: 2021-09-06. [Online]. Available: https://gs.statcounter.com/os-market-share/mobile/worldwide

[14] Statcounter. Statcounter Global Stats - Mobile Operating System Market Share Sweden. Accessed: 2021-09-06. [Online]. Available: https://gs.statcounter.com/os-market-share/mobile/sweden

[15] C. Fournier, “Comparison of Smoothness in Progressive Web Apps and Mobile Applications on Android,” Master’s thesis, KTH Royal Institute of Technology, 2020. [Online]. Available: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-283653

[16] A. Biørn-Hansen, C. Rieger, T.-M. Grønli, T. A. Majchrzak, and G. Ghinea, “An empirical investigation of performance overhead in cross-platform mobile development frameworks,” Empirical Software Engineering, 07 2020. doi: 10.1007/s10664-020-09827-6. [Online]. Available: https://link.springer.com/article/10.1007/s10664-020-09827-6

[17] F. Johannsen, “Progressive Web Applications and Code Complexity - An analysis of the added complexity of making a web application progressive,” Master’s thesis, Linköping University, 2018. [Online]. Available: http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-149496

[18] T. Dorfer, L. Demetz, and S. Huber, “Impact of mobile cross-platform development on cpu, memory and battery of mobile devices when using common mobile app features,” Procedia Computer Science, vol. 175, pp. 189–196, 2020. doi: https://doi.org/10.1016/j.procs.2020.07.029 The 17th International Conference on Mobile Systems and Pervasive Computing (MobiSPC), The 15th International Conference on Future Networks and Communications (FNC), The 10th International Conference on Sustainable Energy Information Technology. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1877050920317099

[19] V. Yberg, “Native-like Performance and User Experience with Progressive Web Apps,” Master’s thesis, KTH Royal Institute of Technology, 2018. [Online]. Available: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235389

[20] R. Fransson and A. Driaguine, “Comparing Progressive Web Applications with Native Android Applications: An evaluation of performance when it comes to response time,” Bachelor’s thesis, Linnaeus University, 2017. [Online]. Available: http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-64764

[21] N. Walliman, Research Methods: The Basics, 01 2011. ISBN 9780415489942

[22] A. I. Wasserman, “Software Engineering Issues for Mobile Application Development,” in Proceedings of the FSE/SDP Workshop on Future of Software Engineering Research, ser. FoSER ’10. New York, NY, USA: Association for Computing Machinery, 2010. doi: 10.1145/1882362.1882443. ISBN 9781450304276 pp. 397–400. [Online]. Available: https://doi.org/10.1145/1882362.1882443

[23] Statista. Cross-platform mobile frameworks used by software developers worldwide from 2019 to 2021. Accessed: 2021-09-24. [Online]. Available: https://www.statista.com/statistics/869224/worldwide-software-developer-working-hours/

[24] L. Delía, P. Thomas, L. Corbalan, J. F. Sosa, A. Cuitiño, G. Cáseres, and P. Pesado, “Development Approaches for Mobile Applications: Comparative Analysis of Features,” in Intelligent Computing, K. Arai, S. Kapoor, and R. Bhatia, Eds. Cham: Springer International Publishing, 2019, pp. 470–484. [Online]. Available: https://link.springer.com/chapter/10.1007/978-3-030-01177-2_34

[25] R. Abrahamsson and D. Berntsen, “Comparing modifiability of React Native and two native codebases,” Master’s thesis, Linköping University, 2017. [Online]. Available: http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139228

[26] I. Malavolta, “Beyond Native Apps: Web Technologies to the Rescue! (Keynote),” in Proceedings of the 1st International Workshop on Mobile Development, ser. Mobile! 2016. New York, NY, USA: Association for Computing Machinery, 2016. doi: 10.1145/3001854.3001863. ISBN 9781450346436 pp. 1–2. [Online]. Available: https://doi.org/10.1145/3001854.3001863

[27] O. Adetunji, C. Ajaegbu, and O. Nzechukwu, “Dawning of Progressive Web Applications (PWA): Edging Out the Pitfalls of Traditional Mobile Development,” American Scientific Research Journal for Engineering, Technology, and Sciences, vol. 68, pp. 85–99, 05 2020. [Online]. Available: https://asrjetsjournal.org/index.php/American_Scientific_Journal/article/view/5812

[28] P. R. M. Andrade, A. Albuquerque, O. F. Frota, R. V. Silveira, and F. A. D. Silva, “Cross platform app: a comparative study,” ArXiv, vol. abs/1503.03511, 2015.

[29] C. Rahul Raj and S. B. Tolety, “A study on approaches to build cross-platform mobile applications and criteria to select appropriate approach,” in 2012 Annual IEEE India Conference (INDICON), 2012. doi: 10.1109/INDCON.2012.6420693 pp. 625–629. [Online]. Available: https://ieeexplore.ieee.org/document/6420693

[30] A. Biørn-Hansen, T. A. Majchrzak, and T. Grønli, “Progressive Web Apps: The Possible Web-native Unifier for Mobile Development,” in Proceedings of the 13th International Conference on Web Information Systems and Technologies - WEBIST, INSTICC. SciTePress, 2017. doi: 10.5220/0006353703440351. ISBN 978-989-758-246-2. ISSN 2184-3252 pp. 344–351.

[31] A. Biørn-Hansen, T. A. Majchrzak, and T. Grønli, “Progressive Web Apps for the Unified Development of Mobile Applications,” in Web Information Systems and Technologies, T. A. Majchrzak, P. Traverso, K.-H. Krempels, and V. Monfort, Eds. Cham: Springer International Publishing, 2018. ISBN 978-3-319-93527-0 pp. 64–86. [Online]. Available: https://link.springer.com/chapter/10.1007/978-3-319-93527-0_4

[32] A. Osmani. (2015) Getting Started with Progressive Web Apps. Accessed: 2021-09-16. [Online]. Available: https://developers.google.com/web/updates/2015/12/getting-started-pwa

[33] MDN Web Docs. (2021) Service Worker API. Accessed: 2021-09-17. [Online]. Available: https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API

[34] A. Gambhir and G. Raj, “Analysis of Cache in Service Worker and Performance Scoring of Progressive Web Application,” 2018 International Conference on Communications (COMM), pp. 01–06, 2018. [Online]. Available: https://ieeexplore.ieee.org/document/8484832

[35] Google Developers. Workbox. Accessed: 2021-09-17. [Online]. Available: https://developers.google.com/web/tools/workbox/

[36] MDN Web Docs. (2021) Web app manifests. Accessed: 2021-09-17. [Online]. Available: https://developer.mozilla.org/en-US/docs/Web/Manifest

[37] ——. (2021) Structural overview of progressive web apps. Accessed: 2021-09-25. [Online]. Available: https://developer.mozilla.org/en-US/docs/Web/Progressive_web_apps/Structural_overview

[38] A. Osmani and M. Gaunt. (2020) Instant Loading Web Apps with an Application Shell Architecture. Accessed: 2021-09-25. [Online]. Available: https://developers.google.com/web/updates/2015/11/app-shell

[39] K. Behl and G. Raj, “Architectural Pattern of Progressive Web and Background Synchronization,” in 2018 International Conference on Advances in Computing and Communication Engineering (ICACCE), 2018. doi: 10.1109/ICACCE.2018.8441701 pp. 366–371.

[40] O. Axelsson and F. Carlström, “Evaluation Targeting React Native in Comparison to Native Mobile Development,” Master’s thesis, Lund University, 2016, student paper.

[41] N. Hansson and T. Vidhall, “Effects on performance and usability for cross-platform application development using React Native,” Master’s thesis, Linköpings universitet, 2016. [Online]. Available: http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-130022

[42] Statista. Most used web frameworks among developers worldwide, as of 2021. Accessed: 2021-10-13. [Online]. Available: https://www.statista.com/statistics/1124699/worldwide-developer-survey-most-used-frameworks-web/

[43] Apple. SwiftUI. Accessed: 2021-10-15. [Online]. Available: https://developer.apple.com/xcode/swiftui/

[44] MDN Web Docs. (2021) Navigator.serviceWorker. Accessed: 2021-11-29. [Online]. Available: https://developer.mozilla.org/en-US/docs/Web/API/Navigator/serviceWorker

[45] ——. (2021) Cache. Accessed: 2021-11-29. [Online]. Available: https://developer.mozilla.org/en-US/docs/Web/API/Cache

[46] T. Steiner. (2018) PWA Feature Detector. Accessed: 2021-09-16. [Online]. Available: https://github.com/tomayac/pwa-feature-detector

[47] A. Bar. What Web Can Do. Accessed: 2021-09-27. [Online]. Available: https://github.com/NOtherDev/whatwebcando

[48] ——. What Web Can Do Today? Accessed: 2021-11-25. [Online]. Available: https://whatwebcando.today/push-notifications.html

[49] Apple Developers. Xcode Overview: Xcode essentials. Accessed: 2022-01-27. [Online]. Available: https://developer.apple.com/library/archive/documentation/ToolsLanguages/Conceptual/Xcode_Overview/index.html#//apple_ref/doc/uid/TP40010215-CH24-SW1

[50] Apple Inc. XCTest — Create and run unit tests, performance tests, and UI tests for your Xcode project. Accessed: 2021-12-14. [Online]. Available: https://developer.apple.com/documentation/xctest

[51] A. A. Tirodkar and S. S. Khandpur, “EarlGrey: iOS UI Automation Testing Framework,” in Proceedings of the 6th International Conference on Mobile Software Engineering and Systems, ser. MOBILESoft ’19. IEEE Press, 2019, pp. 12–15.

[52] Apple Inc. XCTMetric. Accessed: 2021-11-18. [Online]. Available: https://developer.apple.com/documentation/xctest/xctmetric

[53] ——. XCTClockMetric. Accessed: 2021-11-18. [Online]. Available: https://developer.apple.com/documentation/xctest/xctclockmetric

[54] Apple Developers. Xcode Overview: Measuring Performance. Accessed: 2022-01-27. [Online]. Available: https://developer.apple.com/library/archive/documentation/ToolsLanguages/Conceptual/Xcode_Overview/MeasuringPerformance.html

[55] WebKit. Debugging WebKit. Accessed: 2022-01-10. [Online]. Available: https://webkit.org/debugging-webkit/#processes

[56] C. Rieger and T. A. Majchrzak, “Weighted Evaluation Framework for Cross-Platform App Development Approaches,” in Information Systems: Development, Research, Applications, Education, S. Wrycza, Ed. Cham: Springer International Publishing, 2016, pp. 18–39. [Online]. Available: https://link.springer.com/chapter/10.1007/978-3-319-46642-2_2

[57] ——, “Towards the definitive evaluation framework for cross-platform app development approaches,” Journal of Systems and Software, vol. 153, pp. 175–199, 2019, doi: 10.1016/j.jss.2019.04.001. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0164121219300743

[58] V. Aguirre, L. Delía, P. Thomas, L. Corbalán, G. Cáseres, and J. F. Sosa, “PWA and TWA: Recent Development Trends,” in Computer Science – CACIC 2019, P. Pesado and M. Arroyo, Eds. Cham: Springer International Publishing, 2020, ISBN 978-3-030-48325-8, pp. 205–214. [Online]. Available: https://link.springer.com/chapter/10.1007/978-3-030-48325-8_14

[59] S. Tandel and A. Jamadar, “Impact of Progressive Web Apps on Web App Development,” IJIRSET, pp. 9439–9444, September 2018, doi: 10.15680/IJIRSET.2018.0709021. [Online]. Available: http://www.ijirset.com/upload/2018/september/21_Impact.pdf

[60] A. Bar. What Web Can Do Today? [Online]. Available: https://whatwebcando.today/

[61] T. Kerssens, “Applicability of Progressive Web Apps in Mobile Development,” Master’s thesis, University of Amsterdam, 2019. [Online]. Available: https://staff.fnwi.uva.nl/a.s.z.belloum/MSctheses/MScthesis_Tjarco.pdf

[62] A. I. Khan, A. Al-Badi, and M. Al-Kindi, “Progressive Web Application Assessment Using AHP,” Procedia Computer Science, vol. 155, pp. 289–294, 2019, doi: 10.1016/j.procs.2019.08.041. The 16th International Conference on Mobile Systems and Pervasive Computing (MobiSPC 2019), The 14th International Conference on Future Networks and Communications (FNC-2019), The 9th International Conference on Sustainable Energy Information Technology. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S187705091930955X

[63] I. Malavolta, K. Chinnappan, L. Jasmontas, S. Gupta, and K. A. K. Soltany, “Evaluating the Impact of Caching on the Energy Consumption and Performance of Progressive Web Apps,” ser. MOBILESoft ’20. New York, NY, USA: Association for Computing Machinery, 2020, doi: 10.1145/3387905.3388593, ISBN 9781450379595, pp. 109–119. [Online]. Available: https://dl-acm-org.focus.lib.kth.se/doi/10.1145/3387905.3388593

[64] I. Malavolta, G. Procaccianti, P. Noorland, and P. Vukmirović, “Assessing the Impact of Service Workers on the Energy Efficiency of Progressive Web Apps,” ser. MOBILESoft ’17. IEEE Press, 2017, doi: 10.1109/MOBILESoft.2017.7, ISBN 9781538626696, pp. 35–45. [Online]. Available: https://doi.org/10.1109/MOBILESoft.2017.7

[65] M. Ciman and O. Gaggi, “An empirical analysis of energy consumption of cross-platform frameworks for mobile development,” Pervasive Mob. Comput., vol. 39, pp. 214–230, 2017. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1574119216303170

[66] G. de Andrade Cardieri and L. M. Zaina, “Analyzing User Experience in Mobile Web, Native and Progressive Web Applications: A User and HCI Specialist Perspectives,” ser. IHC 2018. New York, NY, USA: Association for Computing Machinery, 2018, doi: 10.1145/3274192.3274201, ISBN 9781450366014. [Online]. Available: https://doi.org/10.1145/3274192.3274201

[67] A. Charland and B. Leroux, “Mobile Application Development: Web vs. Native,” Commun. ACM, vol. 54, no. 5, pp. 49–53, May 2011, doi: 10.1145/1941487.1941504. [Online]. Available: https://doi.org/10.1145/1941487.1941504

[68] SimilarWeb. Top Apps Ranking. Accessed: 2021-11-23. [Online]. Available: https://www.similarweb.com/apps/top/apple/store-rank/se/all/top-free/iphone/

[69] M. M. Arif, W. Shang, and E. Shihab, “Empirical Study on the Discrepancy between Performance Testing Results from Virtual and Physical Environments,” Empirical Softw. Engg., vol. 23, no. 3, pp. 1490–1518, Jun. 2018, doi: 10.1007/s10664-017-9553-x. [Online]. Available: https://doi.org/10.1007/s10664-017-9553-x

[70] Apple Inc. XCTCPUMetric. Accessed: 2021-11-18. [Online]. Available: https://developer.apple.com/documentation/xctest/xctcpumetric

[71] The Pennsylvania State University. Introduction to ANOVA. Accessed: 2022-01-24. [Online]. Available: https://online.stat.psu.edu/stat500/lesson/10

[72] ——. ANOVA Assumptions. Accessed: 2022-01-24. [Online]. Available: https://online.stat.psu.edu/stat500/lesson/10/10.2/10.2.1

[73] R. Bedre. ANOVA using Python. Accessed: 2022-01-24. [Online]. Available: https://www.reneshbedre.com/blog/anova.html

[74] Statistics How To. Welch’s ANOVA: Definition, Assumptions. Accessed: 2022-01-24. [Online]. Available: https://www.statisticshowto.com/welchs-anova/

[75] ——. Kruskal Wallis H Test: Definition, Examples, Assumptions, SPSS. Accessed: 2022-01-24. [Online]. Available: https://www.statisticshowto.com/probability-and-statistics/statistics-definitions/kruskal-wallis/

[76] Statology. Welch’s ANOVA in Python. Accessed: 2022-01-24. [Online]. Available: https://www.statology.org/welchs-anova-in-python/

[77] R. Bedre. Kruskal-Wallis test in R. Accessed: 2022-01-24. [Online]. Available: https://www.reneshbedre.com/blog/kruskal-wallis-test.html

[78] Apple Developers. Getting the User’s Location. Accessed: 2021-12-16. [Online]. Available: https://developer.apple.com/documentation/corelocation/getting_the_user_s_location

[79] React Native Community. @react-native-community/geolocation. Accessed: 2021-01-10. [Online]. Available: https://github.com/react-native-geolocation/react-native-geolocation

[80] Apple Developers. TabView | Apple Developer Documentation. Accessed: 2021-12-16. [Online]. Available: https://developer.apple.com/documentation/swiftui/tabview

[81] ——. NavigationView | Apple Developer Documentation. Accessed: 2021-12-16. [Online]. Available: https://developer.apple.com/documentation/swiftui/navigationview

[82] React Navigation. Bottom Tabs Navigator. Accessed: 2021-12-16. [Online]. Available: https://reactnavigation.org/

[83] Apple Developers. ScrollView | Apple Developer Documentation. Accessed: 2021-12-16. [Online]. Available: https://developer.apple.com/documentation/swiftui/scrollview

[84] React Native. ScrollView. Accessed: 2021-12-16. [Online]. Available: https://reactnative.dev/docs/scrollview

[85] WebKit Bugzilla. Bug 182566: Feature Request: Add support for the ServiceWorkerRegistration’s PushManager interface. Accessed: 2022-01-31. [Online]. Available: https://bugs.webkit.org/show_bug.cgi?id=182566

[86] M. Rousavy. React Native Camera. Accessed: 2022-02-01. [Online]. Available: https://github.com/react-native-camera/react-native-camera

[87] PropertyCross - Helping developers select a framework for cross-platform mobile development. Accessed: 2021-10-07. [Online]. Available: https://github.com/tastejs/PropertyCross

[88] Apple Inc. measure. Accessed: 2022-01-21. [Online]. Available: https://developer.apple.com/documentation/xctest/xctestcase/1496290-measure

[89] ——. XCTMemoryMetric. Accessed: 2021-11-18. [Online]. Available: https://developer.apple.com/documentation/xctest/xctmemorymetric

Appendix A

Common application features

Rank Application Version Navigation Geolocation Scroll View

1. Kivra 4.32.0 ✓ ✗ ✓
2. HBO Max: Stream TV and Movies 50.60.1 ✓ ✓ ✓
3. Clash Mini Unknown ✓ ✓ ✓
4. BestSecret 6.95.1 ✓ ✗ ✓
5. Microsoft Teams 3.19.0 ✓ ✓ ✓
6. Klarna | Shop now. Pay later 21.46.85 ✓ ✓ ✓
7. BankID Security App 7.25.0 ✓ ✓ ✓
8. WhatsApp Messenger 2.21.221 ✓ ✓ ✓
9. Swish 5.4.1 ✓ ✗ ✓
10. Google Maps 5.85 ✓ ✗ ✓
11. Freja eID — My ID in an app 8.10.0 ✓ ✓ ✓
12. PostNord — Track your parcels 8.8.1 ✓ ✓ ✓
13. Budbee 2.0.9 ✓ ✗ ✓
14. YouTube: Watch, Listen, Stream 16.45.4 ✓ ✓ ✓
15. Gmail — Email by Google 6.0.211101 ✓ ✓ ✓
16. Instagram 214.0 ✓ ✓ ✓
17. Facebook 345.0 ✓ ✓ ✓
18. Spotify New Music and Podcasts 8.6.82 ✓ ✓ ✓
19. Microsoft Outlook 4.2146.0 ✓ ✓ ✓
20. Messenger 339.0 ✓ ✓ ✓
21. TikTok 22.0.0 ✓ ✓ ✓
22. Snapchat 11.55.0.39 ✓ ✓ ✓
23. EasyPark — Parking made easy 15.30.0 ✓ ✓ ✓
24. Anyfin 1.43.0 ✓ ✓ ✓
25. Google 187.0 ✓ ✓ ✓

Table A.1: A table describing the feature implementation status for Apple App Store’s top 25 free applications in Sweden. The features are notated with a check mark (✓) if the application implements them and with a cross (✗) if not.

Appendix B

Bartlett’s test results

Experiment Accuracy Clock monotonic time CPU time RAM Computed RAM

iPhone 8
Geolocation ✗ 1.0 × 10^-199 7.3 × 10^-45 1.9 × 10^-18 3.9 × 10^-14
Navigation — 8.9 × 10^-92 0.04 2.7 × 10^-34 4.5 × 10^-51
Scanning — 2.7 × 10^-116 2.1 × 10^-11 0.03 0.04
Scrolling — 1.7 × 10^-8 1.6 × 10^-8 1.3 × 10^-37 1.6 × 10^-37

iPhone 13
Geolocation 0 8.2 × 10^-160 1.3 × 10^-46 6.2 × 10^-49 2.1 × 10^-64
Navigation — 4.1 × 10^-24 1.5 × 10^-10 1.8 × 10^-55 1.9 × 10^-53
Scanning — 6.9 × 10^-127 1.7 × 10^-6 3.6 × 10^-18 2.0 × 10^-18
Scrolling — 0.01 9.9 × 10^-17 8.0 × 10^-44 0

Table B.1: Bartlett’s p-values per device, experiment, and metric. Entries are notated with a hyphen (—) if the metric is not measured in the experiment, or a cross (✗) if all the collected values are equal.
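The p-values in Table B.1 come from Bartlett’s test for homogeneity of variances, which informs the choice between classic ANOVA and alternatives such as Welch’s ANOVA or the Kruskal-Wallis test. As a minimal sketch of how such values can be computed, assuming Python with SciPy; the group data below are synthetic placeholders, not the thesis measurements:

```python
from scipy import stats

# One sample of a metric (e.g. clock monotonic time, in seconds) per approach.
# These numbers are illustrative placeholders, not measured values.
pwa = [1.21, 1.35, 1.28, 1.40, 1.33]
react_native = [0.95, 1.02, 0.98, 1.05, 1.00]
native_ios = [0.61, 0.63, 0.60, 0.64, 0.62]

# Bartlett's test checks the equal-variance assumption of classic ANOVA;
# a very small p-value (as in most cells of Table B.1) means the assumption
# does not hold, motivating Welch's ANOVA or the Kruskal-Wallis test instead.
statistic, p_value = stats.bartlett(pwa, react_native, native_ios)
print(f"Bartlett statistic = {statistic:.3f}, p-value = {p_value:.4f}")
```

Note that Bartlett’s test is itself sensitive to departures from normality; Levene’s test (`scipy.stats.levene`) is a common, more robust alternative when normality is doubtful.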
TRITA – EECS-EX-2022:66

www.kth.se
