Simulation Modelling Practice and Theory 103 (2020) 102108


Modelling and Simulation of QoS-Aware Service Selection in Cloud Computing

Mona Eisa^a, Muhammad Younas^a,*, Kashinath Basu^a, Irfan Awan^b

^a Oxford Brookes University, Oxford, UK
^b University of Bradford, Bradford, UK

ARTICLE INFO

Keywords: QoS modelling; multi-criteria decision making; simulation of cloud services

ABSTRACT

The cloud service selection process is significantly challenging and complicated, as there are various QoS factors to consider when selecting cloud services. This paper proposes a new QoS-aware selection model that systematically and succinctly represents QoS attributes which cloud consumers can easily use and understand when selecting cloud services. In order to ensure the credibility of the cloud service selection, the proposed model collects QoS data from different sources including cloud providers, users' reviews and cloud monitoring tools. It implements Multi-Criteria Decision Making models in order to rank services based on various QoS attributes. The proposed model is implemented as a simulation tool which is deployed on the Amazon cloud platform. Using the simulation tool, the proposed model is rigorously evaluated through a number of experiments by taking into account data from widely used commercial cloud service providers. The experimental results show that the proposed model ranks and selects cloud services according to the QoS requirements of cloud service consumers. Unlike existing approaches, the proposed model takes into account multi-level QoS attributes and data from real cloud providers when ranking cloud services.

1. Introduction

Cloud computing provides on-demand computing services such as compute power, storage, servers, databases, networking, software applications and software development environments over the Internet (Cui et al. [3]). A cloud service can be defined as a service which resides on a remote cloud infrastructure and is made available to consumers over the Internet using different provisioning models such as Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) (Janssen et al. [15]).
Cloud computing has revolutionized the way small, medium and large-scale businesses provide and consume IT services. It provides various benefits such as reduced IT expenditure, on-demand services, a large pool of resources, and scalability and elasticity of services. The use and growth of cloud computing has increased exponentially [4]. With such growing demand and popularity of cloud computing, more and more cloud service providers are entering the cloud service market and offering various types of cloud services. Realizing the benefits of cloud computing, an increasing number of businesses and organizations want to outsource their in-house IT services to cloud infrastructure.
There exist a large number of cloud service providers in the market such as Amazon, IBM, Alibaba, Google Cloud, Microsoft Azure

Corresponding author.

E-mail addresses: mona.eisa-2015@brookes.ac.uk (M. Eisa), m.younas@brookes.ac.uk (M. Younas), kbasu@brookes.ac.uk (K. Basu),
i.u.awan@bradford.ac.uk (I. Awan).

https://doi.org/10.1016/j.simpat.2020.102108
Received 3 April 2020; Received in revised form 23 April 2020; Accepted 24 April 2020
Available online 21 May 2020
1569-190X/ © 2020 Elsevier B.V. All rights reserved.

and Oracle. On one hand, such a large number of cloud service providers and service models means more choices, better services and possible (financial) savings for cloud service consumers. On the other hand, it makes it significantly difficult and complicated for consumers to select the services they need to fulfil their IT requirements. This gives rise to various complicated research problems, such as the large number of cloud service options, varying levels of Quality of Service (QoS) attributes among different cloud service providers, the absence of a standard service structure and measurement for QoS, and different business models.
Given the large number of cloud service providers, the challenge for cloud service consumers is to select the provider that meets their requirements. However, the process of cloud service selection is not trivial. Various factors contribute to its complexity. First, the level of knowledge and understanding of cloud service requirements varies significantly among consumers. Second, there is a large number of cloud service providers offering different services, which makes it even more difficult for consumers to choose the desired services. Third, cloud service providers deliver services with varying levels of QoS attributes such as performance, security, reliability, and price, which makes service selection significantly difficult. Fourth, the service structure and representation differ for each cloud service provider, and each provider gives a different interpretation of the QoS attributes associated with its services. For instance, Google Cloud and Microsoft Azure have different specifications for the same QoS attributes. Taking the QoS attributes into account thus complicates the service selection process further. Existing models and simulation systems (Yaghoubi et al. [1], [2]) do not cater for these challenges of cloud service selection.
This paper aims to address the above challenges involved in the QoS-aware cloud service selection. The main contributions are
summarised as follows:

• Development of a new model that systematically represents cloud services' QoS attributes using a 3-level architecture. This is distinct from existing models [18], which commonly use a simplified one-level representation of QoS attributes. Our model helps cloud service consumers easily understand and define their requirements. It also ensures that cloud services are appropriately ranked and selected.
• In order to optimally select cloud services, the proposed model collects QoS information from three different sources: cloud service providers, third-party monitoring tools, and users' reviews. This ensures the credibility and correctness of cloud service selection, as the decision is not based on a single source of information, which is the case in existing approaches.
• Development of a simulation tool that implements and simulates the proposed model and Multi-Criteria Decision Making (MCDM) models in order to rank services based on various QoS attributes. Experiments and evaluation show that the proposed model (i) improves the cloud service selection process, (ii) takes into account the level of the user's knowledge of cloud computing technologies, and (iii) simplifies the selection process for cloud service consumers.

The remainder of the paper is organised as follows: Section 2 presents a review and analysis of related research work on cloud service selection. Section 3 describes the proposed hierarchical model of the QoS attributes associated with cloud services. Section 4 illustrates the Multi-Criteria Decision Making models of the Analytical Hierarchy Process and Simple Additive Weighting for ranking and prioritising cloud services. Section 5 presents the design of the simulation tool for cloud service selection. Section 6 describes the implementation and working mechanism of the simulation tool. Results and evaluation are presented in Section 7. Section 8 concludes the paper.

2. Related Work

The review of related work is classified into two categories. The first category covers academic research related to cloud service selection. The second category covers commercial tools developed for cloud service selection. Note that this paper is the first of its kind to review both (academic) research papers and commercial tools related to cloud service selection.

2.1. Analysis of Academic Research Studies

Yaghoubi et al. [1] address the issue of web service composition in cloud computing. They take QoS aspects into account when composing different services and develop a multi-verse optimization algorithm for web service composition. The algorithm is simulated and results are produced, which are claimed to increase the normalized QoS of service composition. This approach mainly focuses on web service composition and does not take into account the major issues and characteristics of cloud service selection described above. The work in [2] investigates logistics service selection in the domain of cloud manufacturing. The authors analyse the characteristics of logistics services and develop a mathematical model for the selection of such services. The working mechanism and results of the proposed model are simulated, and the model is claimed to optimise task delivery time in the cloud manufacturing environment. However, the application of this model is narrow and limited to logistics services. It also does not take into account wider QoS issues such as performance, security, and so on.
The authors in (Sun et al., 2014) surveyed a number of research studies on cloud service selection. Li and Yang [20] suggested that cloud service selection is an important element in the process of adopting cloud technology. However, the diversity of services among the numerous cloud providers complicates the cloud service selection process (Rehman and Hussain, 2011; [21]). Li and Yang [20] compared the performance and cost of four major cloud service providers, namely Google AppEngine, Microsoft Azure, Amazon AWS, and Rackspace, some of the leaders of the cloud computing service market. The result of this comparison shows that, even for the same type of cloud service, cloud providers' cost and performance can vary from one another. This result emphasizes the need for a thorough and thoughtful cloud service provider selection process.
In cloud computing, QoS denotes, for example, the level of performance, reliability, security and availability of a cloud service and of the infrastructure upon which the service is hosted [19]. For cloud service consumers, QoS is important because they expect the provider to deliver the advertised quality of its services. For cloud service providers, QoS helps in finding the best trade-off between QoS levels and operational costs [19]: the middle ground where they can deliver the service at an acceptable level while still maintaining the operational costs needed to produce and/or deliver it.
Previous studies have reviewed and analysed the QoS attributes used in cloud service selection. Some studies use a single quality attribute while others use multi-dimensional QoS attributes. Salama et al. [5] propose cloud service selection based on a group of broad, multi-dimensional QoS attributes. Where single-attribute selection is based on performance or cost-benefit analysis, multi-dimensional QoS selection takes into account the user's requirements, performance attributes, as well as business profitability. Araujo et al. [28] use a range of attributes, primarily in the reliability domain, to enable comparison between complex cloud services with different redundancy and maintenance policies. The process involves first creating a hierarchical cloud base model using Reliability Block Diagrams (RBDs) and Stochastic Petri Nets (SPNs). The model includes various configurable attributes for repair, failure, redundancy and maintenance that can be used to create various customer requirement scenarios. An MCDM technique is then used to assess and rank the scenarios based on the dependability metrics of availability, reliability and downtime, and the associated cost.
Furthermore, some current approaches make use of a set of generic cloud QoS attributes called the SMI (Service Measurement Index), which was proposed by the Cloud Service Measurement Initiative Consortium (CSMIC) [6]. Garg et al. [7] and [8] developed SMICloud, a framework for comparing and ranking cloud services. SMICloud uses the main SMI attributes to compare cloud service providers based on cloud service consumer requirements. However, one major issue is that current techniques do not provide standard definitions and representations of cloud services and their QoS attributes. They also do not consider interdependency relationships among criteria. In addition, these techniques are complex for cloud service consumers with limited knowledge of cloud technologies; the majority of academic research assumes that cloud service consumers are knowledgeable and expert. Although the SMI is a useful step towards standardizing the key performance indicators for measuring and comparing services, some of the quality attributes can be quite subjective and their measurements can be perceived differently by different cloud service consumers or providers. For instance, the price of the same service under the same conditions may be perceived as cheap by some while others consider it expensive.
Sun et al. [10] adopt the Analytical Hierarchy Process (AHP) in the process of selecting cloud services using a defined set of selection criteria and a limited number of cloud service providers. Variations of AHP are also utilized in cloud service selection. For example, fuzzy logic and AHP are combined into a Fuzzy AHP technique in order to select the optimal cloud path for cloud offloading in mobile cloud computing environments (Singla and Kaushal [22]). Other research studies use AHP in combination with other MCDM techniques. A combination of AHP and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is utilized in a fuzzy environment to find the best and most optimal cloud service option (Wu, Wang and Wolter [23]). Ouadah et al. [12] combine the MCDM methodologies AHP and the Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE) with a database operator called the Skyline operator to enhance web service selection.

2.2. Analysis of Commercial Service Selection Tools

Different commercial tools have been developed in order to help cloud service consumers in selecting cloud services. A detailed analysis and comparison of these commercial tools is given in our earlier paper (Eisa et al. [13]). This section presents an analysis of three well-known commercial cloud service selection tools: Intel Cloud Finder, RankCloudz and Cloudorado.

2.2.1. Intel Cloud Finder


Intel Cloud Finder is a cloud service selection tool built by Intel. It offers two types of searches, Quick Search and Detailed Search, each presenting a different set of cloud QoS attributes [25]. With Quick Search, a cloud service consumer can quickly find a cloud service that satisfies their requirements by choosing features that closely match their specification. Detailed Search gives consumers more flexibility to determine the importance of the attributes presented to them. Similar to Quick Search, the attributes are categorised so that each contains sub-attributes, and each sub-attribute has several options, all with a brief description of the corresponding features. The parameters are Security, Usability, Quality, Availability, Technology and Business. However, users are presented with an overwhelmingly large number of features and options and, in the selection process, need to review all of them in order to narrow down the candidate cloud providers. In addition, some of the explanations offered for the QoS attributes are not related to the attribute itself and may give incorrect information.

2.2.2. RankCloudz
RankCloudz's selection process requires cloud service consumers to determine the level of importance of a set of attributes. There are five different cloud services a consumer can choose from: Dev & Test Infrastructure, Virtual Data Centre, Enterprise Apps & Hosting, Storage & Backup, and Big Data & Analytics. Each service has its own set of attributes, ranging from 11 to 14 attributes [27]. The level of importance of an attribute in RankCloudz is represented on a scale from 0 to 10. RankCloudz ranks the cloud service providers depending on the attributes and the priority the consumer has set for each attribute. Though the process of assigning a score that reflects the level of importance is quite straightforward, RankCloudz does not provide an explanation for each of these attributes up front. Novice cloud service consumers might find the process difficult because of this lack of descriptions. To gain access to explanations of the QoS attributes, the consumer needs to sign up, sign in, and then download an in-depth report in the form of a PDF file. This also means that the QoS explanations are only available after the selection process is done, not before, when the consumer needs them most. Further, RankCloudz presents technical and business requirements together in one non-hierarchical structure; because of this flat structure of attributes, there is no hierarchical flexibility.

2.2.3. Cloudorado
Cloudorado's selection process works like Intel Cloud Finder's Quick Search: the cloud service consumer chooses an option for each attribute that matches their requirements [26]. Cloudorado distinguishes three types of cloud services: Cloud Server, Cloud Hosting and Cloud Storage. For each cloud service, Cloudorado presents the consumer with many detailed attributes, and it also provides a brief description of each. The Cloud Server and Cloud Hosting categories present the same non-functional attributes to consumers but different functional attributes. Further, Cloudorado presents attributes in a subtle hierarchical structure: the hierarchy is not explicitly written, but it is apparent in the user interface, where Cloudorado displays functional attributes as one group and non-functional attributes as another. Cloudorado further divides the non-functional attributes into more detailed categories, resulting in a 2-level hierarchy for them. The consumer is not provided with any flexibility to choose an input level in the hierarchy, and the large number of attributes presented makes the selection process difficult.

3. Modelling of QoS for Cloud Service Selection

This section presents the 3-level hierarchical model for representing the QoS attributes of cloud services (Eisa, Younas and Basu [14]). This hierarchical model fits well with the hierarchical structure of the AHP, which is used in the MCDM-based cloud service selection process. The model is composed of seven top-level attributes at level one, each of which is decomposed into a more granular list of attributes at level two. A consumer selects level-one and level-two attributes by doing pairwise comparisons with the other attributes of that level. Level-three attributes consist of low-level cloud metrics and are intended for experienced consumers. The hierarchical model provides an adaptive cloud service selection interface: a novice consumer may only perform pairwise comparison of the high-level attributes of level one, whereas a more experienced consumer may perform further pairwise comparisons at level two and also select detailed attributes at level three. This is distinct from existing models [18], which commonly use a simplified one-level representation of QoS attributes. The proposed model helps cloud service consumers easily understand and define their requirements. It represents QoS attributes in a way that helps consumers match their requirements with those of the cloud services. The proposed model also ensures that cloud services are appropriately ranked and selected by taking into account three levels of QoS attributes.
The proposed model follows the phases below in order to structure and represent the QoS attributes of cloud services.

3.1. Categorisation of the QoS Attributes

Cloud computing involves many QoS attributes related to the characteristics and delivery of cloud services. QoS attributes can relate to the technical aspects of cloud services as well as to their business aspects. Because of the large number of possible cloud QoS attributes, categorizing them in a structured way is important, as it helps in organising these attributes. In order to cater for technical as well as business aspects of cloud services, the proposed model organises QoS attributes into four categories: Technical, Strategic & Organizational, Economic, and Political & Legislative. This categorization scheme is adopted from Baldwin et al. [16] and Janssen and Joha [15]. However, the focus of the work presented in these papers is different from that of the proposed model. For instance, Baldwin et al. look at the prospect of outsourcing a banking information system, while Janssen et al. look at the prospect of adopting cloud computing technology, specifically SaaS. What the proposed model shares with the models presented in [15] and [16] is 'technology adoption by an organization'. The main objective of cloud service selection is to help businesses and organizations select cloud services and thus adopt cloud technology; the proposed model therefore adopts the aforementioned four-category categorization scheme. Further, it is paramount to include business-related considerations in the service selection process: cloud computing technology has not only technical but also business aspects, so in the proposed model QoS attributes are considered to cover both.
Furthermore, from our literature review of cloud QoS attributes, we conclude that different providers, tools and studies use different ways to structure or categorize QoS attributes. Selecting a categorization scheme that covers a broader and more general scope, the proposed model categorizes cloud service QoS attributes into four top categories: Technical, Strategic & Organizational, Economic, and Political & Legislative. Each of the four categories is briefly described as follows (Polyviou, Pouloudi and Rizou [24]):


Figure 1. Three Hierarchical Levels Representation of the Performance and Security Attributes

• Technical: factors related to the capabilities and limitations of the technology, which include Security, Usability, Assurance and Performance.
• Strategic & Organizational: factors related to an enterprise's organizational and strategic goals, which include Company performance.
• Economic: factors related to the financial aspects of cloud services, which include Pricing.
• Political & Legislative: compliance with standards, which includes Compliance.
3.2. Structure and Representation of the QoS Attributes

This phase represents the QoS attributes in a 3-level hierarchy for two reasons. First, as described above, a hierarchical structure provides flexibility to consumers. This ties in with the goal of the proposed model of taking into account the user's knowledge of cloud computing. Second, with a hierarchical structure it is easy to place the more general attributes at the top level and the more detailed attributes at the lower levels. In so doing, the proposed model can present novice users with top-level, general QoS attributes and expert consumers with the more detailed attributes situated at the lower levels. This kind of flexibility is not provided by the three existing selection tools reviewed in the previous section.
The proposed framework uses a three-level hierarchy with seven top-level attributes: Security, Usability, Assurance, Performance, Company performance, Pricing, and Compliance. An example 3-level structure of the Performance and Security attributes in the proposed model is graphically represented in Figure 1. The different levels are illustrated as follows. Performance is specified as a main (root) QoS parameter at level one. It has four sub-parameters (Hardware, Functionality, Flexibility, and Scalability) at level two, each of which has further sub-parameters at level three; for example, Hardware has the sub-parameters Disk, OS, CPU Type and GPU. Similarly, Security is specified as a main parameter at level one. It has the sub-parameters Access control, Data security, Geography, and Auditability at level two, each of which has further sub-parameters at level three; for example, Access control has the sub-parameters Authentication and Authorisation.
This structure of the QoS attributes, shown in Figure 1, is then used in the implementation of the simulation tool, wherein data about QoS attributes are gathered from a combination of cloud providers' websites, users' review scores, and cloud monitoring tools. Other QoS attributes are described in the evaluation part of this paper.
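As a concrete illustration, the three-level hierarchy of Figure 1 can be sketched as a nested mapping with an adaptive lookup that mirrors the novice/expert interface described above. The attribute names follow Figure 1, but the dictionary layout and the `visible_attributes` helper are illustrative assumptions, not the tool's actual data model (only the Hardware and Access control branches are populated at level three for brevity):

```python
# Sketch of the 3-level QoS hierarchy from Figure 1 (assumed layout;
# only some level-3 branches are filled in for brevity).
QOS_HIERARCHY = {
    "Performance": {                                      # level 1
        "Hardware": ["Disk", "OS", "CPU Type", "GPU"],    # level 2 -> level 3
        "Functionality": [],
        "Flexibility": [],
        "Scalability": [],
    },
    "Security": {
        "Access control": ["Authentication", "Authorisation"],
        "Data security": [],
        "Geography": [],
        "Auditability": [],
    },
}

def visible_attributes(expertise):
    """Adaptive interface: a novice sees only level-1 attributes, an
    intermediate user sees levels 1-2, and an expert sees all three."""
    if expertise == "novice":
        return list(QOS_HIERARCHY)
    if expertise == "intermediate":
        return [(top, list(subs)) for top, subs in QOS_HIERARCHY.items()]
    return [(top, sub, leaves)
            for top, subs in QOS_HIERARCHY.items()
            for sub, leaves in subs.items()]
```

A novice would then be shown only the top-level attributes (here, Performance and Security), while an expert also reaches the level-3 metrics under each branch.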

4. Model for Cloud Services Ranking

Cloud services are associated with multiple QoS attributes, as described above. The process of cloud service selection can therefore be modelled as an MCDM problem wherein a decision is to be made by taking into account multiple conflicting criteria [9]. Efficient and reliable methods are thus required to rank and prioritise cloud services according to the QoS requirements. This paper adopts the MCDM models of the Analytical Hierarchy Process (AHP) and Simple Additive Weighting (SAW).

4.1. Analytical Hierarchy Process (AHP)

AHP is one of the MCDM (Multi-Criteria Decision Making) models used in analysing complex decisions and in selecting and ranking cloud services [9,18]. AHP takes input from consumers about QoS measurements (e.g., performance, reliability, security, price), wherein consumers have different preferences over the QoS attributes. It transforms qualitative and semi-quantitative measurements into quantitative numeric values, or weights, so that cloud services can be ranked and selected.
AHP is a well-defined and structured MCDM model where the problem is defined in a structured way within a hierarchy. The
hierarchical structure of AHP helps in defining the best structure to represent the QoS attributes. It is also beneficial in representing
QoS attributes in a flexible way in order to adjust to different levels of knowledge possessed by different consumers. For instance, in
the service selection process, when consumers only need general attributes, the proposed model only needs to access the top level
attributes in the hierarchy. Similarly, when consumers need to go deeper into the more-detailed attributes, the proposed model can
access the second and third level attributes in the hierarchy.
Moreover, the proposed model (implementing the AHP method) takes input from consumers in the form of preferences of one attribute over another. Without having to delve into the more technical attributes of a cloud service, a consumer can simply input their preferences, e.g., QoS attribute A is preferred over attribute B, and attribute B is preferred over attribute C, and so on. This simple input is helpful for consumers with limited knowledge of cloud computing. For more expert consumers, the proposed model allows specification of the more technical attributes at levels two and three (as described in the previous section).
The main steps of the AHP method are summarised as follows. These are implemented in the proposed model, as presented in the subsequent sections of this paper.
Step 1: Attributes are set up in pairs and consumers compare the attributes relative to each other, e.g. comparing performance and security. The scale used in pairwise comparison is as follows: 1 (equal importance), 3 (slightly more important), 5 (strongly favour one over the other), 7 (very strongly favour one over the other), and 9 (extremely favour one over the other). For example, if security is set to 9 as compared to performance, the consumer gives extremely high importance to security relative to performance.
Step 2: Construct the pairwise comparison matrix of QoS attributes, in which entry $a_{ij}$ holds the score of comparing attribute $i$ with attribute $j$ (e.g., security with performance). The resulting scores of the pairwise comparisons are represented through a reciprocal matrix:

\[
A = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix} \tag{1}
\]

In this matrix, $a_{ii} = 1$ and $a_{ij} = 1/a_{ji}$.
Step 3: Construct the normalized comparison matrix. Before constructing the normalized matrix, first sum the values in each column of the pairwise matrix. Given that $n$ is the column index of the matrix, the sum of each column is:

\[
S_n = a_{1n} + a_{2n} + \cdots + a_{nn} \tag{2}
\]

The normalized matrix is then obtained by dividing each value by the sum of its column:

\[
A = \begin{pmatrix}
\frac{a_{11}}{S_1} & \frac{a_{12}}{S_2} & \cdots & \frac{a_{1n}}{S_n} \\
\frac{a_{21}}{S_1} & \frac{a_{22}}{S_2} & \cdots & \frac{a_{2n}}{S_n} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{a_{n1}}{S_1} & \frac{a_{n2}}{S_2} & \cdots & \frac{a_{nn}}{S_n}
\end{pmatrix} \tag{3}
\]

Step 4: Obtain the eigenvector (priority vector) and eigenvalue of the QoS attributes. The eigenvector $W$ is obtained by averaging across the rows of the normalized matrix and represents the relative weights of the attributes:

\[
W = \frac{1}{n}\begin{pmatrix}
\frac{a_{11}}{S_1} + \frac{a_{12}}{S_2} + \cdots + \frac{a_{1n}}{S_n} \\
\frac{a_{21}}{S_1} + \frac{a_{22}}{S_2} + \cdots + \frac{a_{2n}}{S_n} \\
\vdots \\
\frac{a_{n1}}{S_1} + \frac{a_{n2}}{S_2} + \cdots + \frac{a_{nn}}{S_n}
\end{pmatrix} \tag{4}
\]

so each element of $W$ is a row average:

\[
r_n = \frac{\frac{a_{n1}}{S_1} + \frac{a_{n2}}{S_2} + \cdots + \frac{a_{nn}}{S_n}}{n} \tag{5}
\]

The eigenvalue $\lambda_{max}$ is obtained from the summation of products between each element of the eigenvector and the corresponding column sum of the matrix:

\[
\lambda_{max} = \sum_{i=1}^{n} S_i \cdot r_i \tag{6}
\]

Step 5: Check comparison consistency through the Consistency Index (CI) and Consistency Ratio (CR):

\[
CI = \frac{\lambda_{max} - n}{n - 1} \tag{7}
\]


Table 1
Random Consistency Index in AHP

n    1  2  3     4    5     6     7     8     9     10
RI   0  0  0.58  0.9  1.12  1.24  1.32  1.41  1.45  1.49

\[
CR = \frac{CI}{RI} \tag{8}
\]

For a reliable result, the CR value should be less than or equal to 0.1. RI is the Random Consistency Index, whose value depends on the number of attributes involved; values for RI are shown in Table 1 [11].
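Steps 2 to 5 can be condensed into a short pure-Python sketch. The pairwise matrix below (security vs. performance vs. usability on Saaty's 1-9 scale from Step 1) is a hypothetical example, not data from the paper's experiments:

```python
# Random Consistency Index values from Table 1, indexed so RI[n] matches n
# attributes (index 0 is a placeholder).
RI = [0, 0, 0, 0.58, 0.9, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]

def ahp_weights(A):
    """AHP Steps 2-5: priority vector and consistency ratio from a
    reciprocal pairwise-comparison matrix A (list of lists)."""
    n = len(A)
    col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]   # S_j, Eq. (2)
    # Eqs. (3)-(5): normalize each column, then average across each row.
    w = [sum(A[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]
    lam_max = sum(col_sums[i] * w[i] for i in range(n))             # Eq. (6)
    ci = (lam_max - n) / (n - 1) if n > 1 else 0.0                  # Eq. (7)
    cr = ci / RI[n] if RI[n] else 0.0                               # Eq. (8)
    return w, cr

# Hypothetical pairwise matrix: security is slightly preferred over
# performance (3) and strongly preferred over usability (5).
A = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
weights, cr = ahp_weights(A)   # weights ≈ [0.63, 0.26, 0.11], CR ≈ 0.05
```

Since CR ≈ 0.05 ≤ 0.1, this hypothetical set of pairwise judgements would pass the Step 5 consistency check.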

4.2. Simple Additive Weighting (SAW)

In the proposed framework, the Simple Additive Weighting (SAW) method is used in conjunction with AHP for the following reasons. First, SAW involves a very simple linear calculation process. This helps in speeding up the processing time of the proposed model, especially when a large amount of data is involved, i.e., scores for different QoS attributes and different cloud services from different providers. Second, the SAW calculation is centred around the weight of each attribute for each provider. The weight of each attribute is already calculated as part of the AHP process (the priority vectors): AHP calculates in detail the weight of each attribute and its sub-attributes, and this weight calculation flows seamlessly into the SAW calculation.
In the proposed approach to cloud service selection, the first step of SAW is to calculate the utility of each QoS attribute for each cloud provider. For cloud provider m, the utility of the nth QoS attribute (Umn) is the sum, over the j sub-parameters of attribute n, of the score of the ith sub-parameter (Smni) multiplied by the weight of the ith sub-parameter (wni):

$$U_{mn} = \sum_{i=1}^{j} S_{mni} \cdot w_{ni} \qquad (9)$$

Once the utility of each attribute is calculated, the next step is to calculate the total utility of cloud provider m (Um), which is the sum of the utilities of its attributes:

$$U_m = \sum_{n=1}^{l} U_{mn} \cdot W_n \qquad (10)$$

where l is the number of attributes and Wn is the weight of attribute n.
When combined with the sub-parameter weight calculation, this gives:

$$U_m = \sum_{n=1}^{l} \sum_{i=1}^{j} S_{mni} \cdot w_{ni} \cdot W_n \qquad (11)$$
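Equations (9) to (11) amount to a nested weighted sum, which can be sketched as follows; all scores and weights below are illustrative and are not taken from the experiments:

```python
# Hypothetical attribute weights W_n, sub-parameter weights w_ni (from AHP),
# and sub-parameter scores S_mni per provider; all values are illustrative.
attr_weights = {"security": 0.5, "usability": 0.2, "performance": 0.3}   # W_n
sub_weights = {
    "security":    {"AC": 0.26, "DS": 0.56, "G": 0.12, "Aud": 0.06},     # w_ni
    "usability":   {"Int": 0.69, "Opr": 0.10, "Lrn": 0.21},
    "performance": {"Hw": 0.06, "Fnc": 0.56, "Flx": 0.26, "Scl": 0.12},
}
scores = {  # S_mni per provider, normalized to the 0-1 range
    "ProviderA": {"security": {"AC": 0.8, "DS": 0.9, "G": 0.5, "Aud": 0.6},
                  "usability": {"Int": 0.7, "Opr": 0.6, "Lrn": 0.8},
                  "performance": {"Hw": 0.5, "Fnc": 0.9, "Flx": 0.7, "Scl": 0.6}},
    "ProviderB": {"security": {"AC": 0.6, "DS": 0.7, "G": 0.9, "Aud": 0.5},
                  "usability": {"Int": 0.9, "Opr": 0.8, "Lrn": 0.6},
                  "performance": {"Hw": 0.8, "Fnc": 0.6, "Flx": 0.5, "Scl": 0.9}},
}

def total_utility(provider_scores):
    """Eq. (11): U_m = sum_n sum_i S_mni * w_ni * W_n."""
    return sum(
        s * sub_weights[attr][sub] * attr_weights[attr]
        for attr, subs in provider_scores.items()
        for sub, s in subs.items()
    )

ranking = sorted(scores, key=lambda m: total_utility(scores[m]), reverse=True)
```

Because the computation is a single linear pass over the scores, it scales straightforwardly to many providers and attributes, which is the speed advantage cited above.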

5. Design and Development of the Simulation Tool

The design and development of the proposed simulation tool is quite complex, as it involves the implementation of various functions and methods, such as automatically collecting or scraping data about cloud service providers from different sources, systematic representation of the QoS attributes, design of multi-level hierarchical user interfaces, and implementation of the QoS-based service ranking methods AHP and SAW. The underlying principle of the model is to read inputs from the consumer, process those inputs using the proposed methodologies, and then present the consumer with a ranking of the cloud services that best match the consumer's preferences.
Taking into account the requirements and complexity of the cloud service selection process, we present the system architecture in
Fig. 2. It comprises the main components of the simulation tool and how they interact with each other in order to implement the main
functions and algorithms required by the cloud service selection process.

5.1. Architecture

The architecture comprises the following main components.

5.1.1. Collection of QoS Data using Scrapers


The Scraper module in Fig. 2 is responsible for (semi-)automatically collecting data about the QoS attributes of different cloud providers from different sources. This is one of the main contributions of this work, as most existing approaches use static data when ranking and selecting cloud services. However, designing a scraper to collect QoS data is a challenging task. First,
the proposed approach is based on scraping data from three different sources, as described below. Second, each cloud provider represents its QoS data differently; there is no uniform representation of QoS data, which makes the design of the scraper more challenging. In the proposed architecture, there are three types of Scraper module, each of which collects data from a different source. Note that we describe only the main functions and data sources used in the design of the Scraper modules; detailed design and algorithms of these modules are beyond the scope of this paper.

Figure 2. Architecture of the Simulation Tool for Cloud Service Selection
Cloud Provider Scraper: This module crawls cloud providers' websites at a set time interval to gather relevant data about certain QoS attributes, such as performance and security. The collected data are then stored in the database (or repository) shown in Fig. 2. The module scrapes data regularly to ensure that the provider repository always holds the latest data. In the proposed model, cloud service data is scraped from eight different cloud service providers using this component. These eight providers were chosen for their leadership position in the industry according to the Gartner Magic Quadrant report on cloud computing [17]: Amazon AWS, Rackspace, Google Cloud, Oracle Cloud, Alibaba Cloud, Microsoft Azure, Fujitsu Cloud and IBM Cloud.
User Reviews Scraper: This module gathers and processes users' review data. User reviews are an important data source as they generally provide a fair assessment of cloud services, obtained from real consumers who have used the services. The scores gathered from user reviews are stored in the repository. According to the literature survey conducted for this research, there are no standard sources that provide user reviews and scores for cloud services. This paper therefore relies on reviews from two well-known global research and advisory organisations, TrustRadius and Gartner, which provide benchmark analytics for Information Technology companies and verified, authentic reviews and ratings for enterprise IT solutions and cloud services.
Service Monitoring Scraper: The service monitor module provides objectivity in the proposed model. It is responsible for collecting data from third-party service monitoring tools in order to gather cloud providers' performance data, which is then stored in the repository. For example, the CloudHarmony monitoring tool (https://cloudharmony.com/) monitors service status, service availability percentage and outages. Other monitoring tools exist, but this paper collects data from CloudHarmony as it is openly available.
The simulation tool is scalable: further categories of data source can be added to, or replace, the current three without any major changes to the core tool itself.
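As a rough illustration of the kind of extraction a scraper must perform, the sketch below parses QoS name/value pairs from a small, entirely hypothetical HTML fragment using Python's standard html.parser. Real provider pages are structured differently from one another, which is exactly why each source needs its own scraper:

```python
from html.parser import HTMLParser

# Hypothetical markup; real provider pages differ in structure and styling.
SAMPLE_PAGE = """
<table>
  <tr><td class="qos-name">availability</td><td class="qos-value">99.95</td></tr>
  <tr><td class="qos-name">memory_gb</td><td class="qos-value">8</td></tr>
</table>
"""

class QoSScraper(HTMLParser):
    """Collects (name, value) pairs from cells tagged qos-name / qos-value."""
    def __init__(self):
        super().__init__()
        self.records = {}
        self._field = None      # class of the <td> we are currently inside
        self._pending = None    # last attribute name seen, awaiting its value

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self._field = dict(attrs).get("class")

    def handle_data(self, data):
        data = data.strip()
        if not data:
            return
        if self._field == "qos-name":
            self._pending = data
        elif self._field == "qos-value" and self._pending:
            self.records[self._pending] = float(data)
            self._pending = None

    def handle_endtag(self, tag):
        if tag == "td":
            self._field = None

scraper = QoSScraper()
scraper.feed(SAMPLE_PAGE)
# scraper.records now maps attribute names to numeric values,
# ready to be written to the repository.
```

In the actual tool the fetched pages would come from the providers' sites at the configured interval; here the input is inlined so the sketch stays self-contained.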

5.1.2. Database (Repository)


It stores all information related to the cloud service selection process: the QoS attributes, data about cloud service providers, users' review data and data from third-party monitoring tools. As described above, the proposed model does not rely on a single data source, which is the case in existing approaches. Relying on a single data source may not produce credible selection results; for instance, service-related QoS data published on a provider's website may be biased, or may not reflect the QoS actually experienced by users. The proposed model therefore draws on three different sources in order to provide more credible results in the service selection process. In addition, the repository contains data resulting from internal processing, namely the cloud providers' scores, which are calculated internally after taking into account all relevant data from the outside sources.
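To make the repository's role concrete, the sketch below shows the kind of document it might hold for one provider, merging the three data sources. Every field name here is an assumption made for illustration, not the tool's actual MongoDB schema:

```python
# Sketch of a provider document as it might be stored in the repository;
# all field names and values are hypothetical.
provider_doc = {
    "provider": "ExampleCloud",
    "instance": "ec-standard-2",
    "scraped": {                      # from the Cloud Provider Scraper
        "price_per_hour_usd": 0.096,
        "memory_gb": 8,
        "encryption": "AES-256",
    },
    "reviews": {                      # from the User Reviews Scraper
        "trustradius_score": 8.4,
        "gartner_score": 4.3,
    },
    "monitoring": {                   # from the Service Monitoring Scraper
        "availability_pct": 99.97,
        "outages_last_90_days": 1,
    },
}

def has_all_sources(doc):
    """A document supports credible ranking only when all three sources are present."""
    return all(k in doc for k in ("scraped", "reviews", "monitoring"))
```

Keeping the three sources in separate sub-documents makes it easy to see, per provider, whether the selection decision rests on one source or on all three.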

5.2. Cloud Service Ranking Algorithms

Figures 3 and 4 respectively show the algorithms of the AHP and SAW methods which are implemented as part of the proposed
simulation tool.

5.3. Usage of the Simulation Tool

The simulation tool comprises three main building blocks which are used in cloud service selection: (i) a client side with the user interface, (ii) a web server that implements the proposed algorithms/methods, and (iii) a NoSQL database server that stores the data.

Figure 3. Algorithm for AHP Method

Figure 4. Simple Additive Weight algorithm

The tool is deployed on a cloud server hosted by Amazon AWS EC2. The virtual server instance used in the
implementation is the Amazon EC2 c5.xlarge. The hardware and software specification include: Processor Intel Xeon Platinum
8124M CPU @ 3.00 GHz; Memory 8 GB; Hard disk 320 GB; OS Windows Server 2012 R2 Standard 64-bit; Web server XAMPP for
Windows version 7.0.24; NoSQL - MongoDB version 3.4.10.
The working mechanism of the tool is explained as follows:

5.3.1. Determine User's Expertise Level


The tool first determines the level of expertise of the service consumer (who is selecting cloud service providers) in order to identify how familiar the consumer is with cloud computing. Some of the user interfaces are shown in the experimentation section. The tool enables a beginner consumer to perform service selection using one level of attributes; based on the consumer's requirements, the final ranking of cloud service providers is produced. The tool asks beginner consumers to do pairwise comparison only on this level of attributes, given their limited knowledge. For intermediate and expert cloud consumers, on the other hand, the tool asks for pairwise comparison of QoS attributes down to level two (intermediate) or level three (expert). This gives consumers the option to specify the details of a cloud service's QoS attributes; for example, an expert cloud consumer can specify the exact price of a cloud service, the location of the cloud servers, or the exact type of security certificate needed.

5.3.2. User's Input on QoS Attributes (High level)


The tool gives the cloud consumer the option of pairwise comparison on the QoS attributes of the first two levels, excluding the lowest level. The cloud service selection process is broken down into several levels in a hierarchical structure. The output of this step is the consumer's comparative preference for each pair of selected QoS attributes.

5.3.3. Get User's Choice on QoS Attributes (Detailed Level)


The tool asks consumers to specify the details of the level-3 QoS attributes they want. The output of this step is the user's detailed preference; for example, consumers specify the exact price range they want, or the kind of security certificate they need. Only the shortlisted providers that meet these requirements are considered in the next step.
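The shortlisting step described above can be sketched as a simple filter over the detailed attributes; the provider records and the requirement values below are hypothetical:

```python
# Hypothetical provider records with level-3 attributes.
providers = [
    {"name": "P1", "price": 0.05, "cert": "ISO27001", "os": "Windows"},
    {"name": "P2", "price": 0.20, "cert": "SOC2",     "os": "Linux"},
    {"name": "P3", "price": 0.08, "cert": "ISO27001", "os": "Windows"},
]

def shortlist(providers, max_price, cert, os):
    """Keep only providers whose detailed attributes meet the consumer's exact requirements."""
    return [p for p in providers
            if p["price"] <= max_price and p["cert"] == cert and p["os"] == os]

selected = shortlist(providers, max_price=0.10, cert="ISO27001", os="Windows")
```

Only the providers that survive this filter are passed on to the AHP/SAW ranking stage.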

5.3.4. Evaluate the weights of QoS attributes


After the pairwise comparison of the QoS attributes, the AHP algorithm is used to calculate the relative weights of the attributes. Referring to the hierarchical structure of the AHP problem, AHP is performed on the first two levels; the lowest level of the hierarchy (the sub-parameters) is processed using the SAW algorithm.

5.3.5. Compute utility of cloud providers


After evaluating the weights of the high-level attributes, the utility of each provider is computed using the SAW additive weighting method, based on the AHP weights calculated in the previous step and the additive scores of the attributes. Once each provider's utility is calculated, the system ranks the cloud providers by their weighted utility values.
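The two steps above (Sections 5.3.4 and 5.3.5) can be sketched together as follows, combining assumed AHP attribute weights with per-attribute utilities as in Eq. (10); all numbers are illustrative:

```python
# Assumed AHP weights for the high-level attributes and per-attribute
# utilities U_mn (Eq. 9) for three hypothetical providers.
ahp_weights = {"security": 0.54, "usability": 0.16, "performance": 0.30}
attr_utilities = {
    "CloudX": {"security": 0.81, "usability": 0.71, "performance": 0.79},
    "CloudY": {"security": 0.69, "usability": 0.83, "performance": 0.62},
    "CloudZ": {"security": 0.75, "usability": 0.60, "performance": 0.88},
}

def rank(attr_utilities, weights):
    """Eq. (10): total utility per provider, then sort descending."""
    totals = {m: sum(u * weights[n] for n, u in attrs.items())
              for m, attrs in attr_utilities.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank(attr_utilities, ahp_weights)
# ranking is a list of (provider, weighted utility) pairs, best first.
```

With these assumed weights, a provider strong on security dominates the ranking even when another provider scores best on usability, mirroring how the consumer's pairwise priorities steer the final order.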

6. Simulation Results and Evaluation

In the service selection process, the simulation tool utilises QoS data collected from different cloud service providers. As shown in Table 2, we use a representative set of QoS attributes for the experiments. These include the main (level-1) attributes Security, Usability and Performance. Each of these attributes has sub-attributes at levels 2 and 3, which are also represented using acronyms.
Four cloud service providers are considered: IBM with service instance BL1.1 × 2 × 100, Fujitsu with service instance P-1, Oracle with service instance BM.Standard 2.52, and Rackspace with service instance General1-2.

6.1. Experiment 1

This represents an empirical evaluation of existing approaches, e.g., [18], which take into account only a single-level representation of QoS attributes in cloud service selection. In this experiment it is assumed that the consumer assigns high priority to security as compared to usability and performance; similarly, the consumer prefers performance over usability.
Using the AHP/SAW algorithms, the priority vectors and weighted values of the QoS parameters for the cloud providers are computed in Tables 3 and 4. Based on these values, the final ranking is computed and shown in Table 5 and Fig. 5. This shows that Oracle is ranked first, followed by Rackspace, IBM and Fujitsu. However, our proposed approach produces a different ranking of the cloud providers, as illustrated in Experiment 2.

6.2. Experiment 2

This experiment is carried out using the proposed model, which takes into account three levels of QoS attributes. Using the AHP/SAW algorithms, the values of the QoS parameters are computed in Tables 6 and 7, taking into account the level-2 and level-3 parameters.


Table 2
A set of multi-level QoS attributes

Level 1      Level 2              Level 3
Security     Access Control (AC)  Authentication (Atn)
                                  Authorization (Atr)
             Data Security (DS)   Encryption (Enc)
                                  Firewall (Frw)
             Geography (G)        Provider's Nationality (PrN)
                                  Machine Location (MLn)
             Auditability (Aud)   Auditability (Aud)
Usability    Interface (Int)      Web Interface (WIn)
                                  Mobile App (App)
                                  Terminal Access (Tac)
             Operability (Opr)    Operability (Opr)
             Learnability (Lrn)   Learnability (Lrn)
Performance  Hardware (Hw)        Disk (Dsk)
                                  OS (Os)
                                  CPU Type (CPU)
                                  GPU (GPU)
             Functionality (Fnc)  Network Performance (Npr)
                                  Memory Amount (Mem)
                                  Number of CPU (Noc)
             Flexibility (Flx)    Flexibility (Flx)
             Scalability (Scl)    Scalability (Scl)
Table 3
Priority vector (values) of main parameters

           Security  Usability  Performance
IBM        0.25      0.24       0.23
Fujitsu    0.25      0.24       0.18
Oracle     0.25      0.24       0.30
Rackspace  0.25      0.27       0.29

Table 4
Weighted values of main parameters

           Security  Usability  Performance
IBM        0.09      0.02       0.13
Fujitsu    0.09      0.02       0.10
Oracle     0.09      0.02       0.17
Rackspace  0.09      0.02       0.16

Table 5
Final ranking of the cloud providers

           Final Score  Rank
Oracle     27.99        1
Rackspace  26.83        2
IBM        24.04        3
Fujitsu    21.13        4

Based on the QoS values in Tables 6 and 7, the final ranking of the cloud providers is shown in Table 8 and represented graphically in Fig. 6.
The ranking shows that Rackspace (with General1-2) is ranked highest, followed by Oracle (with BM.Standard 2.52), IBM (with BL1.1 × 2 × 100) and then Fujitsu (with P-1).
This experiment gives different results from Experiment 1 (above), which considers only the level-1 attributes. This shows that the proposed model provides a more detailed representation of QoS attributes, which consumers can easily understand. It also produces


Table 6
Priority vector (values) of sub-parameters

Level 1      Level 2/3              IBM   Fujitsu  Oracle  Rackspace
Security     AC, Atn, Atr           0.26  0.26     0.26    0.26
             DS, Enc, Frw           0.56  0.56     0.56    0.56
             G, PrN, MLn            0.12  0.12     0.12    0.12
             Aud, Aud               0.06  0.06     0.06    0.057
Usability    Int, WIn, App, Tac     0.69  0.69     0.69    0.67
             Opr, Opr               0.10  0.10     0.10    0.10
             Lrn, Lrn               0.21  0.21     0.21    0.21
Performance  Hw, Dsk, OS, CPU, GPU  0.06  0.06     0.06    0.057
             Fnc, Npr, Mem, Noc     0.56  0.56     0.56    0.56
             Flx, Flx               0.26  0.26     0.26    0.26
             Scl, Scl               0.12  0.12     0.12    0.12

Table 7
Weighted values of main parameters

             IBM     Fujitsu  Oracle  Rackspace
Security     68.70   68.70    68.70   68.70
Usability    12.60   12.60    12.60   15.16
Performance  97.52   31.15    97.52   103.29

Table 8
Final ranking of the cloud providers

           Final Score  Rank
Rackspace  187.15       1
Oracle     178.82       2
IBM        124.28       3
Fujitsu    112.44       4

Figure 5. Ranking of the cloud providers (case 1)

more appropriate ranking of cloud providers than the existing approaches, which consider only the level-1 attributes.
Experiments 3 and 4 are conducted in order to show that the proposed model (i) ensures more accurate and credible ranking of cloud providers by taking into account three levels of QoS attributes as well as information from three different sources, and (ii) improves and simplifies the cloud service selection process by taking into account the user's level of knowledge of cloud computing technologies (Beginner, Intermediate or Expert).

6.3. Experiment 3

This experiment evaluates a scenario wherein a consumer chooses the three attributes security, usability and performance, and is assumed to have Intermediate-level knowledge of cloud computing. Recall that such a consumer can perform pairwise comparison on the level-1 and level-2 attributes. In this experiment, the consumer chooses security, usability and performance and sets the level-1 priorities, i.e., awards a priority score of 9 to both usability and performance against security, and a priority score of 5 to usability against performance. In addition, the consumer sets the level-2 attribute priorities as shown in Fig. 7.

Figure 6. Ranking of the cloud providers using proposed approach

Figure 7. Consumer selection of two-level attributes
Levels 1&2 attributes and their priorities are described as follows.
Security: The consumer has priorities among the level-2 security attributes. The consumer prioritises access control over geographical location, i.e., the consumer believes access control features are more important than the geographic location of the server. The consumer prioritises data security over geographic location, access control and auditability, and prioritises auditability over geographic location.


Figure 8. Ranking of cloud provider using two-level attributes

Usability: The consumer prioritises the operability of the cloud infrastructure over learnability, and is more interested in how easy it is to learn to use the cloud services (learnability) than in the user interface. The consumer prioritises interface over operability.
Performance: The consumer prioritises functionality, scalability and flexibility over the underlying hardware. The consumer
prioritises flexibility over functionality. The consumer prioritises functionality and flexibility over scalability.
Based on the QoS attributes and values in Fig. 7, the AHP/SAW algorithms are used to rank the cloud providers. The result, in Fig. 8, shows that Google Cloud (n1-standard-96) topped the ranking, followed by Microsoft Azure (Standard_D16s_v3) and Google Cloud (n1-standard-64). Note that the smaller VMs do not rate among the top-ranking VMs in this test, because the smaller machines do not necessarily meet the criteria specified by the consumer in the level-2 attributes of security, usability and performance.

6.4. Experiment 4

This experiment uses the same scenario as above. The consumer selects the attributes security, usability and performance, with the same level-1 and level-2 priorities as in the previous experiment (i.e., a priority score of 9 assigned to both usability and performance against security, and a priority score of 5 to usability against performance). The consumer then additionally performs the level-3 selection shown in Fig. 9.
In Fig. 9, the consumer has made selections for some attributes. The consumer's preference is for virtual machines that support authentication, authorization and encryption (specifically AES-256). In addition, the consumer has selected only virtual machines that support the Windows operating system with memory of 2 GB to 32 GB and, most importantly, has filtered for a disk size of 16 GB only.
Based on the QoS attributes and values in Fig. 9, the AHP/SAW algorithms are used to rank the cloud providers; the ranking is shown in Fig. 10. Even though the consumer selected the same level-1 and level-2 priority combinations, some of the results from the previous experiment have been filtered out to match the additional level-3 preferences. As a result, the Amazon Web Services T2 Small instance topped the ranking in this experiment.
The experiments took into consideration all the aspects defined as part of the proposed approach: the knowledge level of consumers, different cloud providers, and different numbers of QoS attributes and levels of hierarchy. The results demonstrate that our approach appropriately implements the cloud service selection process and produces useful, credible results when ranking and selecting cloud services offered by a large number of cloud providers.

Figure 9. Consumer selection of three-level attributes

7. Conclusion

This paper has investigated the challenging and complicated process of cloud service selection, given that there exist hundreds of cloud providers offering a large number of cloud services. We proposed a new QoS-aware selection model with a three-level scheme for systematically and succinctly representing QoS attributes. It implements two Multi-Criteria Decision Making (MCDM) methods, the Analytic Hierarchy Process (AHP) and Simple Additive Weighting (SAW), in order to rank services based on the various QoS attributes. The proposed model is designed and implemented as a simulation tool, and is rigorously evaluated through a number of experiments using data from widely used commercial cloud service providers. Unlike existing approaches, the proposed framework extracts service information from three different sources: information from cloud service providers, performance data from third-party monitoring tools, and user reviews. This ensures the credibility and correctness of cloud service selection, as the decision is not based on a single source of information, which is the case with existing approaches. The simulation results show that the proposed model improves and simplifies the cloud service selection process, ensures the credibility of the selection, and takes into account the user's level of knowledge of cloud computing technologies.


Figure 10. Ranking of cloud provider using three-level attributes

References

[1] M. Yaghoubi, A. Maroosi, Simulation and modeling of an improved multi-verse optimization algorithm for QoS-aware web service composition with service level agreements in the cloud environments, Simulation Modelling Practice and Theory 103 (2020).
[2] L. Zhou, L. Zhang, L. Ren, Modelling and simulation of logistics service selection in cloud manufacturing, 51st CIRP Conference on Manufacturing Systems,
Procedia CIRP, 72 2018, pp. 916–921 2018.
[3] H. Cui, X. Liu, T. Yu, H. Zhang, Y. Fang, Z. Xia, Cloud service scheduling algorithm research and optimization, Security and Communication Networks 2017
(2017) Article ID: 2503153.
[4] Columbus, L. (2017) Roundup of Cloud Computing Forecasts Update, 2013, Forbes. Available at: https://www.forbes.com/sites/louiscolumbus/2017/04/29/
roundup-of-cloud-computing-forecasts-2017/#7eac163831e8.
[5] M. Salama, A. Shawish, A. Zeid, M. Kouta, Integrated QoS utility-based model for cloud computing service provider selection, Computer Software and
Applications Conference Workshops (COMPSACW), 2012 IEEE 36th Annual, 2012, July, pp. 45–50.
[6] The Cloud Service Measurement Initiative Consortium (CSMIC)(2011) ‘Service Measurement Index Introducing the Service Measurement Index (SMI)’, Available
at:http://www.cloudcommons.com/about-smi.
[7] S.K. Garg, S. Versteeg, R. Buyya, Smicloud: A framework for comparing and ranking cloud services, Utility and Cloud Computing (UCC), 2011 Fourth IEEE
International Conference on, 2011, December, pp. 210–218.
[8] S.K. Garg, S. Versteeg, R. Buyya, A framework for ranking of cloud computing services, Future Generation Computer Systems 29 (4) (2013) 1012–1023.
[9] M. Whaiduzzaman, A. Gani, N.B. Anuar, M. Shiraz, M.N. Haque, I.T. Haque, Cloud service selection using multicriteria decision analysis, The Scientific World
Journal 2014 (2014).
[10] M. Sun, T. Zang, X. Xu, R. Wang, Consumer-centered cloud services selection using AHP, 2013 International Conference on Service Sciences (ICSS), IEEE, 2013,
April, pp. 1–6.
[11] C.-L. Hwang, K. Yoon, Multiple Attribute Decision Methods and Applications, CRC Press, Boca Raton, 1981.
[12] A. Ouadah, K. Benouaret, A. Hadjali, F. Nader, Combining skyline and multi-criteria decision methods to enhance Web services selection, Programming and
Systems (ISPS), 2015 12th International Symposium on, IEEE, 2015, April, pp. 1–8.
[13] M. Eisa, M. Younas, K. Basu, H. Zhu, Trends and directions in cloud service selection, In Service-Oriented System Engineering (SOSE), 2016. IEEE Symposium on,
IEEE, 2016, March, pp. 423–432.
[14] M. Eisa, M. Younas, K. Basu, Analysis and representation of QoS attributes in cloud service selection, 2018 IEEE 32nd International Conference on Advanced
Information Networking and Applications (AINA), IEEE, 2018, May, pp. 960–967.
[15] M. Janssen, A. Joha, Challenges for adopting cloud-based software as a service (saas) in the public sector, ECIS (2011, June) 80.
[16] L.P. Baldwin, Z. Irani, P.E. Love, Outsourcing information systems: drawing lessons from a banking case study, European Journal of Information Systems 10 (1) (2001) 15–24.
[17] L. Leong, D. Toombs, B. Gill, G. Petri, T. Haynes, Magic Quadrant for Cloud Infrastructure as a Service, Worldwide, Gartner (2017) 1–36, August. Available at: http://www.gartner.com/technology/reprints.do?id=11UM941C&ct=140529&st=sb.
[18] M. Abdel-Basset, M. Mohamed, V. Chang, NMCDA: A framework for evaluating cloud computing services, Future Generation Computer Systems 86 (2018) 12–29.
[19] D. Ardagna, G. Casale, M. Ciavotta, J.F. Pérez, W. Wang, Quality-of-service in cloud computing: modeling techniques and their applications, Journal of Internet
Services and Applications 5 (1) (2014) 11.
[20] A. Li, X. Yang, S. Kandula, M. Zhang, CloudCmp: comparing public cloud providers, Proceedings of the 10th ACM SIGCOMM conference on Internet mea-
surement, ACM, 2010, November, pp. 1–14.
[21] R. Achar, P.S. Thilagam, A broker based approach for cloud provider selection, International Conference on Advances in Computing, Communications and

Informatics (ICACCI), 2014, pp. 1252–1257.


[22] C. Singla, S. Kaushal, Cloud path selection using fuzzy analytic hierarchy process for offloading in mobile cloud computing, Recent Advances in Engineering &
Computational Sciences (RAECS), 2015 2nd International Conference on, IEEE, 2015, December, pp. 1–5.
[23] H. Wu, Q. Wang, K. Wolter, Methods of cloud-path selection for offloading in mobile cloud computing systems, Cloud Computing Technology and Science
(CloudCom), 2012 IEEE 4th International Conference on, IEEE, 2012, December, pp. 443–448.
[24] A. Polyviou, N. Pouloudi, S. Rizou, Which Factors Affect Software-as-a-Service Selection the Most? A Study from the Customer's and the Vendor's Perspective,
System Sciences (HICSS), 2014 47th Hawaii International Conference on, IEEE, 2014, January, pp. 5059–5068.
[25] Intel® (2013) Intel® Cloud Finderhttp://www.intelcloudfinder.com/.
[26] Cloudorado (2011) http://www.cloudorado.com/.
[27] RightCloudz (2014) http://rightcloudz.com/RankCloudzOnline.
[28] J. Araujo, P. Maciel, E. Andrade, et al., Decision making in cloud environments: an approach based on multiple-criteria decision analysis and stochastic models, Journal of Cloud Computing 7 (1) (2018) 7. Available at: https://doi.org/10.1186/s13677-018-0106-7.
