
QoS-based Discovery and Ranking of Web Services

Eyhab Al-Masri, Qusay H. Mahmoud


Department of Computing and Information Science
University of Guelph, Guelph, Ontario, N1G 2W1 Canada
{ealmasri, qmahmoud}@uoguelph.ca

Abstract— Discovering Web services using keyword-based search techniques offered by existing UDDI APIs (i.e. the Inquiry API) may not yield results that are tailored to clients' needs. When discovering Web services, clients look for those that meet their requirements, primarily the overall functionality and Quality of Service (QoS). Standards such as UDDI, WSDL, and SOAP have the potential of providing QoS-aware discovery; however, there are technical challenges associated with existing standards, such as the client's ability to control and manage the discovery of Web services across accessible service registries. This paper proposes a solution to this problem and introduces the Web Service Relevancy Function (WsRF) used for measuring the relevancy ranking of a particular Web service based on the client's preferences and QoS metrics. We present experimental validation, results, and analysis of the presented ideas.

Keywords-UDDI, Service Registries, Web Services, Quality of Service, QoS, Web Service Broker, Discovery of Web Services, Ranking of Web Services, Ranking

I. INTRODUCTION

Standards such as UDDI have enabled service providers and requestors to publish and find Web services of interest through UDDI Business Registries (UBRs), respectively. However, UBRs may not be adequate enough for enabling clients to search for relevant Web services, due to a variety of reasons. One of the main reasons hindering the efficient discovery of Web services is the fact that existing search APIs (i.e. the UDDI Inquiry API) only exploit keyword-based search techniques, which may not be suitable for Web services, particularly when differentiating between those that share similar functionalities.

Furthermore, many software vendors are promoting their products with features that enable businesses and organizations to create their own UBRs (i.e. IBM WebSphere, Microsoft Enterprise Windows Server 2003, and others). In this case, businesses and organizations may preferably deploy their own internal UBRs for intranet or extranet use, which will cause a significant increase in the number of discrete UBRs over the Web. This adds to the already existing complexity of finding relevant Web services of interest, in the sense that there needs to exist an automated mechanism that can explore all accessible UBRs, mainly a Web services' discovery engine.

Due to the fact that much of the information provided by Web services is mostly technical, there is a need for a mechanism that can distinguish between Web services using well-defined criteria such as Quality of Service (QoS) attributes. Differentiating between Web services that share similar functionalities is significantly achieved by examining non-functional Web service attributes such as response time, throughput, availability, usability, performance, and integrity, among others. It would be desirable if existing standards applied for publishing, discovering, or using Web services had the ability to incorporate QoS parameters as part of the registration process (i.e. the publishing process) while continuously regulating or monitoring revisions to QoS information as a result of any related Web service updates.

There are several approaches that address how to deal with QoS for Web services. However, many of them rely on the service providers to supply their QoS metrics; therefore, storing this type of information either in the UDDI or at the service provider's site may not be the best solution, due to the fact that features such as response time and throughput will be advertised by the service provider and may be subjected to forms of manipulation. In addition, the supply of QoS metrics by the service provider raises several concerns such as the integrity and reliability of the supplied values. It would be ideal if there were a trusted service broker that could manage the supply of QoS information for Web services in a transparent manner such that: (1) service providers provide only QoS information that must be directly supplied (i.e. cost per invocation or price plans) through an interface; and (2) QoS metrics that are not necessarily supplied by service providers (i.e. response time, availability, reliability, penalty rate, among others) are computed in an autonomous manner.

To address the above issues, this paper introduces a mechanism that extends our Web Services Repository Builder (WSRB) architecture [1] by offering a quality-driven discovery of Web services, using a combination of Web service attributes as constraints when searching for relevant Web services. Our solution has been tested, and results show high success rates of having the correct or most relevant Web service of interest within the top results. Results also demonstrate the effectiveness of using QoS attributes as constraints when performing search requests and as elements when outputting results. Incorporating QoS properties when finding Web services of interest provides adequate information to service requestors about service guarantees and gives them some confidence as to the quality of the Web services they are about to invoke.

The rest of this paper is organized as follows. Section II discusses the related work. Our Web service ranking mechanism is presented in Section III. Experiments and results are discussed in Section IV, and finally the conclusion and future work are discussed in Section V.



II. RELATED WORK

Several Web services may share similar functionalities, but possess different non-functional properties. When discovering Web services, it is essential to take into consideration functional and non-functional properties in order to render an effective and reliable selection process. Unfortunately, the current UDDI specification V3 [3] does not include QoS as part of its Publication or Inquiry APIs.

In order to solve this problem, some work has been implemented for enhancing the UBR's inquiry operations by embedding QoS information within the message. One example is UDDIe [4], which provides an API that can associate QoS information through a bag of user-defined properties (called propertyBag) while search queries are executed based on these properties. Such properties may provide QoS support; however, UDDIe is mainly used for the G-QoSM framework which is used for Grid Computing, and provides a very limited level of support for QoS details.

Another approach that attempted to enhance the discovery of Web services using QoS information is Quality Requirements Language (QRL) [5]. This general XML-based language can be used for Web services as well as other distributed systems such as video on demand. However, QRL does not clearly define how it can be used or associated with Web services' interfacing standards such as WSDL. In addition, QRL does not provide sufficient information on when and how to control and manage any specified QoS information.

In [6], an approach for certifying QoS information by a Web service QoS Certifier is proposed. In this approach, the Web service provider has to provide QoS information at the time of registration. However, this approach does not provide a reliable mechanism and an adequate solution for solving the support of QoS properties for Web services. This solution proposes an extension to the current UDDI design which might not be feasible, concentrates on verifying QoS properties at the time of registration, does not provide any guarantees of having up-to-date QoS information (i.e. in case of Web service updates or changes), and requires service providers to go through additional steps during the publishing process which enforces them to think of ways on how to measure the QoS of their Web services. In addition, the current solution does not differentiate between QoS properties directly supplied by service providers [7] (i.e. cost) or how the QoS certifier handles such parameters when issuing a certification.

Other approaches focused on using QoS computation and policing for improving the selection of Web services [8,9], developing a middleware for enhancing Web service composition for monitoring metrics of QoS for Web services [10], using agents based on distributed reputation metric models [11], and using conceptual models based on Web service reputation [12]. Many of these approaches do not provide guarantees as to the accuracy of QoS values over time or having up-to-date QoS information. In addition, preventing false ratings using reputation metric models is not present and therefore, false information may be collected and as a result may significantly impact the overall ratings of service providers.

III. WEB SERVICE RELEVANCY FUNCTION (WSRF)

Although many of the existing QoS-enabled discovery mechanisms discussed in Section II provide ways for service providers to publish QoS information, there are some challenges that must be taken into consideration. These challenges include: (1) automating, administering, and maintaining updated QoS information in UBRs, (2) ensuring the validity of QoS information supplied by service providers, (3) conducting QoS measurements in an open and transparent manner, (4) controlling and varying time periods for evaluating QoS parameters, (5) managing the format of QoS information results, and (6) enhancing UDDI to support QoS information with the existing version without the need for any modifications or extensions to specifications.

In order to provide a quality-driven ranking of Web services, it is important to collect QoS information about Web services. The WSRB framework [1] uses a crawler targeted for Web services called the Web Service Crawler Engine (WSCE) [14] that actively crawls accessible UBRs as shown in Figure 1.

Figure 1. WSRB Framework: Quality-Driven Ranking of Web Services

A Web Service QoS Manager (WS-QoSMan) module [15] within the WSRB framework is responsible for measuring QoS information for the collected Web services, and information is stored in the Web Service Storage (WSS) as shown in Figure 1.

WSRB enables clients to selectively choose and manage their search criteria through a graphical user interface. Once clients submit their search requests, a Web Service Relevancy Function (WsRF) is used to measure the relevancy ranking of a particular Web service ws_i.

QoS parameters help determine which of the available Web services is the best and meets clients' requirements. Because of their significance, we selected the following QoS parameters that are based on earlier research studies [5,12] for computing WsRF values:

1. Response Time (RT): the time taken to send a request and receive a response (unit: milliseconds).

2. Throughput (TP): the maximum requests that are handled at a given unit in time (unit: requests/min).

3. Availability (AV): a ratio of the time period when a Web service is available (unit: %/3-day period).



4. Accessibility (AC): the probability a system is operating normally and can process requests without any delay (unit: %/3-day period).

5. Interoperability Analysis (IA): a measure indicating whether a Web service is in compliance with a given set of standards. WSRB uses SOAPScope's [13] Analysis feature for measuring IA (unit: % of errors and warnings reported).

6. Cost of Service (C): the cost per Web service request or invocation (cents per service request).

Clients can submit their requests (i.e. via a GUI) to the WSRB framework which processes them and computes WsRF values for all matching services. It is assumed that WsRF values are computed for Web services that are in the same domain. A Web service with the highest calculated WsRF value is considered to be the most desirable and relevant to a client based on his/her preferences.

Assuming that there is a set of Web services that share the same functionality such that WS (WS = {ws_1, ws_2, ws_3, ..., ws_i}) and QoS attributes such that P (P = {p_1, p_2, p_3, ..., p_j}), a QoS-based computational algorithm determines which ws_i is relevant based on QoS constraints provided by the client. Using j criteria for evaluating a given Web service, we obtain the following WsRF matrix in which each row represents a single Web service ws_i, while each column represents a single QoS parameter P_j.

E = \begin{bmatrix} q_{1,1} & q_{1,2} & \cdots & q_{1,j} \\ q_{2,1} & q_{2,2} & \cdots & q_{2,j} \\ \vdots & \vdots & \ddots & \vdots \\ q_{i,1} & q_{i,2} & \cdots & q_{i,j} \end{bmatrix}    (1)

Due to the fact that QoS parameters vary in units and magnitude, E(q_{i,j}) values must be normalized to be able to perform WsRF computations and perform QoS-based ranking. Normalization provides a more uniform distribution of QoS measurements that have different units. In addition, normalization allows for weights or thresholds to be associated with QoS parameters and provides clients with effective ways to fine-tune their QoS search criteria.

In order to calculate WsRF(ws_i), we need the maximum normalized value for each P_j column. Let N be an array where N = {n_1, n_2, n_3, ..., n_m} with 1 ≤ m ≤ i such that:

N(j) = \sum_{m=1}^{i} q_{m,j}    (2)

where q_{m,j} represents the actual value from the WsRF matrix in (1). Each element in the WsRF matrix is compared against the maximum QoS value in the corresponding column based on the following equation:

h_{i,j} = \frac{q_{i,j}}{\max(N(j))}    (3)

where h_{i,j} measures the difference of q_{i,j} from the maximum normalized value in the corresponding QoS property group or j column.

In order to allow for different circumstances, there is an apparent need to weight each factor relative to the importance or magnitude that it endows upon ranking Web services based on QoS parameters. Therefore, we need to define an array that represents the weights contribution for each P_j where w = {w_1, w_2, w_3, ..., w_j}. Each weight in this array represents the degree of importance or weight factor associated with a specific QoS property. The values of these weights are fractions in the sense that they range from 0 to 1. In addition, all weights must add up to 1. Each weight is proportional to the importance of a particular QoS parameter to the overall Web service relevancy ranking. The larger the weight of a specific parameter, the more important that parameter is to the client and vice versa. The weights are obtained from the client via a client interface. Introducing different weights to (3) results in the following equation:

h_{i,j} = w_j \left( \frac{q_{i,j}}{\max(N(j))} \right)    (4)

Applying (4), we get a weighted matrix as shown below:

E' = \begin{bmatrix} h_{1,1} & h_{1,2} & \cdots & h_{1,j} \\ h_{2,1} & h_{2,2} & \cdots & h_{2,j} \\ \vdots & \vdots & \ddots & \vdots \\ h_{i,1} & h_{i,2} & \cdots & h_{i,j} \end{bmatrix}    (5)

Once each Web service QoS value is compared with its corresponding set of other QoS values in the same group, we can calculate the WsRF for each Web service as shown below:

\mathrm{WsRF}(ws_i) = \sum_{i=1}^{N} h_{i,j}    (6)

where N represents the number of Web services from a given set. To demonstrate how WsRF works, we will consider a simple example in which a client assigns weights to the QoS properties discussed earlier as follows:

w_1 = 0, w_2 = 0, w_3 = 0, w_4 = 0, w_5 = 0, and w_6 = 1

From the weights assigned, it is clear that the last weight that represents the most important QoS property to this client is w_6, or cost.

The importance level assigned to each QoS parameter varies since QoS properties vary in units. Due to the fact that each QoS property chosen by clients has an associated unit that is different from other properties (i.e. response time is represented in milliseconds while cost is represented in cents), it is important to clarify the fact that each weight represents a different degree of significance which must be optimized. For example, a client that sets all weights to zero except for cost indicates that WsRF should minimize cost since it represents 100% significance to the client. Therefore, WsRF will eventually return the cheapest Web service in this case.
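To make the computation above concrete, the following is a minimal sketch in Python of equations (1) through (6). It assumes that max(N(j)) denotes the largest raw value in the j-th QoS column and that every parameter is treated as "larger is better"; the names wsrf_scores and rank_services are illustrative only and are not part of the WSRB implementation.

```python
from typing import List, Optional, Sequence

def wsrf_scores(E: Sequence[Sequence[float]],
                weights: Optional[Sequence[float]] = None) -> List[float]:
    """Return one WsRF value per Web service (one per row of the matrix E)."""
    num_params = len(E[0])
    if weights is None:
        weights = [1.0] * num_params          # unweighted case, cf. Table II
    # Largest raw value in each QoS column, used here as max(N(j)) in equation (3);
    # the trailing "or 1.0" only guards against an all-zero column.
    col_max = [max(row[j] for row in E) or 1.0 for j in range(num_params)]
    scores = []
    for row in E:
        # h_ij = w_j * (q_ij / max(N(j)))  -- equations (3) and (4)
        h = [w * q / m for q, m, w in zip(row, col_max, weights)]
        scores.append(sum(h))                 # row sum of the weighted matrix E', equation (6)
    return scores

def rank_services(E: Sequence[Sequence[float]],
                  weights: Optional[Sequence[float]] = None) -> List[int]:
    """Zero-based service indices ordered from the highest to the lowest WsRF value."""
    scores = wsrf_scores(E, weights)
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
```

In this reading, a client's weight vector simply scales each normalized column before the row sum, so setting every weight to zero except one reduces the ranking to a single QoS parameter, as in the cost-only example above.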



IV. EXPERIMENTS AND RESULTS

Data used in this paper are based on actual Web service implementations that are available over the Web and are listed in XMethods.net, XMLLogic, and StrikeIron.com. Seven Web services were chosen from the same domain, and they all share the same functionality of validating an email address. The QoS parameters discussed in Section III were used to evaluate QoS metrics for all of the seven Web services. Table I shows the average QoS metrics for these Web services, obtained from four test trials running on two different networks. The QoS parameter cost is represented in cents and was provided by the service provider. In addition, availability and accessibility values were measured over a three-day period.

TABLE I. QOS METRICS FOR VARIOUS AVAILABLE EMAIL VERIFICATION WEB SERVICES

ID  Service Provider & Name                RT (ms)  TP (req/min)  AV (%)  AC (%)  IA (%)  C (cents/invoke)
1   XMLLogic ValidateEmail                  720      6.00          85      87       80     1.2
2   XWebservices XWebEmail-Validation       1100     1.74          81      79      100     1
3   StrikeIron Email Verification           710      12.00         98      96      100     1
4   StrikeIron Email Address Validator      912      10.00         96      94      100     7
5   CDYNE Email Verifier                    910      11.00         90      91       70     2
6   Webservicex ValidateEmail               1232     4.00          87      83       90     0
7   ServiceObjects DOTS Email Validation    391      9.00          99      99       90     5

Table I shows QoS values that were measured by WS-QoSMan for seven Web services. In order to find the most relevant Web service, there needs to exist a set of Web services that share the same functionality and are used to measure QoS metrics based on QoS criteria. Once WSRB has successfully generated the necessary QoS metrics, these QoS values are used as inputs to the WsRF and the matrix in (1) is established. It is important to note that WSRB does not necessarily have to perform a QoS metrics check when a client performs a request, but rather uses an update interval for measuring these metrics. This provides the WSRB with up-to-date QoS information that is ready and available upon client requests in real-time scenarios.

The values shown in Table I represent QoS values measured for seven different Web services. In order to find the most suitable Web service, it is important to optimize the values for each QoS parameter. For instance, having a higher probability for accessibility is preferable to having a Web service with a low probability for accessibility. In this case, WsRF will maximize accessibility. However, for some other QoS parameters such as cost, WsRF will minimize them.

Applying WsRF in (6) without any associated weights, the following results are obtained:

TABLE II. RESULTS OF WSRF WITHOUT WEIGHTS

ID  Service Provider & Name                WsRF
1   XMLLogic ValidateEmail                 3.6638
2   XWebservices XWebEmail-Validation      3.2166
3   StrikeIron Email Verification          4.6103
4   StrikeIron Email Address Validator     4.1955
5   CDYNE Email Verifier                   3.9246
6   Webservicex ValidateEmail              4.2679
7   ServiceObjects DOTS Email Validation   4.6700

From Table II, the Web service with the highest WsRF value indicates that it has the best QoS metrics. For this example, WsRF determines that the best Web service, without having any dependency on any specific QoS parameter (i.e. keyword-based search), is Web service number seven. Figure 2 shows the results from computing WsRF values for all Web services listed in Table I using a keyword-based search technique versus a client-controlled search in which cost represents the most important QoS parameter (i.e. running WsRF that is heavily dependent on cost).

Figure 2. Results from running WsRF heavily dependent on cost vs. keyword-based search

Based on the QoS values obtained in Table I, Web service number six has the least cost (zero implies that it is being offered at no cost), which complies with the results obtained from running WsRF that is heavily dependent on cost, as Figure 2 demonstrates (i.e. Web service number six has the highest WsRF value). When analyzing results from Figure 2, it is important to take into consideration that associating at least one QoS parameter results in WsRF values ranging from 0 to 1, while having a broad search that is not QoS specific (i.e. WsRF without weights) produces WsRF values ranging from 3.22 to 4.67. Having smaller values means that the standard deviation is smaller, and therefore the faster the WsRF is converging into a solution. Having a smaller standard deviation using WsRF outperforms the keyword-based search technique in the sense that it provides a very good estimate of the true or optimal value, while providing precise and accurate results.
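As a hedged usage example, the Table I measurements can be fed to the wsrf_scores and rank_services sketch from Section III. Because the equations do not spell out how "smaller is better" parameters such as RT and C enter the normalization, the helper below simply flips those columns, which is one plausible reading of "WsRF will minimize them"; the invert_columns, table_1, and prepared names are ours, and the printed numbers illustrate the call pattern rather than reproduce Table II or Figure 2 exactly.

```python
def invert_columns(E, cols):
    """Assumption: turn "smaller is better" columns (e.g. RT, C) into
    "larger is better" ones by subtracting each value from its column maximum;
    the paper does not specify the exact transformation it applies."""
    flipped = [list(row) for row in E]
    for j in cols:
        col_max = max(row[j] for row in E)
        for row in flipped:
            row[j] = col_max - row[j]
    return flipped

# Table I, column order: RT (ms), TP (req/min), AV (%), AC (%), IA (%), C (cents/invoke)
table_1 = [
    [720,   6.00, 85, 87,  80, 1.2],   # 1 XMLLogic ValidateEmail
    [1100,  1.74, 81, 79, 100, 1.0],   # 2 XWebservices XWebEmail-Validation
    [710,  12.00, 98, 96, 100, 1.0],   # 3 StrikeIron Email Verification
    [912,  10.00, 96, 94, 100, 7.0],   # 4 StrikeIron Email Address Validator
    [910,  11.00, 90, 91,  70, 2.0],   # 5 CDYNE Email Verifier
    [1232,  4.00, 87, 83,  90, 0.0],   # 6 Webservicex ValidateEmail
    [391,   9.00, 99, 99,  90, 5.0],   # 7 ServiceObjects DOTS Email Validation
]

prepared = invert_columns(table_1, cols=[0, 5])      # RT and C are to be minimized
print(wsrf_scores(prepared))                          # unweighted run, analogous to Table II
print(rank_services(prepared, [0, 0, 0, 0, 0, 1]))   # cost-only client, analogous to Figure 2
```

With the cost column flipped in this way, the cost-only run places Web service number six first, which is consistent with the behaviour described for Figure 2.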



To demonstrate the effectiveness of our ranking technique and how it outperforms other discovery methods that merely depend on a keyword-based technique, we will consider six test scenarios in which each scenario represents a different combination of QoS requirements.

A. Test Scenario 1

Figure 3 shows another test from running WsRF with response time as the most important QoS parameter. Results from Figure 3 demonstrate that Web service number seven has the highest WsRF value, that is, the one with the fastest response time (0.391 seconds), which conforms to the data obtained in Table I.

Figure 3. QoS ranking heavily dependent on RT

B. Test Scenario 2

Figure 4 shows another test from running WsRF with more emphasis on the maximum throughput (TP). Results from Figure 4 demonstrate that Web service number three has the highest WsRF value, which is consistent with the data obtained in Table I, in which this Web service has the highest throughput, or 12 requests per minute.

Figure 4. QoS ranking heavily dependent on TP

C. Test Scenario 3

Another test was conducted with varying weights for response time (RT) and throughput (TP). In this test, response time has more weight than throughput, which yields the results presented in Figure 5. Results from Figure 5 demonstrate that Web service number seven has the highest WsRF value while Web service number three has the second highest WsRF value. By comparing results in Figure 5 to those in Figures 3 and 4, it is apparent that the highest WsRF value in Figure 3 dominates the ranking in this test, while the one in Figure 4 (Web service three) has a lesser WsRF value due to the fact that the requirements for this test indicate more emphasis (weight) on RT than TP.

Figure 5. QoS ranking with more emphasis on RT (75%) than TP (25%)

D. Test Scenario 4

Another test was conducted with varying weights for response time and cost. Figure 6 shows the results from running this test, in which the QoS ranking is heavily dependent on RT (80%) but considerably takes into account cost (20%). Figure 6 shows that Web service number seven has the highest WsRF value (0.8040), followed by the third Web service (WsRF value of 0.4606).

Figure 6. QoS ranking dependent on RT (80%) and C (20%)

Results shown in Figure 6 can be compared to those in Test Scenario 1 in the sense that in both of these scenarios, response time is the dominant QoS parameter. However, WsRF values slightly change in Test Scenario 4 such that response time remains the dominant QoS parameter, but the cost parameter is slightly taken into consideration. Table III demonstrates the ranks for both scenarios and the ranking variation for each Web service.

TABLE III. RANKING DEVIATION FOR TEST SCENARIOS 1 AND 4

        Test Scenario 1      Test Scenario 4
ws_i    WsRF    Rank         WsRF    Rank       ∆ Rank   ∆ Rank %
ws1     0.54    3            0.45    4          -1       -14.29
ws2     0.36    6            0.30    7          -1       -14.29
ws3     0.55    2            0.46    2           0         0.00
ws4     0.43    5            0.35    6          -1       -14.29
ws5     0.43    4            0.35    5          -1       -14.29
ws6     0.32    7            0.45    3          +4        57.14
ws7     0.99    1            0.80    1           0         0.00
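Before discussing the ranking variations in detail, the short snippet below reuses rank_services and the prepared matrix from the earlier sketches with weight vectors that roughly approximate the test scenarios. Both the weight values and the handling of minimized parameters are assumptions on our part, so the printed rankings will generally not coincide with Figures 3 through 7 or with Table III; the snippet only shows how a client-controlled weight vector is swapped in.

```python
# Weight order: RT, TP, AV, AC, IA, C; each vector sums to 1 as required.
scenario_weights = {
    "Scenario 1 (RT dominant)":     [1.00, 0.00, 0.00, 0.00, 0.00, 0.00],
    "Scenario 2 (TP dominant)":     [0.00, 1.00, 0.00, 0.00, 0.00, 0.00],
    "Scenario 3 (RT 75%, TP 25%)":  [0.75, 0.25, 0.00, 0.00, 0.00, 0.00],
    "Scenario 4 (RT 80%, C 20%)":   [0.80, 0.00, 0.00, 0.00, 0.00, 0.20],
    "Scenario 5 (equal weights)":   [1 / 6] * 6,
}
for name, w in scenario_weights.items():
    order = [i + 1 for i in rank_services(prepared, w)]   # one-based service numbers, best first
    print(name, order)
```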



Table III demonstrates ranking variations from Test Scenarios 1 and 4. Although the dimensionality of QoS parameters has changed from Test Scenario 1 to Test Scenario 4, in the sense that an additional QoS parameter (i.e. cost) was introduced into the ranking, Web service number seven has the highest WsRF value in both tests. The major change was reflected on Web service number six, whose rank improved significantly by 4 ranking degrees, or 57.1% in terms of percentage ranking difference. In other words, Web service number six in this test scenario gained four ranking levels (+4) when the cost parameter was introduced into the ranking technique. Due to the fact that Web service number six has the least cost, the difference in its WsRF value and ranking was affected considerably. It is important to note that the total changes that occurred from Test Scenario 1 to this test scenario (total ∆ Rank) add up to zero. In this case, although WsRF values have considerably changed, they are the result of compromising one QoS parameter over the other. It is also important to note that WsRF takes into consideration an acceptable error range that can be assigned by clients, which allows the computed WsRF values to be more accurate.

Varying QoS parameters and their associated degrees of importance (i.e. weights) significantly affects the ranking results or WsRF values. When varying these weights, WsRF values may potentially overlap with each other, which we refer to as the QoS point of reflection: a point that determines the changes required for QoS parameters of two or more Web services that reflect on their WsRF values such that they are approximately equal. For example, in this test scenario, varying the weights for response time to 59.8% and cost to 41.2% will produce WsRF values for Web services number 6 and 7 that are approximately equal.

E. Test Scenario 5

Another test was conducted in which the associated weights are equally distributed across all QoS parameters (w = 0.1667), which yields the results shown in Figure 7.

Figure 7. Equal distribution of weights (w = 0.1667)

Results in Figure 7 demonstrate a small standard deviation when compared to the ones shown in Figure 2 using a generic search. For instance, the standard deviation when considering WsRF without any weights is 0.5197, while considering equally distributed weights for all QoS parameters reduces it significantly to 0.0866. This shows that associating weights with WsRF significantly improves the accuracy of the ranking and enables WSRB to converge into a solution. Therefore, the more client preferences specified, the narrower the results and the higher the performance of WsRF.

V. CONCLUSION

A Web service relevancy ranking function based on QoS parameters has been presented in this paper for the purpose of finding the best available Web service during the Web services' discovery process, based on a set of given client QoS preferences. The use of non-functional properties for Web services significantly improves the probability of having relevant output results. The proposed solution has shown the usefulness and effectiveness of incorporating QoS parameters as part of the search criteria and in distinguishing Web services from one another during the discovery process. The ability to discriminate on selecting appropriate Web services relied on the client's ability to identify appropriate QoS parameters. The proposed solution provides an effective Web service relevancy function that is used for ranking and finding the most relevant Web services. For future work, we plan to extend QoS parameters to include information such as reputation, penalty rates, reliability, and fault rates.

REFERENCES

[1] Al-Masri, E., and Mahmoud, Q. H., "A Framework for Efficient Discovery of Web Services across Heterogeneous Registries", IEEE Consumer Communications and Networking Conference, 2007, pp. 415-419.
[2] UDDI Version 3.0.2 Specifications, October 2004, http://uddi.org/pubs/uddi_v3.htm.
[3] Ali, A., Rana, O., Al-Ali, R., and Walker, D., "UDDIe: An Extended Registry for Web Services", Proc. of the 2003 Symposium on Applications and the Internet Workshops, 2003, pp. 85-89.
[4] Martin-Diaz, O., Ruiz-Cortes, A., Corchuelo, R., and Toro, M., "A Framework for Classifying and Comparing Web Services Procurement Platforms", Proc. of the 1st Int'l Web Services Quality Workshop, Italy, 2003, pp. 37-46.
[5] Ran, S., "A Model for Web Services Discovery with QoS", ACM SIGecom Exchanges, 4(1), 2003, pp. 1-10.
[6] Kumar, A., El-Geniedy, A., and Agrawal, S., "A Generalized Framework for Providing QoS Based Registry in Service-Oriented Architecture", Proc. of the IEEE International Conference on Services Computing, 2005, pp. 295-301.
[7] Liu, Y., Ngu, A., and Zeng, L., "QoS Computation and Policing in Dynamic Web Service Selection", Proc. of the 13th International World Wide Web Conference, 2004.
[8] Zeng, L., Benatallah, B., Dumas, M., Kalagnanam, J., and Sheng, Q.Z., "Quality Driven Web Services Composition", Proc. of the 12th International World Wide Web Conference, 2003, pp. 411-421.
[9] Sheth, A., Cardoso, J., Miller, J., and Koch, K., "Web Services and Grid Computing", Proc. of the Conference on Systemics, Cybernetics and Informatics, Florida, 2002.
[10] Sreenath, R., and Singh, M.P., "Agent-based Service Selection", Journal of Web Semantics, 1(3), 2004.
[11] Larkey, L., "Automatic Essay Grading Using Text Classification Techniques", ACM SIGIR, 1998.
[12] Menascé, D., "QoS Issues in Web Services", IEEE Internet Computing, 6(6), 2002, pp. 72-75.
[13] Mindreef SOAPScope, http://home.mindreef.com (Last Accessed May 2007).
[14] Al-Masri, E., and Mahmoud, Q. H., "Crawling Multiple UDDI Business Registries", 16th International World Wide Web Conference, 2007, pp. 1255-1256.
[15] Al-Masri, E., and Mahmoud, Q. H., "Discovering the Best Web Service", 16th International World Wide Web Conference, 2007, pp. 1257-1258.

