A Review On Semantic Web Service Composition
Volume X (2008)
12 pages
Guest Editors: Sebastian Abeck, Reinhold Krger, Nicolas Repp, Michael Zapf
Managing Editors: Tiziana Margaria, Julia Padberg, Gabriele Taentzer
ECEASST Home Page: http://www.easst.org/eceasst/ ISSN 1863-2122
Abstract: Every year, contestants submit contributions to the Web Service Challenge (WSC) in order to determine which service composition system is the most efficient one. In this challenge, semantic composition tasks must be solved, and the results delivered by the composers are checked for correctness. The time needed for the composition process is another important competition criterion.
After participating with great success in the 2006 and 2007 WSCs, we were asked to manage the Web Service Challenge 2008. In this paper, we present the challenge task, the challenge rules, the document formats used, and the results of this competition. We provide a summary of the past challenges and give a first preview of the developments planned for the Web Service Challenges to come.
Keywords: Business Process Management, quality of service, Web Service com-
position, orchestration, sub-orchestration, BPEL, WSBPEL, WSDL, OWL, WSC,
Web Service Challenge
1 Introduction
Since 2005, the annual Web Service Challenge (WSC) has provided a platform for researchers in the area of Web Service composition, allowing them to compare their systems and to exchange experiences [1]. It is always co-located with the IEEE Joint Conference on E-Commerce Technology (CEC) and Enterprise Computing, E-Commerce and E-Services (EEE) [2, 3, 4].
In the past, the WSC contest scenarios as well as the involved data formats bore no resemblance to real-world scenarios but were purely artificial tests of the capabilities of syntactic and semantic Web Service composition systems. After being asked to manage the 2008 competition, we decided to develop the WSC into a more practice-oriented event. We therefore introduced new rules and based the challenge on standardized data formats such as OWL [5], WSDL [6], and WSBPEL [7]. Additionally, we introduced a new test set generator which produces configurations very similar to those found in real Service-Oriented Architectures in industry.
This led to an increase in the complexity and the quality of the challenge tasks. In this pa-
per, we also introduce our novel and extensible generator for service composition tasks and an
additional composition verification utility – both building on WSBPEL, WSDL, and OWL. Com-
bined, the two form an efficient benchmarking system for evaluating Web Service composition
engines.
The main contributions of this work are
1. a detailed discussion of the requirements for realistic test scenarios for service composition
systems,
2. the introduction of a versatile system able to generate such scenarios and to verify the
results of composition processes,
3. and a review of the 2008 WSC.
2 Prerequisites
Before discussing the idea of semantic service composition, we define some necessary prerequisites. First, let us assume that all semantic concepts in the knowledge base of the composers are members of the set M and can be represented as nodes in a forest of taxonomy trees.
Definition 1 (subsumes) Two concepts A, B ∈ M can be related in one of four possible ways. We define the predicate subsumes : (M × M) → {true, false} to express this relation as follows:
1. subsumes(A, B) holds if and only if A is a generalization of B (B is a specialization of A).
2. subsumes(B, A) holds if and only if A is a specialization of B (B is a generalization of A).
3. If neither subsumes(A, B) nor subsumes(B, A) holds, A and B are not related to each other.
4. Both subsumes(A, B) and subsumes(B, A) hold if and only if A = B.
The subsumes relation is transitive, and so are generalization and specialization.
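This reading of subsumes over a taxonomy forest can be sketched in a few lines of code. The concept names and the child-to-parent encoding below are purely illustrative assumptions, not part of the actual challenge data:

```python
# Sketch of the transitive "subsumes" predicate over a taxonomy forest,
# encoded as a child -> parent map (roots map to None). The concept
# names are hypothetical examples.
parent = {
    "Vehicle": None,
    "Car": "Vehicle",
    "Sedan": "Car",
    "Book": None,
}

def subsumes(a: str, b: str) -> bool:
    """True iff concept a is a (possibly improper) generalization of b."""
    node = b
    while node is not None:
        if node == a:
            return True
        node = parent[node]  # climb toward the root of b's taxonomy tree
    return False

assert subsumes("Vehicle", "Sedan")   # transitive, via "Car"
assert subsumes("Car", "Car")         # case 4: A = B
assert not subsumes("Book", "Sedan")  # concepts in unrelated trees
```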
The search space that needs to be investigated in Web Service composition [8] is the set of all possible permutations of all possible subsets of the available services.
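One way to see how quickly this naive search space grows: for n services, the number of ordered sequences of distinct services is the sum over k of n!/(n−k)!. A quick, illustrative sanity check of this count:

```python
from math import factorial

def naive_search_space(n: int) -> int:
    # Number of ordered sequences (permutations) over all subsets of an
    # n-service repository: sum over k = 0..n of n! / (n - k)!.
    return sum(factorial(n) // factorial(n - k) for k in range(n + 1))

assert naive_search_space(2) == 5   # {}, (a), (b), (a,b), (b,a)
assert naive_search_space(3) == 16
```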
[Figure 1: The challenge environment. The OWL ontology and the WSDL file describing the required service interface package are parsed, the required service composition is computed, and a WSBPEL document is generated; the evaluation side performs the time measurement and computes the challenge score.]
The challenge environment itself is a distributed system. The composer software of the contestants is placed on the server side and started with a bootstrap procedure. First, it is provided with a path to a WSDL file which contains a set of services along with annotations of their input and output parameters. The number of services changes in each challenge, and every service has an arbitrary number of parameters. In addition to the WSDL file, we also provide the address of a file containing an OWL document during the bootstrapping process. This document holds the taxonomy of concepts used in the challenge. The bootstrapping process comprises loading all relevant information from these files.
After the bootstrapping on the server side is finished, a client-side GUI queries the composition system with the challenge problem definition and records the current time. The software of the contestants must now compute a solution – one or more service compositions – and answer in the solution format, which is a subset of the WSBPEL schema. When the WSBPEL document is received by the GUI, the time is taken again and the compositions inside the document are verified.
This evaluation process is illustrated on the right side of Figure 1. The Web Service Challenge awards both the most efficient system and the best system architecture. The best architectural effort is awarded according to the system features and the contestant's presentation. The evaluation of efficiency consists of two parts, as described below:
1. Time Measurement: We take the time after submitting the query and the time when the composition result is fully received. The bootstrap mechanism is excluded from this assessment. There is also a time limit for bootstrapping, after which a challenge is considered a failure.
2. Composition Evaluation:
• Composition Length: The smallest number of services necessary to produce the required output parameters.
• Composition Efficiency: The smallest number of execution steps inside the orchestration needed to solve the challenge.
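The client-side time measurement can be sketched as a simple wrapper around the call to the composer. The function names below are our own stand-ins (the real GUI talks to the composer over a Web Service interface), and the 15-minute overall limit is simplified here to a per-call limit:

```python
import time

def timed_composition(compose, query, time_limit_s=900.0):
    """Query the composer and measure only the composition time.

    Bootstrapping happens before this call and is therefore excluded
    from the measurement. `compose` is a hypothetical stand-in for the
    remote call to the contestant's composition system."""
    start = time.perf_counter()
    bpel_document = compose(query)           # contestant's composer
    elapsed = time.perf_counter() - start
    if elapsed > time_limit_s:               # simplified time limit
        return None, elapsed                 # counts as a failure
    return bpel_document, elapsed

# Illustrative use with a trivial stand-in composer:
doc, t = timed_composition(lambda q: "<process/>", "query")
assert doc == "<process/>" and t >= 0.0
```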
3.4 The WSBPEL solution format
During the WSC 2007, many participants encouraged the usage of a process language like BPEL as the output format for the composition solutions. First of all, a process language has more expressiveness than the 2007 solution format. Secondly, a process language can be used to connect the challenge implementation to real-world technologies and thus improves its reusability. Therefore, we decided to use a subset of the common Web Service standard WSBPEL, as sketched in Listing 4.
In WSBPEL, the concurrent execution of services (a desired feature for the challenge) can be specified. In the 2008 WSC WSBPEL subset, specification details like partnerlinks and copy instructions on message elements were omitted. In the example in Listing 4, two alternative solutions to a composition request are given.
<process name="MoreCreditsBP"
         xmlns="http://schemas.xmlsoap.org/ws/2003/03/business-process/"
         targetNamespace="http://www.ws-challenge.org/WSC08CompositionSolution/">
  <sequence name="main">
    <receive name="receiveQuery"
             portType="SolutionProcess" variable="query"/>
    <switch name="SolutionAlternatives-SolutionA-SolutionB">
      <case name="Alternative-SolutionA">
        <sequence>
          <invoke name="serviceA"
                  portType="seeWSDLFile"
                  operation="seeWSDLFile"/>
          <flow>...</flow>
        </sequence>
      </case>
      <case name="Alternative-SolutionB">...</case>
    </switch>
  </sequence>
</process>

Listing 4: The WSBPEL document
The generator lets users configure the number of solutions, the execution depth of a valid composition, and the inclusion of concurrent execution of Web Services.
[Figure 2: The test set generation process. Driven by the user input, the Test Set Builder emits the problem, services, and taxonomy in an intermediate XML format; the Converter (Problem Parser/BPEL Creator, Services Parser/WSDL Creator, and Taxonomy Parser/OWL Creator) transforms them into the solution (BPEL), the task (WSDL), and the taxonomy (OWL).]
Figure 2 illustrates the challenge generation process. We have implemented a user interface for entering the configuration of the test set. Afterwards, the Test Set Builder produces an ontology which consists of an arbitrary number of taxonomies of differing size and depth. A set of disjoint solutions is created according to the specified configuration parameters. Then, a set of additional services is added which may (misleadingly) involve some (but not all) of the concepts used in these solutions. The result of the generation process is saved in an intermediate XML format. This format is then transformed into the discussed WSBPEL, WSDL, and OWL subsets by the Converter component. The intermediate format is still human-readable, allowing a manual evaluation of the Test Set Quality. The Test Set Builder and the final challenge formats are independent and can be extended or modified separately, making the Test Set Builder reusable in scenarios different from the WSC.
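The core generation idea, planting solvable solutions first and then adding distracting services that reuse some of their concepts, can be sketched roughly as follows. The service encoding as (inputs, outputs) pairs and all names are our own simplification, not the Test Set Builder's actual data model:

```python
import random

def generate_test_set(num_solution_services=3, num_distractors=4, seed=42):
    """Tiny sketch: services are (inputs, outputs) pairs over string
    concepts. A chain of solution services is planted first; then
    distractor services reuse solution concepts misleadingly."""
    rng = random.Random(seed)
    concepts = [f"C{i}" for i in range(20)]
    services = {}
    # Plant one solution chain C0 -> C1 -> ... (guaranteed solvable).
    for i in range(num_solution_services):
        services[f"solution{i}"] = ({concepts[i]}, {concepts[i + 1]})
    # Distractors consume solution concepts but produce outputs no
    # solution needs, so they cannot shortcut the planted chain.
    for j in range(num_distractors):
        inp = rng.choice(concepts[:num_solution_services])
        out = rng.choice(concepts[10:])
        services[f"distractor{j}"] = ({inp}, {out})
    query = ({concepts[0]}, {concepts[num_solution_services]})
    return services, query

services, (provided, wanted) = generate_test_set()
assert len(services) == 7 and provided == {"C0"} and wanted == {"C3"}
```

A real test set would additionally interleave several disjoint solutions and parallel branches, as described above.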
We use the term Test Set Quality to characterize how demanding the generated test sets are. The solutions of the generated challenge include complex process patterns, such as multiple levels of sequential and parallel threads of execution. Furthermore, the Test Set Builder can generate multiple valid solutions, so there may be one solution which involves the minimum number of services and another one which uses more services but fewer execution steps due to better parallelism. As an example, we used one test set in the WSC 2008 where the minimum number of services for one solution was 37 with an execution depth of 17, while another solution existed with 46 services and an execution depth of 7 (Table 1).
The evaluation of the composition engines became much more interesting because of this improved Test Set Quality: in the example just mentioned, the participating composition engines delivered different optimal results.
[Figure: The evaluator architecture. A client application loads the OWL and WSDL documents into a SAX-based knowledge base and service registry, queries the contestant's composition system through its Web Service interface, and hands the composer's answer to the output writer.]
The final step is the analysis of the correctness of the solution. A correct solution is not only an executable composition (validity) but also delivers all wanted output parameters of the challenge (solutions). Furthermore, the evaluator determines the minimum set of services and the minimum number of execution steps among all correct solutions in its WSBPEL input.
<result>
  <validity numYes="2" ...>All proposed BPEL processes can be
    executed.</validity>
  ...
  <solutions numYes="2" ...>All proposed BPEL processes are correct solutions
    for the challenge.</solutions>
  <combinations ...>5406912 execution sequences ... were found in
    total.</combinations>
  <minLength ...>The shortest execution sequence that solves the challenge
    found is 14.</minLength>
  <minService ...>The execution sequence involving the least number of
    services ... consists of 20 services.</minService>
  <redundanceStat ...>No redundant services have been
    discovered.</redundanceStat>
</result>

Listing 5: The Evaluator XML output.
In this analysis, redundant service invocations are detected as well. A redundant service is part of a composition but either does not provide any necessary output parameter or is executed more than once.
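Such a redundancy check can be sketched as follows. This is a strong simplification of the evaluator: we model a composition as an ordered list of (service, outputs) pairs and lump the challenge goal and the inputs of later services together into one set of required outputs:

```python
from collections import Counter

def find_redundant(composition, required_outputs):
    """Return services that are invoked more than once or whose
    outputs are never needed. `composition` is a list of
    (service_name, set_of_output_concepts) pairs."""
    invocations = Counter(name for name, _ in composition)
    redundant = {name for name, n in invocations.items() if n > 1}
    for name, outputs in composition:
        if not (outputs & required_outputs):
            redundant.add(name)  # contributes no needed parameter
    return redundant

# Service A is invoked twice; service C produces only an unneeded output.
comp = [("A", {"x"}), ("B", {"y"}), ("A", {"x"}), ("C", {"z"})]
assert find_redundant(comp, {"x", "y"}) == {"A", "C"}
```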
6 Evaluation
The evaluation of the efficiency of the composition systems in the 2008 WSC consisted of two steps, as described below. A condition of the Web Service Challenge is that each composition system must be evaluated on the same test system, a Lenovo ThinkPad X61 with an Intel Core 2 Duo 1.6 GHz processor, 3 GB of memory, and Windows XP Professional as the operating system.
Three challenge sets were used in the competition, and each composition system could achieve up to 18 points per challenge set. The time limit for solving all challenges was 15 minutes. The scoring system for each challenge set was:
• +6 Points for finding the minimum set (Min. Services) of services that solves the
challenge.
• +6 Points for finding the composition with the minimum execution length (Min.
Execution) that solves the challenge.
• Additional points for:
1. +6 Points for the composition system which finds the minimum set of services or
execution steps that solves the challenge in the fastest time (Time (ms)).
2. +4 Points for the composition system which solves the challenge in the second
fastest time.
3. +2 Points for the composition system which solves the challenge in the third fastest
time.
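The point assignment above can be summarized as a small scoring function. This is a sketch of the published rules with hypothetical system names and results, not the official scoring code:

```python
def score_challenge_set(results):
    """Score one challenge set under the 2008 WSC rules.

    `results` maps a system name to a dict with keys "min_services"
    (bool), "min_execution" (bool), and "time_ms" (float, or None if
    the system produced no result in time)."""
    points = {name: 0 for name in results}
    for name, r in results.items():
        if r["min_services"]:
            points[name] += 6   # minimum set of services found
        if r["min_execution"]:
            points[name] += 6   # minimum execution length found
    # Speed bonus (+6/+4/+2) for the three fastest systems that found
    # a minimal composition at all.
    eligible = sorted(
        (name for name, r in results.items()
         if r["time_ms"] is not None
         and (r["min_services"] or r["min_execution"])),
        key=lambda name: results[name]["time_ms"])
    for bonus, name in zip((6, 4, 2), eligible):
        points[name] += bonus
    return points

# Hypothetical results for one challenge set:
r = {"A": {"min_services": True, "min_execution": True, "time_ms": 100.0},
     "B": {"min_services": True, "min_execution": True, "time_ms": 200.0},
     "C": {"min_services": False, "min_execution": False, "time_ms": 50.0}}
assert score_challenge_set(r) == {"A": 18, "B": 16, "C": 0}
```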
                    Tsinghua University   University of Groningen   Pennsylvania State Univ.
                    Result    Points      Result    Points          Result     Points
Challenge Set 1
  Min. Services     10        +6          10        +6              10         +6
  Min. Execution    5         +6          5         +6              5          +6
  Time (ms)         312       +4          219       +6              28078
Challenge Set 2
  Min. Services     20        +6          20        +6              20         +6
  Min. Execution    8         +6          10                        8          +6
  Time (ms)         250       +6          14734     +4              726078
Challenge Set 3
  Min. Services     46                    37        +6              No result.
  Min. Execution    7         +6          17
  Time (ms)         406       +6          241672    +4
Sum                 46 Points             38 Points                 24 Points

Table 1: Web Service Challenge 2008 Score Board
The results of the Web Service Challenge 2008 are listed in Table 1, which is limited to the results of the first three places of the eight participants. The performance winners of the Web Service Challenge 2008 are:
1. Y. Yan, B. Xu, and Z. Gu, Tsinghua University, Beijing, China.
2. M. Aiello, N. van Benthem, and E. el Khoury, University of Groningen, Netherlands.
3. J. Jung-Woo Yoo, S. Kumara, and D. Lee, Pennsylvania State University, and S.-C. Oh, General Motors R&D Center, Michigan, USA.
7 Related Work
The Web Service Challenge was the first competition for (semantic) service composition. In the early years, there was also a syntactic challenge; later, the introduction of semantics also covered the syntactic tasks and hence a purely semantic approach was favored. In this section, we give a short overview of other related competitions.
The closest related event is the Semantic Web Service (SWS) Challenge [14]. In contrast to the WSC, the SWS defines test scenarios which the participants have to solve. The SWS committees see themselves more as a certification event for frameworks than as a competition. Whereas the WSC generates vast test sets and concentrates on standardized formats, the SWS defines challenge scenarios which must be solved by creating new specification languages, matching algorithms, and execution systems. The scenarios in the SWS comprise state-based services, service mediation, ontology reasoning, and service provisioning. The certified frameworks not only solve one or more scenarios, but are also able to execute their results at runtime.
Another closely related contest is the IEEE International Services Computing Contest (SCContest 2007–2008) [15]. This is a Web Service-centric demonstration event where participants choose both the problem definition and the solution themselves. The participants are invited to demonstrate methodologies, reference architectures, and tools. The winner of this contest is determined by evaluating the importance of the contribution to Service-Oriented Computing and the implementation quality.
Bibliography
[1] M.B. Blake, K.C. Tsui, and A. Wombacher. The eee-05 challenge: a new web service
discovery and composition competition. In Proceedings of the 2005 IEEE International
Conference on e-Technology, e-Commerce, and e-Service, EEE’05, pages 780–783, 2005.
[2] Proceedings of 2006 IEEE Joint Conference on E-Commerce Technology (CEC’06) and
Enterprise Computing, E-Commerce and E-Services (EEE’06). IEEE, 2006.
[3] Proceedings of IEEE Joint Conference (CEC/EEE 2007) on E-Commerce Technology (9th
CEC’07) and Enterprise Computing, E-Commerce and E-Services (4th EEE’07). IEEE,
2007.
2 see http://cec2009.isis.tuwien.ac.at/ (2008-11-03)
[4] Proceedings of IEEE Joint Conference (CEC/EEE 2008) on E-Commerce Technology (10th
CEC’08) and Enterprise Computing, E-Commerce and E-Services (5th EEE’08). IEEE,
2008.
[5] S. Bechhofer, F. van Harmelen, J. Hendler, I. Horrocks, D.L. McGuinness, P.F. Patel-
Schneider, and L.A. Stein. OWL Web Ontology Language Reference. W3C, 2004. W3C
Recommendation. URL: http://www.w3.org/TR/2004/REC-owl-ref-20040210/ [accessed 2007-11-
22].
[6] D. Booth and C.K. Liu. Web Services Description Language (WSDL) Ver-
sion 2.0 Part 0: Primer. W3C, 2007. W3C Recommendation. URL:
http://www.w3.org/TR/2007/REC-wsdl20-primer-20070626 [accessed 2007-09-02].
[7] D. Jordan, J. Evdemon, A. Alves, A. Arkin, S. Askary, C. Barreto, B. Bloch, F. Curbera,
M. Ford, Y. Goland, A. Guı́zar, N. Kartha, C.K. Liu, R. Khalaf, D. König, M. Marin,
V. Mehta, S. Thatte, D. van der Rijn, P. Yendluri, and A. Yiu. Web Services Busi-
ness Process Execution Language Version 2.0. OASIS, April 11, 2007. URL:
http://docs.oasis-open.org/wsbpel/2.0/wsbpel-v2.0.pdf [accessed 2007-10-25].
[8] T. Weise, S. Bleul, D. Comes, and K. Geihs. Different approaches to semantic web ser-
vice composition. In A. Mellouk, J. Bi, G. Ortiz, D.K.W. Chiu, and M. Popescu, editors,
Proceedings of The Third International Conference on Internet and Web Applications and
Services, ICIW 2008, pages 90–96. IEEE, 2008.
[9] A. Bansal, M.B. Blake, S. Kona, S. Bleul, T. Weise, and M.C. Jaeger. Wsc-08: Continuing
the web service challenge. In CEC/EEE’08, pages 351–354, 2008. In [4].
[10] M.B. Blake, W.K.W. Cheung, M.C. Jaeger, and A. Wombacher. Wsc-06: The web service
challenge. In CEC/EEE’06, pages 505–508, 2006. In [2].
[11] M.B. Blake, W.K.W. Cheung, M.C. Jaeger, and A. Wombacher. Wsc-07: Evolving the web
service challenge. In CEC/EEE’07, pages 422–423, 2007. In [3].
[12] S. Bleul, D. Comes, M. Kirchhoff, and M. Zapf. Self-integration of web services in bpel
processes. In Proceedings of the Workshop Selbstorganisierende, Adaptive, Kontextsensitive verteilte Systeme (SAKS), 2008.
[13] S. Bleul, K. Geihs, D. Comes, and M. Kirchhoff. Automated Management of Dynamic
Web Service Integration. In Proceedings of the 15th Annual Workshop of HP Software Uni-
versity Association (HP-SUA), Kassel, Germany, 2008. Infocomics-Consulting, Stuttgart,
Germany.
[14] SWS - Challenge on Automating Web Services Mediation, Choreography and Discovery,
Challenge Homepage. URL: http://sws-challenge.org/ [accessed 2008-11-03].
[15] IEEE Services Computing Contest (SCC) 2008, Contest Homepage. URL:
http://iscc.servicescomputing.org/2008/ [accessed 2008-11-03].