Semantic Web Service Discovery
in a Multi-Ontology
Environment
Swapna Oundhakar
Large Scale Distributed Information Systems (LSDIS) Lab,
Department of Computer Science,
The University of Georgia
Acknowledgements
• Advisory Committee
 Dr. Amit P. Sheth ( Major – Advisor )
 Dr. Hamid Arabnia
 Dr. Liming Cai
• LSDIS Student Members
 Kunal Verma
Outline
• Web services – Introduction
• Multi-Ontology Environment
• Approach
• Testing
• Contributions
• Future Work
• References
WWW – Past & Present
Web Service Promise
• Easier Service
Interoperation
• Reduced Costs
• Increased Efficiency
Web Services – Definition
 Web services are a new breed of Web application.
They are self-contained, self-describing, modular
applications that can be published, located, and
invoked across the Web. (IBM web service tutorial)
 A Web service is a software application identified by
a URI, whose interfaces and bindings are capable of
being defined, described and discovered by XML
artifacts and supports direct interactions with other
software applications using XML based messages via
Internet-based protocols. (W3C definition)
Web Services – Standards
• Web Services revolve around UDDI, WSDL, SOAP
– WSDL
• Is a standard for describing a Web service.
• It is an XML grammar for specifying properties of a Web service
such as what it does, where it is located and how it is invoked.
• Does not support semantic description of web services
– SOAP
• SOAP is an XML based messaging protocol.
• It defines a set of rules for structuring the messages being
exchanged.
– UDDI
• UDDI is a registry or a directory where Web service descriptions
can be advertised and discovered
• By itself does not support semantic description of services and
depends on the functionality of the content language
Web Services – Architecture
[Figure: the publish–find–bind triangle connecting the Service Provider, the Service Registry, and the Service Requester.]
Web Services – Problems
• Heterogeneity and Autonomy
– Syntactic descriptions cannot deliver all the
intended semantics
– Solution – Machine-processable descriptions
• Dynamic nature of business interactions
– Demands – Efficient Discovery, Composition, etc.
• Scalability (Enterprises  Web)
– Needs – Automated service discovery/selection
and composition
Web Service Discovery – Problems
• Current Discovery Approach –
– UDDI Search
• Keyword based – Checks for Service names
containing the Keyword
• Browse Categories
• Problem
– Once a set of services is returned, the user has to
manually browse it to find the required service
– Or has to know the category in advance
Solution
 Semantic Web
– Keyword based search improved by adding semantics to
web pages
– Apply semantics to service web – Semantic web services
[Figure: bringing the Web to its full potential – the static WWW (URI, HTML, HTTP) evolves along one axis to the Semantic Web (RDF, RDF(S), OWL) and along the other to dynamic Web Services (UDDI, WSDL, SOAP), converging on Intelligent Web Services.]
D. Fensel, C. Bussler, "The Web Service Modeling Framework WSMF", Technical Report, Vrije Universiteit Amsterdam
What are Ontologies?
• Ontology as “an explicit specification of a
conceptualization” - [Gruber, 1993]
• Ontology as a “shared understanding of some
domain of interest” – [Uschold and Gruninger,
1996]
• An ontology provides – A common vocabulary of terms
– Some specification of the meaning of the terms
(semantics)
– An agreement for shared understanding for people
and machines.
Single Global Ontology
• Having all Web services adhere to one large single
ontology is not practical
• Why?
– Variety of applications and the intent behind
their development
– The real world cannot be modeled with a single
ontology
– Even within a single domain, different domain
experts, users, research groups and organizations
can conceptualize the same real-world entities
differently, leading to multiple domain
ontologies
Single Global Ontology (Continued)
• Why?
– The scope of an ontology or for that matter a domain is
usually arbitrary and cannot be formally defined.
• Multiple ontologies for the same function
• Example – Travel Syndicators like Travelocity, Expedia partner
with different airlines which may have different ontologies
• Multiple Ontologies for an Activity
• Example - Travel planning service from an organization may
require ontologies from hospitality domain and travel domain
• Coexistence of independent organization and
organizational changes
• Example - Cisco acquired 40 companies in a year
Multiple Domain Ontologies
• This leads to differences and overlaps between the
models used for ontologies.
• A more practical approach would be for Web services
to conform to local, domain or application specific
ontologies
• In such an environment where the service requests
and advertisements adhere to different ontologies,
there arises a need for a matching algorithm which
can deal with the domain semantics modeled by
multiple ontologies
Relationship to Previous Works on
Multi-ontology Environment
• In context of Data / Information
– First reported effort - OBSERVER@LSDIS [Mena et al.,
1996]
– Several other subsequent efforts on ontology mapping /
alignment / merging [Kalfoglou and Schorlemmer, 2003]
• In context of Services
– Cardoso’s Ph.D. thesis [Cardoso 2002] - presents
preliminary work on service discovery in multiple
ontology environment
• This thesis advances [Cardoso 2002] by introducing new
measures: Context, Coverage
– Higher quality matches
Semantic Web Service Discovery –
Approach
• First the user’s service requirement is
captured in a semantic search template
• This search template is then matched against
a set of candidate Web services
• The result of this operation is then returned
to the user as a set of ranked services
Semantic Web Service Discovery –
Service Template
• Service Template (ST)
– ST = < N_ST, D_ST, OPs_ST >
• N_ST : Name of the Web service to be found
• D_ST : Textual description of the Web service
• OPs_ST : Set of operations of the required Web service
– Each operation is in turn specified as
– OP = < N_OP, D_OP, Os_ST, Is_ST >
– Where,
» N_OP : Name of the operation
» D_OP : Description of the operation
» Os_ST : Outputs of the operation
» Is_ST : Inputs of the operation
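The ST tuple above can be rendered as a small data structure. This is an illustrative sketch only – the class names, field names, and the example description strings are my own, not taken from the thesis implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Operation:
    # OP = <N_OP, D_OP, Os, Is>; outputs/inputs hold ontological concept names
    name: str
    description: str
    outputs: list[str] = field(default_factory=list)
    inputs: list[str] = field(default_factory=list)

@dataclass
class ServiceTemplate:
    # ST = <N_ST, D_ST, OPs_ST>
    name: str
    description: str
    operations: list[Operation] = field(default_factory=list)

# Hypothetical template mirroring the StockService example
st = ServiceTemplate(
    name="StockService",
    description="stock quote and company news lookup",
    operations=[
        Operation("getStockQuote", "quote lookup",
                  outputs=["StockQuote"], inputs=["StockTicker"]),
        Operation("getCompanyNews", "news lookup",
                  outputs=["News"], inputs=["companyName"]),
    ],
)
```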
Semantic Web Service Discovery –
Service Template
• Example Service Template

Search Template                     Ontological Concept
Service Name   StockService
Operation      getStockQuote
  Input        Ticker               StockTicker
  Output       StockQuote           StockQuote
Operation      getCompanyNews
  Input        CompanyName          companyName
  Output       CompanyNews          News
Semantic Web Service Discovery –
Candidate Service
• Example Candidate Service (CS)

Candidate Service                   Ontological Concept
Service Name   XQuotes
Operation      getQuickQuote
  Input        quickQuoteInput      MultipleQuote
  Output       quickQuoteResponse   QuickQuote
Operation      getQuote
  Input        quoteInput           MultipleQuote
  Output       quoteResponse        StockQuote
Operation      getFundQuote
  Input        fundQuoteInput       FundQuoteQuery
  Output       fundQuoteResponse    fundQuote
Semantic Web Service Discovery –
Matching
• Matching ST and CS
– The Search Template is matched against a set of
candidate Web services in a registry
– Match Scores are calculated for each (ST, CS) pair
– These match scores are normalized over the interval of
0 to 1
– Finally the pairs are ranked in the descending order of
the match score
Algorithm
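The matching steps just described – score every (ST, CS) pair, normalize to [0, 1], rank descending – can be sketched as a small loop. The similarity callables and example scores below are stand-ins, not the thesis's actual functions:

```python
def match_score(st, cs, syntactic_sim, functional_sim, w1=0.5, w2=0.5):
    """Overall MS: weighted average of syntactic and functional similarity."""
    return (w1 * syntactic_sim(st, cs) + w2 * functional_sim(st, cs)) / (w1 + w2)

def discover(st, candidates, syntactic_sim, functional_sim):
    """Return (candidate, score) pairs ranked by descending match score."""
    scored = []
    for cs in candidates:
        ms = match_score(st, cs, syntactic_sim, functional_sim)
        scored.append((cs, min(1.0, max(0.0, ms))))  # clamp into [0, 1]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy run with stubbed similarity functions (values are illustrative)
ranked = discover(
    "StockService",
    ["XQuotes", "xMarketInfo"],
    syntactic_sim=lambda st, cs: {"XQuotes": 0.22, "xMarketInfo": 0.27}[cs],
    functional_sim=lambda st, cs: {"XQuotes": 0.92, "xMarketInfo": 0.10}[cs],
)
```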
Semantic Web Service Discovery –
Service Similarity
• Matching ST and CS
• The overall service similarity is the weighted
average of the Syntactic and Functional similarities

MS(ST, CS) = [ w1 * SyntacticSim(ST, CS) + w2 * FunctionalSim(ST, CS) ] / (w1 + w2),   MS ∈ [0, 1]
Semantic Web Service Discovery –
Syntactic Similarity
• Syntactic Similarity
– Relies on the name and description of ST and CS
– Calculated as the weighted average of the Name
similarity (NameMatch(ST, CS)) and the Description
similarity (DescrMatch(ST, CS))

SyntacticSim(ST, CS) =
  [ w3 * NameMatch(ST, CS) + w4 * DescrMatch(ST, CS) ] / (w3 + w4)   if Descr(ST) ≠ ∅ and Descr(CS) ≠ ∅
  NameMatch(ST, CS)                                                  if Descr(ST) = ∅ or Descr(CS) = ∅

SyntacticSim ∈ [0, 1]
Syntactic Similarity - Example
• Name Similarity using the NGram algorithm

Service Template   Candidate Service    Syntactic Similarity
stockService       xDividendInfo        0.20
stockService       xMarketInfo          0.27
stockService       xQuotes              0.22
stockService       StockQuoteService    0.77
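The slides do not spell out which NGram variant is used; a common formulation is the Dice coefficient over character bigrams, sketched here as an assumption:

```python
def ngrams(s, n=2):
    """Character n-grams of a lowercased string."""
    s = s.lower()
    return [s[i:i + n] for i in range(len(s) - n + 1)]

def ngram_sim(a, b, n=2):
    """Dice coefficient: 2 * |shared n-grams| / (total n-grams)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    shared = 0
    pool = list(gb)
    for g in ga:            # multiset intersection
        if g in pool:
            pool.remove(g)
            shared += 1
    return 2 * shared / (len(ga) + len(gb))
```

With this variant, identical names score 1.0 and names sharing no bigrams score 0.0; the exact values in the table above depend on the thesis's actual NGram implementation.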
Semantic Web Service Discovery –
Functional Similarity
• Functional Similarity
– Calculated as the average match score of the operations
of ST and CS

FS = ( Σ_{i=1..n} fs_i ) / n,   FS ∈ [0, 1]

where,
fs_i = best functional similarity of the i-th operation of ST
n = number of operations of ST

getFM(OP_ST, OP_CS) = best( getfm(OP_ST, OP_i_CS) )
where OP_i_CS represents an individual operation of CS
Semantic Web Service Discovery –
Matching Two Operations
• Matching two operations
– Weighted average of Syntactic similarity, Conceptual
similarity and IO similarity

fs = getfm(OP_ST, OP_CS) =
  [ w5 * SynSim(OP_ST, OP_CS) + w6 * ConceptSim(OP_ST, OP_CS) + w7 * IOSim(OP_ST, OP_CS) ] / (w5 + w6 + w7)
Functional Similarity - Example
ST Operation    CS Operation    SynSim   ConceptSim   IOSim   fs
getStockQuote   getQuickQuote   0.76     1.00         0.52    0.71
getStockQuote   getQuote        0.75     1.00         0.94    0.92
getStockQuote   getFundQuote    0.63     -0.12        0.48    0.33
Best
• Matching two operations
– getStockQuote is matched against all three operations of CS
individually
• Best Match
– As the "fs" value for getQuote is the maximum, it is picked
as the matching operation
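The operation-level formula and the best-match selection can be sketched as follows. Equal weights are used here for illustration (the thesis's weights are not given), so the fs values differ from the table; the best-match choice is the same:

```python
def op_sim(syn, concept, io, w5=1.0, w6=1.0, w7=1.0):
    """fs: weighted average of SynSim, ConceptSim and IOSim for one operation pair."""
    return (w5 * syn + w6 * concept + w7 * io) / (w5 + w6 + w7)

def best_op_match(st_op, cs_ops, sims):
    """Pick the CS operation whose fs against the ST operation is maximal."""
    return max(cs_ops, key=lambda cs_op: op_sim(*sims[(st_op, cs_op)]))

# (SynSim, ConceptSim, IOSim) values from the example table above
sims = {
    ("getStockQuote", "getQuickQuote"): (0.76, 1.00, 0.52),
    ("getStockQuote", "getQuote"):      (0.75, 1.00, 0.94),
    ("getStockQuote", "getFundQuote"):  (0.63, -0.12, 0.48),
}
best = best_op_match("getStockQuote",
                     [cs for (_, cs) in sims], sims)
```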
Semantic Web Service Discovery –
IO Similarity
• Similarity between the inputs and outputs of operations
• Calculated as the geometric mean of
– Similarity of inputs, i.e. InputSim
– Similarity of outputs, i.e. OutputSim

IOSim(OP_ST, OP_CS) =
  sqrt( InputSim(OP.Is_ST, OP.Is_CS) * OutputSim(OP.Os_ST, OP.Os_CS) )   if OP.Is_ST ≠ ∅ and OP.Os_ST ≠ ∅
  InputSim(OP.Is_ST, OP.Is_CS)                                           if OP.Is_ST ≠ ∅ and OP.Os_ST = ∅
  OutputSim(OP.Os_ST, OP.Os_CS)                                          if OP.Is_ST = ∅ and OP.Os_ST ≠ ∅

Each input of the ST operation is paired with its best-matching candidate input:

InputSim(OP.Is_ST, OP.Is_CS) is computed from Max_{I_CS ∈ OP.Is_CS} iSim(I_ST, I_CS)
for each I_ST ∈ OP.Is_ST, and analogously for OutputSim
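A sketch of IOSim under the assumptions above: each ST input/output takes its best-matching CS counterpart, the per-side scores are averaged (my reading of the InputSim definition), and the two sides are combined by a geometric mean. Candidate sides are assumed non-empty when the ST side is:

```python
from math import sqrt

def pairset_sim(st_items, cs_items, isim):
    """Average, over ST items, of the best-matching CS item similarity."""
    if not st_items:
        return None
    return sum(max(isim(s, c) for c in cs_items) for s in st_items) / len(st_items)

def io_sim(st_inputs, st_outputs, cs_inputs, cs_outputs, isim):
    """Geometric mean of InputSim and OutputSim; fall back to whichever side exists."""
    in_sim = pairset_sim(st_inputs, cs_inputs, isim)
    out_sim = pairset_sim(st_outputs, cs_outputs, isim)
    if in_sim is not None and out_sim is not None:
        return sqrt(in_sim * out_sim)
    return in_sim if in_sim is not None else out_sim
```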
Semantic Web Service Discovery –
Concept Matching
• Concept Matching Algorithm
– Inputs, Outputs and Operations of a Service Template and
a Candidate Web service are annotated with Ontological
Concepts
– Similarity of an individual input or output pair is calculated
as the similarity of the concepts they are annotated with
– Concept similarity has four dimensions
• Syntactic similarity – Names and descriptions of the concepts
• Feature or property similarity – Most important
• Coverage similarity – Signifies the abstraction level of the concept
• Context similarity – Helps to understand the concept better
– Calculated as the weighted average of the above four
values

conceptSim(C_ST, C_CS) = [ w8 * synSim + w9 * propSim + w10 * cvrgSim + w11 * ctxtSim ] / (w8 + w9 + w10 + w11)
Semantic Web Service Discovery –
Property Similarity
• Property Similarity
– A concept is defined by its properties, hence matching
these properties is the most important step in matching
two concepts
• Syntactic similarity – The syntactic information of the property, i.e.
its name and description
• Range similarity – Similarity of the values the property can take
• Cardinality similarity – How many values the property can take
• "c" – Constant (explained later)
– Unmatched properties are penalized, with a deduction
of 0.05 for each unmatched property

propSim = c * cbrt( rangeSim * crdnSim * synSim ) − 0.05 * (number of unmatched properties)
Semantic Web Service Discovery –
Property Similarity (Continued)
• Property Similarity – Constant "c"
– Calculated based on whether the properties being mapped are
inverse functional properties
• Inverse functional property is an OWL construct: if a property is
inverse functional, its value is unique for every instance of that
concept
• Example – An SSN is unique for a person and a stock symbol is
unique for every stock. No two stocks can have the same stock
symbol and no two persons can have the same SSN.
– Such information gives more insight into the real-world entity
captured by the concept being matched
– For non-OWL ontologies the second case of Equation 11 is used

c =
  1    if p_ST, p_CS are inverse functional properties
  1    if p_ST, p_CS are not inverse functional properties
  1    if cardinality(p_ST) = 1 and p_CS is an inverse functional property
  0.8  if cardinality(p_CS) = 1 and p_ST is an inverse functional property
Semantic Web Service Discovery –
Range Similarity
• Range similarity
– The values a property can take are characterized by its range,
hence range similarity is important
– A range can either be a primitive data type or another ontological
concept
– Case 1: both property ranges are primitive data types
– Case 2: both property ranges are ontological concepts
• Shallow Concept Match
– Syntactic similarity – Names of the concepts
– Property similarity – Similarity of the names of the properties

rangeSim = rangeMatch(p_ST, p_CS) = [ w12 * synSim + w13 * propSynSim ] / (w12 + w13)

propSynSim(p.Range_ST, p.Range_CS) = | props(p.Range_ST) ∩ props(p.Range_CS) | / | props(p.Range_ST) |
Semantic Web Service Discovery –
Cardinality Similarity
• Cardinality similarity (crdnSim)
– Cardinality provides the information about how many
range values a property can take at a time
– The match value is lower if the ST requirement is not
satisfied completely

crdnSim(p_ST, p_CS) =
  1    if cardinality(p_ST) = cardinality(p_CS)
  1    if p_ST and p_CS are functional properties
  0.9  if cardinality(p_ST) < cardinality(p_CS)
  0.7  if cardinality(p_ST) > cardinality(p_CS)
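The property-similarity pieces fit together as below. The comparison operators in crdnSim are reconstructed from the "ST requirement not satisfied" remark; similarity inputs are assumed non-negative so the cube root is real:

```python
def crdn_sim(card_st, card_cs, st_functional=False, cs_functional=False):
    """Cardinality similarity: exact or functional match scores 1,
    a larger CS cardinality 0.9, an insufficient CS cardinality 0.7."""
    if card_st == card_cs or (st_functional and cs_functional):
        return 1.0
    return 0.9 if card_st < card_cs else 0.7

def prop_sim(syn, rng, crdn, c=1.0, unmatched=0):
    """propSim = c * cbrt(rangeSim * crdnSim * synSim) - 0.05 per unmatched property."""
    return c * (rng * crdn * syn) ** (1.0 / 3.0) - 0.05 * unmatched
```

For instance, the bestBidSize/Bid_Size row of the example table (0.72, 1.00, 0.90, c = 1) yields 0.87 under this formula.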
Property Similarity – Example

Property Pair (StockOnt / xIgniteStocks)          Syntactic   Range       Cardinality   c      Total
                                                  Similarity  Similarity  Similarity           Similarity
bestBidSize / Bid_Size                            0.72        1.00        0.90          1.00   0.87
bestAskSize / Ask_Size                            0.70        1.00        0.90          1.00   0.86
lastSale / Last                                   0.79        1.00        0.70          1.00   0.82
Open / Open                                       1.00        1.00        1.00          1.00   1.00
fiftyTwoWeekLow (cardinality > 1) /
  Low_52_Weeks (inverse functional property)      0.54        1.00        0.90          0.80   0.63
dayHigh / High                                    0.66        1.00        1.00          1.00   0.87
Date (xml:date) / Date (xml:string)               1.00        0.50        1.00          1.00   0.79
tradingVolume / Volume                            0.56        1.00        1.00          1.00   0.82
tickerSymbol (ObjectProperty, inverseFunctional) /
  Symbol (DataTypeProperty, inverseFunctional)    0.60        0.50        1.00          1.00   0.67
Semantic Web Service Discovery –
Coverage Similarity
• Coverage Similarity (cvrgSim)
– Signifies the level of abstraction of the concept
– If the candidate matches the immediate parent of the requirement
concept, the coverage similarity is reduced by 0.1; for a grandparent
by 0.2, and so on
– A smaller reduction, in multiples of 0.05, is applied if the candidate
concept is a sub-concept
• A sub-concept satisfies the properties of the requirement completely
• It still needs to be distinguished from a complete match – hence the lesser penalty

cvrgSim(C_ST, C_CS) =
  1                   if MS(C_ST, C_CS) ≥ 0.8
  1 − 0.1 * 2^(x−1)   if MS(C_ST, parent_x(C_CS)) ≥ 0.8, where x is the level of the parent above the ST concept
  1 − 0.05 * y        if MS(C_ST, child_y(C_CS)) ≥ 0.8, where y is the level of the child below the ST concept
  0                   otherwise
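The coverage penalties can be sketched as a small function. The parent-side factor 0.1 * 2^(x−1) follows the reconstruction above (the prose gives only the first two steps, 0.1 and 0.2); the relation/level encoding is my own simplification:

```python
def cvrg_sim(relation, level=0):
    """Coverage similarity given how the best-matching concept relates
    to the ST concept: 'same', 'parent' (level steps up), 'child'
    (level steps down), or 'unrelated'."""
    if relation == "same":
        return 1.0
    if relation == "parent":
        return 1.0 - 0.1 * 2 ** (level - 1)
    if relation == "child":
        return 1.0 - 0.05 * level
    return 0.0
```

With this, an immediate child (e.g. StockDetailQuote below StockQuote) scores 0.95 and an immediate parent scores 0.90, matching the test cases discussed later.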
Semantic Web Service Discovery –
Context Similarity
• Attempt to understand more about a concept by considering
the semantic similarity and semantic disjointness of the
concepts in the vicinity of that concept
• SameConceptSet
– Set of the concepts which describes the same real world entities
as described by the concept in question
• DifferentConceptSet
– Set of concepts which are semantically disjoint from the concept
in question
• For example, a Bond is a Fixed Income Fund and not a
Stock; Bond is also a different concept from
InvestmentCompany
– SameConceptSet(Bond) = {FixedIncomeFund, Bond}
– DifferentConceptSet(Bond) = {Stocks, InvestmentCompany}
Semantic Web Service Discovery –
Context Similarity
• SameConceptSet
– If an ontology specifies them as same or equivalent
concepts
• e.g. OWL language has constructs like sameClassAs or
equivalentClass to describe that two concepts are
similar to each other
– Member concepts of the main concept i.e. concepts
which are used to define main concept
• e.g. OWL has collection classes, which are described in
terms of other classes
– The concept itself is also added in the
SameConceptSet
Semantic Web Service Discovery –
Context Similarity
• DifferentConceptSet
– Concepts explicitly modeled as disjoint from the main concept
• e.g. in OWL, concepts related by disjointWith or
complementOf relationships
– Concepts appearing as ranges of properties of the main concept,
except when the range is the concept itself
• e.g. in the stocks ontology, the Company concept has a property
with the Stocks concept as its range, hence Company and Stocks
do not represent the same concept
– Concepts which have properties with the main concept as their range
• e.g. Stocks appears as the range of the investsIn property of the
MutualFund concept, hence MutualFund goes in the
DifferentConceptSet of Stocks
– Siblings of the main concept
• They depict an entirely different specialization than the main
concept
• e.g. EquityFund and FixedIncomeFund are siblings and cannot
replace each other in a request
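The set-building rules above can be sketched over a toy ontology encoding. The dictionary layout ("equivalent", "members", "disjoint", "ranges", "parent") is my own simplification of what an OWL model provides, not the thesis's representation:

```python
def same_concept_set(onto, concept):
    """The concept itself, declared equivalents, and member concepts
    of a collection class."""
    node = onto[concept]
    return {concept} | set(node.get("equivalent", [])) | set(node.get("members", []))

def different_concept_set(onto, concept):
    """Declared disjoints, property ranges other than the concept itself,
    concepts whose property ranges point at this concept, and siblings."""
    node = onto[concept]
    out = set(node.get("disjoint", []))
    out |= {r for r in node.get("ranges", []) if r != concept}
    out |= {c for c, n in onto.items()
            if concept in n.get("ranges", []) and c != concept}
    parent = node.get("parent")
    if parent:
        out |= {c for c, n in onto.items()
                if n.get("parent") == parent and c != concept}
    return out - same_concept_set(onto, concept)

# Toy ontology around the Bond example
onto = {
    "Bond": {"equivalent": ["FixedIncomeFund"], "disjoint": ["Stocks"],
             "ranges": ["InvestmentCompany"], "parent": "Fund"},
    "FixedIncomeFund": {"parent": "Fund"},
    "EquityFund": {"parent": "Fund"},
    "Stocks": {},
    "InvestmentCompany": {},
}
```

Note the sibling rule also places EquityFund in DifferentConceptSet(Bond) in this toy model, while the equivalent FixedIncomeFund is excluded.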
Semantic Web Service Discovery –
Context Similarity
1.0



 1.0




 1.0


contextSim  


 1.0






 ms1 * ms2


 SC ST shallowCon ceptMatch  SC ST , SC CS   0.8
where, SC ST  sameConceptSet  C ST , SC CS  sameConceptSet  C CS 
 SC ST shallowCon ceptMatch  SC ST , DC CS   1
where, SC ST  sameConceptSet  C ST , DC CS  differentConceptSet  C CS 
 DC ST shallowCon ceptMatch  DC ST , SC CS   1
where, DC ST  differentConceptSet  C ST , SC CS  sameConceptSet  C CS 
avgConceptMatch  SC ST , SC CS   0.5 and
avgConceptMatch  SC ST , DC CS   0.8 or
avgConceptMatch  DC ST , SC CS   0.8
where, ms1  avgConceptMatch  SC ST , SC CS 
and ms2  avgConceptMatch  DC ST , DC CS 
Web Service Discovery Algorithm –
Comparison
• Service Description Language
– [Cardoso, 2002]: DAML-S
– This algorithm: Annotated WSDL
• Similarity Calculation
– [Cardoso, 2002]: Inputs and outputs of all operations considered together
– This algorithm: Operations are matched separately; inputs and outputs are not matched directly but as part of the operation match
• Property Similarity
– [Cardoso, 2002]: No penalty for unmatched properties; no cardinality consideration
– This algorithm: Penalty proportional to the number of unmatched properties; cardinality considered
• Context Similarity
– [Cardoso, 2002]: Not considered
– This algorithm: Calculated based on the concepts in the vicinity of the concepts being matched
• Coverage Similarity
– [Cardoso, 2002]: No explicit consideration
– This algorithm: Explicitly calculated, even for concepts from different ontologies
• QoS
– [Cardoso, 2002]: SWR algorithm for QoS
– This algorithm: No QoS consideration (future work)
Comparison with
Single Ontology Approach
• Similarity Measures
– In a single ontology environment, syntactic and property
similarity can give enough information about a concept to give
good matches
– In a multi-ontology environment, concepts can be modeled
with different levels of abstraction and hence considering only
syntactic and property information does not provide enough
information about the concept
– Measures used in this approach like Context and Coverage
similarity provide this extra information
Comparison with
Single Ontology Approach
• Linguistic Issues
– In a single ontology environment, matching properties and
concepts based on names is enough since properties are
inherited for the related concepts
– In a multi-ontology environment, names of properties and
concepts can be synonyms, hypernyms, hyponyms,
homonyms of each other and hence matching them
syntactically can return bad match scores
– This approach uses WordNet based algorithm, custom
abbreviation dictionary etc. to tackle this problem
Comparison with
Single Ontology Approach
• Model level issues
– In a single ontology environment, the single ontology model
does not pose any structure- or model-level issues
– In a multi-ontology environment, ontologies can use
different modeling techniques which need to be reconciled
before matching two concepts, e.g. XML Schema models
collection concepts as complexTypes or simpleTypes whereas
OWL models them as collection classes
– A common representation format helps to bridge this gap
Testing

Web Service Discovery – Testing

[Bar chart: concept-match measures for eight test cases.
Case 1: StockOnt, StockQuote; Case 2: StockOnt, StockDetailQuote; Case 3: StockOnt, StockQuickQuote; Case 4: StockOnt, FundQuote; Case 5: xIgniteStocks, StockQuote; Case 6: xIgniteStocks, ExtendedQuote; Case 7: xIgniteStocks, QuickQuote; Case 8: xIgniteStocks, FundQuote.
Plotted series: Syntactic Similarity; Property Similarity without penalty; Property Similarity with penalty (MWSDI); Context Similarity (MWSDI); Coverage Similarity (MWSDI); Total Similarity (MWSDI); Total Similarity with only Syntactic & Property Match.]
Testing – Concept Matching
Candidate Concepts from Same Ontology
1. The simplest and best case is when the candidate concept is
the same as the ST concept
• All four dimensions give a similarity of 1 and the overall match
score is also 1
2. Candidate concept is a sub-concept of the ST concept
• Even though it satisfies the requirement completely, it is
not the exact concept required
– For StockDetailQuote, the coverage similarity is reduced by 0.05 as
it is the immediate child of StockQuote
– As the coverage similarity is non-zero and the concepts are from
the same ontology, the context similarity defaults to 1
– This boosts the overall similarity, giving a better match score
Testing – Concept Matching (Continued)
3. Candidate concept is a super-concept of the requirement
• Does not satisfy all the properties of the requirement
» A penalty of 0.05 is applied for each unmatched property
» This reduces the property similarity for StockQuickQuote to a negative value,
signifying that most of the properties of the requirement are not satisfied
» A deduction of 0.1 is applied to the coverage match
» Since the coverage similarity is non-zero and the concepts are from the same
ontology, the context similarity defaults to 1
• The overall concept similarity is reduced, but the context similarity of 1
gives an advantage
4. Candidate concept is from the same ontology but not related
• Treated in the same way as concepts from two different ontologies
» FundQuote gives a context match of -1 as it is modeled as a sibling of StockQuote
» The coverage similarity is 0 as StockQuote and FundQuote do not have a
subsumption relationship
» The property penalty for unmatched properties is applied
• Overall match score below zero
• This makes it easy to discard the match
Testing – Concept Matching (Continued)
Candidate Concepts from Different Ontology
5. Same concept from a different ontology
• The same StockQuote concept with slight variations
– All the properties are matched
– With the same name, the syntactic similarity is 1
– All the properties of both concepts match each other with a value
above 0.7, and the context and coverage similarities are also 1
• The overall match score is boosted
6. Candidate concept from a different ontology matches with a
sub-concept
• ExtendedQuote from xIgniteStocks matches better with
StockDetailQuote of StockOnt
– As StockDetailQuote is the immediate child of the ST concept, the
coverage match has a value of 0.95
– Here, all the properties of the requirement are satisfied
– A context similarity of 0.85 is obtained
• This boosts the overall match score
Testing – Concept Matching (Continued)
7. Concept from a different ontology matches better with a
super-concept
• QuickQuote matches StockQuickQuote, an immediate super-concept of
the ST concept
» The coverage similarity is reduced by 0.1
» QuickQuote is modeled as the sibling of StockQuote in its ontology, which
matches best with the ST concept, hence a value of -1 is assigned for the
context similarity
» A penalty is applied for unmatched properties
• This results in a very low overall match score
• Note that considering only syntactic and property similarity would
give a falsely good match score here
Testing – Concept Matching (Continued)
8. Unrelated concept from a different ontology
• The FundQuote concept has a good property similarity
before the penalty
» The property penalty reduces the property similarity to a low value
» In addition, since this concept matches better (MS > 0.8) with the
FundQuote concept from the ST ontology, which happens to be a
sibling of the requirement, a score of -1 is assigned to the context
similarity
» The coverage match is 0 as the two concepts do not have any
direct or indirect subsumption relationship
• The negative score for this pair makes it possible to discard the
match
• If only syntactic and property similarities were considered,
this concept would give a match score near 0.5
Web Service Discovery – Testing

[Graphs 1 and 2: normalized match scores for 14 candidate services built from operations such as getStockQuote, getStockQuickQuote, getDetailStockQuote, getQuickQuote, getExtendedQuote, getFundQuote, getStockNews, getStocksNews, getStockHeadlines, getCompanyNews, getDividendHistory, getExchanges, getHolidayList, LookupStocks, getTopGainer, getTopMoverByMarket. Graph 1 plots the MWSDI normalized MS; Graph 2 plots the normalized MS computed from input and output similarity without considering operations.]
Web Service Discovery – Testing
• Graph 1 shows the functional similarity values for 14 Web
services calculated using this discovery algorithm.
• Graph 2 describes the similarity values calculated using the
inputs and outputs of these operations together, using only
syntactic similarity and property similarity without penalty
• Each of these 14 Candidate Services depicts a distinct case of
how operations may be modeled in a Candidate Service with
respect to the ST.
Web Service Discovery – Testing
• Services 4 and 8
– The similarity values in Graph 1 are very low compared to
Graph 2
– False matches can be avoided with a proper threshold
• The algorithm boosts the values for Candidate Services 6 and 7,
as shown in Graph 1
• Candidate Services 9 and 10
– One operation matching very well to the first ST operation and a
second operation entirely different from the second ST operation
– Matching based on functional similarity can detect this difference,
whereas similarity based on inputs and outputs (Graph 2) fails to
detect this
Web Service Discovery – Testing
• Candidate Service 11
– The inputs and outputs of both operations match well when
taken together, without considering operations
– They do not match when operations are considered
– Here, ignoring operations gives a false match as in Graph 2,
but the algorithm described in this thesis avoids this false
match
• Candidate Service 12 is the counterpart of Service 11, from a
different ontology
– The algorithm avoids this false match too
• Candidate Services 13 and 14
– Both the operations entirely different from the requirement
– This algorithm is able to lower the match scores
Contributions
• In the real world there are multiple ontologies
• This thesis demonstrated how semantic
discovery can be improved by introducing two
new measures
– Context and Coverage
• The algorithms developed in this thesis can be
applied to other multi-ontology environments
• Multi-ontology environments can arise at different
stages of the Web service process lifecycle, and this
algorithm can be applied there as well
Future Work
• The service discovery algorithm can be extended to
support fault matching and QoS similarity
– Currently WSDL supports neither of these features
• Test the algorithm on a larger set of real-world data
by building a larger test bed of annotated services
Questions
References
1. [Gruber, 1993] T. Gruber, "A Translation Approach to Portable Ontology
Specifications", Knowledge Acquisition, 5(2), 199-220, 1993
2. [Uschold and Gruninger, 1996] M. Uschold and M. Gruninger, "Ontologies:
Principles, Methods and Applications", The Knowledge Engineering Review, 1996
3. [Cardoso, 2002] J. Cardoso, "Quality of Service and Semantic
Composition of Workflows", PhD Thesis, 2002
4. [Mena et al., 1996] E. Mena, V. Kashyap, A. Sheth and A. Illarramendi,
"OBSERVER: An Approach for Query Processing in Global Information
Systems based on Interoperation across Pre-existing Ontologies",
Conference on Cooperative Information Systems, Brussels, Belgium, IEEE
Computer Society Press, 1996
5. [Kalfoglou and Schorlemmer, 2003] Y. Kalfoglou and M. Schorlemmer,
"Ontology mapping: the state of the art", The Knowledge Engineering
Review, 18(1), January 2003
Additional Information
Semantic Web services – Approaches
• Describe Web services with ontology-based service
description languages, e.g. OWL-S, WSMF
• Add semantics by annotating the service descriptions of
existing Web service standards, e.g. METEOR-S
• Common factor – relate Web service I/O parameters with
ontological concepts
METEOR-S Project @ LSDIS lab
• METEOR-S exploits Workflow, Semantic Web,
Web Services, and Simulation technologies to
meet these challenges in a practical and
standards based approach.
– Applying Semantics in Annotation, Quality of
Service, Discovery, Composition, Execution
of Web Services
– Adding semantics to different layers of Web
services conceptual stack
– Use of ontologies to provide underpinning
for information sharing and semantic
interoperability
http://swp.semanticweb.org, http://lsdis.cs.uga.edu/proj/meteor/swp.htm
Semantics for Web Process Life-Cycle

[Figure: semantics required for Web processes across the life-cycle – Data/Information semantics, Functional semantics, QoS/Operational semantics, and Execution semantics. Representative technologies per stage: Development/Description/Annotation – WSDL, WSEL, DAML-S, METEOR-S (WSDL Annotation); Publication/Discovery – UDDI, WSIL, DAML-S, METEOR-S (P2P model of registries); Composition – BPEL, BPML, WSCI, WSCL, DAML-S, METEOR-S (SCET, SPTB); Execution – BPWS4J, commercial BPEL execution engines, Intalio n3, HP eFlow.]