Integration of Heterogeneous Databases without Common Domains Using Queries Based on Textual Similarity:
Embodied Cognition and Knowledge
William W. Cohen
Machine Learning Dept. and Language Technologies Inst.
School of Computer Science
Carnegie Mellon University
What was that paper, and who is this guy talking?
[Diagram: Machine Learning bridging representation languages (DBs, KR) and human languages (NLP, IR).]
WHIRL: Word-Based Heterogeneous Information Representation Language
History
• 1982/1984: Ehud Shapiro’s thesis:
  – MIS: learning logic programs as debugging an empty Prolog program
  – The thesis contained 17 figures and a 25-page appendix that together were a full implementation of MIS in Prolog
  – Incredibly elegant work
• “Computer science has a great advantage over other experimental sciences: the world we investigate is, to a large extent, our own creation, and we are the ones to determine if it is simple or messy.”
History
• Grad school in AI at Rutgers
• MTS at AT&T Bell Labs, in a group doing KR, DB, learning, information retrieval, …
• My work: learning logical (description-logic-like, Prolog-like, rule-based) representations that model large noisy real-world datasets.
History
• AT&T Bell Labs becomes AT&T Labs Research
• The web takes off
  – as predicted by Vinge and Gibson
• IR folks start looking at retrieval and question-answering with the Web
• Alon Halevy starts the Information Manifold project to integrate data on the web
  – VLDB 2006 10-year Best Paper Award for the 1996 paper on IM
• I started thinking about the same problem in a different way….
History: WHIRL motivation 1
• As the world of computer science gets richer and more complex, computer science can no longer limit itself to studying “our own creation”.
• Tension exists between
  – Elegant theories of representation
  – The not-so-elegant real world that is being represented
History: WHIRL motivation 1
• The beauty of the real world is its complexity….
History: integration by mediation
• Mediator translates between the knowledge in multiple separate KBs
• Each KB is a separate “symbol system”
  – No formal connection between them except via the mediator
WHIRL idea: exploit linguistic properties of the HTML “veneer” of web-accessible DBs
[Figure: the separate KBs, linked directly by TFIDF similarity over their text fields rather than by a hand-built mediator.]
WHIRL Motivation 2: Web KBs are embodied
Query Q:
  SELECT R.a, S.a, S.b, T.b FROM R, S, T
  WHERE R.a=S.a and S.b=T.b

Link items as needed by Q:

  R.a     | S.a    | S.b    | T.b
  Anhai   | Anhai  | Doan   | Doan     ← strongest links: those agreeable to most users
  Dan     | Dan    | Weld   | Weld
  William | Will   | Cohen  | Cohn     ← weaker links: those agreeable to some users
  Steve   | Steven | Minton | Mitton
  William | David  | Cohen  | Cohn     ← even weaker links…
WHIRL approach:
Query Q:
  SELECT R.a, S.a, S.b, T.b FROM R, S, T
  WHERE R.a~S.a and S.b~T.b     (~ means TFIDF-similar)

Link items as needed by Q. Incrementally produce a ranked list of possible links, with “best matches” first. The user (or a downstream process) decides how much of the list to generate and examine.

  R.a     | S.a    | S.b    | T.b
  Anhai   | Anhai  | Doan   | Doan
  Dan     | Dan    | Weld   | Weld
  William | Will   | Cohen  | Cohn
  Steve   | Steven | Minton | Mitton
  William | David  | Cohen  | Cohn
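As a rough illustration of the “~” operator, here is a minimal sketch of a TFIDF soft join in Python. It is my own toy code, not WHIRL’s actual implementation (which searches with inverted indices and A*-style pruning rather than scoring all pairs); the example values are illustrative.

  import math
  from collections import Counter

  def tfidf_vectors(strings):
      """Length-normalized TFIDF vectors (bag of words, log IDF) for a list of strings."""
      docs = [s.lower().split() for s in strings]
      df = Counter(t for d in docs for t in set(d))
      n = len(docs)
      vecs = []
      for d in docs:
          tf = Counter(d)
          v = {t: (1 + math.log(c)) * math.log(n / df[t]) for t, c in tf.items()}
          norm = math.sqrt(sum(w * w for w in v.values())) or 1.0
          vecs.append({t: w / norm for t, w in v.items()})
      return vecs

  def sim(u, v):
      """Cosine similarity of two sparse TFIDF vectors."""
      return sum(w * v[t] for t, w in u.items() if t in v)

  def soft_join(r_values, s_values):
      """All (r, s) pairs ranked by TFIDF similarity, best matches first."""
      vecs = tfidf_vectors(r_values + s_values)
      rv, sv = vecs[:len(r_values)], vecs[len(r_values):]
      pairs = [(sim(u, v), r, s)
               for u, r in zip(rv, r_values)
               for v, s in zip(sv, s_values)]
      return sorted(pairs, reverse=True)

  for score, r, s in soft_join(["William Cohen", "Steve Minton"],
                               ["Will Cohen", "Steven Mitton", "David Cohn"]):
      print(f"{score:.3f}  {r} ~ {s}")

A downstream process consumes this list top-down and stops whenever the matches become too weak, which is exactly the “decide how much of the list to examine” behavior described above.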
WHIRL queries
• Assume two relations:
  review(movieTitle, reviewText): archive of reviews
  listing(theatre, movieTitle, showTimes, …): now showing

review:
  movieTitle                                 | reviewText
  The Hitchhiker’s Guide to the Galaxy, 2005 | This is a faithful re-creation of the original radio series – not surprisingly, as Adams wrote the screenplay ….
  Men in Black, 1997                         | Will Smith does an excellent job in this …
  Space Balls, 1987                          | Only a die-hard Mel Brooks fan could claim to enjoy …
  …                                          | …

listing:
  theatre             | movieTitle            | showTimes
  The Senator Theater | Star Wars Episode III | 1:00, 4:15, & 7:30pm.
  The Rotunda Cinema  | Cinderella Man        | 1:00, 4:30, & 7:30pm.
  …                   | …                     | …
WHIRL queries
• “Find reviews of sci-fi comedies” [movie domain]
  FROM review as r SELECT * WHERE r.text~’sci fi comedy’
  (like standard ranked retrieval of “sci-fi comedy”)
• “Where is [that sci-fi comedy] playing?”
  FROM review as r, listing as s SELECT *
  WHERE r.title~s.title and r.text~’sci fi comedy’
  (best answers: titles are similar to each other – e.g., “Hitchhiker’s Guide to the Galaxy” and “The Hitchhiker’s Guide to the Galaxy, 2005” – and the review text is similar to “sci-fi comedy”)
WHIRL queries
• Similarity is based on TFIDF: rare words are most important.
• Search for high-ranking answers uses inverted indices….
  – It is easy to find the (few) items that match on “important” terms
  – Search for strong matches can prune “unimportant” terms
  Review archive titles:                       Listing titles:
  The Hitchhiker’s Guide to the Galaxy, 2005   Star Wars Episode III
  Men in Black, 1997                           Hitchhiker’s Guide to the Galaxy
  Space Balls, 1987                            Cinderella Man
  …                                            …

  Inverted index over the archive:
  hitchhiker → movie00137
  the → movie001, movie003, movie007, movie008, movie013, movie018, movie023, movie0031, …

  (Years are common in the review archive, so they have low weight.)
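To see why inverted indices make this search cheap, here is a small sketch (my own illustration, not WHIRL’s actual search procedure): candidates are scored by walking only the postings lists of the query’s highest-weight terms, so low-weight terms such as “the” or years never have their long postings lists touched. The document ids and term weights below are made up.

  from collections import defaultdict

  def build_index(docs):
      """docs: dict id -> list of tokens. Returns postings: term -> set of ids."""
      index = defaultdict(set)
      for doc_id, tokens in docs.items():
          for t in tokens:
              index[t].add(doc_id)
      return index

  def ranked_candidates(query_weights, index, max_terms=3):
      """Score candidates using only the top-weight query terms; prune the rest."""
      top_terms = sorted(query_weights, key=query_weights.get, reverse=True)[:max_terms]
      scores = defaultdict(float)
      for t in top_terms:
          for doc_id in index.get(t, ()):
              scores[doc_id] += query_weights[t]
      return sorted(scores.items(), key=lambda kv: -kv[1])

  docs = {
      "movie001": "the hitchhiker s guide to the galaxy 2005".split(),
      "movie002": "men in black 1997".split(),
      "movie003": "space balls 1987".split(),
  }
  index = build_index(docs)
  query = {"hitchhiker": 3.2, "galaxy": 2.5, "guide": 1.8, "the": 0.05, "2005": 0.1}
  print(ranked_candidates(query, index))   # movie001 wins; “the” and “2005” are never consulted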
After WHIRL
• Efficient text joins
• On-the-fly, best-effort, imprecise integration
• Interactions between information extraction quality and results of queries on extracted data
• Keyword search on databases
• Use of statistics on text corpora to build intelligent “embodied” systems
  – Turney: solving SAT analogies with PMI over word pairs
  – Mitchell & Just: predicting fMRI brain images resulting from reading a common noun (“hammer”) from co-occurrence information between nouns and verbs
Recent work: non-textual similarity
[Figure: a graph linking name strings to the tokens they contain – “William W. Cohen, CMU” and “Dr. W. W. Cohen” share the tokens william, w, and cohen; “Christos Faloutsos, CMU” shares cmu; “George H. W. Bush” and “George W. Bush” share most of their tokens.]
Recent Work
• Personalized PageRank, aka Random Walk with Restart:
  – Similarity measure for nodes in a graph, analogous to TFIDF for text in a WHIRL database
  – A natural extension of PageRank
  – Amenable to learning the parameters of the walk (gradient search, with various optimization metrics):
    • Toutanova, Manning & Ng, ICML 2004; Nie et al., WWW 2005; Xi et al., SIGIR 2005
  – Various speedup techniques exist
  – Queries: given a type t* and a node x, find y such that T(y)=t* and y~x (see the sketch below)
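A minimal sketch of Random Walk with Restart by power iteration – my own illustration, not the implementation used in this work; the toy graph and restart probability are made up:

  import numpy as np

  def rwr(adj, seed, restart=0.3, iters=100):
      """Personalized PageRank / Random Walk with Restart from a single seed node.
      adj: (n, n) adjacency matrix; seed: index of the query node x."""
      n = adj.shape[0]
      out = adj.sum(axis=1, keepdims=True)
      # Column-stochastic transition matrix: walk uniformly over out-edges.
      P = np.divide(adj, out, out=np.zeros_like(adj), where=out > 0).T
      r = np.zeros(n)
      r[seed] = 1.0                      # restart distribution concentrated on x
      p = r.copy()
      for _ in range(iters):
          p = (1 - restart) * (P @ p) + restart * r
      return p                           # p[y] is the walk-based similarity of node y to x

  # Toy graph: a 0-1-2 chain plus an edge 0-3.
  adj = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 0],
                  [1, 0, 0, 0]], dtype=float)
  scores = rwr(adj, seed=0)
  # To answer “given type t* and node x, find y with T(y)=t* and y~x”,
  # filter the scored nodes by type and sort by score.
  print(scores)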
Learning to Search Email
Einat Minkov, CMU; Andrew Ng, Stanford
[SIGIR 2006, CEAS 2006, WebKDD/SNA 2007]
CALO
[Figure: an email graph – nodes for messages, dates (6/17/07, 6/18/07), terms (William, graph, proposal, CMU), and email addresses (einat@cs.cmu.edu), connected by typed edges such as “sent to” and “term in subject”.]
Tasks that are like similarity queries
• Person name disambiguation
  query: [term “andy”, file msgId] → “person”
• Threading: What are the adjacent messages in this thread? (a proxy for finding “more messages like this one”)
  query: [file msgId] → “file”
• Alias finding: What are the email-addresses of Jason? …
  query: [term Jason] → “email-address”
• Meeting attendees finder: Which email-addresses (persons) should I notify about this meeting?
  query: [meeting mtgId] → “email-address”
Results on one task
[Chart: person name disambiguation on the Mgmt. game corpus – recall (0–100%) vs. rank (1–10).]
Results on several tasks (MAP)
[Charts: MAP (0.4–0.85) for the name disambiguation, threading, alias finding, and meeting-attendee tasks on the M.game, Sager, Shapiro, Farmer, and Germany corpora.]
Set Expansion using the Web
Richard Wang, CMU

• Fetcher: download web pages from the Web
• Extractor: learn wrappers from web pages
• Ranker: rank entities extracted by wrappers

Example seeds: Canon, Nikon, Olympus
Ranked output: 1. Canon  2. Nikon  3. Olympus  4. Pentax  5. Sony  6. Kodak  7. Minolta  8. Panasonic  9. Casio  10. Leica  11. Fuji  12. Samsung  13. …
The Extractor
• Learn wrappers from web documents and seeds on the fly (see the sketch after the HTML example below)
  – Utilize semi-structured documents
  – Wrappers are defined at the character level
    • No tokenization required; thus language-independent
    • However, very specific; thus page-dependent
      – Wrappers derived from a document d are applied to d only
<li class="ford"><a href="http://www.curryauto.com/">
<img src="/common/logos/ford/logo-horiz-rgb-lg-dkbg.gif" alt="3"></a>
<ul><li class="last"><a href="http://www.curryauto.com/">
<span class="dName">Curry Ford</span>...</li></ul>
</li>
<li class="acura"><a href="http://www.curryauto.com/">
<img src="/curryautogroup/images/logo-horiz-rgb-lg-dkbg.gif" alt="5"></a>
<ul><li class="last"><a href="http://www.curryacura.com/">
<span class="dName">Curry Acura</span>...</li></ul>
</li>
<li class="nissan"><a href="http://www.curryauto.com/">
<img src="/common/logos/nissan/logo-horiz-rgb-lg-dkbg.gif" alt="6"></a>
<ul><li class="last"><a href="http://www.geisauto.com/">
<span class="dName">Curry Nissan</span>...</li></ul>
</li>
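A rough sketch of the character-level wrapper idea – my own simplification, not the exact extractor used in this work, and the toy page below is adapted from the slide’s example: find the longest left and right character contexts shared by the seeds’ occurrences on a page, then extract whatever appears between those contexts elsewhere on the same page.

  import re

  def longest_common_prefix(strings):
      prefix = strings[0]
      for s in strings[1:]:
          while not s.startswith(prefix):
              prefix = prefix[:-1]
      return prefix

  def longest_common_suffix(strings):
      return longest_common_prefix([s[::-1] for s in strings])[::-1]

  def learn_wrapper(page, seeds):
      """Learn (left, right) character contexts shared by one occurrence of each seed."""
      lefts, rights = [], []
      for seed in seeds:
          i = page.find(seed)
          if i < 0:
              return None                      # every seed must occur on this page
          lefts.append(page[:i])
          rights.append(page[i + len(seed):])
      return longest_common_suffix(lefts), longest_common_prefix(rights)

  def apply_wrapper(page, wrapper):
      """Extract every string bracketed by the learned left/right contexts."""
      left, right = wrapper
      return re.findall(re.escape(left) + r"(.*?)" + re.escape(right), page)

  page = """<ul>
  <li class="ford"><span class="dName">Curry Ford</span> (Scarsdale)</li>
  <li class="acura"><span class="dName">Curry Acura</span> (Yonkers)</li>
  <li class="nissan"><span class="dName">Curry Nissan</span> (Chicopee)</li>
  </ul>"""
  w = learn_wrapper(page, ["Curry Ford", "Curry Acura"])
  print(apply_wrapper(page, w))    # ['Curry Ford', 'Curry Acura', 'Curry Nissan']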
Ranking Extractions
[Figure: seeds “ford”, “nissan”, “toyota” find the documents northpointcars.com and curryauto.com; the documents derive Wrapper #1–#4; the wrappers extract mentions, scored e.g. “acura” 34.6%, “honda” 26.1%, “chevrolet” 22.5%, “volvo chicago” 8.4%, “bmw pittsburgh” 8.4%.]

• A graph consists of a fixed set of…
  – Node types: {seeds, document, wrapper, mention}
  – Labeled directed edges: {find, derive, extract}
    • Each edge asserts that a binary relation r holds
    • Each edge has an inverse relation r⁻¹, so the graph is cyclic (see the sketch below)
Minkov et al. Contextual Search and Name Disambiguation in Email using Graphs. SIGIR 2006
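A minimal sketch of how such a typed graph might be stored – my own illustration; the node labels follow the figure above, and the representation is not taken from the original system: every asserted edge (x, r, y) also gets the inverse edge (y, r⁻¹, x), so a random walk can traverse each relation in both directions.

  from collections import defaultdict

  # outgoing[node] is a list of (relation, neighbor) pairs
  outgoing = defaultdict(list)

  def add_edge(x, relation, y):
      """Assert r(x, y) and its inverse r^-1(y, x), making the graph cyclic."""
      outgoing[x].append((relation, y))
      outgoing[y].append((relation + "^-1", x))

  # A few edges loosely following the figure:
  add_edge("seed:ford", "find", "doc:curryauto.com")
  add_edge("doc:curryauto.com", "derive", "wrapper#1")
  add_edge("wrapper#1", "extract", "mention:acura")
  add_edge("seed:toyota", "find", "doc:northpointcars.com")

  print(outgoing["doc:curryauto.com"])
  # [('find^-1', 'seed:ford'), ('derive', 'wrapper#1')]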
Evaluation Method
• Mean Average Precision (MAP)
  – Commonly used for evaluating ranked lists in IR
  – Combines recall- and precision-oriented aspects
  – Sensitive to the entire ranking
  – Mean of the average precisions for each ranked list

  AvgPrec(L) = (1 / #TrueEntities) · Σ_{r=1…|L|} Prec(r) · NewEntity(r)

  where L = ranked list of extracted mentions, r = rank,
  Prec(r) = precision at rank r,
  NewEntity(r) = 1 if (a) the extracted mention at rank r matches a true mention and
                      (b) no extracted mention at a rank less than r is of the same entity as the one at r,
                 0 otherwise,
  #TrueEntities = total number of true entities in this dataset.
  (A small sketch of this computation follows.)

• Evaluation: average over 36 datasets in three languages (Chinese, Japanese, English)
  1. Average over several 2- or 3-seed queries for each dataset.
  2. MAP performance: high 80s to mid 90s
  3. Google Sets: MAP in the 40s, English only
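A small sketch of the AvgPrec computation for one ranked list – my own code; the mentions, the truth mapping, and the entity count below are made up:

  def average_precision(ranked_mentions, true_entity_of, num_true_entities):
      """ranked_mentions: extracted mentions, best first.
      true_entity_of: dict mapping a mention to the true entity it matches (if any).
      Implements AvgPrec(L) = (1/#TrueEntities) * sum_r Prec(r) * NewEntity(r)."""
      correct_so_far = 0
      seen_entities = set()
      total = 0.0
      for r, mention in enumerate(ranked_mentions, start=1):
          entity = true_entity_of.get(mention)
          if entity is not None:               # condition (a): matches a true mention
              correct_so_far += 1
          prec_r = correct_so_far / r          # Prec(r)
          if entity is not None and entity not in seen_entities:
              total += prec_r                  # NewEntity(r) = 1: (a) and (b) both hold
              seen_entities.add(entity)
      return total / num_true_entities

  # Toy example: "fuji film" and "fujifilm" are mentions of the same entity.
  truth = {"canon": "Canon", "nikon": "Nikon", "fuji film": "Fuji", "fujifilm": "Fuji"}
  print(average_precision(["canon", "sony corp", "fuji film", "fujifilm", "nikon"],
                          truth, num_true_entities=4))

MAP is then the mean of this value over the ranked lists produced for each query and dataset.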
Evaluation Datasets
[Table of the evaluation datasets; the top three mentions in each are used as the seeds.]
Try it out at http://rcwang.com/seal

Relational Set Expansion
[Figure: example seeds for relational set expansion.]
Future?
[Diagram: Machine Learning bridging representation languages (DBs, KR) and human languages (NLP, IR), with a question mark between them.]