Web search
What is web search?
• Access to “heterogeneous”, distributed
information
– Heterogeneous in creation
– Heterogeneous in motives
– Heterogeneous in accuracy …
• Multi-billion dollar business
• Source of new opportunities in marketing
• Strains the boundaries of trademark and
intellectual property laws
• A source of unending technical challenges
What is web search?
• Nexus of
– Sociology
– Economics
– Law
• … with technical implications.
The driver
• Pew Study (US users Aug 2004):
“Getting information is the most highly valued and most popular type of
everyday activity done online”.
www.pewinternet.org/pdfs/PIP_Internet_and_Daily_Life.pdf
The coarse-level dynamics
Content creators
Content aggregators
Content consumers
Brief (non-technical) history
• Early keyword-based engines
– Altavista, Excite, Infoseek, Inktomi, Lycos, ca.
1995-1997
• Paid placement ranking: Goto.com (morphed
into Overture.com → Yahoo!)
– Your search ranking depended on how much you
paid
– Auction for keywords: casino was expensive!
Brief (non-technical) history
• 1998+: Link-based ranking pioneered by Google
– Blew away all early engines
– Great user experience in search of a business model
– Meanwhile Goto/Overture’s annual revenues were
nearing $1 billion
• Result: Google added paid-placement “ads” to
the side, independent of search results
– 2003: Yahoo follows suit, acquiring Overture (for paid
placement) and Inktomi (for search)
Requirements for search engines
• Coverage requirement:
– must cover the items available on the Internet
– must decide the priority with which old pages are re-checked
• Freshness requirement:
– the records must be kept up to date
• Ranking requirement:
– build an ordering of the result list based on relevance
– the ordering should be customizable
• Presentation requirement:
– readable format
– automatic supplementing of the pages
Ads vs. search results
• Google has maintained that ads (based
on vendors bidding for keywords) do
not affect vendors’ rankings in search
results
[Screenshot: Google results page for the query “miele” (about 7,310,000
results in 0.12 seconds) – sponsored links from vacuum-cleaner vendors
(cgappliance.com, vacuums.com, best-vacuum.com) appear beside the organic
web results for miele.com, miele.co.uk, miele.de and miele.at]
Ads vs. search results
• Other vendors (Yahoo!, MSN) have made
similar statements from time to time
– Any of them can change anytime
• We will focus primarily on search results
independent of paid placement ads
Web search basics
[Diagram: the user issues the query “miele” and gets back sponsored links
plus ranked web results; a web spider crawls pages from the Web, the indexer
builds the search indexes, and separate ad indexes serve the sponsored links]
Web search engine pieces
• Spider (a.k.a. crawler/robot) – builds corpus
– Collects web pages recursively
• For each known URL, fetch the page, parse it, and extract new URLs
• Repeat
– Additional pages from direct submissions & other sources
• The indexer – creates inverted indexes
– Various policies wrt which words are indexed, capitalization, support
for Unicode, stemming, support for phrases, etc.
• Query processor – serves query results
– Front end – query reformulation, word stemming, capitalization,
optimization of Booleans, etc.
– Back end – finds matching documents and ranks them
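A minimal sketch of the crawl loop described above (fetch a known URL, parse
it, extract new URLs, repeat). The seed list, page limit and the absence of
politeness delays, robots.txt handling and spider-trap detection are
simplifying assumptions, not how a production spider works:

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects href attributes of <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_urls, max_pages=100):
    frontier, seen, corpus = deque(seed_urls), set(seed_urls), {}
    while frontier and len(corpus) < max_pages:
        url = frontier.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            continue                      # unreachable or non-HTML page
        corpus[url] = html                # hand the page to the indexer
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)
    return corpus
```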
The Web
• No design/co-ordination
• Distributed content creation, linking
• Content includes truth, lies, obsolete
information, contradictions …
• Structured (databases), semi-structured …
• Scale larger than previous text corpora …
(now, corporate records)
• Growth – slowed down from initial “volume
doubling every few months”
• Content can be dynamically generated
The Web
The Web: Dynamic content
• A page without a static html version
– E.g., current status of flight AA129
– Current availability of rooms at a hotel
• Usually, assembled at the time of a request
from a browser
– Typically, URL has a ‘?’ character in it
[Diagram: the browser’s request for flight AA129 goes to an application
server, which assembles the page from back-end databases]
Dynamic content
• Most dynamic content is ignored by web
spiders
– Many reasons including malicious spider traps
• Some content that engines do want (e.g., news
stories from subscriptions) is nevertheless
delivered as dynamic content
– Application-specific spidering
• Spiders most commonly view web pages just
as Lynx (a text browser) would
How a web robot (crawler) works
• Tasks:
– discovering the link structure of the web
– collecting documents
– checking whether pages are still up to date
• Difficulties:
– the size of the web
– the dynamism of the web
– unreachable pages
– unknown formats
– access protection
• Rules of behaviour:
– selection policy
– re-visit policy
– politeness policy
– parallelization policy
Elements of robot behaviour
• considers the importance of a document
• checks the format
• looks for static pages
• performs path analysis
• expands the crawl starting from a page
• should not put heavy load on servers
• the host can control its operation (robots.txt)
• ageing algorithms drive re-visiting
Special indexes for web search
• word index
• metadata index
• word / document index
• document / word index
• document metadata index
• document URL index
• refresh and management data store
• language data store
• user data
The web: size
• What is being measured?
– Number of hosts
– Number of (static) html pages
• Volume of data
• Number of hosts – netcraft survey
– http://news.netcraft.com/archives/web_server_survey.html
– Gives monthly report on how many web servers are out there
• Number of pages – numerous estimates
– More to follow later in this course
– For a Web engine: how big its index is
The web: evolution
• All of these numbers keep changing
• Relatively few scientific studies of the
evolution of the web
– http://research.microsoft.com/research/sv/sv-pubs/p97-fetterly/p97fetterly.pdf
• Sometimes possible to extrapolate from small
samples
– http://www.vldb.org/conf/2001/P069.pdf
Static pages: rate of change
• Fetterly et al. study: several views of data, 150 million pages
over 11 weekly crawls
– Bucketed into 85 groups by extent of change
Diversity
• Languages/Encodings
– Hundreds (thousands ?) of languages, W3C encodings: 55 (Jul01) [W3C01]
– Google (mid 2001): English: 53%, JGCFSKRIP: 30%
• Document & query topic
Popular Query Topics (from 1 million Google queries, Apr 2000)
Arts          14.6%     Arts: Music                6.1%
Computers     13.8%     Regional: North America    5.3%
Regional      10.3%     Adult: Image Galleries     4.4%
Society        8.7%     Computers: Software        3.4%
Adult          8.0%     Computers: Internet        3.2%
Recreation     7.3%     Business: Industries       2.3%
Business       7.2%     Regional: Europe           1.8%
…                       …
Other characteristics
• Significant duplication
– Syntactic – 30%-40% (near) duplicates [Brod97,
Shiv99b]
– Semantic – ???
• High linkage
– More than 8 links/page on average
• Complex graph topology
– Not a small world; bow-tie structure [Brod00]
• Spam
– 100s of millions of pages
• More on these later
The user
• Diverse in background/training
– Although this is improving
– Increasingly, can tell a search bar from the URL bar
• Although this matters less now
– Increasingly, comprehend UI elements such as the
vertical slider
• But browser real estate “above the fold” is still a
premium
The user
• Diverse in access methodology
– Increasingly, high bandwidth connectivity
– Growing segment of mobile users: limitations of form
factor – keyboard, display
• Diverse in search methodology
– Search, search + browse, filter by attribute …
• Average query length ~ 2.5 terms
– Has to do with what they’re searching for
• Poor comprehension of syntax
– Early engines surfaced rich syntax – Boolean, phrase,
etc.
– Current engines hide these
The user: information needs
• Informational – want to learn about something (~40%)
e.g., low hemoglobin
• Navigational – want to go to that page (~25%)
e.g., United Airlines
• Transactional – want to do something (web-mediated) (~35%)
– Access a service: Mendocino weather
– Downloads: Mars surface images
– Shop: Nikon CoolPix
• Gray areas
– Find a good hub: car rental Finland
– Exploratory search: “see what’s there”
Courtesy Andrei Broder, IBM
Users’ evaluation of engines
• Relevance and validity of results
• UI – Simple, no clutter, error tolerant
• Trust – Results are objective, the engine wants to
help me
• Pre/Post process tools provided
– Mitigate user errors (auto spell check)
– Explicit: Search within results, more like this, refine ...
– Anticipative: related searches
• Deal with idiosyncrasies
– Web addresses typed in the search box
Users’ evaluation
• Quality of pages varies widely
– Relevance is not enough
– Duplicate elimination
• Precision vs. recall
– On the web, recall seldom matters
• What matters
– Precision at 1? Precision above the fold?
– Comprehensiveness – must be able to deal with
obscure queries
• Recall matters when the number of matches is very small
• User perceptions may be unscientific, but are significant
over a large aggregate
Search functions
• word-based search
• taxonomy-based search (topic directory)
• phrase-based search
• compound (Boolean) conditions
• refinable search with feedback
• expanding search with feedback
• topic-oriented search
• stem-based search
• restriction by section
• restriction by metadata
• cluster-based result presentation
• natural language queries
• speech-based command input
• semantic-web-based search
Special search functions
• Meta-search engines:
– collect their data by using other search engines,
then organize and aggregate the results
• e.g., Mamma, MetaCrawler
• Deep-web search engines
– deep web, dark web
– mostly dynamic pages behind them,
not directly reachable
ESTIMATING THE SIZE OF THE WEB
What is the size of the web ?
• Issues
– The web is really infinite
• Dynamic content, e.g., calendar
• Soft 404: www.yahoo.com/anything is a valid page
– Static web contains syntactic duplication, mostly due
to mirroring (~20-30%)
– Some servers are seldom connected
• Who cares?
– Media, and consequently the user
– Engine design
– Engine crawl policy. Impact on recall
What can we attempt to measure?
• The relative size of search engines
– The notion of a page being indexed is still reasonably well defined.
– Already there are problems
• Document extension: e.g. Google indexes pages not yet crawled by
indexing anchortext.
• Document restriction: Some engines restrict what is indexed (first n
words, only relevant words, etc.)
•The coverage of a search engine relative to another
particular crawling process.
Statistical methods
• Random queries
• Random searches
• Random IP addresses
• Random walks
URL sampling via Random Queries
• Ideal strategy: Generate a random URL and
check for containment in each index.
• Problem: Random URLs are hard to find!
Random queries [Bhar98a]
• Sample URLs randomly from each engine
– 20,000 random URLs from each engine
• Issue random conjunctive query with <200 results
• Select a random URL from the top 200 results
• Test if present in other engines.
– Query with 8 rarest words. Look for URL match
• Compute intersection & size ratio
– If the intersection is x% of E1 and y% of E2, then E1/E2 = y/x
• Issues
– Random narrow queries may bias towards long documents
(Verify with disjunctive queries)
– Other biases induced by process
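A small sketch of the overlap/size-ratio computation just described. The
sampling lists and containment tests (`in_e1`, `in_e2`) are hypothetical
stand-ins for “pick random URLs from an engine” and “query the other engine
with the URL’s 8 rarest words and look for a URL match”:

```python
def overlap_fraction(sample_urls, contains_other):
    """Fraction of the sampled URLs that the other engine also indexes."""
    hits = sum(1 for url in sample_urls if contains_other(url))
    return hits / len(sample_urls)

def size_ratio(sample_e1, in_e2, sample_e2, in_e1):
    x = overlap_fraction(sample_e1, in_e2)   # intersection as a fraction of E1
    y = overlap_fraction(sample_e2, in_e1)   # intersection as a fraction of E2
    return y / x                             # estimate of |E1| / |E2|
```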
Random searches
• Choose random searches extracted from a
local log [Lawr97] or build “random searches”
[Note02]
– Use only queries with small results sets.
– Count normalized URLs in result sets.
– Use ratio statistics
• Advantage:
– Might be a good reflection of the human
perception of coverage
Random searches [Lawr98, Lawr99]
• 575 & 1050 queries from the NEC RI employee logs
• 6 Engines in 1998, 11 in 1999
• Implementation:
– Restricted to queries with < 600 results in total
– Counted URLs from each engine after verifying query match
– Computed size ratio & overlap for individual queries
– Estimated index size ratio & overlap by averaging over all queries
• Issues
– Samples are correlated with source of log
– Duplicates
– Technical statistical problems (must have non-zero
results, ratio average, use harmonic mean? )
Queries from Lawrence and Giles study
• adaptive access control
• neighborhood preservation topographic
• hamiltonian structures
• right linear grammar
• pulse width modulation neural
• unbalanced prior probabilities
• ranked assignment method
• internet explorer favourites importing
• karvel thornber
• zili liu
• softmax activation function
• bose multidimensional system theory
• gamma mlp
• dvi2pdf
• john oliensis
• rieke spikes exploring neural
• video watermarking
• counterpropagation network
• fat shattering dimension
• abelson amorphous computing
Size of the Web Estimation
[Lawr98, Bhar98a]
• Capture – Recapture technique
– Assumes engines get independent random subsets of the Web
– If E2 contains x% of E1, assume E2 contains x% of the Web as well
– Knowing the size of E2, compute the size of the Web:
Size of the Web = 100*E2/x
• Bharat & Broder: 200 M (Nov 97), 275 M (Mar 98)
• Lawrence & Giles: 320 M (Dec 97)
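A toy worked example of the capture–recapture estimate; the numbers below are
made up for illustration, not taken from the studies:

```python
# If 40% of a URL sample drawn from E1 is also found in E2, assume E2 covers
# 40% of the Web; then the Web is E2's size divided by that fraction.
e2_size = 110_000_000          # assumed index size of engine E2
x = 40.0                       # per cent of E1's sampled URLs also found in E2
web_size = 100 * e2_size / x   # = 275,000,000 pages
print(f"Estimated size of the Web: {web_size:,.0f} pages")
```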
Random IP addresses [Lawr99]
– Generate random IP addresses
– Find, if possible, a web server at the given address
– Collect all pages from server
– Advantages
• Clean statistics, independent of any crawling strategy
Random IP addresses [ONei97, Lawr99]
• HTTP requests to random IP addresses
– Ignored: empty or authorization required or excluded
– [Lawr99] Estimated 2.8 million IP addresses running crawlable web
servers (16 million total) from observing 2500 servers.
– OCLC using IP sampling found 8.7 M hosts in 2001
• Netcraft [Netc02] accessed 37.2 million hosts in July 2002
• [Lawr99] exhaustively crawled 2500 servers
and extrapolated
– Estimated size of the web to be 800 million
– Estimated use of metadata descriptors:
• Meta tags (keywords, description) in 34% of home pages, Dublin core
metadata in 0.3%
Issues
• Virtual hosting
• Server might not accept http://102.93.22.15
• No guarantee all pages are linked to root page
• Power law for # pages/hosts generates bias
Random walks [Henz99, BarY00, Rusm01]
• View the Web as a directed graph from a given list of seeds.
• Build a random walk on this graph
– Includes various “jump” rules back to visited sites
– Converges to a stationary distribution
• Time to convergence not really known
– Sample from stationary distribution of walk
– Use the “small results set query” method to check coverage by SE
– “Statistically clean” method, at least in theory!
Issues
• List of seeds is a problem.
• Practical approximation might not be valid:
Non-uniform distribution, subject to link
spamming
• Still has all the problems associated with
“strong queries”
Conclusions
• No sampling solution is perfect.
• Lots of new ideas ...
• ....but the problem is getting harder
• Quantitative studies are fascinating and a
good research problem
PAGERANK
Citation Analysis
• Citation frequency
• Co-citation coupling frequency
– Cocitations with a given author measures “impact”
– Cocitation analysis [Mcca90]
• Convert frequencies to correlation coefficients, do multivariate
analysis/clustering, validate conclusions
• E.g., cocitation in the “Geography and GIS” web shows communities
[Lars96 ]
• Bibliographic coupling frequency
– Articles that co-cite the same articles are related
• Citation indexing
– Who is a given author cited by? (Garfield [Garf72])
• E.g., Science Citation Index ( http://www.isinet.com/ )
• CiteSeer ( http://citeseer.ist.psu.edu ) [Lawr99a]
PageRank basics
• A ranking algorithm
• Measuring the importance of a page:
– the number of links pointing to the page
– weighting of the links
– normalization
• Basic rank value (in one common form):
– r(p) = (1 − α) + α · Σ_{i→p} r(i) / c_i
– where α is the weighting (damping) factor, c_i is the total number of
links going out of page i, and the sum runs over the pages linking to p
– The rank values of the linking pages influence one another.
Query-independent ordering
• First generation: using link counts as simple
measures of popularity.
• Two basic suggestions:
– Undirected popularity:
• Each page gets a score = the number of in-links plus
the number of out-links (3+2=5).
– Directed popularity:
• Score of a page = number of its in-links (3).
Query processing
• First retrieve all pages meeting the text query
(say venture capital).
• Order these by their link popularity (either
variant on the previous page).
Pagerank scoring
• Imagine a browser doing a random walk on
web pages:
– Start at a random page
– At each step, go out of the current page along one
of the links on that page, equiprobably (e.g., a page
with three out-links is followed with probability 1/3 each)
• “In the steady state” each page has a long-term visit rate –
use this as the page’s score.
Not quite enough
• The web is full of dead-ends.
– Random walk can get stuck in dead-ends.
– Makes no sense to talk about long-term visit rates.
Teleporting
• At a dead end, jump to a random web page.
• At any non-dead end, with probability 10%,
jump to a random web page.
– With remaining probability (90%), go out on a
random link.
– 10% - a parameter.
Result of teleporting
• Now cannot get stuck locally.
• There is a long-term rate at which any page is
visited (not obvious, will show this).
• How do we compute this visit rate?
Markov chains
• A Markov chain consists of n states, plus an
n×n transition probability matrix P.
• At each step, we are in exactly one of the
states.
• For 1 ≤ i, j ≤ n, the matrix entry Pij tells us the
probability of j being the next state, given we
are currently in state i.
• Pii > 0 is OK (self-loops are allowed).
Markov chains
• Clearly, for all i, Σ_{j=1}^{n} Pij = 1.
• Markov chains are abstractions of random
walks.
• Exercise: represent the teleporting random
walk from 3 slides ago as a Markov chain.
Ergodic Markov chains
• A Markov chain is ergodic if
– you have a path from any state to any other
– you can be in any state at every time step, with
non-zero probability.
Not
ergodic
(even/
odd).
Ergodic Markov chains
• For any ergodic Markov chain, there is a
unique long-term visit rate for each state.
– Steady-state distribution.
• Over a long time-period, we visit each state in
proportion to this rate.
• It doesn’t matter where we start.
Probability vectors
• A probability (row) vector x = (x1, … xn) tells us
where the walk is at any point.
• E.g., (0 0 0 … 1 … 0 0 0), with the 1 in position i,
means we’re in state i.
• More generally, the vector x = (x1, … xn) means the
walk is in state i with probability xi, where
Σ_{i=1}^{n} xi = 1.
Change in probability vector
• If the probability vector is x = (x1, … xn) at this
step, what is it at the next step?
• Recall that row i of the transition probability
matrix P tells us where we go next from state i.
• So from x, our next state is distributed as xP.
Steady state example
• The steady state looks like a vector of
probabilities a = (a1, … an):
– ai is the probability that we are in state i.
• Two-state example: P(1→1) = 1/4, P(1→2) = 3/4,
P(2→1) = 1/4, P(2→2) = 3/4.
For this example, a1 = 1/4 and a2 = 3/4.
How do we compute this vector?
• Let a = (a1, … an) denote the row vector of steady-state probabilities.
• If our current position is described by a, then
the next step is distributed as aP.
• But a is the steady state, so a = aP.
• Solving this matrix equation gives us a.
– So a is the (left) eigenvector for P.
– (Corresponds to the “principal” eigenvector of P with
the largest eigenvalue.)
– Transition probability matrices always have largest
eigenvalue 1.
One way of computing a
• Recall, regardless of where we start, we
eventually reach the steady state a.
• Start with any distribution (say x = (1 0 … 0)).
• After one step, we’re at xP;
• after two steps at xP2 , then xP3 and so on.
• “Eventually” means for “large” k, xPk = a.
• Algorithm: multiply x by increasing powers of
P until the product looks stable.
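A minimal sketch of this power iteration, run on the earlier two-state
example (steady state (1/4, 3/4)); numpy is assumed only for the matrix
product:

```python
import numpy as np

P = np.array([[0.25, 0.75],
              [0.25, 0.75]])   # row i = transition probabilities out of state i

x = np.array([1.0, 0.0])       # start in state 1
for _ in range(50):            # "large k": multiply by P until stable
    x_next = x @ P
    if np.allclose(x_next, x, atol=1e-12):
        break
    x = x_next

print(x)                       # -> [0.25 0.75], i.e. a = (1/4, 3/4)
```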
Pagerank summary
• Preprocessing:
– Given graph of links, build matrix P.
– From it compute a.
– The entry ai is a number between 0 and 1: the
pagerank of page i.
• Query processing:
– Retrieve pages meeting query.
– Rank them by their pagerank.
– Order is query-independent.
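A rough sketch of the preprocessing step on a toy link graph: build the
teleporting transition matrix P (a 10% teleport probability and uniform jumps
from dead ends are assumed here) and power-iterate to obtain the pagerank
vector a. This is an illustrative sketch, not Google's implementation:

```python
import numpy as np

def pagerank(links, n, teleport=0.10, iters=100):
    """links: page index -> list of out-link targets; n: number of pages."""
    P = np.zeros((n, n))
    for i in range(n):
        out = links.get(i, [])
        if not out:                       # dead end: jump anywhere
            P[i, :] = 1.0 / n
        else:
            P[i, out] = (1.0 - teleport) / len(out)
            P[i, :] += teleport / n       # teleport mass spread uniformly
    a = np.full(n, 1.0 / n)               # any starting distribution works
    for _ in range(iters):
        a = a @ P                         # power iteration
    return a                              # a[i] = pagerank of page i

# pages 0..3; page 3 is a dead end
print(pagerank({0: [1, 2], 1: [2], 2: [0], 3: []}, n=4))
```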
The reality
• Pagerank is used in Google, but so are many
other clever heuristics
– more on these heuristics later.
Pagerank: Issues and Variants
• How realistic is the random surfer model?
– What if we modeled the back button? [Fagi00]
– Surfer behavior sharply skewed towards short paths [Hube98]
– Search engines, bookmarks & directories make jumps non-random.
• Biased Surfer Models
– Weight edge traversal probabilities based on match with topic/query
(non-uniform edge selection)
– Bias jumps to pages on topic (e.g., based on personal bookmarks &
categories of interest)
Topic Specific Pagerank [Have02]
• Conceptually, we use a random surfer who
teleports, with say 10% probability, using the
following rule:
– Selects a category (say, one of the 16 top level ODP
categories) based on a query- & user-specific
distribution over the categories
– Teleport to a page uniformly at random within the
chosen category
• Sounds hard to implement: can’t compute
PageRank at query time!
Topic Specific Pagerank [Have02]
• Implementation
– offline: Compute pagerank distributions wrt
individual categories
• Query-independent model as before
• Each page has multiple pagerank scores – one for each ODP
category, with teleportation only to that category
– online: Distribution of weights over categories
computed by query context classification
• Generate a dynamic pagerank score for each page – weighted
sum of category-specific pageranks
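A small sketch of the online combination step. The category names, pagerank
vectors and query weights below are made-up placeholders; the point is that
the query-specific score is just a weighted sum of the precomputed
category-specific vectors:

```python
import numpy as np

pr = {                                   # offline: one pagerank vector per category
    "Sports": np.array([0.1, 0.5, 0.4]),
    "Health": np.array([0.6, 0.2, 0.2]),
}

def query_specific_pagerank(category_weights, pr):
    """Weighted sum of category-specific pagerank vectors (PR is linear in v)."""
    return sum(w * pr[c] for c, w in category_weights.items())

# query context classified as 80% Sports, 20% Health
print(query_specific_pagerank({"Sports": 0.8, "Health": 0.2}, pr))
```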
Influencing PageRank
(“Personalization”)
• Input:
– Web graph W
– influence vector v
v : (page  degree of influence)
• Output:
– Rank vector r: (page  page importance wrt v)
• r = PR(W , v)
Non-uniform Teleportation
[Diagram: web graph with the Sports pages highlighted]
• Teleport with 10% probability to a Sports page
Interpretation of Composite Score
• For a set of personalization vectors {vj}
Σj [wj · PR(W , vj)] = PR(W , Σj [wj · vj])
• Weighted sum of rank vectors itself forms a
valid rank vector, because PR() is linear wrt vj
Interpretation
[Diagram: 10% Sports teleportation gives PRsports; 10% Health teleportation
gives PRhealth]
• pr = (0.9 PRsports + 0.1 PRhealth) gives you:
9% sports teleportation, 1% health teleportation
The Web as a Directed Graph
[Diagram: a hyperlink with its anchor text points from Page A to Page B]
Assumption 1: A hyperlink between pages denotes
perceived relevance (quality signal)
Assumption 2: The anchor text of the hyperlink describes the
target page (textual context)
Assumptions Tested
• A link is an endorsement (quality signal)
– Except when affiliated
– Can we recognize affiliated links? [Davi00]
• 1536 links manually labeled
• 59 binary features (e.g., on-domain, meta tag overlap,
common outlinks)
• C4.5 decision tree, 10 fold cross validation showed 98.7%
accuracy
– Additional surrounding text has lower probability but can be useful
Assumptions tested
• Anchors describe the target
– Topical Locality [Davi00b]
• ~200K pages (query results + their outlinks)
• Computed “page to page” similarity (TFIDF measure)
– Link-to-Same-Domain > Cocited > Link-to-Different-Domain
• Computed “anchor to page” similarity
– Mean anchor len = 2.69
– 0.6 mean probability of an anchor term in target page
Anchor Text
WWW Worm – McBryan [Mcbr94]
• For the query [ibm], how to distinguish between:
– IBM’s home page (mostly graphical)
– IBM’s copyright page (high term freq. for ‘ibm’)
– Rival’s spam page (arbitrarily high term freq.)
[Diagram: a million pieces of anchor text such as “ibm”, “ibm.com” and
“IBM home page” all point to www.ibm.com and send a strong signal]
Indexing anchor text
• When indexing a document D, include anchor
text from links pointing to D.
[Diagram: pages saying “Armonk, NY-based computer giant IBM announced
today”, “Joe’s computer hardware links: Compaq, HP, IBM” and “Big Blue today
announced record profits for the quarter” all link to www.ibm.com]
Indexing anchor text
• Can sometimes have unexpected side effects e.g., evil empire.
• Can index anchor text with less weight.
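A minimal sketch of down-weighted anchor-text indexing as described above;
the tiny corpus and the 0.5 anchor weight are illustrative assumptions:

```python
from collections import defaultdict

def build_index(pages, anchors, anchor_weight=0.5):
    """pages: url -> body text; anchors: (source_url, target_url, anchor text)."""
    index = defaultdict(lambda: defaultdict(float))   # term -> url -> weight
    for url, text in pages.items():
        for term in text.lower().split():
            index[term][url] += 1.0                   # body term, full weight
    for _source, target, anchor in anchors:
        for term in anchor.lower().split():
            index[term][target] += anchor_weight      # anchor term, less weight
    return index

pages = {"www.ibm.com": "servers and software"}
anchors = [("joes-links.example", "www.ibm.com", "IBM"),
           ("news.example", "www.ibm.com", "Big Blue")]
print(dict(build_index(pages, anchors)["ibm"]))       # www.ibm.com scores via its anchors
```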
Anchor Text
• Other applications
– Weighting/filtering links in the graph
• HITS [Chak98], Hilltop [Bhar01]
– Generating page descriptions from anchor text
[Amit98, Amit00]
Hyperlink-Induced Topic Search (HITS) [Klei98]
• In response to a query, instead of an ordered list
of pages each meeting the query, find two sets of
inter-related pages:
– Hub pages are good lists of links on a subject.
• e.g., “Bob’s list of cancer-related links.”
– Authority pages occur recurrently on good hubs for
the subject.
• Best suited for “broad topic” queries rather than
for page-finding queries.
• Gets at a broader slice of common opinion.
Hubs and Authorities
• Thus, a good hub page for a topic points to
many authoritative pages for that topic.
• A good authority page for a topic is pointed to
by many good hubs for that topic.
• Circular definition - will turn this into an
iterative computation.
The hope
[Diagram: personal link pages such as Alice’s and Bob’s act as hubs, pointing
to the authorities AT&T, Sprint and MCI – long distance telephone companies]
High-level scheme
• Extract from the web a base set of pages that
could be good hubs or authorities.
• From these, identify a small set of top hub and
authority pages;
iterative algorithm.
Base set
• Given text query (say browser), use a text
index to get all pages containing browser.
– Call this the root set of pages.
• Add in any page that either
– points to a page in the root set, or
– is pointed to by a page in the root set.
• Call this the base set.
Visualization
[Diagram: the base set is the root set plus the pages pointing to it and the
pages it points to]
Assembling the base set [Klei98]
• Root set typically 200-1000 nodes.
• Base set may have up to 5000 nodes.
• How do you find the base set nodes?
– Follow out-links by parsing root set pages.
– Get in-links (and out-links) from a connectivity
server.
– (Actually, suffices to text-index strings of the form
href=“URL” to get in-links to URL.)
Distilling hubs and authorities
• Compute, for each page x in the base set, a
hub score h(x) and an authority score a(x).
• Initialize: for all x, h(x) ← 1; a(x) ← 1.
• Iteratively update all h(x), a(x);
• After the iterations
– output pages with highest h() scores as top hubs
– highest a() scores as top authorities.
Iterative update
• Repeat the following updates, for all x:
h(x) = Σ_{x→y} a(y)
a(x) = Σ_{y→x} h(y)
Scaling
• To prevent the h() and a() values from getting
too big, can scale down after each iteration.
• Scaling factor doesn’t really matter:
– we only care about the relative values of the
scores.
How many iterations?
• Claim: relative values of scores will converge
after a few iterations:
– in fact, suitably scaled, h() and a() scores settle
into a steady state!
– proof of this comes later.
• We only require the relative orders of the h()
and a() scores - not their absolute values.
• In practice, ~5 iterations get you close to
stability.
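A small sketch of the HITS iteration on a toy base-set graph (the same
3-page adjacency matrix used later in the convergence-proof example);
rescaling after each step only affects absolute values, not the relative
ordering we care about:

```python
import numpy as np

def hits(adj, iters=5):
    """adj[i][j] = 1 if base-set page i links to page j."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    h = np.ones(n)                         # h(x) <- 1
    a = np.ones(n)                         # a(x) <- 1
    for _ in range(iters):                 # ~5 iterations get close to stability
        a = A.T @ h                        # a(x) = sum of h(y) over y -> x
        h = A @ a                          # h(x) = sum of a(y) over x -> y
        a /= a.sum()                       # scaling factor does not matter,
        h /= h.sum()                       # only relative values do
    return h, a

adj = [[0, 1, 0],
       [1, 1, 1],
       [1, 0, 0]]
hubs, auths = hits(adj)
print("hubs:", hubs, "authorities:", auths)
```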
Japan Elementary Schools
Hubs
• schools
• LINK Page-13
• 100 Schools Home Pages (English)
• K-12 from Japan 10/...rnet and Education )
• http://www...iglobe.ne.jp/~IKESAN
• Koulutus ja oppilaitokset
• TOYODA HOMEPAGE
• Education
• Cay's Homepage (Japanese)
• UNIVERSITY
• DRAGON97-TOP
• (plus several further hub pages with Japanese titles, garbled in this export)
Authorities
• The American School in Japan
• The Link Page
• Kids' Space
• KEIMEI GAKUEN Home Page ( Japanese )
• Shiranuma Home Page
• fuzoku-es.fukui-u.ac.jp
• welcome to Miasa E&J school
• http://www...p/~m_maru/index.html
• fukui haruyama-es HomePage
• Torisu primary school
• goo
• Yakumo Elementary, Hokkaido, Japan
• FUZOKU Home Page
• Kamishibun Elementary School...
• (plus several further authority pages with Japanese titles, garbled in this export)
Things to note
• Pulled together good pages regardless of
language of page content.
• Use only link analysis after base set assembled
– iterative scoring is query-independent.
• Iterative computation after text index retrieval
- significant overhead.
Proof of convergence
• n×n adjacency matrix A:
– each of the n pages in the base set has a row
and column in the matrix.
– Entry Aij = 1 if page i links to page j, else = 0.
• Example (3 pages):
        1  2  3
    1   0  1  0
    2   1  1  1
    3   1  0  0
Hub/authority vectors
• View the hub scores h() and the authority
scores a() as vectors with n components.
• Recall the iterative updates
h(x) = Σ_{x→y} a(y)
a(x) = Σ_{y→x} h(y)
Rewrite in matrix form
• h = Aa.
• a = Ath.   (Recall At is the transpose of A.)
• Substituting, h = AAth and a = AtAa.
• Thus, h is an eigenvector of AAt and a is an
eigenvector of AtA.
• Further, our algorithm is a particular, known algorithm for
computing eigenvectors: the power iteration method.
• Guaranteed to converge.
Issues
• Topic Drift
– Off-topic pages can cause off-topic “authorities”
to be returned
• E.g., the neighborhood graph can be about a “super
topic”
• Mutually Reinforcing Affiliates
– Affiliated pages/sites can boost each others’
scores
• Linkage between affiliated pages is not a useful signal
Solutions
• ARC [Chak98] and Clever [Chak98b]
– Distance-2 neighborhood graph
– Tackling affiliated linkage
• IP prefix (E.g., 208.47.*.*) rather than hosts to identify “same author”
pages
– Tackling topic drift
• Weight edges by match between query and extended anchor text
• Distribute hub score non-uniformly to outlinks
Intuition: Regions of the hub page with links to good authorities get more
of the hub score
(For follow-up based on Document Object Model see [Chak01])
Solutions (contd)
• Topic Distillation [Bhar98]
• Tackling affiliated linkage
– Normalize weights of edges from/to a single
host
[Diagram: the three edges from Host A to Host B each get weight 1/3, so
together they count as a single unit of endorsement]
• Tackling topic drift
– Query expansion.
– “Topic vector” computed from docs in the
initial ranking.
– Match with topic vector used to weight
edges and remove off-topic nodes
• Evaluation
– 28 broad queries. Pooled results, blind
ratings of results by 3 reviewers per query
– Average precision @ 10
• Topic Distillation = 0.66, HITS = 0.46
Hilltop
[Bhar01]
• Preprocessing: Special index of “expert” hubs
– Select a subset of the web (~ 5%)
– High out-degree to non-affiliated pages on a theme
• At query time compute:
– Expert score (Hub score)
• Based on text match between query and expert hub
– Authority score
• Based on scores of non-affiliated experts pointing to the given page
• Also based on match between query and extended anchor-text (includes
enclosing headings + title)
– Return top ranked pages by authority score
EVALUATING NATURAL LANGUAGE
QUERIES
Question Answering from text
• An idea originating from the IR community
• With massive collections of full-text documents, simply
finding relevant documents is of limited use: we want answers
from textbases
• QA: give the user a (short) answer to their question, perhaps
supported by evidence.
• The common person’s view? [From a novel]
– “I like the Internet. Really, I do. Any time I need a piece of shareware or I want to find
out the weather in Bogota … I’m the first guy to get the modem humming. But as a
source of information, it sucks. You got a billion pieces of data, struggling to be heard
and seen and downloaded, and anything I want to know seems to get trampled
underfoot in the crowd.”
• M. Marshall. The Straw Men. HarperCollins Publishers, 2002.
104
People want to ask questions…
Examples from AltaVista query log
who invented surf music?
how to make stink bombs
where are the snowdens of yesteryear?
which english translation of the bible is used in official catholic liturgies?
how to do clayart
how to copy psx
how tall is the sears tower?
Examples from Excite query log (12/1999)
how can i find someone in texas
where can i find information on puritan religion?
what are the 7 wonders of the world
how can i eliminate stress
What vacuum cleaner does Consumers Guide recommend
Around 12–15% of query logs
105
The Google answer #1
• Include question words etc. in your stop-list
• Do standard IR
• Sometimes this (sort of) works:
• Question: Who was the prime minister of
Australia during the Great Depression?
• Answer: James Scullin (Labor) 1929–31.
106
[Screenshots of three results: a page about Curtin (WW II Labor Prime
Minister) from which the answer can be deduced, another Curtin page that
lacks the answer, and a page about Chifley (Labor Prime Minister) from which
the answer can be deduced]
107
But often it doesn’t…
• Question: How much money did IBM spend on
advertising in 2002?
• Answer: I dunno, but I’d like to know…
108
[Screenshots: lots of ads on Google these days, and no relevant info – a
marketing firm’s page, a magazine page on an ad exec, and a magazine page
on MS-IBM]
109
The Google answer #2
• Take the question and try to find it as a string on the web
• Return the next sentence on that web page as the answer
• Works brilliantly if this exact question appears as a FAQ
question, etc.
• Works lousily most of the time
• Reminiscent of the line about monkeys and typewriters
producing Shakespeare
• But a slightly more sophisticated version of this approach has
been revived in recent years with considerable success…
110
A Brief (Academic) History
• In some sense question answering is not a new
research area
• Question answering systems can be found in
many areas of NLP research, including:
• Natural language database systems
– A lot of early NLP work on these (e.g., LUNAR)
• Spoken dialog systems
– Currently very active and commercially relevant
• The focus on open-domain QA is fairly new
– MURAX (Kupiec 1993): Encyclopedia answers
– Hirschman: Reading comprehension tests
– TREC QA competition: 1999–
111
AskJeeves
• AskJeeves is probably most hyped example of
“Question answering”
• It largely does pattern matching to match your
question to their own knowledge base of
questions
• If that works, you get the human-curated answers
to that known question
• If that fails, it falls back to regular web search
• A potentially interesting middle ground, but a
fairly weak shadow of real QA
112
Online QA Examples
• Examples
– LCC: http://www.languagecomputer.com/demos/question_answering/index.html
– AnswerBus is an open-domain question
answering system: www.answerbus.com
– Ionaut: http://www.ionaut.com:8400/
– EasyAsk, AnswerLogic, AnswerFriend, Start,
Quasm, Mulder, Webclopedia, etc.
113
Question Answering at TREC
• Question answering competition at TREC consists of answering a set
of 500 fact-based questions, e.g., “When was Mozart born?”.
• For the first three years systems were allowed to return 5 ranked
answer snippets (50/250 bytes) to each question.
– IR think
– Mean Reciprocal Rank (MRR) scoring:
• score 1, 0.5, 0.33, 0.25, 0.2, 0 when the first correct answer is ranked
1, 2, 3, 4, 5, 6+
– Mainly Named Entity answers (person, place, date, …)
• From 2002 the systems are only allowed to return a single exact
answer and the notion of confidence has been introduced.
114
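A minimal sketch of the MRR scoring rule above: each question contributes the
reciprocal rank of its first correct answer (0 if none is returned in the top
5). The ranks below are made-up example data:

```python
def mrr(first_correct_ranks, cutoff=5):
    """first_correct_ranks: rank of the first correct answer per question, or None."""
    scores = [1.0 / r if r is not None and r <= cutoff else 0.0
              for r in first_correct_ranks]
    return sum(scores) / len(scores)

print(mrr([1, 3, None, 2, 5]))   # (1 + 0.33 + 0 + 0.5 + 0.2) / 5 ≈ 0.41
```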
The TREC Document Collection
• The current collection uses news articles from the following
sources:
• AP newswire, 1998-2000
• New York Times newswire, 1998-2000
• Xinhua News Agency newswire, 1996-2000
• In total there are 1,033,461 documents in the collection. 3GB of
text
• This is too much text to process entirely using advanced NLP
techniques so the systems usually consist of an initial information
retrieval phase followed by more advanced processing.
• Many supplement this text with use of the web, and other
knowledge bases
115
Sample TREC questions
1. Who is the author of the book, "The Iron Lady: A
Biography of Margaret Thatcher"?
2. What was the monetary value of the Nobel Peace
Prize in 1989?
3. What does the Peugeot company manufacture?
4. How much did Mercury spend on advertising in 1993?
5. What is the name of the managing director of Apricot
Computer?
6. Why did David Koresh ask the FBI for a word processor?
7. What debts did Qintex group leave?
8. What is the name of the rare neurological disease with
symptoms such as: involuntary movements (tics), swearing,
and incoherent vocalizations (grunts, shouts, etc.)?
116
Top Performing Systems
• Currently the best performing systems at TREC can answer
approximately 60-80% of the questions
– A pretty amazing performance!
• Approaches and successes have varied a fair deal
– Knowledge-rich approaches, using a vast array of
NLP techniques stole the show in 2000, 2001
• Notably Harabagiu, Moldovan et al. – SMU/UTD/LCC
– AskMSR system stressed how much could be
achieved by very simple methods with enough
text (now has various copycats)
– Middle ground is to use a large collection of
surface matching patterns (ISI)
117
AskMSR
• Web Question Answering: Is More Always Better?
– Dumais, Banko, Brill, Lin, Ng (Microsoft, MIT, Berkeley)
• Q: “Where is the Louvre located?”
• Want “Paris” or “France” or “75058 Paris Cedex 01” or a map
• Don’t just want URLs
118
AskMSR: Shallow approach
• In what year did Abraham Lincoln die?
• Ignore hard documents and find easy ones
119
AskMSR: Details
[Diagram: the five numbered pipeline steps – rewrite the query, query the
search engine, mine n-grams from the snippets, filter the n-grams, tile the
answers]
120
Step 1: Rewrite queries
• Intuition: The user’s question is often
syntactically quite close to sentences that contain
the answer
– Where is the Louvre Museum located?
– The Louvre Museum is located in Paris
– Who created the character of Scrooge?
– Charles Dickens created the character of Scrooge.
121
Query rewriting
• Classify question into seven categories
– Who is/was/are/were…?
– When is/did/will/are/were …?
– Where is/are/were …?
a. Category-specific transformation rules
e.g., “For Where questions, move ‘is’ to all possible locations”
“Where is the Louvre Museum located”
→ “is the Louvre Museum located”
→ “the is Louvre Museum located”
→ “the Louvre is Museum located”
→ “the Louvre Museum is located”
→ “the Louvre Museum located is”
(Nonsense, but who cares? It’s only a few more queries to Google.)
b. Expected answer “Datatype” (e.g., Date, Person, Location, …)
When was the French Revolution? → DATE
• Hand-crafted classification/rewrite/datatype rules
(Could they be automatically learned?)
122
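A rough sketch of the “move ‘is’ to all possible locations” rule for
Where-questions described above; tokenization and the other question
categories are simplified away:

```python
def where_rewrites(question):
    """'Where is the Louvre Museum located?' -> candidate declarative strings."""
    tokens = question.rstrip("?").split()
    if tokens[0].lower() != "where" or tokens[1].lower() != "is":
        return []
    rest = tokens[2:]                       # drop "Where is"
    rewrites = []
    for i in range(len(rest) + 1):          # insert "is" at every position
        candidate = rest[:i] + ["is"] + rest[i:]
        rewrites.append('"' + " ".join(candidate) + '"')
    return rewrites

for r in where_rewrites("Where is the Louvre Museum located?"):
    print(r)
```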
Query Rewriting – weights
• One wrinkle: Some query rewrites are more
reliable than others
Where is the Louvre Museum located?
– Weight 5: +“the Louvre Museum is located”
(if we get a match, it’s probably right)
– Weight 1: +Louvre +Museum +located
(lots of non-answers could come back too)
123
Step 2: Query search engine
• Send all rewrites to a Web search engine
• Retrieve top N answers (100?)
• For speed, rely just on search engine’s
“snippets”, not the full text of the actual
document
124
Step 3: Mining N-Grams
• Unigram, bigram, trigram, … N-gram:
list of N adjacent terms in a sequence
• E.g., “Web Question Answering: Is More Always Better”
– Unigrams: Web, Question, Answering, Is, More, Always, Better
– Bigrams: Web Question, Question Answering, Answering Is, Is More, More
Always, Always Better
– Trigrams: Web Question Answering, Question Answering Is, Answering Is
More, Is More Always, More Always Better
125
Mining N-Grams
• Simple: Enumerate all N-grams (N=1,2,3 say) in all retrieved
snippets
• Use hash table and other fancy footwork to make this efficient
• Weight of an N-gram: occurrence count, each weighted by
“reliability” (weight) of rewrite that fetched the document
• Example: “Who created the character of Scrooge?”
– Dickens – 117
– Christmas Carol – 78
– Charles Dickens – 75
– Disney – 72
– Carl Banks – 54
– A Christmas – 41
– Christmas Carol – 45
– Uncle – 31
126
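A small sketch of the n-gram mining step: enumerate 1–3-grams over the
retrieved snippets and weight each occurrence by the reliability of the
rewrite that fetched it. The snippets and weights are made-up examples:

```python
from collections import Counter

def mine_ngrams(snippets_with_weights, max_n=3):
    """snippets_with_weights: list of (snippet text, rewrite weight)."""
    scores = Counter()
    for text, weight in snippets_with_weights:
        tokens = text.split()
        for n in range(1, max_n + 1):
            for i in range(len(tokens) - n + 1):
                scores[" ".join(tokens[i:i + n])] += weight
    return scores

snippets = [("Charles Dickens created the character of Scrooge", 5),
            ("Scrooge appears in A Christmas Carol by Dickens", 1)]
print(mine_ngrams(snippets).most_common(5))
```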
Step 4: Filtering N-Grams
• Each question type is associated with one or
more “data-type filters” = regular expressions
– When… → Date
– Where… → Location
– What…, Who… → Person
• Boost score of N-grams that do match regexp
• Lower score of N-grams that don’t match regexp
127
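A sketch of such a data-type filter: boost n-grams matching the expected
answer type’s regular expression and penalize the rest. The patterns and the
boost/penalty factors are illustrative assumptions, not the AskMSR ones:

```python
import re

FILTERS = {
    "Date": re.compile(r"\b(1[0-9]{3}|20[0-9]{2})\b"),            # a 4-digit year
    "Location": re.compile(r"\b[A-Z][a-z]+(?: [A-Z][a-z]+)*\b"),  # capitalized name
}

def apply_filter(ngram_scores, answer_type, boost=2.0, penalty=0.5):
    pattern = FILTERS[answer_type]
    return {ng: score * (boost if pattern.search(ng) else penalty)
            for ng, score in ngram_scores.items()}

print(apply_filter({"1756": 10.0, "the great composer": 7.0}, "Date"))
```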
Step 5: Tiling the Answers
• Tile the highest-scoring n-gram with overlapping n-grams, merge them and
discard the old n-grams; repeat until no more overlap.
• Example: “Charles Dickens” (score 20), “Dickens” (15) and “Mr Charles” (10)
tile into “Mr Charles Dickens” (score 45).
128
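A rough sketch of the tiling step: greedily merge the highest-scoring n-gram
with overlapping n-grams, summing their scores, until nothing overlaps.
Treating overlap as a word-level prefix/suffix match is a simplifying
assumption:

```python
def overlap_merge(a, b):
    """Return merged string if a's suffix overlaps b's prefix (or vice versa)."""
    for x, y in ((a, b), (b, a)):
        xt, yt = x.split(), y.split()
        for k in range(min(len(xt), len(yt)), 0, -1):
            if xt[-k:] == yt[:k]:
                return " ".join(xt + yt[k:])
    return None

def tile(scored):                       # scored: {ngram: score}
    items = dict(scored)
    changed = True
    while changed:
        changed = False
        best = max(items, key=items.get)
        for other in list(items):
            if other == best:
                continue
            merged = overlap_merge(best, other)
            if merged:
                items[merged] = items.pop(best) + items.pop(other)
                changed = True
                break
    return items

print(tile({"Charles Dickens": 20, "Dickens": 15, "Mr Charles": 10}))
# -> {'Mr Charles Dickens': 45}
```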
Results
• Standard TREC contest test-bed:
~1M documents; 900 questions
• Technique doesn’t do too well (though would
have placed in top 9 of ~30 participants!)
– MRR = 0.262 (i.e., right answer ranked about #4–#5 on average)
– Why? Because it relies on the enormity of the
Web!
• Using the Web as a whole, not just TREC’s 1M
documents… MRR = 0.42 (i.e., on average, the right
answer is ranked about #2–#3)
129
Issues
• In many scenarios (e.g., monitoring an
individual’s email…) we only have a small set
of documents
• Works best/only for “Trivial Pursuit”-style fact-based questions
• Limited/brittle repertoire of
– question categories
– answer data types/filters
– query rewriting rules
130
ISI: Surface patterns approach
• Use of Characteristic Phrases
• "When was <person> born”
– Typical answers
• "Mozart was born in 1756.”
• "Gandhi (1869-1948)...”
– Suggests phrases (regular expressions) like
• "<NAME> was born in <BIRTHDATE>”
• "<NAME> ( <BIRTHDATE>-”
– Use of Regular Expressions can help locate
correct answer
131
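A minimal sketch of applying such surface patterns as regular expressions
for a BIRTHDATE question; the two patterns follow the examples above:

```python
import re

def birthdate_patterns(name):
    """Regexes in the spirit of '<NAME> was born in <BIRTHDATE>' and '<NAME> ( <BIRTHDATE>-'."""
    n = re.escape(name)
    return [re.compile(n + r" was born in (\d{4})"),
            re.compile(n + r" \((\d{4})-")]

text = "The great composer Mozart (1756-1791) achieved fame at a young age."
for pat in birthdate_patterns("Mozart"):
    m = pat.search(text)
    if m:
        print(m.group(1))        # -> 1756
```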
Use Pattern Learning
• Example:
• “The great composer Mozart (1756-1791) achieved
fame at a young age”
• “Mozart (1756-1791) was a genius”
• “The whole world would always be indebted to the
great music of Mozart (1756-1791)”
– Longest matching substring for all 3 sentences
is "Mozart (1756-1791)”
– Suffix tree would extract "Mozart (1756-1791)"
as an output, with score of 3
• Reminiscent of IE pattern learning
132
Pattern Learning (cont.)
• Repeat with different examples of same
question type
– “Gandhi 1869”, “Newton 1642”, etc.
• Some patterns learned for BIRTHDATE
– a. born in <ANSWER>, <NAME>
– b. <NAME> was born on <ANSWER> ,
– c. <NAME> ( <ANSWER> -
– d. <NAME> ( <ANSWER> - )
133
Experiments
• 6 different Q types
– from Webclopedia QA Typology (Hovy et al.,
2002a)
• BIRTHDATE
• LOCATION
• INVENTOR
• DISCOVERER
• DEFINITION
• WHY-FAMOUS
134
Experiments: pattern precision
• BIRTHDATE table:
• 1.0 <NAME> ( <ANSWER> - )
• 0.85 <NAME> was born on <ANSWER>,
• 0.6 <NAME> was born in <ANSWER>
• 0.59 <NAME> was born <ANSWER>
• 0.53 <ANSWER> <NAME> was born
• 0.50 - <NAME> ( <ANSWER>
• 0.36 <NAME> ( <ANSWER> -
• INVENTOR
• 1.0 <ANSWER> invents <NAME>
• 1.0 the <NAME> was invented by <ANSWER>
135
Experiments (cont.)
• DISCOVERER
• 1.0 when <ANSWER> discovered <NAME>
• 1.0 <ANSWER>'s discovery of <NAME>
• 0.9 <NAME> was discovered by <ANSWER> in
• DEFINITION
• 1.0 <NAME> and related <ANSWER>
• 1.0 form of <ANSWER>, <NAME>
• 0.94 as <NAME>, <ANSWER> and
136
Experiments (cont.)
• WHY-FAMOUS
• 1.0 <ANSWER> <NAME> called
• 1.0 laureate <ANSWER> <NAME>
• 0.71 <NAME> is the <ANSWER> of
• LOCATION
• 1.0 <ANSWER>'s <NAME>
• 1.0 regional : <ANSWER> : <NAME>
• 0.92 near <NAME> in <ANSWER>
• Depending on question type, get high MRR (0.6–
0.9), with higher results from use of Web than
TREC QA collection
137
Shortcomings & Extensions
• Need for POS &/or semantic types
• "Where are the Rocky Mountains?”
• "Denver's new airport, topped with white fiberglass
cones in imitation of the Rocky Mountains in the
background , continues to lie empty”
• <NAME> in <ANSWER>
• NE tagger &/or ontology could enable system
to determine "background" is not a location
name
138
Shortcomings... (cont.)
• Long distance dependencies
• "Where is London?”
• "London, which has one of the most busiest
airports in the world, lies on the banks of the river
Thames”
• would require pattern like:
<QUESTION>, (<any_word>)*, lies on <ANSWER>
– Abundance & variety of Web data helps system
to find an instance of patterns w/o losing
answers to long distance dependencies
139
Shortcomings... (cont.)
• System currently has only one anchor word
– Doesn't work for Q types requiring multiple words from question to be
in answer
• "In which county does the city of Long Beach lie?”
• "Long Beach is situated in Los Angeles County”
• required pattern:
<Q_TERM_1> is situated in <ANSWER> <Q_TERM_2>
• Did not use case
• "What is a micron?”
• "...a spokesman for Micron, a maker of semiconductors, said SIMMs
are..."
• If Micron had been capitalized in question, would be a perfect
answer
140
Lexical Terms Extraction as input to Information
Retrieval
• Questions approximated by sets of unrelated
words (lexical terms)
• Similar to bag-of-word IR models: but choose
nominal non-stop words and verbs
Question (from TREC QA track)                      Lexical terms
Q002: What was the monetary value of the           monetary, value,
Nobel Peace Prize in 1989?                         Nobel, Peace, Prize
Q003: What does the Peugeot company                Peugeot, company,
manufacture?                                       manufacture
Q004: How much did Mercury spend on                Mercury, spend,
advertising in 1993?                               advertising, 1993
141
Rank candidate answers in retrieved passages
Q066: Name the first private citizen to fly in space.
• Answer type: Person
• Text passage:
“Among them was Christa McAuliffe, the first private
citizen to fly in space. Karen Allen, best known for her
starring role in “Raiders of the Lost Ark”, plays McAuliffe.
Brian Kerwin is featured as shuttle pilot Mike Smith...”
• Best candidate answer: Christa McAuliffe
142
Abductive inference
• System attempts inference to justify an answer
(often following lexical chains)
• Their inference is a kind of funny middle ground
between logic and pattern matching
• But quite effective: 30% improvement
• Q: When was the internal combustion engine
invented?
• A: The first internal-combustion engine was built
in 1867.
• invent -> create_mentally -> create -> build
143
Question Answering Example
• How hot does the inside of an active volcano get?
• get(TEMPERATURE, inside(volcano(active)))
• “lava fragments belched out of the mountain were as hot
as 300 degrees Fahrenheit”
• fragments(lava, TEMPERATURE(degrees(300)),
belched(out, mountain))
– volcano ISA mountain
– lava ISPARTOF volcano
lava inside volcano
– fragments of lava HAVEPROPERTIESOF lava
• The needed semantic information is in WordNet definitions,
and was successfully translated into a form that was used
for rough ‘proofs’
144