CS276A
Text Information Retrieval, Mining, and
Exploitation
Lecture 4
15 Oct 2002
Recap of last time

- Index size
- Index construction techniques
- Dynamic indices
- Real-world considerations
Back-of-the-envelope index size calculation

- Number of docs: n = 40M
- Number of terms: m = 1M
- Use Zipf's law to estimate the number of postings entries:
  n + n/2 + n/3 + … + n/m ≈ n ln m ≈ 560M postings entries
  (checked in the sketch below)
- This is just a word-document index, not one that includes positional
  information
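A quick numerical check of this estimate, as a minimal Python sketch; the constants are the slide's, and the natural log matches the harmonic-sum approximation:

```python
import math

n = 40_000_000   # number of documents
m = 1_000_000    # number of terms

# Zipf-style estimate: the i-th most frequent term appears in ~n/i documents,
# so total postings ~ n * (1 + 1/2 + ... + 1/m) ~ n * ln m.
postings = n * math.log(m)
print(f"~{postings / 1e6:.0f}M postings entries")  # ~553M, i.e. roughly 560M
```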
Merge sort of 56 sorted runs

- Merge tree of log2 56 ≈ 6 layers.
- During each layer, read runs into memory in blocks of 10M, merge,
  write back.

[Figure: runs 1-4 read from disk, merged in memory, written back to disk]
Merge sort of 56 sorted runs

- How do you write back long merged runs?
  - Wait to accumulate 10M-sized output blocks before writing back.
  - This amortizes seek time over the block transfer (see the sketch below).

[Figure: merged output accumulated in memory, then written back to disk in blocks]
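A minimal sketch of this buffered k-way merge, assuming runs arrive as sorted iterables; `write_block` is an illustrative stand-in for an actual disk write, not anything from the lecture:

```python
import heapq

def merge_runs(runs, out_block_size=10_000_000):
    """Merge sorted runs of postings; flush output in large blocks."""
    buffer = []
    for posting in heapq.merge(*runs):   # k-way merge via a heap
        buffer.append(posting)
        if len(buffer) >= out_block_size:
            write_block(buffer)          # one long sequential write
            buffer = []
    if buffer:
        write_block(buffer)

def write_block(block):
    # Stand-in for a real disk write; the large block amortizes seek time.
    pass
```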
Today's topics

- Ranking models
  - The vector space model
  - Inverted indexes with term weighting
- Evaluation with ranking models
Ranking models in IR

- Key idea:
  - We wish to return in order the documents most likely to be useful
    to the searcher
  - To do this, we want to know which documents best satisfy a query
- An obvious idea: if a document talks about a topic more, then it is
  a better match
- A query should then just specify terms that are relevant to the
  information need, without requiring that all of them must be present
  - A document is relevant if it has a lot of the terms
Binary term presence matrices

- Record whether a document contains a word: document is a binary
  vector in {0,1}^V
  - What we have mainly assumed so far
- Idea: query satisfaction = overlap measure: |X ∩ Y|
  (a small sketch follows the table)

            Antony and  Julius  The
            Cleopatra   Caesar  Tempest  Hamlet  Othello  Macbeth
Antony      1           1       0        0       0        1
Brutus      1           1       0        1       0        0
Caesar      1           1       0        1       1        1
Calpurnia   0           1       0        0       0        0
Cleopatra   1           0       0        0       0        0
mercy       1           0       1        1       1        1
worser      1           0       1        1       1        0
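A small sketch of the overlap measure, using Python sets in place of explicit {0,1}^V vectors; the document set is read off the first column of the table, and the query is made up for illustration:

```python
# Terms present in "Antony and Cleopatra", per the binary matrix above.
doc = {"Antony", "Brutus", "Caesar", "Cleopatra", "mercy", "worser"}
query = {"Brutus", "Caesar", "mercy"}   # hypothetical query

overlap = len(query & doc)  # |X ∩ Y|
print(overlap)              # 3
```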
Overlap matching

- What are the problems with the overlap measure?
- It doesn't consider:
  - Term frequency in document
  - Term scarcity in collection (document mention frequency)
  - Length of documents
    - (And queries: score not normalized)
Overlap matching

- One can normalize in various ways:
  - Jaccard coefficient: |X ∩ Y| / |X ∪ Y|
  - Cosine measure: |X ∩ Y| / √(|X| · |Y|)
- What documents would score best using Jaccard against a typical query?
- Does the cosine measure fix this problem? (Both are sketched below.)
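Both normalizations, sketched for binary (set) vectors; the example sets are made up for illustration:

```python
import math

def jaccard(x, y):
    return len(x & y) / len(x | y)                   # |X ∩ Y| / |X ∪ Y|

def cosine_sets(x, y):
    return len(x & y) / math.sqrt(len(x) * len(y))   # |X ∩ Y| / √(|X|·|Y|)

d = {"Brutus", "Caesar", "mercy", "worser"}
q = {"Caesar", "mercy"}
print(jaccard(d, q), cosine_sets(d, q))  # 0.5, ~0.707
```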
Count term-document matrices

- We haven't considered frequency of a word
- Count of a word in a document:
  - Bag of words model
  - Document is a vector in ℕ^V
- Normalization: Calpurnia vs. Calphurnia
            Antony and  Julius  The
            Cleopatra   Caesar  Tempest  Hamlet  Othello  Macbeth
Antony      157         73      0        0       0        0
Brutus      4           157     0        1       0        0
Caesar      232         227     0        2       1        1
Calpurnia   0           10      0        0       0        0
Cleopatra   57          0       0        0       0        0
mercy       2           0       3        5       5        1
worser      2           0       1        1       1        0
Weighting term frequency: tf

- What is the relative importance of
  - 0 vs. 1 occurrence of a term in a doc
  - 1 vs. 2 occurrences
  - 2 vs. 3 occurrences …
- Unclear: more seems better, but a lot isn't necessarily better than a few
- Can just use the raw score
- Another option commonly used in practice (sketched below):
  wf(t,d) = 1 + log tf(t,d) if tf(t,d) > 0, else 0
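The log-weighting option as a one-function sketch; natural log is used here, since the slide leaves the base unspecified:

```python
import math

def wf(tf):
    """1 + log(tf) if tf > 0, else 0."""
    return 1 + math.log(tf) if tf > 0 else 0

print(wf(0), wf(1), wf(2), wf(10))  # 0, 1.0, ~1.69, ~3.30
```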
Dot product matching

- Match is the dot product of query and document:
  q · d = Σ_i tf_{i,q} × tf_{i,d}
  - [Note: 0 if orthogonal (no words in common)]
- Rank by match (a small sketch follows)
- It still doesn't consider:
  - Term scarcity in collection (document mention frequency)
  - Length of documents and queries
    - Not normalized
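A minimal sketch of the dot-product match score, representing query and document as {term: tf} dicts; this representation is illustrative, not the lecture's:

```python
def match(q, d):
    """q · d = sum over terms of tf_in_query * tf_in_doc."""
    return sum(tf_q * d.get(term, 0) for term, tf_q in q.items())

doc = {"caesar": 232, "brutus": 4, "mercy": 2}
print(match({"brutus": 1, "caesar": 1}, doc))  # 236; 0 if no words in common
```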
Weighting should depend on the term overall

- Which of these tells you more about a doc?
  - 10 occurrences of hernia?
  - 10 occurrences of the?
- Suggests looking at collection frequency (cf)
- But document frequency (df) may be better:

  Word       cf     df
  try        10422  8760
  insurance  10440  3997

- Document frequency weighting is only possible in a known (static)
  collection.
tf x idf term weights

- tf x idf measure combines:
  - term frequency (tf)
    - measure of term density in a doc
  - inverse document frequency (idf)
    - measure of informativeness of a term: its rarity across the
      whole corpus
    - could just be the raw count of the number of documents the term
      occurs in (idf_i = 1/df_i)
    - but by far the most commonly used version is:
      idf_i = log(n / df_i)
- See Kishore Papineni, NAACL 2, 2002 for theoretical justification
Summary: tf x idf (or tf.idf)

- Assign a tf.idf weight to each term i in each document d:
  w_{i,d} = tf_{i,d} × log(n / df_i)
  where
    tf_{i,d} = frequency of term i in document d
    n = total number of documents
    df_i = the number of documents that contain term i
- (What is the weight of a term that occurs in all of the docs?)
- Increases with the number of occurrences within a doc
- Increases with the rarity of the term across the whole corpus
  (a small sketch follows)
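A one-function sketch of this weight; it also answers the callout question, since a term occurring in all n docs gets log(n/n) = log 1 = 0:

```python
import math

def tf_idf(tf, df, n):
    """w = tf * log(n / df); natural log, as the slide leaves the base open."""
    return tf * math.log(n / df)

print(tf_idf(10, 3997, 1_000_000))       # rare term: large weight (~55.2)
print(tf_idf(10, 1_000_000, 1_000_000))  # term in every doc: log(1) = 0.0
```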
Real-valued term-document matrices

- Function (scaling) of count of a word in a document:
  - Bag of words model
  - Each doc is a vector in ℝ^V
- Here, log-scaled tf.idf:

            Antony and  Julius  The
            Cleopatra   Caesar  Tempest  Hamlet  Othello  Macbeth
Antony      13.1        11.4    0.0      0.0     0.0      0.0
Brutus      3.0         8.3     0.0      1.0     0.0      0.0
Caesar      2.3         2.3     0.0      0.5     0.3      0.3
Calpurnia   0.0         11.2    0.0      0.0     0.0      0.0
Cleopatra   17.7        0.0     0.0      0.0     0.0      0.0
mercy       0.5         0.0     0.7      0.9     0.9      0.3
worser      1.2         0.0     0.6      0.6     0.6      0.0
Documents as vectors

- Each doc j can now be viewed as a vector of tf.idf values, one
  component for each term
- So we have a vector space
  - terms are axes
  - docs live in this space
  - even with stemming, may have 20,000+ dimensions
- (The corpus of documents gives us a matrix, which we could also view
  as a vector space in which words live – transposable data)
Why turn docs into vectors?

- First application: Query-by-example
  - Given a doc d, find others "like" it.
  - Now that d is a vector, find vectors (docs) "near" it.
Intuition

[Figure: documents d1-d5 as vectors in a space with term axes t1, t2, t3;
angles θ and φ between them]

Postulate: Documents that are "close together" in vector space talk
about the same things.
The vector space model

- Query as vector:
  - We regard the query as a short document
  - We return the documents ranked by the closeness of their vectors
    to the query, also represented as a vector.
- Developed in the SMART system (Salton, c. 1970) and standardly used
  by TREC participants and web IR systems
Desiderata for proximity

- If d1 is near d2, then d2 is near d1.
- If d1 is near d2, and d2 is near d3, then d1 is not far from d3.
- No doc is closer to d than d itself.
First cut

- Distance between vectors d1 and d2 is the length of the vector
  |d1 - d2|.
  - Euclidean distance
- Why is this not a great idea?
- We still haven't dealt with the issue of length normalization
  - Long documents would be more similar to each other by virtue of
    length, not topic
- However, we can implicitly normalize by looking at angles instead
Cosine similarity

- Distance between vectors d1 and d2 captured by the cosine of the
  angle θ between them.
- Note – this is similarity, not distance

[Figure: vectors d1 and d2 separated by angle θ, with term axes t1, t2, t3]
Cosine similarity

  sim(d_j, d_k) = (d_j · d_k) / (|d_j| |d_k|)
                = Σ_{i=1..n} w_{i,j} w_{i,k}
                  / (√(Σ_{i=1..n} w²_{i,j}) · √(Σ_{i=1..n} w²_{i,k}))

- Cosine of angle between two vectors
- The denominator involves the lengths of the vectors
- So the cosine measure is also known as the normalized inner product
  (a small sketch follows)

  Length: |d_j| = √(Σ_{i=1..n} w²_{i,j})
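The formula as a direct sketch over dense weight vectors; this is illustrative, not the SMART implementation:

```python
import math

def cosine(dj, dk):
    """sim(dj, dk) = (dj · dk) / (|dj| |dk|)."""
    dot = sum(wj * wk for wj, wk in zip(dj, dk))
    norm_j = math.sqrt(sum(w * w for w in dj))
    norm_k = math.sqrt(sum(w * w for w in dk))
    return dot / (norm_j * norm_k)

print(cosine([1.0, 2.0, 0.0], [2.0, 4.0, 0.0]))  # 1.0: same direction
```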
Cosine similarity exercises

- Exercise: Rank the following by decreasing cosine similarity:
  - Two docs that have only frequent words (the, a, an, of) in common.
  - Two docs that have no words in common.
  - Two docs that have many rare words in common (wingspan, tailfin).
Normalized vectors

- A vector can be normalized (given a length of 1) by dividing each of
  its components by the vector's length
- This maps vectors onto the unit circle:
  then |d_j| = √(Σ_{i=1..n} w²_{i,j}) = 1
- Longer documents don't get more weight
- For normalized vectors, the cosine is simply the dot product
  (sketched below):
  cos(d_j, d_k) = d_j · d_k
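A small demonstration of that claim: normalize first, then a plain dot product gives the cosine.

```python
import math

def normalize(v):
    length = math.sqrt(sum(w * w for w in v))
    return [w / length for w in v]

dj, dk = normalize([3.0, 4.0]), normalize([4.0, 3.0])
print(sum(a * b for a, b in zip(dj, dk)))  # ~0.96, the cosine of the angle
```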
Exercise

- Euclidean distance between vectors:
  |d_j - d_k| = √(Σ_{i=1..n} (w_{i,j} - w_{i,k})²)
- Show that, for normalized vectors, Euclidean distance gives the same
  closeness ordering as the cosine measure
Example

- Docs: Austen's Sense and Sensibility (SaS) and Pride and Prejudice
  (PaP); Bronte's Wuthering Heights (WH)

  Raw counts:   SaS    PaP    WH
  affection     115    58     20
  jealous       10     7      11
  gossip        2      0      6

  Normalized:   SaS    PaP    WH
  affection     0.996  0.993  0.847
  jealous       0.087  0.120  0.466
  gossip        0.017  0.000  0.254

- cos(SaS, PaP) = .996 × .993 + .087 × .120 + .017 × 0.0 ≈ 0.999
- cos(SaS, WH) = .996 × .847 + .087 × .466 + .017 × .254 ≈ 0.889
  (reproduced in code below)
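Reproducing the example numerically; full-precision unit vectors give 0.999 and 0.889 (rounding the normalized values first, as the slide does, gives 0.888):

```python
import math

counts = {"SaS": [115, 10, 2], "PaP": [58, 7, 0], "WH": [20, 11, 6]}

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

u = {k: unit(v) for k, v in counts.items()}       # normalize each doc vector
cos = lambda a, b: sum(x * y for x, y in zip(u[a], u[b]))
print(f"{cos('SaS', 'PaP'):.3f}")  # 0.999
print(f"{cos('SaS', 'WH'):.3f}")   # 0.889
```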
Digression: spamming indices

- This was all invented before the days when people were in the
  business of spamming web search engines:
  - Indexing a sensible passive document collection vs.
  - An active document collection, where people (and indeed, service
    companies) are trying to shape documents in an attempt to achieve
    ranking-function maximization
Digression: ranking in Machine Learning

- Our problem is:
  - Given document collection D and query q, return a ranking of D
    according to relevance to q.
- Such ranking problems have been much less studied in machine
  learning than classification/regression problems
- But there is much more interest recently, e.g.:
  - W.W. Cohen, R.E. Schapire, and Y. Singer. Learning to order things.
    Journal of Artificial Intelligence Research, 10:243-270, 1999.
  - And subsequent research
Digression: ranking in Machine Learning

- Many "WWW" applications are ranking (aka ordinal regression) problems:
  - Text information retrieval
  - Image similarity search (QBIC)
  - Book/movie recommendations
    - Collaborative filtering
  - Meta-search engines
Summary: What's the real point of using vector spaces?

- Key: A user's query can be viewed as a (very) short document.
- The query becomes a vector in the same space as the docs.
- We can measure each doc's proximity to it.
- Natural measure of scores/ranking – no longer Boolean.
Evaluation II

- Evaluation of ranked results:
  - You can return any number of results ordered by similarity
  - By taking various numbers of documents (levels of recall), you can
    produce a precision-recall curve
Precision-recall curves

[Figure: a precision-recall curve]
Interpolated precision

- If you can increase precision by increasing recall, then you should
  get to count that: the interpolated precision at recall level r is
  taken to be the maximum precision at any recall level ≥ r.
Evaluation

- There are various other measures:
  - Precision at fixed recall
    - This is perhaps the most appropriate thing for web search: all
      people want to know is how many good matches there are in the
      first one or two pages of results
  - 11-point interpolated average precision
    - The standard measure in the TREC competitions: you take the
      precision at 11 levels of recall, varying from 0 to 1 by tenths,
      using interpolation (the value for 0 is always interpolated!),
      and average them (a small sketch follows)
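A minimal sketch of the 11-point computation from (recall, precision) points for a single query; `eleven_point_ap` is an illustrative helper, not TREC's official trec_eval:

```python
def eleven_point_ap(points):
    """points: list of (recall, precision) pairs for one ranked result list."""
    interp = []
    for r in [i / 10 for i in range(11)]:        # recall levels 0.0 .. 1.0
        # Interpolated precision at r: best precision at any recall >= r.
        ps = [p for (rec, p) in points if rec >= r]
        interp.append(max(ps) if ps else 0.0)
    return sum(interp) / 11

pts = [(0.1, 1.0), (0.2, 0.67), (0.4, 0.5), (1.0, 0.2)]  # made-up data
print(round(eleven_point_ap(pts), 3))  # 0.443
```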
We'll use more notions from linear algebra next lecture

- Matrix, vector
- Transpose and product
- Rank
- Eigenvalues and eigenvectors
Resources, and beyond

- MG 4.4–4.5, MIR 2.5.
Next steps

- Computing cosine similarity efficiently.
- Dimensionality reduction.
- Probabilistic approaches to IR.