Introduction to Information Retrieval
Information Retrieval and Data Mining (AT71.07)
Comp. Sc. and Inf. Mgmt., Asian Institute of Technology
Instructor: Prof. Sumanta Guha
Slide sources: Introduction to Information Retrieval book slides from Stanford University, adapted and supplemented

Chapter 6: Scoring, term weighting, and the vector space model
Introduction to Information Retrieval
CS276: Information Retrieval and Web Search
Christopher Manning and Prabhakar Raghavan
Lecture 6: Scoring, term weighting, and the vector space model
This lecture: IIR Sections 6.2–6.4.3
 Ranked retrieval
 Scoring documents
 Term frequency
 Collection statistics
 Weighting schemes
 Vector space scoring
Ranked retrieval
 Thus far, our queries have all been Boolean.
 Documents either match or don’t.
 Good for expert users with precise understanding of
their needs and the collection.
 Also good for applications: Applications can easily
consume 1000s of results.
 Not good for the majority of users.
 Most users are incapable of writing Boolean queries (or are capable, but consider it too much work).
 Most users don’t want to wade through 1000s of results.
 This is particularly true of web search.
Problem with Boolean search: feast or famine
 Boolean queries often result in either too few (=0) or
too many (1000s) results.
 Query 1: “standard user dlink 650” → 200,000 hits
 Query 2: “standard user dlink 650 no card found” → 0 hits
 It takes a lot of skill to come up with a query that
produces a manageable number of hits.
 AND gives too few; OR gives too many
Ranked retrieval models
 Rather than a set of documents satisfying a query
expression, in ranked retrieval models, the system
returns an ordering over the (top) documents in the
collection with respect to a query
 Free text queries: Rather than a query language of
operators and expressions, the user’s query is just
one or more words in a human language
 In principle, there are two separate choices here, but
in practice, ranked retrieval models have normally
been associated with free text queries and vice versa
Feast or famine: not a problem in ranked retrieval
 When a system produces a ranked result set, large
result sets are not an issue
 Indeed, the size of the result set is not an issue
 We just show the top k (≈ 10) results
 We don’t overwhelm the user
 Premise: the ranking algorithm works
Scoring as the basis of ranked retrieval
 We wish to return in order the documents most likely
to be useful to the searcher
 How can we rank-order the documents in the
collection with respect to a query?
 Assign a score – say in [0, 1] – to each document
 This score measures how well document and query
“match”.
Parametric and zone indexes
 Consider the query: “find docs authored by Shakespeare in 1601 containing the phrase alas poor Yorick”
 Fields can have a well-defined set of values, e.g., numeric values or character strings of fixed maximum length – here, a date field and an author field.
 A parametric search interface lets the user enter parameter values (= field values).
Parametric and zone indexes
 Zones are similar to fields, except that the contents of a zone can be arbitrary free text, e.g., a title zone, abstract zone, body zone, …
 Indexes for fields and zones can be:
 Separate (drawback: larger dictionary):
william.author → 11 → 177 → 244 → 255
william.title → 11 → 134 → 244 → 255
william.body → 4 → 134 → 213 → 255
 Combined (drawback: larger postings, but the dictionary is not enlarged – preferable when a compact dictionary is wanted, e.g., to fit in main memory):
william → 4.body → 11.author.title → 134.title.body
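To make the combined encoding concrete, here is a minimal Python sketch of a combined zone index over the postings shown above. The dict-of-lists layout and the zones_of helper are illustrative assumptions, not the book's data structures.

```python
# Combined zone index: one dictionary entry per term; each posting carries the
# docID plus the set of zones in which the term occurs (illustrative layout).
combined_index = {
    "william": [
        (4,   {"body"}),
        (11,  {"author", "title"}),
        (134, {"title", "body"}),
    ],
}

def zones_of(term: str, doc_id: int) -> set:
    """Return the set of zones of doc_id in which term occurs."""
    for posting_doc, zones in combined_index.get(term, []):
        if posting_doc == doc_id:
            return zones
    return set()

print(zones_of("william", 11))  # {'author', 'title'}
```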
Weighted zone scoring
 Given a Boolean query q and a doc d, weighted zone* scoring assigns each pair (q, d) a score between 0 and 1 by computing a linear combination of zone scores.
 Suppose each doc has ℓ zones.
 Let g1, g2, …, gℓ be the respective zone weights, such that ∑i=1..ℓ gi = 1.
 Let s1, s2, …, sℓ be Boolean scores for the respective zones: si is 1 or 0 according as q does or does not occur in the i-th zone of doc d.
 Then the weighted zone score is ∑i=1..ℓ gisi.
 E.g., for the indexes of the previous slide, if the zone weights of author, title, and body are, respectively, g1 = 0.2, g2 = 0.3, g3 = 0.5, then
WEIGHTEDZONE(“william”, 11) = 1×0.2 + 1×0.3 + 0×0.5 = 0.5
*Here and in the following, by zone we mean both zones and fields.
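As a sanity check, the weighted zone score can be computed directly in a few lines of Python. This sketch (the function name is mine) reproduces the WEIGHTEDZONE(“william”, 11) = 0.5 example above.

```python
def weighted_zone_score(s: dict, g: dict) -> float:
    """Compute sum_i g_i * s_i, where s_i is the 0/1 Boolean score of zone i."""
    return sum(g[zone] * s.get(zone, 0) for zone in g)

# Zone weights g1 = 0.2 (author), g2 = 0.3 (title), g3 = 0.5 (body); they sum to 1.
g = {"author": 0.2, "title": 0.3, "body": 0.5}
# For the query "william" and doc 11: present in author and title, absent from body.
s = {"author": 1, "title": 1, "body": 0}
print(weighted_zone_score(s, g))  # 0.5
```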
Weighted zone scoring
Algorithm to compute the weighted zone score from two postings lists, given an AND query:

ZONESCORE(q1, q2)          // score for the query “q1 AND q2”
  float scores[N] = [0]    // scores[] holds a score entry for each document, initialized to zero
  constant g[ℓ]            // assume g[] is initialized to the respective zone weights
  p1 ← postings(q1)        // p1 and p2 point to the beginning of their respective postings lists
  p2 ← postings(q2)
  while p1 ≠ NIL AND p2 ≠ NIL
    do if docID(p1) = docID(p2)
         then scores[docID(p1)] ← WEIGHTEDZONE(p1, p2, g)
              p1 ← next(p1)
              p2 ← next(p2)
         else if docID(p1) < docID(p2)
                then p1 ← next(p1)
                else p2 ← next(p2)
  return scores

Here WEIGHTEDZONE(p1, p2, g) computes ∑i=1..ℓ gisi, where si is 1 if both query terms are present in zone i of the document, and 0 otherwise.
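Below is a runnable Python rendering of the same merge, assuming postings are docID-sorted lists of (docID, zone-scores) pairs; that data layout, and the test postings at the bottom, are assumptions made for illustration.

```python
def weighted_zone(z1: dict, z2: dict, g: dict) -> float:
    # s_i = 1 only if BOTH query terms are present in zone i (AND semantics).
    return sum(w * (z1.get(zone, 0) and z2.get(zone, 0)) for zone, w in g.items())

def zone_score(postings1, postings2, g, num_docs):
    """Merge two docID-sorted postings lists, scoring docs containing both terms."""
    scores = [0.0] * num_docs
    i = j = 0
    while i < len(postings1) and j < len(postings2):
        doc1, zones1 = postings1[i]
        doc2, zones2 = postings2[j]
        if doc1 == doc2:
            scores[doc1] = weighted_zone(zones1, zones2, g)
            i += 1
            j += 1
        elif doc1 < doc2:
            i += 1
        else:
            j += 1
    return scores

g = {"author": 0.2, "title": 0.3, "body": 0.5}
p1 = [(11, {"author": 1, "title": 1}), (134, {"body": 1})]
p2 = [(11, {"title": 1}), (255, {"body": 1})]
print(zone_score(p1, p2, g, num_docs=300)[11])  # 0.3: both terms in doc 11's title
```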
Learning weights (simple machine learning)
Assume only two zones, title (T) and body (B), with zone weights g and 1 – g, respectively.

Training examples (from human-expert judgments):

Example  DocID d  Query    sT  sB  Judgment (human expert)  r (quantized judgment)
φ1       37       linux    1   1   Relevant                 1
φ2       37       penguin  0   1   Non-relevant             0
φ3       238      system   0   1   Relevant                 1
φ4       238      penguin  0   0   Non-relevant             0
φ5       1741     kernel   1   1   Relevant                 1
φ6       2094     driver   0   1   Relevant                 1
φ7       3191     driver   1   0   Non-relevant             0

The four possible combinations of sT and sB and the corresponding
score(d, q) = g · sT(d, q) + (1 – g) · sB(d, q):

sT  sB  Score
0   0   0
0   1   1 – g
1   0   g
1   1   1
Learning weights (simple machine learning)
The squared error of the scoring function with weight g on example φ is
ε(g, φ) = ( r(d, q) – score(d, q) )²

Example  d     Query    sT  sB  Score  r  ε         Score (g = 0.4)  ε (g = 0.4)
φ1       37    linux    1   1   1      1  0         1                0
φ2       37    penguin  0   1   1 – g  0  (1 – g)²  0.6              0.36
φ3       238   system   0   1   1 – g  1  g²        0.6              0.16
φ4       238   penguin  0   0   0      0  0         0                0
φ5       1741  kernel   1   1   1      1  0         1                0
φ6       2094  driver   0   1   1 – g  1  g²        0.6              0.16
φ7       3191  driver   1   0   g      0  g²        0.4              0.16

For g = 0.4: Tot_ε = 0.36 + 0.16 + 0.16 + 0.16 = 0.84
Learning weights (simple machine learning)
 Total error over a set of training examples: Tot_ε = ∑j ε(g, φj) = ∑j ( r(dj, q) – score(dj, q) )²
 Goal: choose g to minimize the total error.
 Note: our example has only two zones, with weights g and 1 – g, respectively! In general there will be ℓ zones with weights g1, …, gℓ – the same principles apply.

Grouping the training examples by (sT, sB, r), with n00n denoting the number of examples having sT = 0, sB = 0, judged non-relevant, and so on:

sT  sB  Score  r  No.   ε
0   0   0      0  n00n  0
0   0   0      1  n00r  1
0   1   1 – g  0  n01n  (1 – g)²
0   1   1 – g  1  n01r  g²
1   0   g      0  n10n  g²
1   0   g      1  n10r  (1 – g)²
1   1   1      0  n11n  1
1   1   1      1  n11r  0

Total error: Tot_ε = (n01r + n10n)g² + (n10r + n01n)(1 – g)² + n00r + n11n
Learning weights (simple machine learning)
 We want to minimize the total error Tot_ε = (n01r + n10n)g² + (n10r + n01n)(1 – g)² + n00r + n11n
 Differentiating w.r.t. g:
d(Tot_ε)/dg = 2(n01r + n10n)g – 2(n10r + n01n)(1 – g)
 Find the minimum by setting the derivative to zero:
2(n01r + n10n)g – 2(n10r + n01n)(1 – g) = 0
→ (n10r + n10n + n01r + n01n)g = n10r + n01n
→ g = (n10r + n01n) / (n10r + n10n + n01r + n01n)
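The closed-form estimate translates directly into code; a minimal sketch, with counts following the slide's n-notation (the example counts are hypothetical):

```python
def best_g(n10r: int, n10n: int, n01r: int, n01n: int) -> float:
    """Weight g minimizing the total squared error on the training set."""
    return (n10r + n01n) / (n10r + n10n + n01r + n01n)

# Hypothetical counts: e.g., 20 examples with sT=1, sB=0 judged relevant, etc.
print(best_g(n10r=20, n10n=10, n01r=30, n01n=40))  # 0.6
```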
Exercises
 Exercise 6.1: When using weighted zone scoring, is it necessary for all zones to use the same match function?
 Exercise 6.2: If the author, title, and body zones have weights g1 = 0.2, g2 = 0.31, and g3 = 0.49, what are all the distinct score values a doc may get?
 Exercise 6.3: Rewrite the algorithm in Fig. 6.4 for the case of more than two query terms, viz., q1 AND q2 AND … AND qm.
 Exercise 6.4: Write pseudocode for the function WEIGHTEDZONE for the case of two postings lists in Fig. 6.4.
 Exercise 6.5: Apply Eq. 6.6 to the sample training set in Fig. 6.5 to estimate the best value of g for this example.
 Exercise 6.6: For the value of g estimated in Exercise 6.5, compute the weighted zone score of each (query, doc) example. How do these scores relate to the relevance judgments of Fig. 6.5 (quantized to 0/1)?
 Exercise 6.7: Why does the expression for g in Eq. 6.6 not involve the training examples in which sT(d, q) and sB(d, q) have the same value?
Query-document matching scores
 We need a way of assigning a score to a
query/document pair
 Let’s start with a one-term query
 If the query term does not occur in the document:
score should be 0
 The more frequent the query term in the document,
the higher the score (should be)
 We will look at a number of alternatives for this.
Take 1: Jaccard coefficient
 A commonly used measure of overlap of two sets A
and B:
 jaccard(A,B) = |A ∩ B| / |A ∪ B|
 jaccard(A, A) = 1
 jaccard(A, B) = 0 if A ∩ B = ∅
 jaccard(B, A) = jaccard(A, B)
 A and B don’t have to be the same size.
 Always assigns a number between 0 and 1.
 Exercise: A = {1, 2, 3, 4}, B = {1, 2, 4}, C = {1, 2, 4, 5}.
Calculate jaccard(A, B), jaccard(B, C), jaccard(A, C).
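A direct Python implementation of the coefficient, handy for checking the exercise (the empty-set convention below is an assumption the slide leaves open):

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard coefficient |A ∩ B| / |A ∪ B| of two sets."""
    if not a and not b:
        return 0.0  # assumed convention for two empty sets
    return len(a & b) / len(a | b)

# Sanity checks matching the properties above:
A = {"cat", "dog", "rat"}
print(jaccard(A, A))        # 1.0
print(jaccard(A, {"owl"}))  # 0.0 (disjoint sets)
```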
Jaccard coefficient: Scoring example
 Exercise: What is the query-document match score
that the Jaccard coefficient computes for each of the
two documents below?
 Query: ides of march
 Document 1: caesar died in march
 Document 2: the long march
Issues with Jaccard for scoring
 It doesn’t consider term frequency (how many times
a term occurs in a document)
 Rare terms in a collection are more informative than
frequent terms. Jaccard doesn’t consider this
information
 We need a more sophisticated way of normalizing for
length
Recall (Lecture 1): Binary term-document incidence matrix
           Antony and Cleopatra  Julius Caesar  The Tempest  Hamlet  Othello  Macbeth
Antony           1                    1              0          0       0        1
Brutus           1                    1              0          1       0        0
Caesar           1                    1              0          1       1        1
Calpurnia        0                    1              0          0       0        0
Cleopatra        1                    0              0          0       0        0
mercy            1                    0              1          1       1        1
worser           1                    0              1          1       1        0

Each document is represented by a binary vector ∈ {0,1}^|V| !
Term-document count matrices
 Consider the number of occurrences of a term in a
document*:
           Antony and Cleopatra  Julius Caesar  The Tempest  Hamlet  Othello  Macbeth
Antony         157                   73              0          0       0        0
Brutus           4                  157              0          1       0        0
Caesar         232                  227              0          2       1        1
Calpurnia        0                   10              0          0       0        0
Cleopatra       57                    0              0          0       0        0
mercy            2                    0              3          5       5        1
worser           2                    0              1          1       1        0

Each document is represented by a count vector ∈ ℕ^|V| !
*Recall from Ch. 1 that term frequency can be stored with a document in the inverted index.
Bag of words model
 Vector representation doesn’t consider the ordering
of words in a document
 John is quicker than Mary and Mary is quicker than
John have the same vectors
 This is called the bag of words model.
 In a sense, this is a step back: The positional index
was able to distinguish these two documents.
 We will look at “recovering” positional information
later in this course.
 For now: bag of words model
Term frequency tf
 The term frequency tft,d of term t in document d is
defined as the number of times that t occurs in d.
 We want to use tf when computing query-document
match scores. But how?
 Raw term frequency is not what we want:
 A document with 10 occurrences of the term is more
relevant than a document with 1 occurrence of the term.
 But not 10 times more relevant!
 Relevance does not increase proportionally with
term frequency.
Log-frequency weighting
 The log frequency weight of term t in d is:
wt,d = 1 + log10(tft,d)   if tft,d > 0
wt,d = 0                  otherwise
 0 → 0, 1 → 1, 2 → 1.3, 10 → 2, 1000 → 4, etc.
 Score for a document-query pair: sum over terms t in both q and d:
score(q, d) = ∑t∈q∩d (1 + log10 tft,d) = ∑t∈q∩d wt,d
 The score is 0 if none of the query terms is present in the document.
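A one-line Python rendering of this weight reproduces the mapping above:

```python
from math import log10

def log_tf_weight(tf: int) -> float:
    """w_t,d = 1 + log10(tf) if tf > 0, else 0."""
    return 1 + log10(tf) if tf > 0 else 0.0

for tf in (0, 1, 2, 10, 1000):
    print(tf, round(log_tf_weight(tf), 1))  # 0→0, 1→1, 2→1.3, 10→2, 1000→4
```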
Document frequency
 Rare terms are more informative than frequent terms
 Recall stop words
 Consider a term in the query that is rare in the
collection (e.g., capricious) vs. a term that is frequent
(e.g., person)
 A document containing this term “capricious” is very
likely to be relevant to a query containing “capricious”
 → We want a high weight for rare terms like capricious.
Document frequency, continued
 Frequent terms are less informative than rare terms
 Consider a query term that is frequent in the
collection (e.g., high, increase, line)
 A document containing such a term is more likely to
be relevant than a document that doesn’t
 But it’s not a sure indicator of relevance.
 → For frequent terms, we want high positive weights
for words like high, increase, and line
 But lower weights than for rare terms.
 We will use document frequency (df) to capture this.
idf weight
 dft is the document frequency of t: the number of documents that contain t
 dft is an inverse measure of the informativeness of t
 dft ≤ N
 We define the idf (inverse document frequency) of t by
idft = log10(N/dft)
 We use log(N/dft) instead of N/dft to “dampen” the effect of idf.
 It will turn out that the base of the log is immaterial.
idf example, suppose N = 1 million
term       dft        idft
calpurnia  1          6
animal     100        4
sunday     1,000      3
fly        10,000     2
under      100,000    1
the        1,000,000  0

idft = log10(N/dft)
There is one idf value for each term t in a collection.
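The table's idf values follow directly from the formula; a short sketch reproducing them:

```python
from math import log10

N = 1_000_000  # collection size from the slide

def idf(df: int) -> float:
    """idf_t = log10(N / df_t)."""
    return log10(N / df)

for term, df in [("calpurnia", 1), ("animal", 100), ("sunday", 1_000),
                 ("fly", 10_000), ("under", 100_000), ("the", 1_000_000)]:
    print(f"{term:<10} df={df:>9,}  idf={idf(df):.0f}")
```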
Effect of idf on ranking
 Question: Does idf have an effect on ranking for one-term queries, like
 iPhone
 idf has no effect on ranking one-term queries
 idf affects the ranking of documents for queries with at
least two terms
 For the query capricious person, idf weighting makes
occurrences of capricious count for much more in the final
document ranking than occurrences of person.
Collection vs. Document frequency
 The collection frequency of t is the number of
occurrences of t in the collection, counting
multiple occurrences.
 Example:

Word       Collection frequency  Document frequency
insurance  10440                 3997
try        10422                 8760
 Which word is a better search term (and should
get a higher weight)?
tf-idf weighting
 The tf-idf weight of a term is the product of its tf weight and its idf weight:
wt,d = (1 + log10 tft,d) × log10(N/dft)
 Best known weighting scheme in information retrieval
 Note: the “-” in tf-idf is a hyphen, not a minus sign!
 Alternative names: tf.idf, tf × idf
 Increases with the number of occurrences within a document
 Increases with the rarity of the term in the collection
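Combining the two components gives the weight directly; a minimal sketch (the example numbers are illustrative):

```python
from math import log10

def tf_idf(tf: int, df: int, N: int) -> float:
    """w_t,d = (1 + log10 tf_t,d) * log10(N / df_t); 0 if the term is absent."""
    if tf == 0:
        return 0.0
    return (1 + log10(tf)) * log10(N / df)

# E.g., a term occurring 10 times in a doc, in 1,000 of 1,000,000 docs:
print(tf_idf(tf=10, df=1_000, N=1_000_000))  # (1 + 1) * 3 = 6.0
```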
Final ranking of documents for a query
Score(q, d) = ∑t∈q∩d tf-idft,d
Binary → count → weight matrix
           Antony and Cleopatra  Julius Caesar  The Tempest  Hamlet  Othello  Macbeth
Antony         5.25                  3.18           0          0       0        0.35
Brutus         1.21                  6.1            0          1       0        0
Caesar         8.59                  2.54           0          1.51    0.25     0
Calpurnia      0                     1.54           0          0       0        0
Cleopatra      2.85                  0              0          0       0        0
mercy          1.51                  0              1.9        0.12    5.25     0.88
worser         1.37                  0              0.11       4.15    0.25     1.95

Each document is now represented by a real-valued vector of tf-idf weights ∈ ℝ^|V|
Exercises
 Exercise 6.8: Why is the idf of a term always finite?
 Exercise 6.9: What is the idf of a term that occurs in every document? Compare this with the use of stop word lists.
 Exercise 6.10: Consider the following table of term frequencies for 3 docs:

term       dft     idft  Doc1  Doc2  Doc3
car        18,165  1.65  27    4     24
auto       6,723   2.08  3     33    0
insurance  19,241  1.62  0     33    29
best       25,235  1.50  14    0     17

Compute the tf-idf weights for the terms car, auto, insurance, and best for each doc, using the given idf values.
 Exercise 6.11: Can the tf-idf weight of a term in a doc exceed 1?
 Exercise 6.12: How does the base of the logarithm in idft = log(N/dft) affect the score calculated by Score(q, d) = ∑t∈q∩d tf-idft,d? How does it affect the relative scores of two docs on a given query?
 Exercise 6.13: If the logarithm in idft = log(N/dft) is computed base 2, suggest a simple approximation of the idf of a term.
Documents as vectors
 So we have a |V|-dimensional vector space
 Terms are axes of the space
 Documents are points or vectors in this space
 Very high-dimensional: tens of millions of dimensions when you apply this to a web search engine
 These are very sparse vectors – most entries are zero.
Queries as vectors
 Key idea 1: Do the same for queries: represent them
as vectors in the space
 Key idea 2: Rank documents according to their
proximity to the query in this space
 proximity = similarity of vectors
 proximity ≈ inverse of distance
 Recall: We do this because we want to get away from
the you’re-either-in-or-out Boolean model.
 Instead: rank more relevant documents higher than
less relevant documents
Formalizing vector space proximity
 First cut: distance between two points
 ( = distance between the end points of the two vectors)
 Euclidean distance?
 Euclidean distance is a bad idea . . .
 . . . because Euclidean distance is large for vectors of
different lengths.
Why distance is a bad idea
The Euclidean distance between q and d2 is large even though the distribution of terms in the query q and the distribution of terms in the document d2 are very similar. [Figure omitted.]
Use angle instead of distance
 Thought experiment: take a document d and append
it to itself. Call this document d′.
 “Semantically” d and d′ have the same content
 The Euclidean distance between the two documents
can be quite large
 The angle between the two documents is 0,
corresponding to maximal similarity.
 Key idea: Rank documents according to angle with
query.
From angles to cosines
 The following two notions are equivalent.
 Rank documents in decreasing order of the angle between
query and document
 Rank documents in increasing order of
cosine(query,document)
 Cosine is a monotonically decreasing function on the interval [0°, 180°]
From angles to cosines
 But how – and why – should we be computing cosines?
Length normalization
 A vector can be (length-) normalized by dividing each of its components by its length – for this we use the L2 norm:
‖x‖2 = √(∑i xi²)
 Dividing a vector by its L2 norm makes it a unit (length) vector (on the surface of the unit hypersphere)
 Effect on the two documents d and d′ (d appended to itself) from the earlier slide: they have identical vectors after length-normalization.
 Long and short documents now have comparable weights
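In code, L2 normalization is a single division per component. The toy vectors below are illustrative; note that d′ (every component of d doubled, as when d is appended to itself) normalizes to the same unit vector as d.

```python
from math import sqrt

def l2_normalize(v):
    """Divide each component by the vector's L2 norm, yielding a unit vector."""
    norm = sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

d = [3.0, 4.0]        # toy 2-term document vector
d_prime = [6.0, 8.0]  # d appended to itself: every component doubles
print(l2_normalize(d))        # [0.6, 0.8]
print(l2_normalize(d_prime))  # [0.6, 0.8] -- identical after normalization
```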
cosine(query,document)
cos(q, d) = (q · d) / (|q||d|) = (q/|q|) · (d/|d|) = ∑i=1..|V| qidi / ( √(∑i=1..|V| qi²) √(∑i=1..|V| di²) )

q · d is the dot product of the two vectors; q/|q| and d/|d| are unit vectors.
qi is the tf-idf weight of term i in the query.
di is the tf-idf weight of term i in the document.
cos(q, d) is the cosine similarity of q and d … or, equivalently, the cosine of the angle between q and d.
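The formula translates directly into code. A sketch over dense vectors (toy weights; a real system iterates only over nonzero entries):

```python
from math import sqrt

def cosine(q, d) -> float:
    """cos(q, d) = (q . d) / (|q| |d|) for vectors of tf-idf weights."""
    dot = sum(qi * di for qi, di in zip(q, d))
    norm_q = sqrt(sum(qi * qi for qi in q))
    norm_d = sqrt(sum(di * di for di in d))
    return dot / (norm_q * norm_d)

# Toy 3-term vocabulary; the weights are illustrative, not from the slides.
q = [1.0, 0.0, 1.0]
d = [2.0, 3.0, 0.0]
print(round(cosine(q, d), 3))  # 0.392
```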
Cosine for length-normalized vectors
 For length-normalized vectors, cosine similarity is simply the dot product (or scalar product):
cos(q, d) = q · d = ∑i=1..|V| qidi
for q, d length-normalized.
Cosine similarity illustrated [figure omitted]
Cosine similarity amongst 3 documents
How similar are the novels
SaS: Sense and Sensibility,
PaP: Pride and Prejudice, and
WH: Wuthering Heights?

Term frequencies (counts):

term       SaS  PaP  WH
affection  115  58   20
jealous    10   7    11
gossip     2    0    6
wuthering  0    0    38

Note: To simplify this example, we don’t do idf weighting.
3 documents example contd.
Log frequency weighting:

term       SaS   PaP   WH
affection  3.06  2.76  2.30
jealous    2.00  1.85  2.04
gossip     1.30  0     1.78
wuthering  0     0     2.58

After length normalization:

term       SaS    PaP    WH
affection  0.789  0.832  0.524
jealous    0.515  0.555  0.465
gossip     0.335  0      0.405
wuthering  0      0      0.588

cos(SaS,PaP) ≈ 0.789 × 0.832 + 0.515 × 0.555 + 0.335 × 0.0 + 0.0 × 0.0 ≈ 0.94
cos(SaS,WH) ≈ 0.79
cos(PaP,WH) ≈ 0.69

Why do we have cos(SaS,PaP) > cos(SaS,WH)?
SaS and PaP were both written by Jane Austen, WH by Emily Brontë.
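The whole pipeline on this example fits in a few lines; the sketch below recomputes the three cosines from the raw counts in the previous slide.

```python
from math import log10, sqrt

counts = {  # term frequencies from the previous slide
    "SaS": {"affection": 115, "jealous": 10, "gossip": 2, "wuthering": 0},
    "PaP": {"affection": 58,  "jealous": 7,  "gossip": 0, "wuthering": 0},
    "WH":  {"affection": 20,  "jealous": 11, "gossip": 6, "wuthering": 38},
}

def log_weight_and_normalize(tf):
    """Apply log-frequency weighting, then L2-normalize the resulting vector."""
    w = {t: 1 + log10(c) if c > 0 else 0.0 for t, c in tf.items()}
    norm = sqrt(sum(x * x for x in w.values()))
    return {t: x / norm for t, x in w.items()}

vecs = {doc: log_weight_and_normalize(tf) for doc, tf in counts.items()}

def cos(a, b):  # dot product suffices: the vectors are length-normalized
    return sum(vecs[a][t] * vecs[b][t] for t in vecs[a])

print(round(cos("SaS", "PaP"), 2))  # 0.94
print(round(cos("SaS", "WH"), 2))   # 0.79
print(round(cos("PaP", "WH"), 2))   # 0.69
```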
Computing cosines
Suppose the terms in the query q are t1, t2, …, tn. E.g., if q = “cat dog cat rat”, then n = 3 and t1 = cat, t2 = dog, t3 = rat. Suppose the document is d.
Now, q = (0, …, 0, wt1,q, 0, …, 0, wt2,q, …, 0, …, 0, wtn,q, 0, …, 0)
and d = (x, …, x, wt1,d, x, …, x, wt2,d, …, x, …, x, wtn,d, x, …, x),
where the 0’s are weights of terms not in q and the x’s are the corresponding weights in d.
Now, q · d = wt1,q × wt1,d + wt2,q × wt2,d + … + wtn,q × wtn,d
and
cos(q, d) = ( wt1,q × wt1,d + wt2,q × wt2,d + … + wtn,q × wtn,d ) / ( |q| × |d| ),
where |q| = Length(q) = √( w²t1,q + w²t2,q + … + w²tn,q ) and
|d| = Length(d) = √( ∑t over all terms in d w²t,d )
Computing cosines example
E.g., if q = “cat dog cat rat”, then n = 3 and t1 = cat, t2 = dog, t3 = rat. Suppose the document is d = “cat cat cat dog goat dog horse”.
Then,
cos(q, d) = ( wcat,q × wcat,d + wdog,q × wdog,d + wrat,q × wrat,d ) / ( |q| × |d| ),
where |q| = Length(q) = √( w²cat,q + w²dog,q + w²rat,q ) and
|d| = Length(d) = √( w²cat,d + w²dog,d + w²goat,d + w²horse,d )
Computing cosine scores
(Notes on the cosine-scoring algorithm; the algorithm figure itself is not reproduced here.)
 No need to divide by Length[q], as it is the same for all docs!
 Can traverse the postings lists one term at a time – which is called term-at-a-time scoring. Or can traverse them concurrently, as in the INTERSECT algorithm of Ch. 1 – which is called document-at-a-time scoring.
 No need to store the weights per doc per postings list: they can be computed on the fly from the dft value at the head of the postings list and the tft,d value in the doc.
 Priority queue = heap!
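A term-at-a-time sketch of the scoring loop in Python, under assumptions: a toy in-memory index mapping each term to (docID, tf) postings, query idf computed from the df at the head of each list, and dummy precomputed document lengths.

```python
import heapq
from math import log10

# Toy in-memory index: term -> postings list of (docID, tf) pairs.
index = {
    "cat": [(0, 3), (2, 1)],
    "dog": [(0, 2), (1, 4), (2, 2)],
}
N = 3                      # number of docs in the toy collection
length = [1.0, 1.0, 1.0]   # assume precomputed doc lengths (dummy values here)

def cosine_score(query_terms, k=2):
    scores = [0.0] * N
    for t in query_terms:               # term-at-a-time: one postings list at a time
        postings = index.get(t, [])
        df = len(postings)
        if df == 0:
            continue
        w_tq = log10(N / df)            # query weight from the df at the list head
        for doc, tf in postings:
            w_td = 1 + log10(tf)        # doc weight computed on the fly from tf_t,d
            scores[doc] += w_tq * w_td  # accumulate a partial score per doc
    for d in range(N):
        scores[d] /= length[d]          # normalize by doc length; |q| can be skipped
    return heapq.nlargest(k, range(N), key=scores.__getitem__)  # top k via a heap

print(cosine_score(["cat", "dog"]))  # docIDs of the two best-scoring docs
```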
tf-idf weighting has many variants
[Table of SMART weighting variants omitted.]
Why is the base of the log in idf immaterial?
Weighting may differ in queries vs documents
 Many search engines allow for different weightings for queries vs. documents
 SMART notation: denotes the combination in use in an engine, with the notation ddd.qqq, using the acronyms from the previous table
 A very standard weighting scheme is: lnc.ltc
 Document: logarithmic tf (l, first character), no idf (n, second character – a bad idea?), cosine normalization (c, third character)
 Query: logarithmic tf (l, first character), idf (t, second character), cosine normalization (c, third character)
tf-idf example: lnc.ltc
Document: car insurance auto insurance
Query: best car insurance

                        Query                           Document
Term       tf-raw  tf-wt  df     idf  wt   n’lize   tf-raw  tf-wt  wt   n’lize   Prod
auto       0       0      5000   2.3  0    0        1       1      1    0.52     0
best       1       1      50000  1.3  1.3  0.34     0       0      0    0        0
car        1       1      10000  2.0  2.0  0.52     1       1      1    0.52     0.27
insurance  1       1      1000   3.0  3.0  0.78     2       1.3    1.3  0.68     0.53

N, the number of docs = 1,000,000
Doc length = √(1² + 0² + 1² + 1.3²) ≈ 1.92
Score = 0 + 0 + 0.27 + 0.53 = 0.8
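A sketch reproducing the table's arithmetic end to end (lnc weighting for the document, ltc for the query):

```python
from math import log10, sqrt

N = 1_000_000
df = {"auto": 5000, "best": 50000, "car": 10000, "insurance": 1000}
query_tf = {"best": 1, "car": 1, "insurance": 1}
doc_tf = {"car": 1, "insurance": 2, "auto": 1}

def ltf(tf):  # logarithmic term frequency (the "l" in lnc/ltc)
    return 1 + log10(tf) if tf > 0 else 0.0

terms = sorted(df)
# Query: logarithmic tf, idf ("t"), cosine normalization ("c").
q_wt = {t: ltf(query_tf.get(t, 0)) * log10(N / df[t]) for t in terms}
q_len = sqrt(sum(w * w for w in q_wt.values()))
# Document: logarithmic tf, no idf ("n"), cosine normalization ("c").
d_wt = {t: ltf(doc_tf.get(t, 0)) for t in terms}
d_len = sqrt(sum(w * w for w in d_wt.values()))   # ≈ 1.92, as above

score = sum((q_wt[t] / q_len) * (d_wt[t] / d_len) for t in terms)
print(round(score, 2))  # 0.8, matching the table
```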
Exercises
 Exercise 6.14: If we were to stem jealous and jealousy to a common stem before setting up the vector space, detail how the definitions of tf and idf should be modified.
 Exercise 6.15: Recall the tf-idf weights computed in Exercise 6.10. Compute the Euclidean normalized document vectors for each of the docs, where each vector has four components, one for each of the four terms.
 Exercise 6.16: Verify that the sum of the squares of the components of each of the document vectors in Exercise 6.15 is 1 (to within rounding error). Why?
 Exercise 6.17: With term weights as computed in Exercise 6.15, rank the three documents by computed score for the query car insurance, for each of the following cases of term weighting in the query:
 The weight of a term is 1 if present in the query, 0 otherwise.
 Euclidean normalized idf.
 Exercise 6.18: One measure of the similarity of two vectors x and y is the Euclidean (or L2) distance between them:
|x – y| = √( ∑i=1..m (xi – yi)² )
Given a query q and docs d1, d2, …, we may rank the docs di in order of increasing distance from q. Show that if q and the di are all normalized to unit vectors, then the rank ordering produced by Euclidean distance is identical to that produced by cosine similarity.
Exercises
 Exercise 6.19: Compute the vector space similarity between the query “digital cameras” and the document “digital cameras and video cameras” by filling out the empty columns in the table below.

                  query                            document
word     tf  wf  df       idf  qi = wf-idf   tf  wf  di = normalized wf   qi · di
digital          10,000
video            100,000
cameras          50,000

Assume N = 10,000,000, logarithmic term weighting (wf columns) for query and doc, idf weighting for the query only, and cosine normalization for the doc only. Treat and as a stop word. Enter term counts in the tf columns. What is the final similarity score?
Exercises
 Exercise 6.20: Show that for the query affection, the relative ordering of the scores of the three docs in the table below is the reverse of the ordering for the query jealous gossip.

term       SaS    PaP    WH
affection  0.996  0.993  0.847
jealous    0.087  0.120  0.466
gossip     0.017  0      0.254

 Exercise 6.21: In turning a query into a unit vector in the table above, we assigned equal weights to each of the query terms. What other principled approaches are plausible?
 Exercise 6.22: Consider the case of a query term that is not in the set of M indexed terms; thus, our standard construction of the query vector results in V(q) not being in the vector space created from the collection. How would one adapt the vector space construction to handle this case?
 Exercise 6.23: Refer to the tf and idf values for the four terms and three docs in Exercise 6.10. Compute the two top-scoring docs on the query best car insurance for each of the following weighting schemes: (i) nnn.atc, (ii) ntc.atc.
 Exercise 6.24: Suppose the word coyote does not occur in the collection used in Exercises 6.10 and 6.23. How would one compute ntc.atc scores for the query coyote insurance?
Summary – vector space ranking
 Represent the query as a weighted tf-idf vector
 Represent each document as a weighted tf-idf vector
 Compute the cosine similarity score for the query
vector and each document vector
 Rank documents with respect to the query by score
 Return the top K (e.g., K = 10) to the user
Resources for today’s lecture
 IIR 6.2 – 6.4.3
 http://www.miislita.com/information-retrieval-tutorial/cosine-similarity-tutorial.html
 Term weighting and cosine similarity tutorial for SEO folk!