RELEVANCE
in information science
Tefko Saracevic, PhD
tefkos@rutgers.edu
http://comminfo.rutgers.edu/~tefko/articles.htm
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 3.0 United States License
1
Fundamental concepts
Every scholarly field has a fundamental, basic notion, concept, idea ... or a few.

Relevance is a fundamental concept or notion in information science.
2
Two large questions*
Why? (Part I)
Why did relevance become a central notion of information science?

What? (Part II)
What did we learn about relevance through research in information science?

* URLs and references are in Notes – accessible after download
3
Relevance definitions
“1 a: relation to the matter at hand (emphasis added)
   b: practical and especially social applicability : pertinence <giving relevance to college courses>
 2: the ability (as of an information retrieval system) to retrieve material that satisfies the needs of the user.”
4
What is “matter at hand”?
• Context in relation to which:
  • a question is asked
  • an information need is expressed as a query
  • a problem is addressed
  • interaction is conducted
• No such thing as relevance without a context
• Axiom: One cannot not have a context in information interaction.

Relevance is ALWAYS contextual
5
Relevance – by any other name...
• Many names connote relevance, e.g.: pertinent; useful; applicable; significant; germane; material; bearing; proper; related; important; fitting; suited; apropos; ...
  • & nowadays even truthful
• “A rose by any other name would smell as sweet” – Shakespeare, Romeo and Juliet
• Connotations may differ but the concept is still relevance
6
Two worlds in information science
• IR systems offer as answers their version of what may be relevant
  • by ever improving algorithms
• People go their way & assess relevance
  • by their problem at hand, context & criteria
• The two worlds interact

Covered here: human world of relevance
NOT covered: how IR deals with relevance
7
Part I
WHY RELEVANCE?
8
Bit of history
• Vannevar Bush (1890-1974): article “As we may think”, 1945
  • Defined the problem as “... the massive task of making more accessible of a bewildering store of knowledge.”
  • problem still with us & growing
  • Suggested a solution, a machine: “Memex ... association of ideas ... duplicate mental processes artificially.”
  • Technological fix to problem
9
Information Retrieval (IR) – definition
Term “information retrieval” coined & defined by Calvin Mooers (1919-1994), 1951:
“IR: ... intellectual aspects of description of information, ... and its specification for search ... and systems, technique, or machines ... [to provide information] useful to user”
10
Technological determinant
• In IR emphasis was not only on organization but even more on searching
  • technology was suitable for searching
  • in the beginning information organization was done by people & searching by machines
  • nowadays information organization mostly by machines (sometimes by humans as well) & searching almost exclusively by machines
11
Two important pioneers
Hans Peter Luhn (1896-1964)
• at IBM pioneered many IR computer applications
• first to describe searching using Venn diagrams

Mortimer Taube (1910-1965)
• at Documentation Inc. pioneered coordinate indexing
• first to describe searching as Boolean algebra (see the sketch below)
12
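A minimal sketch (not from the slides) of the coordinate-index, Boolean-style searching Taube pioneered: an AND query is a set intersection over posting lists, an OR query a union. The toy inverted index, terms, and document IDs below are invented for illustration.

    # Minimal sketch of coordinate-index (Boolean) searching over a toy
    # inverted index; terms and document IDs are illustrative only.
    from functools import reduce

    inverted_index = {          # term -> set of document IDs containing it
        "relevance": {1, 2, 5},
        "retrieval": {2, 3, 5},
        "indexing":  {1, 4},
    }

    def search_and(*terms):
        """AND query: documents containing every term (set intersection)."""
        postings = [inverted_index.get(t, set()) for t in terms]
        return reduce(set.intersection, postings) if postings else set()

    def search_or(*terms):
        """OR query: documents containing any of the terms (set union)."""
        return set().union(*(inverted_index.get(t, set()) for t in terms))

    print(search_and("relevance", "retrieval"))   # -> {2, 5}
    print(search_or("relevance", "indexing"))     # -> {1, 2, 4, 5}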
Searching & relevance
• Searching became a key component of information retrieval
  • extensive theoretical & practical concern with searching
  • technology uniquely suitable for searching
• And searching is about retrieval of relevant answers

Thus RELEVANCE emerged as a key notion
13
Aboutness in librarianship
• Key notion for bibliographic classifications, subject headings, indexing languages
  • used in organizing inf. records – goes back centuries
  • choice of a given classification code, subject heading, index term ... denotes what a document (or part) is all about
• Searching is assumed but not addressed
  • a given, taken for granted
14
A bit of history – assumptions related to searching

Charles Ammi Cutter (1837-1903)
• In “Rules for a Dictionary Catalog” (1876, 1904) defined “Objects” – objectives of a catalog – “to enable a person to find ... to show what a library has ... to assist in choice ...”

IFLA 1998, 2009 defined FRBR (Functional Requirements for Bibliographic Records)
• “four generic user tasks ... in relation to the elementary uses that are made of the data by the user: ... Find, Identify, Select, Obtain”
• essentially the same as Cutter’s
15
Why relevance?
Aboutness
• A fundamental notion related to organization of information
• Relates to subject & in a broader sense to epistemology

Relevance
• A fundamental notion related to searching for information
• Relates to problem-at-hand and context & in a broader sense to pragmatism

Relevance emerged as a central notion in information science because of practical & theoretical concerns with searching
16
Part II
WHAT HAVE WE LEARNED ABOUT RELEVANCE?
17
Claims & counterclaims in IR
• Historically from the outset: “My system is better than your system!”
• Well, which one is it? A: Let’s test it. But:
  • what criterion to use?
  • what measure(s) based on the criterion?
• Things got settled by the end of the 1950s and remain mostly the same to this day
18
Relevance & IR testing
• In 1955 Allen Kent (1921- ) & James W. Perry (1907-1971) were first to propose two measures for testing IR systems:
  • “relevance” (later renamed “precision”) & “recall”
• A scientific & engineering approach to testing
19
Relevance as criterion for measures
Precision
• Probability that what is retrieved is relevant
  • conversely: how much junk is retrieved?

Recall
• Probability that what is relevant in a file is retrieved
  • conversely: how much relevant stuff is missed?

Probability of agreement between what the system retrieved/not retrieved as relevant (system relevance) & what the user assessed as relevant (user relevance), where user relevance is the gold standard for comparison (a set-based sketch of the two measures follows below)
20
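A minimal set-based sketch of the two measures as they are conventionally defined (the formulas are the standard textbook ones, not quoted from the slide); the document IDs are invented for illustration.

    # Precision & recall computed against user judgments as the gold standard.
    def precision(retrieved: set, relevant: set) -> float:
        """Share of retrieved documents the user judged relevant."""
        return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

    def recall(retrieved: set, relevant: set) -> float:
        """Share of user-judged relevant documents that were retrieved."""
        return len(retrieved & relevant) / len(relevant) if relevant else 0.0

    retrieved = {1, 2, 3, 4, 5}     # what the system returned (system relevance)
    relevant  = {2, 4, 5, 9, 11}    # what the user assessed as relevant (user relevance)

    print(precision(retrieved, relevant))   # 3/5 = 0.6 -> "how much junk?"
    print(recall(retrieved, relevant))      # 3/5 = 0.6 -> "how much missed?"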
First test – law of unintended consequences
• Mid-1950s test of two competing systems:
  • subject headings by Armed Services Tech Inf Agency
  • uniterms (keywords) by Documentation Inc.
  • 15,000 documents indexed by each group, 98 questions searched
  • but relevance judged by each group separately
• Results:
  • first group: 2,200 relevant
  • second: 1,998 relevant
  • but low agreement
• Then peace talks
  • but even after these talks agreement came to 30.9% (one simple way to compute such agreement is sketched below)
• Test collapsed on relevance disagreements

Learned: Never, ever use more than a single judge per query. Since then to this day IR tests don’t.
21
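One simple way to quantify agreement between two groups' relevance judgments is set overlap (intersection over union). This is only an illustrative assumption, not necessarily how the 1950s study arrived at its 30.9% figure; the document sets below are invented.

    # Overlap agreement between two judges' sets of relevant documents.
    def overlap(judge_a: set, judge_b: set) -> float:
        """Documents judged relevant by both, as a fraction of those judged
        relevant by either."""
        union = judge_a | judge_b
        return len(judge_a & judge_b) / len(union) if union else 1.0

    group_1 = {1, 2, 3, 4, 5, 6}    # docs group 1 called relevant (illustrative)
    group_2 = {4, 5, 6, 7, 8, 9}    # docs group 2 called relevant (illustrative)

    print(overlap(group_1, group_2))    # 3/9 ≈ 0.33 -> low agreement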
Cranfield tests 1957-1967
Cyril Cleverdon (1914-1997)
• Funded by NSF
• Controlled testing: different indexing languages, same documents, same relevance judgments
• Used traditional IR model – non-interactive
• Many results, some surprising
  • e.g. simple keywords “high ranks on many counts”
• Developed Cranfield methodology for testing
  • still in use today, incl. in TREC – started in 1992, still strong in 2013
22
Tradeoff in recall vs. precision
Cleverdon’s law
• Generally, there is a tradeoff:
  • recall can be increased by retrieving more, but precision decreases
  • precision can be increased by being more specific, but recall decreases
• Some users want high precision, others high recall
• Example from TREC: (recall–precision graph on the slide, not reproduced here; a small numeric sketch of the tradeoff follows below)
23
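A small numeric sketch of Cleverdon's tradeoff (the ranked list and relevant set are invented, not TREC data): as the cutoff k grows, recall can only rise while precision tends to fall.

    # Precision & recall at increasing retrieval cutoffs k.
    ranked   = [3, 7, 1, 9, 4, 8, 2, 6, 5, 10]   # system's ranked output
    relevant = {3, 1, 4}                          # user-judged relevant docs

    for k in (1, 3, 5, 10):                       # retrieve the top-k documents
        top_k = set(ranked[:k])
        hits = len(top_k & relevant)
        print(f"k={k:2d}  precision={hits/k:.2f}  recall={hits/len(relevant):.2f}")

    # k= 1  precision=1.00  recall=0.33
    # k= 3  precision=0.67  recall=0.67
    # k= 5  precision=0.60  recall=1.00
    # k=10  precision=0.30  recall=1.00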
Relevance experiments
• First experiments reported in 1960 & 1961
  • by an IBM group
  • compared effects of various representations on relevance judgments
• Over the years about 300 or so experiments
• Little funding
  • only two funded by a US agency (1967)
• A variety of factors in human judgments of relevance addressed
24
Assumptions in Cranfield methodology
• IR, and thus relevance, is static (traditional IR model)
• Further: relevance is:
  • topical
  • binary
  • independent
  • stable
  • consistent
  • if pooling: complete (pooling is sketched below)
• Inspired relevance experimentation on every one of these assumptions
• Main finding: none of them holds
  • but these simplified assumptions enabled rich IR tests and many improvements
25
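A hedged sketch of pooling as used in Cranfield/TREC-style evaluation: the judgment pool is the union of the top-k documents from every participating system, only pooled documents are judged, and anything outside the pool is treated as not relevant (hence the "complete" assumption). The system runs below are invented for illustration.

    # Build a judgment pool from the top-k documents of several system runs.
    POOL_DEPTH = 3

    runs = {                                  # system -> ranked document IDs
        "system_A": [12, 7, 3, 9, 15],
        "system_B": [7, 21, 12, 4, 8],
        "system_C": [5, 12, 7, 30, 2],
    }

    pool = set()
    for ranking in runs.values():
        pool.update(ranking[:POOL_DEPTH])     # union of every run's top-k

    print(sorted(pool))   # [3, 5, 7, 12, 21] -> only these go to the assessors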
IR & relevance: static vs. dynamic
Q: Do relevance inferences & criteria change over time for the same user & task? A: They do.
• For a given task, user’s inferences are dependent on the stage of the task:
  • different stages = differing selections, but different stages = similar criteria = different weights
  • increased focus = increased discrimination = more stringent relevance inferences

IR & relevance inferences are highly dynamic processes
26
Experimental results
Topical
• Topicality: very important but not exclusive role. Cognitive, situational, affective variables play a role, e.g. user background (cognitive); task complexity (situational); intent, motivation (affective)

Binary
• Continuum: users judge not only binary (relevant – not relevant), but on a continuum & comparatively.
• Bi-modality: seems that assessments have high peaks at end points of the range (not relevant, relevant) with smaller peaks in the middle range

Independent
• Order: the order in which documents are presented to users seems to have an effect.
• Near beginning: seems that documents presented early have a higher probability of being inferred as relevant.
27
Experimental results (cont.)
Stable
• Time: relevance judgments are not completely stable; they change over time as tasks progress & learning advances
• Criteria: criteria for judging relevance are fairly stable

Consistent
• Expertise: higher = higher agreement, fewer differences; lower = lower agreement, more leniency
• Individual differences: the most prominent feature & factor in relevance inferences. Experts agree up to 80%; others around 30%

If pooling: Complete
• Number of judges: more judges = less agreement (if only a sample of a collection or a pool from several searches is evaluated)
• Additions: with more pools or increased sampling, more relevant objects are found
28
Other experiments: clues – on what basis & criteria do users make relevance judgments?

Content
• topic, quality, depth, scope, currency, treatment, clarity

Object
• characteristics of information objects, e.g., type, organization, representation, format, availability, accessibility, costs

Validity
• accuracy of information provided, authority, trustworthiness of sources, verifiability
29
Matching – on what basis & criteria do users make relevance judgments to match their context?

Use or situational match
• appropriateness to situation or tasks, usability, urgency; value in use

Cognitive match
• understanding, novelty, mental effort

Affective match
• emotional responses to information, fun, frustration, uncertainty

Belief match
• personal credence given to information, confidence
30
Major general finding & conclusion from relevance experiments

Relevance is measurable
• became part of general experimentation related to human information behavior
31
In conclusion
• Information technology & systems will change dramatically
  • even in the short run
  • and in unforeseeable directions
• But relevance is here to stay!
  • and relevance has many faces – some unusual
32
...... different technology...
33
and relevance in its use
34
Unusual [relevant] services: Library therapy dogs
U Michigan, Ann Arbor, Shapiro Library
35
Seed lending at public libraries
36
Thank you
for inviting me!
37
Presentation in Wordle
38