
Understanding Google Answers
KSE 801
Uichin Lee
Outline
• Earnings and Ratings at Google Answers (Edelman 2004)
– General statistics, what do askers value, experience and learning on the job, hourly wages, specialization
• Knowledge Market Design: A Field Experiment at Google Answers (Chen et al. 2010)
– Effects of prices, tips, and reputation on answer quality and answerers' effort
• The Incentive Structure in an Online Information Market (Raban 2008)
– Effects of monetary/non-monetary incentives
Google Answers (GA)
• Online knowledge market offered by Google that allowed users to post bounties for well-researched answers to their queries.
• Launched in April 2002; came out of beta in May 2003.
• Asker-accepted answers cost $2 to $200.
• Google retained 25% of the researcher's (answerer's) reward, plus a 50-cent listing fee per question (see the payout sketch after this list).
• In addition to the researcher's fee, a client who was satisfied with the answer could also leave a tip of up to $100.
• In late November 2006, Google announced that it would permanently shut down the service; it was fully closed to new activity by late December 2006, although its archives remain available.
• GA was still receiving more than 100 question postings per day when the service ended in December 2006.
From Wikipedia: http://en.wikipedia.org/wiki/Google_Answers
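As a quick illustration of the fee arithmetic above, here is a minimal Python sketch; the function name and the assumption that tips pass through to the researcher in full are ours, not Google's documented behavior:

def researcher_payout(list_price: float, tip: float = 0.0) -> float:
    # Take-home pay for one answered question, per the fee structure
    # above: Google keeps 25% of the asker-set price ($2-$200); the
    # 50-cent listing fee is charged to the asker, not the researcher.
    # We assume the optional tip (up to $100) passes through in full.
    if not 2.0 <= list_price <= 200.0:
        raise ValueError("GA question prices ranged from $2 to $200")
    if not 0.0 <= tip <= 100.0:
        raise ValueError("GA tips were capped at $100")
    return 0.75 * list_price + tip

print(researcher_payout(20.0, tip=10.0))  # 25.0: $15 fee share + $10 tip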
• Researchers (called Google Answers Researchers, or GARs)
– Not Google employees, but contractors who had to complete an application process to be approved to answer questions on the site.
– GARs were limited in number (~500).
– The application process tested their research and communication abilities.
– Researchers with low ratings could be fired, a policy that encouraged eloquence and accuracy.
• To answer a question, a researcher logged into a special researchers' page and "locked" the question they wanted to answer.
• This act of "locking" claimed the question for that researcher.
– Questions worth less than $100 could be locked for up to four hours at a time.
– Questions worth more than $100 could be locked for up to eight hours at a time, so that they could be properly answered (see the lock-rule sketch after this list).
– A researcher could lock only one question at a time (occasionally two).
From Wikipedia: http://en.wikipedia.org/wiki/Google_Answers
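The locking rules reduce to a simple threshold on price. A minimal sketch, assuming the shorter window applies at exactly $100 (the slide only specifies "less than" and "more than"):

def max_lock_hours(price: float) -> int:
    # Per the rules above: under $100 -> up to 4 hours per lock;
    # over $100 -> up to 8 hours. The slide does not specify exactly
    # $100; we assume the shorter window.
    return 8 if price > 100 else 4

print(max_lock_hours(50), max_lock_hours(150))  # 4 8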
Earnings and Ratings at Google Answers
Benjamin Edelman
Harvard Business School
Economic Inquiry, 2004
Descriptive Stats of GA (Edelman 2004; data from April 2002 – Nov. 2003)
• 43% of questions answered
• 63% of answers rated
• 21% of answers tipped
Number of Answers per GAR (Google Answers Researcher)
• Approximately follows a power-law distribution
[Figure: log-log plot of answers per researcher, with a small head of frequent/active GARs and a long tail of occasional/less active GARs]
(Raban 2008)
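A standard way to check the power-law claim is to plot answer counts against researcher rank on log-log axes and look for a straight line. A sketch with synthetic Zipf-distributed counts standing in for the real GAR data:

import numpy as np

# Synthetic Zipf-distributed answer counts standing in for ~500 GARs.
rng = np.random.default_rng(0)
answers_per_gar = np.sort(rng.zipf(a=2.0, size=500))[::-1]

# Under a power law, log(count) vs. log(rank) is roughly linear.
ranks = np.arange(1, len(answers_per_gar) + 1)
slope, _ = np.polyfit(np.log(ranks), np.log(answers_per_gar), 1)
print(f"log-log slope ~ {slope:.2f}")  # steep head, long tail, as in the figure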
Time Difference: Posting and Answer
[Figure: distribution of posting-to-answer delay in minutes. Reference points: 480 minutes = 8 hours; 1,440 minutes = one day; 10,000 minutes ≈ 6.9 days (questions expire after one week)]
What Do Askers Value?
• Regression of ratings and tips on answer length, number of URLs, and time spent per answer
• Longer answers elicit better ratings/tips (though the improvement is slight)
• Number of URLs and answer length are highly correlated
• Askers who waited longer for an answer are less likely to assign a top rating
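These findings amount to regressing ratings (or tips) on answer characteristics. A hedged sketch with statsmodels, using hypothetical data and column names (not Edelman's actual specification):

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-answer data; column names are ours, not Edelman's.
answers = pd.DataFrame({
    "rating":    [5, 4, 5, 3, 5, 4, 4, 5],                      # asker's 1-5 rating
    "length":    [900, 400, 1200, 250, 1500, 600, 700, 1100],   # characters
    "n_urls":    [6, 2, 8, 1, 10, 3, 4, 7],                     # URLs cited
    "wait_mins": [120, 900, 60, 2000, 45, 300, 400, 90],        # posting-to-answer delay
})

# Expected signs, per the findings above: length and n_urls slightly
# positive, wait_mins negative.
model = smf.ols("rating ~ length + n_urls + wait_mins", data=answers).fit()
print(model.params)

Because answer length and URL count are highly correlated, in practice one would drop one of the two regressors or report them separately.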
Experience and Learning on the Job
• Labor market literature: on-the-job learning plays a significant role in developing worker skills and facilitating worker productivity (Jovanovic and Nyarko 1996)
• An answer from a researcher with 10 or more prior answers is 0.28% more likely to be rated a 5 (and earns about 2.5 cents more in tips)
• Selection effect: higher-quality answerers enjoy higher ratings and elect to participate more, or for longer
• Researchers earn about $7.63 per hour on average, with experience accounting for only about a $0.02 difference
Specialization
• Specialization results in lower pay per hour
• Specialized answerers may forgo opportunities in other fields (few questions are available within any one specialty)
• Askers feel better when they get answers from "experts" (and give more favorable ratings)
Day of Week, Hour of Day
• Relative lack of questions over the weekend
• Mondays have the highest pay per minute (relative lack of answerers on Mondays)
• Questions posted from 8–10 PM get the fastest responses
• Answers made during business hours receive better ratings/tips (tip rates: 15% vs. 8%)
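Findings like these come from grouping answers by posting time. A minimal pandas sketch with hypothetical records and column names:

import pandas as pd

# Hypothetical per-answer records; columns are ours, not Edelman's.
df = pd.DataFrame({
    "posted_at": pd.to_datetime([
        "2003-03-03 09:15", "2003-03-03 21:10",
        "2003-03-08 14:00", "2003-03-10 20:30",
    ]),
    "pay_per_min": [0.40, 0.25, 0.18, 0.45],
    "tipped": [True, False, False, True],
})

df["weekday"] = df["posted_at"].dt.day_name()
df["hour"] = df["posted_at"].dt.hour

print(df.groupby("weekday")["pay_per_min"].mean())  # e.g., a Monday premium
print(df.groupby("hour")["tipped"].mean())          # tip rate by hour of day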
Knowledge Market Design: A Field
Experiment at Google Answers
Yan Chen, Teck-Hua Ho, Yong-Mi Kim
School of Information
University of Michigan
Journal of Public Economic Theory, 2010
Understanding Answer Quality
• Hypothesis 1 (Reciprocity: effort)
– A question with a higher price generates an answer involving more effort.
• Hypothesis 2 (Reciprocity: quality)
– A question with a higher price generates an answer with better quality.
• Hypothesis 3 (Tip: effort)
– A conditional (or unconditional) tip generates an answer involving more effort than a fixed-price $20 question.
• Hypothesis 4 (Tip: quality)
– A conditional (or unconditional) tip produces a higher-quality answer than a fixed-price $20 question.
• Hypothesis 5 (Unconditional vs. Conditional Tips)
– An unconditional tip produces a better answer than a conditional tip.
• Hypothesis 6 (Reputation)
– Answerers with higher reputation will provide better answers.
Methods
• Treatment settings:
– $20 fixed price
– $30 fixed price
– $20 plus an unconditional $10 tip
– $20 plus a conditional $10 tip
• 100 question-answer pairs from the IPL (Internet Public Library), a non-profit organization that provides a question-answering reference service
– Selected questions require roughly 10–30 minutes of work by knowledge workers (see the assignment sketch after this list)
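The posting protocol (four questions per day, one per treatment) can be sketched as follows; the shuffling and the function name are our assumptions, since the slide does not describe how questions were assigned to treatments:

import random

TREATMENTS = ["$20 fixed price", "$30 fixed price",
              "$20 + $10 unconditional tip", "$20 + $10 conditional tip"]

def daily_batches(question_ids, seed=0):
    # Assign 100 questions to 25 days, 4 per day, one per treatment.
    # The within-day posting order is shuffled so that treatment is
    # not confounded with time of day (our assumption).
    rng = random.Random(seed)
    ids = list(question_ids)
    rng.shuffle(ids)
    for day, start in enumerate(range(0, len(ids), 4), 1):
        batch = list(zip(ids[start:start + 4], TREATMENTS))
        rng.shuffle(batch)
        yield day, batch

# Example: print the first day's postings.
for day, batch in daily_batches(range(1, 101)):
    if day == 1:
        print(batch)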
Methods
• For each question-answer pair, human raters provide ratings using the following metrics:
– Difficulty (1: very easy, 5: very difficult)
– Factors (1: strongly disagree, 5: strongly agree, N/A):
• The question that was asked is answered.
• The answer is thorough, addressing all question parts.
• The sources cited are credible and authoritative.
• The links provided are to relevant web sites or pages.
• Information in the cited sources is summarized.
• Only information pertinent to the question is presented.
• The answer is well-organized and written clearly, avoiding jargon and/or inappropriate language.
– Quality of the answer (1: very low, 5: very high)
Methods
• Information science students rated the question-answer pairs
– The 100 IPL question-answer pairs used for the GA experiments
– 25 of the 100 GA question-answer pairs (for comparison with askers' ratings)
– Compared with askers' ratings on Google Answers, the raters tend to give lower ratings, but the correlation is very high (0.847); see the sketch after this list
• Sent the 100 IPL questions to Google Answers in July, Oct., and Nov. 2005
– Each day, 4 questions were sent to GA (one from each treatment)
• Result: 76 of the 100 questions were answered
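The 0.847 figure is a correlation between the student raters' scores and the original askers' ratings on the overlapping pairs. A sketch of that check, assuming Pearson correlation (the slide does not specify) and made-up numbers:

import numpy as np

# Made-up scores for a handful of overlapping question-answer pairs.
rater_means = np.array([3.8, 4.2, 3.5, 4.9, 4.0, 3.2])  # students' mean scores
asker_stars = np.array([4.0, 5.0, 4.0, 5.0, 5.0, 3.5])  # askers' GA ratings

r = np.corrcoef(rater_means, asker_stars)[0, 1]
print(f"Pearson r = {r:.3f}")                    # the paper reports 0.847
print(rater_means.mean() < asker_stars.mean())   # raters tend to score lower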
Results
• Higher price leads to longer answers
• Answerers with higher reputation provide longer answers
[Table: effects of price, tips, and reputation on answer length and on the number of questions answered]
Results
• Higher prices have no significant effect on answer quality (very different from Harper et al. 2008)
• Tips have no significant effect on answer quality
• Answerers with higher reputation provide higher-quality answers
The Incentive Structure in an
Online Information Market
Daphne Ruth Raban
Journal of the American Society for Information Science and Technology, 59(14): 2284–2295, December 2008
Incentives
• Intangible incentives:
– Intrinsic motivation; e.g., self-efficacy (the activity itself is rewarding), altruism (enjoyment of helping others), warm glow (the good feeling from giving)
– Ratings, conversation (comments), expansion of social capital
• Tangible incentives:
– Monetary or economic incentives
• A tale of two markets (Heyman and Ariely 2004):
– Monetary markets: output is proportional to monetary incentives
– Social markets (altruism): output is insensitive to incentive size (e.g., candy)
– Mixed markets tend to behave like monetary markets (e.g., once the candy's price is revealed, the market acts like a monetary one)
• Crowding-out effect (Frey & Goette, 1999): monetary incentives reduced the motivation to do volunteer work
Incentives
• Linkage between motivations/incentives and willingness to share knowledge
– Intrinsic motivation for knowledge sharing within organizations (Osterloh and Frey 2000)
• Incentives are effective for sharing explicit knowledge, but they crowd out the intrinsic motivation for sharing tacit knowledge
– Korean managers: social norms (positive impact) vs. extrinsic incentives (negative impact) (Bock et al., 2005)
• Intangible incentives increase contribution, fidelity, commitment, and sense of belonging in an information-sharing system
• In online systems, people anticipate feedback/recognition from others; in information sharing, this behavior is called "social approval and observational cooperation" (Cheshire 2007)
Goals
• Independent variables:
– Tangible incentives: price, tips
– Intangible incentives: ratings, comments
• Comments are posted before a formal answer is submitted
• Dependent variable: number of answers per answerer (GAR)
• H1: Both tangible and intangible incentives will have a positive effect on the number of answers provided by an answerer on the Google Answers Web site.
• H2: Tangible and intangible incentives will contribute differently for frequent versus occasional answerers in predicting the number of answers provided (see the regression sketch after this list).
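Since the dependent variable is a count (answers per GAR), one natural specification is a Poisson regression of answer counts on the tangible and intangible incentives. A sketch with statsmodels and hypothetical per-answerer aggregates; this is our illustration, not Raban's reported model:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-answerer aggregates; column names are ours.
gars = pd.DataFrame({
    "n_answers":  [120, 3, 45, 1, 210, 8, 60, 15],              # dependent variable
    "avg_price":  [25, 12, 30, 5, 40, 10, 22, 18],              # tangible: price
    "avg_tip":    [4.0, 0.0, 2.5, 0.0, 6.0, 0.5, 3.0, 1.0],     # tangible: tips
    "avg_rating": [4.8, 4.0, 4.5, 3.0, 4.9, 4.2, 4.6, 4.1],     # intangible: ratings
    "n_comments": [300, 2, 80, 0, 500, 10, 120, 25],            # intangible: comments
})

model = smf.poisson(
    "n_answers ~ avg_price + avg_tip + avg_rating + n_comments", data=gars
).fit()
print(model.params)  # positive coefficients would support H1

H2 could then be probed by interacting each incentive with a frequent-vs-occasional indicator, or by fitting the model separately for the two groups.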
Results
• Both tangible and intangible incentives have a positive impact on the number of answers per answerer (supporting H1)
• Other potential hidden variables (which could not be drawn from the dataset): personal gratification, sense of belonging to a community
Summary
• Askers value longer answers, quick responses, and reliable sources
• Answerers learn over time (receiving higher ratings and earning slightly more)
• Answerers tend to specialize in a few topics (which may lower their actual hourly wage)
• Financial incentives improve the quantity but not the quality of answers (answer quality is related to the answerer's reputation)
• Tangible (monetary) and intangible incentives both have a positive impact on the number of answers per answerer