Running head: EXPERT VERSUS PEER REVIEWS

DRAFT, February 11, 2009
Preferences for Expert Versus Peer Reviews on the Internet
L. Mark Carrier a, Nancy A. Cheever b, and Larry D. Rosen a
a Department of Psychology, California State University-Dominguez Hills, 1000 E. Victoria St., Carson, CA 90747, United States
b Department of Communications, California State University-Dominguez Hills, 1000 E. Victoria St., Carson, CA 90747, United States
E-mail addresses: L. Mark Carrier (lcarrier@csudh.edu), Nancy A. Cheever (ncheever@csudh.edu),
Larry D. Rosen (lrosen@csudh.edu).
Corresponding Author: L. Mark Carrier, Telephone: 00-1-310-243-3499; FAX: 00-1-310-516-3642.
Preferences for Expert Versus Peer Reviews on the Internet
An online questionnaire queried consumer preferences for online expert versus peer reviews at
different types of websites and measured consumer reactions to artificial expert and peer
reviews. The data showed that negative reviews were more important and more helpful than
positive ones and that website type affected preferences. The results suggest a model in which
consumers discount products or services with negative reviews, seek peer-provided information
as a substitute for direct physical contact with products that normally require experience to
evaluate, and seek expert-provided information for evaluating products whose full attributes can
be ascertained from technical information about the products.
Keywords: Electronic Word-of-Mouth, Online Reviews, Consumer Decisions, Internet,
Consumer Choice
Preferences for Expert Versus Peer Reviews on the Internet
1. Introduction
Online consumers have access to a variety of opinions about products, including those by
the sellers themselves, unbiased experts, and peers (Crabtree, 2007; Voight, 2007), and evidence
shows that shoppers make use of this online review information (Freedman, 2008; Senecal &
Nantel, 2004; Westerman, Tuck, Booth, & Khakzar, 2007) and that customer word-of-mouth
affects purchasing behavior (Chevalier & Mayzlin, 2006; Hu, Liu, & Zhang, 2008; Riegner,
2007). Online reviews of products or services could benefit shoppers for several reasons. First,
reviews might match customers’ natural inclinations to seek out additional information when
using the computer to shop (Schlosser, 2003). Second, online “recommendation agents” might
reduce information overload in the user’s cognitive system (Hu et al., 2008; Swaminathan,
2003). Third, user- and expert-generated reviews might foster website “trust,” leading to later
purchasing behavior (Sillence & Briggs, 2007). Fourth, online reviews by consumers provide
both informational and product popularity data that impact purchaser behavior (Park, Lee, &
Han, 2007).
2. Peer versus expert online reviews
Peer and expert reviews represent two different sources of online information for
consumers (Swaminathan, 2003). Peer reviews are tied to subjective customer word-of-mouth
and expert reviews are tied to mostly technical seller-generated information (Chen & Xie, 2008).
Little research has examined how consumers value these separate information sources and
under what conditions they value them differently. Cheung, Lee, and Rabjohn (2008) hinted that
the difference between peers and experts as sources of reviews might have only a minimal effect
upon customer reactions to reviews. On the other hand, all else being equal, consumers prefer
books recommended online by peers over books recommended by experts (Chen, 2008).
2.1. Linking product type to preference for peer review
All information about “search” products can be acquired prior to purchase (e.g., a
portable stereo), whereas “experience” products are those for which the consumer must have
experience with the product in order to acquire all relevant information about it (e.g., a song
download from the Internet) (Klein, 1998; Nelson, 1970). Websites offering peer and expert
reviews vary in the types of products offered. Experience products may be associated with a need
for subjective information about the products and search products may be associated with a need
for technical information. Thus, a consumer’s preference for peer versus expert reviews may
depend upon whether technical information is being sought or subjective information is being
sought, leading to the following hypothesis.
H1: Variation in the type of website upon which reviews are offered will affect
consumers’ preferences for peer versus expert reviews.
2.2. The role of user experience
Increased experience with a website is likely to be accompanied by increased
experience with the types of products or services it offers. Increased experience with a type of
product or service could contribute to a consumer’s mental database of subjective knowledge
about that type of product or service, resulting in less of a need for seeking out subjective
information from others. Therefore, the following hypothesis was generated.
H2: Users with increased experience at a website will show reduced preference for peer
reviews.
2.3. The role of review valence
Prior research suggests that the valence (negative versus positive) of online reviews
contributes to the value of reviews for online consumers, although the research results are mixed.
Positive reviews have a stronger effect than negative reviews on whether consumers heed advice
from movie reviews (Gershoff, Mukherjee, & Mukhopadhyay, 2003). In contrast, user evaluations of the
value of reviews are higher for negative reviews than for positive reviews (Park & Lee, 2009).
Because the review valence might contribute to the evaluation of review usefulness by
consumers, the valence of artificial reviews was manipulated in the present study, and the
following research question was generated.
RQ1: What is the effect of review valence upon consumers’ preferences for peers versus
experts?
An online survey was used to assess Internet users’ preferences for online reviews that
varied in their source (peer versus expert), their valence (negative or positive), and their subject
(different types of products and services). Respondents were asked to evaluate hypothetical
reviews from experts or from peers. Respondents indicated how valuable the reviews appeared to
be. Additionally, respondents rated their preference for peer (over expert) reviews on various
types of websites.
3. Method
3.1. Participants
A convenience sample of 414 respondents was formed in Spring 2007 by students in an
upper-division General Education course at a four-year university in Southern California.
Potential respondents were contacted through informal channels via the students. Respondents’
ages ranged from 18 to 55 years old. The largest age group was 18-24 years old (38% of
participants) and the next largest age group was 25-30 years old (25%). The respondents were
mostly females (58%). The ethnic backgrounds of the respondents were Asian (9%), Black
(25%), Hispanic (33%), White (27%), and other (6%).
3.2. Materials
Respondents indicated how often they used each of several different online information
sources: movie reviews, technology reviews of products costing less than $100, technology
reviews of products costing more than $100, book reviews, health issues, professor ratings,
Wikipedia, hotel ratings and evaluations, travel site evaluations, video hosting sites such as
YouTube, and blogs about politics. For each source, participants indicated their frequency of use:
1 = “At least a few times per week,” 2 = “About once a week,” 3 = “A few times a month,” 4 =
“About once a month,” 5 = “I rarely use it,” and 6 = “I never use it.” (“At least a few times per
week” and “About once a week” were later combined into “Weekly” to balance the cell sample
sizes.) Participants also indicated preferences for expert commentary versus peer commentary on
each online information source (1 = “Only Experts,” 2 = “Mostly Experts,”, 3 = “Equally
Experts and Peers,” 4 = “Mostly Peers,” and 5 = “Only Peers”).
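As a concrete illustration of the recoding step described above, a minimal sketch follows (not the authors' code; the DataFrame layout, column name, and example values are hypothetical stand-ins for the survey export):

import pandas as pd

# Hypothetical raw responses on the 1-6 frequency scale; one column
# per information source, only one source shown here
raw = pd.DataFrame({"movie_reviews": [1, 2, 5, 3, 6, 2]})

labels = {
    1: "Weekly",              # "At least a few times per week" and
    2: "Weekly",              # "About once a week" collapsed to balance cells
    3: "A few times a month",
    4: "About once a month",
    5: "I rarely use it",
    6: "I never use it",
}
recoded = raw.replace(labels)
print(recoded["movie_reviews"].value_counts())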
Artificial review scenarios presented negative and positive hotel and professor reviews
differing by type of author (peer versus expert) (Table 1). The review presentation order started
with the professor reviews. After reading each review, participants rated its helpfulness and
importance in their decision-making process on a 5-point scale, from most important or helpful
(1) to least important or helpful (5). After each review pair, subjects chose which author (peer
versus expert) was most valuable in making their decision.
Last, participants read an artificial online review of a book called "Professional Chef 30-Minute Recipes" by Carol Laschober, and rated the helpfulness and importance of the review in
two different situations, one identified as a peer (jackson9999@yahoo.com) and the second
identified as an expert (foodeditor@hawthornepress.com). Helpfulness was rated on a 1 (“Very
Helpful”) to 4 (“Very Unhelpful”) scale. Importance was rated on a 1 (“Very Important”) to 4
(“Very Unimportant”) scale. Participants also chose which reviewer they would “trust” more.
3.3. Procedure
The General Education students solicited the participants. Respondents had to be at least
18 years of age and regular Internet users (at least five hours per week). A link was provided
to the questionnaire posted on SurveyMonkey.com, an online provider of survey
services. The participants remained anonymous and no incentive was provided for participation.
4. Results
4.1. Effects of demographic variables
Correlations between preferences and age, sex, and education level were not statistically
significant, with two exceptions: on Wikipedia and on video hosting websites, males were more
likely than females to prefer experts. Given these largely non-significant relationships,
demographics were not considered further.
4.2. Preference for peer reviews by source
A between-subjects 11 X 4 analysis of variance (ANOVA) was performed on the
preferences with information source as the first factor and frequency of use as the second factor
(Figure 1). There was a significant main effect of information source, F(10, 3275) = 76.53, p <
0.001, indicating that the overall preferences for peers versus experts depended upon the type of
information source (Table 2). The information sources formed five groups (post-hoc analyses
using Tukey’s B test). Health Issues websites showed the strongest preference for experts,
followed by Wikipedia. A next group of information sources, including Tech Reviews > $100,
Political Blogs, and Tech Reviews < $100, showed a smaller preference for experts. The next
group of sources—Travel Sites, Hotel Ratings, Book Reviews, and Movie Reviews—showed
close to no preference for either experts or peers. A last group of websites that included Video
Hosting and Professor Ratings showed the strongest preferences for peers. The frequency of
visits to an information source also affected the preference for peers versus experts,
F(3, 3275) = 15.13, p < 0.001. Increasing frequency of use was related to increased preference
for experts in nearly all information sources (Figure 1). Website and frequency did not interact,
F(30, 3275) = 1.19, p = 0.224.
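A minimal sketch of this kind of analysis in Python (statsmodels) is given below; it is not the authors' script, the synthetic data are hypothetical stand-ins, and Tukey's HSD is shown because statsmodels does not implement Tukey's B:

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic stand-in data: one row per rating, with an information
# source (11 levels) and a frequency-of-use group (4 levels)
rng = np.random.default_rng(0)
sources = [f"source_{i}" for i in range(11)]
freqs = ["Weekly", "Few/month", "Once/month", "Rarely"]
n = 3000
df = pd.DataFrame({
    "source": rng.choice(sources, n),
    "frequency": rng.choice(freqs, n),
})
df["preference"] = rng.normal(3, 1, n)  # 1 = experts ... 5 = peers

# 11 X 4 between-observations ANOVA with interaction
model = ols("preference ~ C(source) * C(frequency)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Post-hoc comparison of the 11 information sources
print(pairwise_tukeyhsd(df["preference"], df["source"]))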
4.3. Artificial reviews
Two hundred and ninety-three participants (70.8% of the sample) completed the artificial
review items. The helpfulness ratings for each professor and hotel review scenario were
subjected to a 2 X 2 X 2 within-subjects ANOVA with valence (negative versus positive), author
(peer versus expert), and category (professor versus hotel) as factors (Figure 2). Negative
reviews were rated more helpful than positive reviews, F(1, 291) = 10.45, p < 0.005, peers were
more helpful than experts, F(1, 291) = 15.06, p < 0.001, and category and valence interacted
with each other, F(1, 291) = 7.06, p < 0.01. The effect of the negative reviews was reduced for
the professor reviews. The main effect of category was not significant, F(1, 291) = 0.067, p =
0.796, nor was the interaction between category and author, F(1, 291) = 0.512, p = 0.475, nor the
interaction between valence and author, F(1, 291) = 3.37, p = 0.072, nor the three-way
interaction between the factors, F(1, 291) = 0.177, p = 0.674.
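For readers who want to run this kind of within-subjects analysis, a minimal sketch using statsmodels' AnovaRM follows (not the authors' script; the simulated ratings and column names are hypothetical, and the data must be in long format with one row per participant per condition):

import itertools
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic long-format stand-in: every participant rates all
# 2 x 2 x 2 combinations of valence, author, and category
rng = np.random.default_rng(0)
rows = []
for subj in range(293):
    for val, auth, cat in itertools.product(
            ["negative", "positive"], ["peer", "expert"],
            ["professor", "hotel"]):
        rows.append({"subject": subj, "valence": val, "author": auth,
                     "category": cat,
                     "rating": rng.integers(1, 6)})  # 1-5 rating scale
long_df = pd.DataFrame(rows)

res = AnovaRM(long_df, depvar="rating", subject="subject",
              within=["valence", "author", "category"]).fit()
print(res)  # F and p for each main effect and interaction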
A 2 X 2 X 2 ANOVA with valence, author, and category as factors examined the
artificial review importance ratings (Figure 3). Negative reviews were deemed more important
than positive reviews, F(1,291) = 62.61, p < 0.001, peers again were preferred over experts,
F(1,291) = 21.90, p < 0.001, and there was a significant interaction between valence and author,
F(1, 291) = 4.77, p < 0.05, showing that the effect of peer versus expert was reduced within
the negative reviews. Further, hotel ratings were viewed as more important than professor
ratings, F(1, 291) = 6.14, p < 0.05. The interaction between category and valence was not
significant, F(1, 291) = 0.324, p = 0.570, nor was the interaction between category and author,
F(1, 291) = 0.081, p = 0.776, nor the three-way interaction between the factors, F(1, 291) =
0.022, p = 0.882.
For the negative professor review, 29.7% of the participants chose the expert review as
most valuable and 70.3% chose the peer review. For the positive review, 22.2% chose the expert
review and 77.8% chose the peer review. The preference for peers did not differ depending on
valence, 2(1) = 0.04, p = 0.85. Thus, three measures of preference for peer versus expert
reviews show a preference for peers, although the effect is slightly reduced with ratings of
importance of negative reviews.
4.4. Book review scenario
The expert review (M = 1.72; SD = 0.71) was significantly more helpful than the peer
review (M = 2.19; SD = 0.85); t(292) = 8.81, p < 0.001. This result contrasts with the professor
and hotel reviews, in which the peers were found to be more helpful than the experts. A similar
pattern was found for the importance ratings, where the peer as reviewer (M = 2.35; SD = 0.85)
was rated as significantly less important than the expert (M = 1.95; SD = 0.75), t(292) = 7.76, p < 0.001.
Two hundred and forty-two people chose the expert as the reviewer to trust most, while only 51
people chose the peer. Thus, the expert as reviewer was more trusted than the peer as reviewer;
2(1) = 124.51, p < 0.001.
5. General discussion, implications, and limitations
5.1. General discussion
This study measured consumer preferences for peer versus expert reviews in online
settings. Preferences for peer reviews were hypothesized to vary by the type of information
source and experience with an information source was hypothesized to increase the preference
for expert reviews. Additionally, a research question asked about the role of review valence in
preferences for peers versus experts. The study results showed that preferences for peers varied
by the kind of information source (Table 2) and those preferences were somewhat consistent for
information sources and artificial reviews, supporting Hypothesis 1. More experienced users
showed a stronger preference for expert reviews over peer reviews, supporting Hypothesis 2. Finally,
there was a pattern of preference for negative over positive reviews (Research Question 1).
5.1.1. Theoretical explanations
Consumers might prefer experts for their objective knowledge about a product or service
such as medical information or encyclopedic information queries, but prefer peers for their
subjective knowledge associated with experience products (e.g., taking a college course).
Negative reviews were preferred over positive reviews in both helpfulness and importance
ratings, suggesting that participants were using what Reed (2004) defined as a non-compensatory
decision-making model: “a strategy that rejects alternatives that have negative attributes without
considering positive attributes” (p. 356). The finding that experienced Internet users tend to
prefer experts over peers suggests that repeated use of an information source increases the
weight of expert reviews in the decision-making process. Through repeated exposure to a class of
products, a user might accumulate individualized subjective experience and then seek
objective information. For example, a user who has purchased enough books online to learn
about her own book preferences could then focus on objective information—hence, expert
reviews—that contains factual information about a book and maximizes the chances that her next
book purchase matches her preferences. The development of “preference structures” of product
categories that represent the consumer’s evolving preferences within that product category
(Gershoff et al., 2003) could underlie this effect.
A different search strategy might be employed when subjective information is being
sought than when objective information is being sought. The data showed a smaller preference
for negative over positive reviews when subjective information was being sought (i.e., for
professor ratings, where peers were preferred over experts), hinting at a decision-making strategy
balancing positive and negative information (i.e., a compensatory approach).
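To make the contrast between the two strategies concrete, a minimal sketch follows (the products and attribute scores are hypothetical, not from the study):

# Attribute scores per product; negative values are "negative attributes"
products = {
    "Hotel A": [2, 3, -1],
    "Hotel B": [1, 1, 1],
    "Hotel C": [3, 3, -1],
}

def non_compensatory(options):
    # Reject any alternative with a negative attribute,
    # without considering its positive attributes (Reed, 2004)
    return [name for name, attrs in options.items()
            if all(a >= 0 for a in attrs)]

def compensatory(options):
    # Trade positives off against negatives by summing scores
    return max(options, key=lambda name: sum(options[name]))

print(non_compensatory(products))  # ['Hotel B']: C rejected for one flaw
print(compensatory(products))      # 'Hotel C': its positives outweigh the flaw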
5.2. Limitations
The sample used in this study might not be representative of all Internet users. The
majority of the sample was young (18 to 30 years old) and the respondents were sampled from
the Southern California region. This region is known for having a relatively large Hispanic
population, as well as a relatively large proportion of persons who are immigrants or children of
recent immigrants into the United States.
5.3. Implications
The results of the present study have implications for consumers and for sellers in the
online environment. For consumers, it appears that positive reviews might be undervalued, so
paying more attention to positive reviews could help optimize decision-making outcomes. For
online businesses, one implication of the present results is that there
should be product alternatives offered for products with negative reviews, since consumers
appear to be influenced by such reviews. Another implication is that the reviews posted (peer or
expert) should match the type of information that consumers are likely to be seeking.
References
Chen, Y. (2008). Herd behavior in purchasing books online. Computers in Human Behavior, 24,
1977-1992.
Chen, Y., & Xie, J. (2008). Online consumer review: Word-of-mouth as a new element of
marketing communication mix. Management Science, 54(3), 477-491.
Cheung, C. M. K., Lee, M. K. O., & Rabjohn, N. (2008). The impact of electronic word-of-mouth: The adoption of online opinions in online customer communities. Internet
Research, 18(3), 229-247.
Chevalier, J. A., & Mayzlin, D. (2006). The effect of word of mouth on sales: Online book
reviews. Journal of Marketing Research, 43, 345-354.
Crabtree, P. (November 25, 2007). Staying power. The San Diego Union Tribune. [retrieved
from the World-Wide Web at
http://ww.signonsandiego.com/uniontrib/20071125/news_lz1b25stay.html on April 10,
2008]
Freedman, L. (February, 2008). Merchant and customer perspectives on customer reviews and
user-generated content. Published online by the e-tailing group and Power Reviews.
[retrieved from the World-Wide Web at http://www.etailing.com/graphics/2008_WhitePaper_0204_4FINAL.pdf on July 2, 2008]
Gershoff, A. D., Mukherjee, A., & Mukhopadhyay, A. (2003). Consumer acceptance of online
agent advice: Extremity and positive effects. Journal of Consumer Psychology, 13(1&2),
161-170.
Hu, N., Liu, L., & Zhang, J. J. (2008). Do online reviews affect product sales? The role of
reviewer characteristics and temporal effects. Information Technology and Management, 9,
201-214.
Klein, L. R. (1998). Evaluating the potential of interactive media through a new lens: Search
versus experience goods. Journal of Business Research, 41, 196–203.
Nelson, P. (1970). Information and consumer behavior. Journal of Political Economy, 78, 311–
329.
Park, C., & Lee, T. M. (2009). Information direction, website reputation and eWOM effect: A
moderating role of product type. Journal of Business Research, 62, 61-67.
Park, D-H., Lee, J., & Han, I. (2007). The effect of on-line consumer reviews on consumer
purchasing intention: The moderating role of involvement. International Journal of
Electronic Commerce, 11(4), 125-148.
Reed, S. K. (2004). Cognition: Theory and Applications (Sixth Edition). Belmont, CA:
Wadsworth/Thomson Learning.
Riegner, C. (2007). Word of mouth on the Web: The impact of Web 2.0 on consumer purchase
decisions. Journal of Advertising Research, 47(4), 436-447.
Schlosser, A. E. (2003). Computers as situational cues: Implications for consumers’ product
cognitions and attitudes. Journal of Consumer Psychology, 13(1&2), 103-112.
Senecal, S., & Nantel, J. (2004). The influence of online product recommendations on
consumers’ online choices. Journal of Retailing, 80, 159-169.
Sillence, E., & Briggs, P. (2007). Please advise: Using the Internet for health and financial
advice. Computers in Human Behavior, 23, 727-748.
Swaminathan, V. (2003). The impact of recommendation agents on consumer evaluation and
choice: The moderating role of category risk, product complexity, and consumer
knowledge. Journal of Consumer Psychology, 13(1&2), 93-101.
Voight, J. (November 12, 2007). Shoppers want more customer reviews. Adweek. [retrieved
from the World-Wide Web at
http://www.adweek.com/aw/national/article_display.jsp?vnu_content_id=1003671155 on
April 10, 2008]
Westerman, S. J., Tuck, G. C., Booth, S. A., & Khakzar, K. (2007). Consumer decision support
systems: Internet versus in-store application. Computers in Human Behavior, 23, 2928-2944.
Table 1. Artificial professor reviews.
Positive review by expert (a professor)
I observed Professor Jones several times this semester in his History 101
class. He gives clear lectures that include good handouts and information
displayed on a screen in front of the class. His syllabus is clear and sets all
dates for tests. His tests are fair and not too easy nor too difficult. He does
give weekly homework which requires several hours outside of class per
week.
Positive review by peer (a student)
I took History 101 from Professor Jones. I found him pretty easy to listen to
and enjoyed the lectures. It was helpful that he used both paper handouts
and slides. His tests were difficult but if you study you will do well. There
is a lot of homework but mostly it just requires time. It is not difficult.
Overall, I recommend that you take Dr. Jones for this class.
Negative review by expert
I observed Professor Smith several times this semester in his Political
Science 100 class. Overall, I found it difficult to follow his lectures. He
would start out with one idea and then mid-way through that, start talking
about something else that may or may not be relevant to that topic. His
syllabus did list dates for the midterm and final but there was no other
information given about these tests. The tests themselves were, in my
opinion, pretty difficult and the grades were not very good.
Negative review by peer
I took Political Science 101 from Professor Smith last semester. This guy
can’t seem to stay on topic. He starts talking about something and then
jumps to another topic and then back to the first. Very hard to follow. His
tests were much harder than I expected and I don’t think people did very
well on them. Overall, I would stay away from him. He is too confusing and
too difficult.
NOTE: Artificial hotel reviews were similarly constructed.
Table 2.
Mean Preference Ratings for Experts Versus Peers

Information Source            Mean   SE
Health Issues                 1.93   0.05
Wikipedia                     2.35   0.06
Technology Reviews > $100     2.61   0.05
Political Blogs               2.67   0.06
Technology Reviews < $100     2.76   0.05
Travel Site Evaluations       2.98   0.04
Hotel Ratings                 3.00   0.05
Book Reviews                  3.03   0.05
Movie Reviews                 3.15   0.05
Video Hosting                 3.57   0.05
Professor Ratings             3.60   0.08

Note. All ratings used the following scale: 1 = Only Experts, 2 = Mostly Experts, 3 = Equally Experts and Peers, 4 = Mostly Peers, 5 = Only Peers.
Figure Captions
Figure 1. Preference data for the different information sources ranging from 1 (only experts) to 5
(only peers). The lines represent different subsets of users categorized on the basis of their
frequency of use of the information sources.
Figure 2. Average participant ratings of helpfulness of artificially-constructed reviews of
professors and hotels.
Figure 3. Average importance ratings of artificial hotel and professor reviews provided to
participants.
FIGURE 1. [Line graph: preference for experts (1) versus peers (5) by website (the 11 information sources), with separate lines for the Weekly, A Few Times a Month, About Once a Month, and I Rarely Use It user groups.]
FIGURE 2. [Line graph: helpfulness ratings (1 = Helpful, 4 = Unhelpful) by writer (Expert, Peer), with separate lines for Professor Review - Bad, Professor Review - Good, Hotel Review - Bad, and Hotel Review - Good.]
FIGURE 3. [Line graph: importance ratings (1 = Important, 4 = Unimportant) by writer (Expert, Peer), with separate lines for Professor Review - Bad, Professor Review - Good, Hotel Review - Bad, and Hotel Review - Good.]