To Each His Own?
How Comparisons to Others Influence Consumer Self-Design
C. PAGE MOREAU
KELLY B. HERD*
* C. Page Moreau is Associate Professor of Marketing, Leeds School of Business, University of
Colorado, UCB 419, Boulder, CO 80309; email: page.moreau@colorado.edu. Kelly B. Herd is a
doctoral student in marketing at the Leeds School of Business, University of Colorado, UCB
419, Boulder, CO 80309; email: kelly.herd@colorado.edu. The authors would like to thank Don
Lehmann, Paul Herr, and Caleb Warren for their helpful comments on earlier drafts of this
manuscript.
The vast majority of consumer behavior research has examined how consumers respond to
products that are offered on a “take it or leave it” basis by the manufacturer. Self-design changes
the rules substantially, allowing consumers to have much more control over the product’s
characteristics. This research examines the factors influencing consumers' evaluations of self-designed products and their behaviors during and following the self-design experience. Three
studies demonstrate that a superior fit between consumers’ underlying preferences and their
customized products cannot fully explain self-design evaluations. Consumers’ spontaneous
social comparisons with both designers of comparable "off-the-rack" products and other self-designers significantly influence evaluations as well.
In product categories ranging from running shoes, to pet beds, to ceiling fans, consumers
are becoming the designers of their own products. The slogan at NikeID, “perfection is
personal,” lures consumers into this role with the promise of a product that wholly embodies
their highly individual and often idiosyncratic tastes. “Self-design” (or “user-design”) is the
relevant construct emerging in the marketing literature to describe this voluntary shift in
responsibility from the producer to the consumer. While some limited research has examined
how the format of the self-design task influences consumer utility (e.g., Dellaert and Stremersch
2005; Randall, Terwiesch, and Ulrich 2007), little attention has focused on how the role shift
itself has influenced the consumer, both in terms of the self-design experience and evaluations of
the subsequent outcome.
Consumers who purchase self-designed products are effectively rejecting the
professionally-designed alternatives offered by the manufacturer in favor of ones they feel more
accurately meet their own needs and wants. Consumers believe they have much better access to
their own tastes and preferences than anonymous designers, and the empirical research to date
suggests that, holding product quality constant, most consumers show a dramatic preference for
their own individual designs (e.g., Franke and Piller 2004). Surprisingly limited research exists,
however, to explain such dominant preferences.
Superior fit alone may be responsible for the premium. Indeed, consumers who choose to
self-design often view the product as a reflection of themselves and value it accordingly based on
internal standards (Belk 1988; Dahl and Moreau 2007). However, when consumers choose to
assume the role of the designer, they implicitly relinquish professional expertise and talent, a
move which may have some subtle but significant effects on the evaluations of their self-designed products. As social comparison theory (Festinger 1954) has repeatedly shown, almost
every self-evaluation carries more weight in a comparative context (Mussweiler, Gabriel, and
Bodenhausen 2000).
In this paper, we use three studies to demonstrate that consumers’ social comparisons to
the designers of comparable products influence evaluations of their own creations, their behavior
following the self-design experience, and their product satisfaction. Since professionally-designed, "off-the-rack" alternatives often serve as a basis of comparison for one's own designs,
the first two experiments examine how social comparisons with the designers of these products
influence consumers’ self-evaluations. These experiments also identify two key moderators
useful in overcoming the negative effects of an upward comparison to a professional designer. A
third study examines how social comparisons to other self-designers influence evaluations and
behavior during and after a self-design experience. This study does so by using a real online
design task in which designs are created, orders are placed, and products are produced to
specification and delivered to the participants. The realism of this study provides a better
understanding of the social comparison influences at various stages of the self-design process.
Together, these studies extend social comparison theory by demonstrating how threats
generated by upward comparisons to professional designers can be both overcome and leveraged
to motivate subsequent behavior. To date, social comparison research has focused more on the
self-esteem and affective consequences of the comparisons rather than on their motivational or
behavioral outcomes (Lockwood and Pinkus 2008, 251). Further, prior research has focused
exclusively on the relationship between comparison information and evaluations of the self. We
propose that the influence of comparison information is more extensive, not only influencing
self-evaluations but also evaluations of products that are self-produced. Our research has
implications for firms striving to identify the self-design target consumer, optimize the design
experience, and increase satisfaction, community involvement, and positive word-of-mouth.
The following sections detail how the perceived fit and social comparisons are expected
to influence evaluations of self-designed products.
FIT AND SELF-DESIGNED PRODUCTS
Self-design, a special kind of product customization in which consumers designate
specific properties of the product (Randall et al. 2007), represents a substantial shift away from
the traditional consumer paradigm. To date, consumer behavior researchers have examined how
consumers form attitudes and preferences towards, choose among, consume, and respond post-consumption to products that are fixed in nature and offered in a set form by the manufacturer.
Self-design changes the rules substantially. Compared to traditional products, which are offered
on a 'take it or leave it' basis, self-design allows the customer to have a much more active role in
the specification of the outcome of the production process (GTM White paper, April 12, 2007,
3). Customization itself can take many forms, with customers now encouraged to specify
everything from the components of their deli sandwich to those in their laptop computer. In this
research, we focus exclusively on aesthetic rather than functional customization decisions
because they do not require technical expertise on the part of the consumer, removing any
significant concern for variance in consumer knowledge.
In theory, self-design creates a far closer fit between a consumer’s inherent preferences
and the actual product than would occur without a customization opportunity (Franke and Piller
2004; Simonson 2008). This superior fit is thought to occur primarily for two reasons. First, in
the absence of self-design, the consumer must choose among the standard products designed by
the manufacturer. Even in mature product classes where numerous options exist, research has
shown that a significant number of consumers remain dissatisfied because their unique needs
simply go unmet (Franke and Piller 2004; Franke and Reisinger 2003). Second, when self-design
is not available, professionals must gain access to consumers’ preference information in order to
incorporate it into their designs. Such information is notoriously difficult to encode, transfer, and
decode. Thus, consumers are often unable to articulate, draw, or otherwise communicate preferences
even when professionals are attempting to design specifically for an individual (Franke and Piller
2004, 404; Simonson 2005; Toubia 2008). When consumers choose to self-design products, the
superior fit should increase both consumers’ willingness to pay and satisfaction with the product,
a proposition supported by some limited empirical studies (Deng and Hutchinson 2008; Franke
and Piller 2004) and in keeping with NikeID’s “perfection is personal” tagline.
SOCIAL COMPARISON AND SELF-DESIGNED PRODUCTS
As social comparison theory (Festinger 1954) has repeatedly shown, people have an
intense interest in gaining self-knowledge and spend a significant amount of time involved in
self-reflective, comparative thought (Csikszentmihalyi and Figurski 1982; Mussweiler and Ruter
2003). Thus, consumers’ evaluations of self-designed products are likely to be based, in part,
upon the outcomes of two key comparisons. First, and most obviously, is the comparison with
the style, execution, and other details of other comparable products. Second, and more subtly, is
the social comparison with the designers of those other products.
Consumers who undertake creative tasks are motivated, in part, by a sense of autonomy
(Dahl and Moreau 2007). As such, these consumers might be expected to evaluate their own
creations using only internal standards. Yet, research has shown that the innate drive to look
externally for comparison information is quite strong. Spontaneous comparisons are both
ubiquitous (Mussweiler, Ruter, and Epstude 2004) and "play so prominent a role in self-evaluations that they are even sought in situations in which objective standards are readily
available” (Mussweiler and Ruter 2003, 467; Klein 1997). Thus, we propose that self-designers
will spontaneously engage in social comparisons with the designers of comparable products and
that the outcome of these comparisons will influence evaluations of their self-designed products.
Professionals often have a significant advantage, either real or perceived, over consumers
in terms of their knowledge, training and experience. When a comparable product is designed by
a professional, the consumer self-designer faces an upward social comparison on these
dimensions. Much of the recent research on upward comparisons has focused on how individuals
cope with the threats they create (Argo, White, and Dahl 2006; Schwinghammer, Stapel, and
Blanton 2006), indicating that upward comparisons generate negative information about the self.
If these upward comparisons occur spontaneously, a concern for marketers and
manufacturers offering self-designed products would be to understand ways in which to avoid or
mitigate their negative influences. One possibility would be to change the direction of the
comparison. Rather than using professionals as default designers, marketers could highlight
default designs created by other consumers. While this practice may raise some interesting
ethical issues, it is a strategy employed by a growing number of companies including
Threadless.com (Heim, Schau and Price 2008; Humphreys and Grayson 2008),
Spoonflower.com, and Lego®. When a consumer spontaneously compares his/her own design to
one created by another consumer, there should be no significant social comparison differences
based on professional training and experience.
For many manufacturers, however, consumer-designed defaults are either implausible or
undesirable options. For these firms, the negative effects of an upward comparison could be
reduced by including some guidance and advice during the design process (Randall et al. 2005).
Ideally, the guidance would be designed to imbue the consumer with some sense of the training, skill, and experience of the professional without taking away from the perceived
autonomy that motivated the self-design decision in the first place (Dahl and Moreau 2007).
Holding the actual design characteristics of the comparison product constant, the
preceding discussion leads to the following hypotheses, which are tested in study 1:
H1a: When the default product is professionally-designed, evaluations of self-designed products will be lower than when the default product is consumer-designed.
H1b: The provision of guidance during the design process will attenuate this effect.
STUDY 1
Design and Procedure
Participants were 105 undergraduates from a large North American university who
participated in this study for course credit. As a cover story, participants were told that the study
was about products targeted to members of their demographic group. To increase involvement,
participants were offered the opportunity to enter a lottery for the target product. In reality, the
stimulus used for all participants was an L.L. Bean backpack, which was selected because it is both relevant to the participant population and can plausibly be customized on only its aesthetic dimensions.
Two factors were manipulated between-participants: (1) Designer of the default
backpack: (Professional vs. Amateur) and (2) Customization guidance (Present vs. Absent).
Critically, however, all participants viewed the exact same color version of the default backpack
from L.L. Bean (see appendix A). Upon arriving, participants were randomly assigned to one of
the four experimental conditions and given a packet containing the manipulations. After hearing
the cover story and the lottery option, participants were exposed to the picture of the default
backpack and the designer manipulation. After turning the page, all participants were then
informed that the L.L. Bean website offers consumers the option of customizing their backpacks.
Participants were asked whether, if they were truly in the market for a new backpack, they would
like the chance to customize the one they had just viewed (yes/no). If participants answered
“yes,” they proceeded to the customization task (described below) and subsequently to the
dependent measures, covariates, demographic measures, and lottery form. If participants
answered “no,” they proceeded straight to the demographic measures and lottery form. For those
choosing the customization task, the entire study took approximately 25 minutes to complete.
Independent Factors
Designer of the Default Product. All participants were shown a color picture of the
default backpack (see appendix A) along with the text: “L.L. Bean has just announced its new
backpacks for 2007. Rather than rely on marketing research, which is typically done, the
company decided to try a new approach for the design.” In the professional condition, the text
continued: “The company leaders picked representatives from the marketing department to
choose colors based on their preferences and the color combinations they believed would be well
accepted by the company’s target consumers. Here is the final product:” In the amateur
condition, the text stated, “The company sponsored a contest in which consumers each created a
color combination that they thought would be well-received by other customers. Here is one of
the award winners:”
It was crucial that, when combined with the picture, no differences in the perceived attractiveness or quality of the default backpack emerged across the two conditions.
Thus, a pre-test was conducted in which the designer manipulation was paired with the picture of
the default backpack. Thirty-five participants from the same participant population were
randomly assigned to one of the two designer manipulations described above, viewed the picture
of the default backpack, and responded to measures about the default product and its designer.
On six 9-point scales, participants reported their evaluations of the backpack. The items
included the degree to which participants agreed that the backpack was a good product and well-designed, how stylish and attractive they thought it was, how much they thought they would
enjoy using it, and how likely they thought other students on campus would like the backpack.
All items loaded on a single factor and were summed to create an overall measure of evaluation
(M = 23.0; Range: 10 to 38; α = .88). Importantly, the pre-test revealed no differences in
evaluations of the backpack between those believing it was designed by professionals and those
believing it was designed by another consumer (M Amateur = 22.9 vs. M Professional = 23.0, NS).
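For illustration only, a minimal sketch of this type of scale check appears below in Python (using the pandas, pingouin, and scipy libraries); the file name and the item and condition column names are hypothetical placeholders rather than the study's actual variables.

import pandas as pd
import pingouin as pg
from scipy import stats

# Hypothetical file: one row per pre-test participant
df = pd.read_csv("pretest.csv")
items = ["item1", "item2", "item3", "item4", "item5", "item6"]  # the six 9-point evaluation items

# Internal consistency of the six-item evaluation measure (alpha reported above as .88)
alpha, ci = pg.cronbach_alpha(data=df[items])

# Sum the items to form the overall evaluation index
df["evaluation"] = df[items].sum(axis=1)

# Compare the two designer conditions on the summed index
prof = df.loc[df["designer"] == "professional", "evaluation"]
amat = df.loc[df["designer"] == "amateur", "evaluation"]
f_stat, p_value = stats.f_oneway(prof, amat)
print(round(alpha, 2), round(f_stat, 2), round(p_value, 3))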
Customization Guidance. All participants who chose to customize the backpack were
given a palette of 20 colors along with a black and white drawing of a backpack indicating the
areas where they could choose their colors (see appendix A). Below the drawing were spaces
where participants could indicate their color choices. In the “no guidance” condition,
participants were told: “If you chose ‘Yes’ to the customization option, please take a moment to
consider the color combination that you would choose.” In the “guidance” condition, the
following information was also provided, which was adapted directly from the L.L. Bean
website’s instructions for the “Custom Super Deluxe Backpack” (http://www.llbean.com/).
“Attached is a list of 20 different color choices. You may choose colors for the 5 parts of the
backpack, which are listed below. Please choose no more than three colors total. We suggest
you start with a body color and then add one or two more colors as an accent. Too many colors
can sometimes be overwhelming. Try darker colors for a conservative, timeless style. Or use
bright, high-energy colors to show off your distinct personality. Experiment with high contrast: dark and light colors together can create depth and visual interest."
A pre-test was conducted to understand the influence of the guidance manipulation on
those undertaking the customization task. Recall that ideally, the guidance would give consumers
feelings that would help them overcome negative effects of an upward comparison with
professionals while at the same time not severely limiting their perceptions of autonomy. Forty-nine participants from the same population were randomly assigned to one of the two guidance
conditions. All participants completed the customization task and responded to a set of
dependent measures regarding their experiences during the customization task. The items were
submitted to an exploratory factor analysis and three factors emerged.
Four items loaded on the first dimension, indicating how talented, confident, competent,
and intelligent participants felt while customizing the backpack. These items seemed to capture
perceptions of competence and were summed to create an index (M = 19.7; Range: 4 to 33; α = .86). Four items loaded on the second dimension, which appeared to capture autonomy. These items captured the extent to which the participant was able to express their creativity, be original in their design, and had the autonomy and freedom to express their taste in the design (M = 22.9; Range: 4 to 36; α = .80). The third factor was related to effort, with the final four items indicating how much effort the task required, how hard they concentrated, how hard they thought about the task, and how overwhelming they found the task (M = 11.9; Range: 4 to 31; α = .78).
Each of the three indices was used as a dependent measure to test for differences between
the types of guidance. The only significant difference that emerged was on the competence
dimension. Participants receiving the guidance felt greater levels of competence during the
customization task than those for whom the guidance was absent (M Present = 20.9 vs. M Absent =
17.6, (F(1, 48) = 4.86, p < .05)). As intended, the manipulation did not influence perceptions of
autonomy (M Present = 23.7 vs. M Absent = 22.7, F(1, 48) = .76, NS), nor did it influence the
perceived amount of effort required (M Present = 12.9 vs. M Absent = 10.8, (F(1, 48) = 3.5, p > .05)).
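A sketch of how this kind of exploratory factor analysis and index comparison might be carried out is shown below (Python, using the factor_analyzer and statsmodels packages, chosen here only for illustration); the file name, the item prefix, and the specific item names are assumptions, not the actual variables from the pre-test.

import pandas as pd
from factor_analyzer import FactorAnalyzer
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical file: one row per pre-test participant, experience items prefixed with "exp_"
df = pd.read_csv("guidance_pretest.csv")
item_cols = [c for c in df.columns if c.startswith("exp_")]

# Exploratory factor analysis of the experience items (three factors emerged in the pre-test)
fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(df[item_cols])
print(fa.loadings_)  # inspect which items load on which factor

# Sum the items loading on a factor to form an index (the competence index is shown here)
competence_items = ["exp_talented", "exp_confident", "exp_competent", "exp_intelligent"]
df["competence"] = df[competence_items].sum(axis=1)

# Test for a difference between the guidance conditions on the index
model = smf.ols("competence ~ C(guidance)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))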
Dependent Measures
Decision to Self-Design. Participants chose whether or not they wanted to customize the
default backpack. This decision did not affect whether or not they could enter into the lottery.
Evaluations of the Self-Designed Backpack. Participants' evaluations of their self-designed backpacks were measured using the same six items used to evaluate the default
backpack in the pre-test, re-worded appropriately. All items loaded on a single factor and were
summed to create an overall evaluation index (M = 38.0; Range: 18 to 54; α = .92). Only the
participants who chose to customize the backpack provided data for these measures.
Results
Decision to Self-Design. Of the 105 participants, 90% chose to self-design, a result that
was rather surprising. While it was not explicitly emphasized, participants were free to leave the
experimental session when they had completed this study. Choosing to self-design the backpack
required more time, and even though the vast majority of those making this choice were not
going to actually receive the customized backpack, they still chose to invest the time. With such
low variance on this measure, however, it is not surprising that no significant differences
emerged between the proportion choosing to self-design after seeing the professionally-designed
default (92%) versus the amateur-designed default (89%).
Evaluations of the self-designed backpack. A two-way ANOVA was used to test
hypothesis 1 and assess the influence of the two manipulated factors on participants’ evaluations
of their own self-designed backpacks. Participants’ attitudes towards L.L. Bean were used as a
covariate in the analyses. Consistent with hypothesis 1a, when the designer of the default product
was a professional as compared to another consumer, participants' evaluations of their own self-designed products were lower (M Professional = 36.2 vs. M Amateur = 39.5; F(1, 104) = 4.68, p < .05). This main effect, however, was qualified by a significant interaction (F(1, 104) = 4.05, p < .05). Follow-up contrasts revealed that participants who received the additional guidance did not
show evidence of the upward comparison’s negative effects (M Professional = 37.9 vs. M Amateur =
39.1, NS); those for whom the guidance was not provided, however, did indicate such an effect
(M Professional = 34.3 vs. M Amateur = 40.0, F(1, 49) = 7.45, p < .01). This pattern of results is
consistent with hypothesis 1b.
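A minimal sketch of this two-factor analysis with the attitude covariate is given below (Python, statsmodels); the data file and the evaluation, designer, guidance, and llbean_att column names are illustrative assumptions rather than the study's actual analysis code.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical file: one row per participant who chose to self-design
df = pd.read_csv("study1.csv")

# Two-way ANOVA on evaluations of the self-designed backpack,
# with attitude toward L.L. Bean entered as a covariate (sum coding for Type III tests)
model = smf.ols("evaluation ~ C(designer, Sum) * C(guidance, Sum) + llbean_att",
                data=df).fit()
print(sm.stats.anova_lm(model, typ=3))

# Follow-up (simple-effect) contrast: the designer effect within the no-guidance condition
no_guidance = df[df["guidance"] == "absent"]
contrast = smf.ols("evaluation ~ C(designer) + llbean_att", data=no_guidance).fit()
print(sm.stats.anova_lm(contrast, typ=2))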
Discussion
The findings of this study are important on two levels. First, a comparison of the mean
evaluations of the default backpack (M = 23.0) to those of the self-designed backpacks (M =
38.0) highlights the premium value consumers place on products that they have self-designed, a
finding previously documented in the literature (Franke and Piller 2004). Prior studies that have
demonstrated such premiums have given participants the actual products (e.g., Swatch watches).
The replication here provides confidence in the realism of our manipulations.
Second, this study provides evidence that an upward social comparison can occur when
consumers take over design authority from a professional. Importantly, evidence of the social
comparison emerged in evaluations of the self-designed products, not in evaluations of the self.
The data indicate that participants facing an upward comparison to a professional processed the
social comparison information non-defensively, integrating the negative information into the
evaluations of their self-designed backpacks (Schwinghammer et al. 2006). The provision of
guidance apparently mitigated the amount of negative information generated by the comparison.
In contrast, participants engaged in a comparison with an amateur likely generated little negative
comparison information to begin with. As such, the guidance provided had no significant
influence on their evaluations.
Defensive versus Non-Defensive Processing
Since not all consumers will be receptive to guidance during self-design tasks, nor are all
firms willing or able to provide it, identifying other means for overcoming the negative influence
of upward comparisons is important. The non-defensive processing observed in study 1 resulted
in lower self-evaluations. Could consumers be encouraged to process the negative comparison
information defensively rather than non-defensively? Defensive processing occurs when the
need to protect self-esteem is salient (Taylor and Lobel 1989; Wood 1989) and is often
evidenced by self-serving behaviors designed specifically to protect the individual’s self-image
(e.g., Argo et al. 2006; Kunda 1990; Stapel and Koomen 2001; Tesser, Millar, and Moore 1988).
Although prior research has identified a number of defensive strategies, one tactic is particularly
relevant in the self-design context: diminishing the valuation of the comparison target (e.g., the
default backpack). Schwinghammer et al. (2006) argue that by denying the attractiveness of the
comparison target, the threat of an upward social comparison can be reduced (29).
According to the self-evaluation maintenance (SEM) model (Tesser 1988; Tesser et al.
1988), the more relevant the comparison target, the more important the social comparison
process is to one’s self-esteem (50). Recent research has demonstrated that relevant upward
comparisons are more threatening than less relevant ones, increasing the likelihood of defensive
processing (Argo et al. 2006). In study 1, upward comparisons produced non-defensive reactions
to the negative comparison information; in that study, however, the relevance of the default to
the participants’ own customization opportunity was not immediately apparent when it was
introduced. Recall that participants were exposed to the default backpack prior to learning that
any customization opportunity existed. Had they known of such an opportunity, the relevance of
the default would have likely been heightened, thus encouraging a more defensive response.
In the following study, we use the same stimuli, procedure, and manipulations as in study
1 with the following three important changes: 1) No guidance was provided to any of the
participants; 2) The relevance of the comparison target was manipulated by informing half of the
participants that they would have the opportunity to customize the backpack before they were
exposed to and evaluated the default (high relevance) vs. after they were exposed to and
evaluated the default (low relevance); 3) Evaluations of the default backpack were included to
provide evidence of the type of processing and appeared in the study as part of the relevance
manipulation (see (2) above; see appendix B for an overview of the procedure).
When the default is highly relevant (i.e., the customization opportunity presented before
the default introduction), evidence of defensive processing should exist for those facing an
upward comparison (i.e., the professional default) in response to the negative information
generated by the comparison. Comparisons to a more equivalent target (i.e., the amateur default)
should not generate significant negative information, and thus, little evidence of defensive
processing should emerge even for a highly relevant target. When the default is low in relevance
(i.e., the customization opportunity presented after the default introduction), there should be little
evidence of defensive processing regardless of comparison type, given that negative comparison
information is unlikely to be generated at all.
More formally,
H2a: When the default product is high in relevance, evaluations of the default will be
lower when it is professionally-designed than when it is consumer-designed.
H2b: When the default product is low in relevance, evaluations of the default will be
unaffected by the type of designer.
The design of this experiment also allows us to test whether first evaluating a potentially
threatening comparison target can reduce defensive processing on later self-evaluations.
Schwinghammer et al. (2006) describe their set of studies as “the first to explore direct
downgrading effects of the actual performance of a comparison other on the comparison
dimension itself” (29). In their research, however, the measures of the comparison’s
attractiveness always followed those reported for the self, both of which showed evidence of
defensive processing. What would happen if the order of the dependent measures were reversed?
If the opportunity to devalue the comparison target diminishes its potential threat, then
subsequent self-evaluations should reflect little evidence of defensive processing. The result
would be that the type of comparison (i.e., the default designer) is rendered irrelevant in self-evaluations. This effect should be observed for those facing a highly relevant default who have
had a prior opportunity to reduce the potential threat. For those facing the less relevant default
who did not have a chance to diminish the potential threat, the non-defensive processing effects
observed in study 1 should be replicated. Therefore, we make the following predictions:
H3a: When the default product is high in relevance, evaluations of the self-designed
product will be unaffected by the designer of the default product.
H3b: When the default product is low in relevance, evaluations of the self-designed
product will be lower when the default product is professionally-designed than
when it is consumer-designed.
STUDY 2
One-hundred and forty-six undergraduates participated in this study for course credit.
Measures
Decision to Self-Design. Participants chose whether or not they wanted to customize the
default backpack. This decision did not affect whether or not they could enter into the lottery.
Evaluations of the Default Backpack. Evaluations of the default backpack were
measured using the same instrument as that used in the pre-test for study 1.
Evaluations of the Self-Designed Backpack. Evaluations of the self-designed backpacks
were measured using the same instrument used in study 1. All items loaded on a single factor and
were summed to create an overall evaluation index (M = 34.9; Range: 16 to 54; α = .92).
Results
Decision to Self-Design. As in study 1, a high percentage (82%) of the 146 participants
chose to self-design. No differences in participation rates were observed across conditions.
Evaluations of the Default Backpack. A two-way ANOVA was used to test the effects of
the manipulated variables on participants’ evaluations of the default backpack. As in the prior
studies, participants’ attitudes towards L.L. Bean were used as a covariate in the analyses. The
results revealed only an interaction between the two manipulated factors (F(1, 145) = 6.45, p = .01; see figure 1a). Consistent with hypothesis 2a, when the comparison target was highly relevant, evaluations of the default backpack were lower when the designer was a professional as compared to an amateur (M Professional = 24.1 vs. M Amateur = 27.6; F(1, 145) = 4.58, p < .05). When the comparison target was less relevant, however, there was no significant difference in the evaluations of the default backpack (M Professional = 27.2 vs. M Amateur = 24.7; F(1, 145) = 2.04, p > .10), as predicted by hypothesis 2b. It is also important to note that evaluations of the professionally-designed default differed significantly based on its relevance (M High Relevance = 24.1 vs. M Low Relevance = 27.2; F(1, 145) = 3.65, p = .05), providing evidence that participants diminished the value of the highly relevant upward target in an effort to reduce its threat.
---- Insert figure 1 about here ----
Evaluations of the Self-Designed Backpack. A two-way ANOVA was also used to test the effects of the manipulated variables on participants' evaluations of their self-designed backpacks. Consistent with hypothesis 3, the results revealed only an interaction between the two factors (F(1, 119) = 5.25, p < .05; see figure 1b). When the comparison target was high in relevance, participants had the chance to diminish the comparison threat, and thus no differences emerged in the self-evaluations, as predicted by hypothesis 3a (M Professional = 35.2 vs. M Amateur = 34.8; F(1, 119) = .29, NS). However, when the comparison target was low in relevance, no such opportunity existed. Consistent with the findings from study 1, participants facing an upward social comparison incorporated the negative information into their self-evaluations in a manner consistent with non-defensive processing (M Professional = 31.8 vs. M Amateur = 38.1; F(1, 119) = 6.88, p < .01). Importantly, when the default was professionally designed, its relevance significantly influenced evaluations of the self-designed backpacks as well (M High Relevance = 35.2 vs. M Low Relevance = 31.8; F(1, 119) = 3.75, p < .05). This finding provides more evidence that
diminishing the value of the highly relevant target can mitigate its threat.
Discussion
The results from study 2 demonstrate that allowing consumers to process upward social comparison information defensively can enhance the evaluations of their self-designed products. Rather than incorporating the negative comparison information into their self-evaluations, defensive processors diminished the value of the comparison target.
Together, the first two studies consistently demonstrate that the non-defensive processing
of upward comparison information results in lower evaluations of one’s own design. The
provision of guidance (study 1) and the opportunity to diminish the valuation of the comparison
target with defensive processing (study 2) work in different ways to restore these evaluations. In
both studies, a single, salient comparison target was used, the design of which was held constant
for all participants. Other forms of social comparison, however, are likely to influence the
evaluations of self-designed products as well.
As business models increasingly encourage the creation and dissemination of user-generated content, understanding how consumers evaluate their own products becomes even
more important. Threadless, for example, has experienced revenue growth of 500 percent per
year by running T-shirt design competitions on its online social network (Chafkin 2008). Over
700,000 members participate in the Threadless network by submitting their own T-shirt designs,
voting on those created by others, and purchasing the original products. Major corporations
including Pitney Bowes, Ford, and BMW are also experimenting with this business model in
various ways (Chafkin 2008). Therefore, consumers engaged in self-design are also likely to
encounter situations that would require social comparisons with the other designers engaged in
similar self-design activities.
One factor differentiating comparisons to other self-designers from those to off-the-rack
products is that they often occur within the context of an explicit competition. The Threadless
community thrives based on the competitive spirit rampant among its self-designers (Chafkin
2008; Heim et al. 2008). Doritos sponsors a "Crash the Super Bowl" competition that encourages consumers to submit their own ads and commits to air the best ones during the game; European shoe manufacturers sponsor the CEC Shoe Design Contest, in which the winner travels to Italy to see his or her prototype produced; Tree Hugger sponsors a competition in which the winner's "green" furniture design is debuted at its annual trade show; and Spoonflower.com hosts a "fabric-of-the-week" contest, selling the winner's design on the site for the following seven days (Scelfo
2009). While firms across a wide range of industries are using competitions to generate and
consolidate consumers’ design ideas, little is known about the influence that a competitive
context has on consumers’ design experiences and evaluations of their self-designed products.
Further, the effects of social comparison processes within such a context are largely unexplored.
More generally, the relationship between behaviors (e.g., contest entry) and self-evaluation following a social comparison has received limited attention in the literature (Johnson
and Stapel 2007; Lockwood and Pinkus 2008). Thus, in the following study, we assess the
important behavioral consequences that result from social comparison processes and their
subsequent influence on self-evaluations.
The Behavioral Consequences of Social Comparison in Self-Design
Research examining the influence of upward social comparisons on individuals’ behavior
has produced equivocal results. In some cases, superior others serve as role models to motivate
self-improvement (e.g., Blanton, Buunk, Gibbons, and Kuyper 1999; Taylor and Lobel 1989); in
others, they reduce self-concepts and lead to diminished performance (Guay, Marsh and Boivin
2003). Johnson and Stapel (2007) propose and demonstrate that the inconsistencies in this prior
research can be attributed to the presence or absence of an opportunity to repair self-regard.
Specifically, these authors state, “when individuals feel threatened by an upward comparison
target, they will be motivated to protect or repair their self-evaluations, and one means for self-regard repair is improved performance" (1053).
Competing to publicly demonstrate one’s design ability may serve as an opportunity to
repair or enhance self-regard, an opportunity likely to be particularly appealing to those whose
self-evaluations are threatened by an upward comparison. While some design contests do offer a
cash reward to the winner, almost all of them offer some level of public recognition as a prize:
Doritos plays the winning ad on TV (along with the creator’s profile); Tree Hugger presents the
winning furniture design at the trade show (attributed to the designer); and Threadless displays
the winning T-shirts on its website and prints the name of the designer on the label of those sold
(Ogawa and Piller 2006). At Threadless, the appeal to artists is much less about the monetary
reward ($150 in 2004) and much more about the visibility associated with having your shirt
available for sale (Chafkin 2008). The benefits of a contest to one’s self-regard will only be
observed when the contest creates a repair opportunity for the person by allowing him/her to
work explicitly towards the goal of winning (e.g., if the contest is announced prior to the self-design
opportunity). When the contest does not provide such a repair opportunity (e.g., if a contest is
announced after the design process is complete), the degree to which the person’s ability is
threatened is unlikely to influence the appeal of the contest. More formally,
H4a: When a contest provides a means for repairing threatened self-regard,
upward comparison targets (i.e., professional designers) will yield higher contest
participation rates than comparisons to more equivalent targets (i.e., amateur
designers).
H4b: When a contest does not provide a means for repairing threatened self-regard,
participation rates will be unaffected by type of comparison target.
Further evidence of this proposed mechanism can be found in a comparison between the
threat perceptions of contest entrants and non-entrants. Contests providing a means to repair self-regard (i.e., those that allow the person the opportunity to work towards the goal of winning)
should be more attractive to those perceiving a relatively high threat; those who do not perceive
such a significant threat would not share the same motivation to enter this type of contest
because their need to repair is not as strong. Conversely, threat perceptions are unlikely to be an
important factor determining an individual’s decision to enter a contest providing no repair
opportunity (i.e., one that allows the person to enter a previously-designed product). Thus,
H5a: When a contest provides a means for repairing self-regard, threat perceptions of
contest entrants will be higher than those of non-entrants.
H5b: When a contest does not provide a means for repairing self-regard, threat
perceptions will not differ according to entry decisions.
Evaluations of the Self-Designed Product
It has been argued that the self-design process enables consumers to create a far closer fit
between their inherent preferences and the actual product than would otherwise occur (Franke
and Piller 2004). Implicit in this argument is the assumption that consumers’ perceptions of fit
will influence their evaluations. However, studies 1 and 2 provide evidence that threats resulting
from social comparisons also influence these evaluations. By teasing these two components
apart, further evidence of the hypothesized comparison processes becomes available.
Perceived Fit and Evaluations. The perceived fit of a self-designed product reflects the
degree to which the designer was able to translate his or her unique conceptualization of an ideal
product design onto the actual product. For products where only the aesthetic elements are
customized, the product’s fit is the degree to which the self-designed product is perceived to
accurately reflect the consumer’s unique personality, taste, and style.
When consumers engage in a self-design task, maximizing fit is likely to be their primary
goal. An assessment of how well this goal was accomplished in the design task is likely to serve
as a critical input to overall evaluations. We also predict that such an assessment will be a key
input into a decision to enter a contest, regardless of whether the contest provides a repair
opportunity or not. Thus, we make the following predictions:
H6: Perceptions of fit will be higher for contest entrants than for non-entrants.
H7a: Evaluations of the self-designed product will be higher for entrants than for non-entrants.
Self-Regard and Evaluations. While Johnson and Stapel (2007) did find that higher
threat perceptions led to higher performance on subsequent repair tasks, self-evaluations
following the task were not assessed. Further, their participants were not allowed to choose
whether or not to engage in the repair task. Rather, the studies manipulated the type of
comparison target (extreme upward, moderate upward), measured the effect on initial self-evaluations (see footnote 1), and then measured the effect on task performance. We propose that offering the
opportunity to repair, not simply requiring it, serves as a more stringent test of their theory. By
enabling a comparison between the self-evaluations of those opting to repair and those declining
the opportunity, one can better determine the effects of the repair. Removing the “fit” component
from these evaluations will further isolate the influence that self-regard has on evaluations.
[Footnote 1] The effect of the comparison target on self-evaluations was consistent with non-defensive processing: those exposed to the extremely successful target reported significantly lower self-evaluations than those exposed to the moderately successful one. The authors conclude from this finding that the extreme target was threatening and the moderate target non-threatening.
Assuming that self-regard is also an input to evaluations of self-designed products (an assumption consistent with the results of our first two studies), we would expect that those who
engage in a repair opportunity would benefit from increases in self-regard to a greater degree
than those who choose not to engage. No such differences would be expected when a contest
does not provide the ability to repair. Thus:
H7b: When a contest provides a means for repairing self-regard, this effect will be
increased.
When perceived fit is subtracted from overall evaluations, this pattern should still hold.
Willingness to Pay. Consumers’ willingness to pay for their self-designed product is
likely to be based on their overall evaluations of the product, not just its perceived fit. Therefore,
we make the following prediction:
H8a: Entrants’ willingness to pay for their self-designed products will be higher than
those of non-entrants.
H8b: When a contest provides a means for repairing self-regard, this effect will be
increased.
Effort
The performance differences observed by Johnson and Stapel (2007) were due to
differences in effort expended on the repair task. On average, higher levels of effort in a self-design task are likely to lead to greater perceptions of fit and self-evaluations. We have already
predicted that both of these factors are likely to influence a consumer’s decision to enter a
contest. If the theory underlying the predictions related to repair and self-regard is correct,
those entering a contest that provides a means of repair should exert more effort on the task than
those not entering it. Using time spent while engaged in the task as an indicator of effort, we
make the following prediction:
H9a: The time spent designing the product will be higher for entrants than for non-entrants.
H9b: When a contest provides a means for repairing self-regard, this effect will be
increased.
Objective Evaluations of Performance
The tasks used to assess performance following threats to self-regard have typically been
ones for which objective performance is easily determined (e.g., the number of remote associates
correctly identified on the RAT (Johnson and Stapel 2007)). In the realm of self-design,
however, such assessments are not so easily made. Thus, independent judges are required to
assess performance, consistent with the practice employed by firms such as Threadless.
What is the likely relationship between the judges’ evaluations and those reported by the
consumers themselves? We have argued that self-evaluations are based both on perceived fit (a
judgment made about the product itself) as well as the consumer’s self-regard. We now propose
that the judges’ assessments will align with the self-reported ratings of fit, because both
judgments are based on the products themselves. Judges are unlikely to observe either the
damage or repair of the designer’s self-regard. As such, we make the following prediction:
H10: Judges will provide higher evaluations of the self-designed products created by
contest entrants as compared to non-entrants.
Product Satisfaction
When time elapses between a design task and receipt of the actual product, we predict
that the importance of the self-regard component in consumers’ evaluations will dissipate, as the
perceived threat (and its potential remediation) becomes less immediate. Assuming that the
perceived fit component was relatively accurate, satisfaction with the real product should follow
a pattern similar to perceived fit at the time of design, leading to the following prediction:
H11: Contest entrants will be more satisfied with their self-designed products than non-entrants.
The following study is designed to test hypotheses 4-11 and to do so in the context of a
real self-design task with a company-sponsored contest. The study is longitudinal in nature, with
participants responding to the dependent measures at the time of the design task (time 1) and at
the time of product delivery (time 2).
STUDY 3
Stimuli and Procedure
Several criteria were used to select the product for this study: a) affordability (since
participants would receive the self-designed product created), b) relevance to the participant
population (North American undergraduates), and c) customization dimensions (products were
considered that offered aesthetic, but not functional, customization opportunities). Customizable
“skins” for electronic devices met all three criteria. MyTego.com, a firm based in Winnipeg,
Canada, agreed to work with us. MyTego provided us with unique coupon codes at 50% off their
face value and agreed to package our orders individually but ship them in a single batch. This cooperation
allowed us to distribute the completed products to the participants personally and thereby collect
additional measures at the time of delivery.
Participants were 120 undergraduates from a large North American university who
participated in this study in exchange for both course credit and receipt of the self-designed
product. The study was administered using on-line survey software (Survey Monkey), and
participants completed the study at the time and location of their choosing (within a 48-hour window). The intent was to create a study atmosphere that most closely approximated a "real-world" setting. While participants' motivation for participating in the study was certainly
influenced by their course participation credit, seven other experiments were available to them at
the time. Thus, some self-selection was involved, further adding to the study’s realism.
Participants all received an email message containing their ID number, coupon code, and
link to the on-line survey. Upon opening the survey, participants were asked to agree to complete
the study in one sitting and in a private setting relatively free from distractions. They were then
provided an overview of the study, exposed to the manipulations (where appropriate), and
directed to open the MyTego.com website in another browser where they proceeded to design
their skin. MyTego allows customers to upload their own pictures to use in their designs, but we
instructed our participants not to do so and instead to use the library of design options provided
on the website. Even with this restriction, participants could choose one or more of the 300+
stored designs, select the parts of them they wanted, and alter the size and orientation to create
their design. A check of the completed skins confirmed that these instructions were followed. On
MyTego, consumers can experiment with various designs, viewing the outcomes as they work.
Once they had finished designing their skin, participants completed the dependent
measures using the online survey. MyTego forwarded pictures of the completed skins to us,
which we could link to each participant by coupon code. Independent judges provided ratings of
the skins while we were awaiting their production and delivery. When the shipment of skins
arrived, participants were emailed and informed of the location and times for pick-up. They were
also told that they would need to complete a short (one page) follow-up survey.
Design
Two factors were manipulated between-participants: (1) The timing of an announcement
of a contest (Repair Opportunity: Contest announced prior to the design task vs. No Repair
Opportunity: Contest announced after the design task) and (2) The nature of the social
comparison (Professional designers vs. Amateurs). A control condition was also included which
provided no contest opportunity.
Independent Factors
Timing of the Contest Announcement. Half of the participants were informed of the
contest prior to going to the MyTego website to begin designing their skin (repair opportunity);
the other half were informed of the contest just after finishing designing their skin (no repair
opportunity). All dependent measures (including their decision to enter the contest) were
collected after participants had completed the design process.
Nature of the Comparison. The type of competition faced in the contest was manipulated
between participants (professional designers vs. amateur peers). When informed of the contest,
all participants were given the following information:
The web site where you’ll be customizing your skin does a lot of business with college
students, and they want to better understand and communicate with the customers in this
group. To do so, they have identified <name of the university where the study took
place> as a representative market. The company is sponsoring a contest for the best skin
design.
In the professional condition, the following statement was then included:
Up until now, professional designers have competed to have their designs displayed on
the company’s home page. This year, the company is opening up the contest by including
a small percentage of designs produced by college students.
All participants were then given a bullet-point list of the contest specifics, detailed below for the
professional [amateur] conditions:
• The company will post the winning design as an award-winner on their home page for the first 6 months of 2009.
• The winner will have the option of how (or if) they would like their name displayed along with the design. No one will be able to purchase that exact design, but it will be prominently displayed.
• The total number of contest participants, primarily professional designers, is expected to be around 100. [The only possible entrants into the contest will be BCOR students participating in this study at <the university name>. The number of entrants is expected to be around 100.]
• A panel of professional designers [a panel of other students participating in this study] will select the best design from among those entered.
Dependent Measures at Time 1
Decision to Enter the Contest. Upon completion of the design task, all participants who
were not in the control condition were asked to indicate whether or not they wanted to enter their
skin into the competition.
Perceived Threat Presented by the Competition. The participants who were not in the
control condition were asked how tough they thought the competition would be for the contest
(M = 5.8; Range: 2 to 9).
Fit of the Self-Designed Skins. Fit was measured using three items which captured the
degree to which the skin reflected the participant’s own unique personality, taste, and style, the
extent to which the skin would make a statement about the participant, and the degree to which
the skin could be considered "one of a kind" (M = 12.8; Range: 3 to 21; α = .83).
Evaluations of the Self-Designed Skins. Participants’ evaluations of their self-designed
skins were measured using a set of five items adapted from those in the prior studies. On seven-point scales, participants indicated the degree to which they agreed that their skin was well-designed, how stylish and attractive it was, how much they thought they would enjoy using it,
and how satisfied they were with their design. All items loaded on a single factor and were
summed to create an overall evaluation measure (M = 27.1; Range: 11 to 35; α = .91). To
further establish the independence of this construct, items used to measure fit and items used to
measure evaluation were submitted to an exploratory factor analysis. Two factors emerged,
consistent with those reported, with a correlation between the composite measures of .53.
Willingness to Pay. In an open-ended question, participants reported what they would be
willing to pay for their self-designed skin if they had not been participating in this study (M = $8.31; Range = $0 to $25).
Time. The survey software captured the time each participant started and finished. From
this information, we computed the total time (in minutes) spent by each participant. This time
measure includes time spent both designing the skin and completing the questionnaire (M = 34.8
minutes; Range: 16 to 68; see footnote 2).
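To make the construction of this measure concrete, a short sketch of the elapsed-time computation (including the exclusion rule described in footnote 2) is given below in Python; the file and timestamp column names are assumptions for illustration.

import numpy as np
import pandas as pd

# Hypothetical export of the time 1 survey data, one row per participant
df = pd.read_csv("study3_time1.csv")

# Elapsed minutes between the recorded start and finish timestamps
start = pd.to_datetime(df["start_time"])
finish = pd.to_datetime(df["finish_time"])
df["minutes"] = (finish - start).dt.total_seconds() / 60.0

# Per footnote 2: times over 100 minutes indicate the study was not completed
# in one sitting, so the time variable is treated as missing
df.loc[df["minutes"] > 100, "minutes"] = np.nan

print(df["minutes"].describe())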
Dependent Measure at Time 2
Satisfaction with the Skin. Participants reported how satisfied they were with their skin,
how much they would enjoy using it, and how proud they were of their design (M = 20.7; Range
= 10 to 27; α = .92).
Ratings by Independent Judges
Twenty-four new participants, drawn from the same population, served as independent
judges of the skins in exchange for course credit. Participants were randomly assigned to either
evaluate the skin itself or to assess the effort put forth by its designer. To limit the likelihood of
fatigue, each participant rated only half of the designs (60 skins). Thus, each skin had 6 judges
for each dimension. Ratings were averaged across each of the 6 judges on each dimension to
create a composite score for the skin’s evaluation and for the designer’s effort.
[Footnote 2] While all participants agreed to complete the study in one sitting, the start and finish times clearly indicate that ten participants did not abide by this constraint. Thus, when the time recorded exceeded 100 minutes, the time variable was indicated as missing in the analyses.
Survey Monkey was used to collect this data as well. Once the judges logged in, they
were asked to view a collage containing pictures of all 120 skins produced in the study and to
respond to a short series of questions to ensure that they had, in fact, studied the collage (see
appendix C). This step was taken to better calibrate each judge for the task. Following these
questions, each skin was displayed on its own page along with the measures:
Evaluations. Judges assigned to this task rated each skin on four items, indicating how
much they liked the skin, were impressed with its design, thought it was one of the best in the
set, and how much they would like to have it themselves (assuming it fit their own device). Each
of the four items was averaged across judges, and the four averages were summed to create an
evaluation for each skin (M = 14.1; Range = 6 to 25; α = .93).
Effort. Judges assigned to this task rated the designer of each skin on two items,
indicating how much effort was put towards designing the skin and how much time they thought
the designer spent (selecting a point on the seven-point scale which ranged from 10 to 60
minutes). Each item was averaged across judges, and the two averages were summed to create
an effort rating for each self-designed skin (M = 7.5; Range = 2 to 15; r = .95).
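The aggregation just described (average each item across a skin's six judges, then sum the item averages) could look like the sketch below; the long-format layout, the column names, and the synthetic ratings are hypothetical. The effort composite would be built the same way from the two effort items.

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Hypothetical long format: 120 skins x 6 judges x 4 evaluation items
skins, judges = np.arange(120), np.arange(6)
items = ["like", "impressed", "best_in_set", "want_it"]
idx = pd.MultiIndex.from_product([skins, judges, items], names=["skin", "judge", "item"])
ratings = pd.DataFrame({"rating": rng.integers(1, 8, size=len(idx))}, index=idx).reset_index()

# Average each item across the six judges of a skin, then sum the four item averages
item_means = ratings.groupby(["skin", "item"])["rating"].mean().unstack("item")
evaluation_composite = item_means.sum(axis=1)   # one judged-evaluation score per skin
print(evaluation_composite.describe())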
Results
The following section presents the results of the hypothesis tests. Table 1 provides a full
listing of the results for each experimental condition, collapsed across the contest entry decision.
----- Insert table 1 about here -----
Decision to Enter the Contest. A logistic regression model was used to predict the
likelihood that the participant would choose to enter their design into the contest. The
independent factors (the timing of the contest and the type of comparison) and their interaction
were used as predictors. The results revealed only a significant interaction between the variables
(χ² (1, 92) = 4.51, p < .05), in a pattern consistent with hypothesis 4 (see figure 2). When the
contest provided a repair opportunity, 62% of those facing an upward comparison to professional
designers chose to enter the contest compared to 32% of those facing a comparison to the other
undergraduates. When the contest offered no repair opportunity, the type of comparison had no
significant influence on the contest entry decision (professional: 33% vs. amateur: 38%).
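The entry model is a logistic regression with two categorical factors and their interaction. The sketch below uses statsmodels' formula interface; the variable names and the simulated entry decisions are ours, chosen only to roughly mimic the reported pattern.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 96
df = pd.DataFrame({
    "contest": rng.choice(["repair", "no_repair"], size=n),
    "comparison": rng.choice(["professional", "amateur"], size=n),
})
# Simulated entry decisions that loosely follow the percentages reported in the text
p_enter = np.where((df["contest"] == "repair") & (df["comparison"] == "professional"), 0.62, 0.35)
df["entered"] = rng.binomial(1, p_enter)

# Contest type, comparison target, and their interaction as predictors of entry
model = smf.logit("entered ~ C(contest) * C(comparison)", data=df).fit(disp=0)
print(model.summary())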
---- Insert figure 2 about here ----
Two-way ANOVAs were used to test the remaining hypotheses, with the timing of the contest and the decision to enter the contest serving as the independent variables. One-way ANOVAs were used for comparisons to the control.
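Each two-way ANOVA can be expressed as an ordinary least squares model with contest type, entry decision, and their interaction, summarized with an ANOVA table. Again, this is a sketch with hypothetical variable names and placeholder data rather than the authors' code.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 96
df = pd.DataFrame({
    "contest": rng.choice(["repair", "no_repair"], size=n),
    "entered": rng.choice(["entrant", "non_entrant"], size=n),
})
df["evaluation"] = rng.normal(27, 5, size=n)   # placeholder outcome on the summed 5-35 scale

# 2 (contest type) x 2 (entry decision) between-subjects ANOVA with interaction
fit = smf.ols("evaluation ~ C(contest) * C(entered)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))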
Perceived Threat Presented by the Competition. The results revealed a main effect of the entry decision on the perceived toughness of the competition, with entrants perceiving a greater threat than non-entrants (M Entrants = 6.60 vs. M Non-Entrants = 5.27; F(1, 92) = 8.53, p < .01). This main effect, however, was qualified by a significant interaction (F(1, 92) = 4.48, p < .05). Consistent with hypothesis 5, entrants in the contest with a repair opportunity reported that the competition would be tougher than did non-entrants (M Entrants = 7.1 vs. M Non-Entrants = 4.9; F(1, 92) = 10.70, p < .01); when the contest did not offer a repair opportunity, no differences emerged as a result of the entry decision (M Entrants = 6.06 vs. M Non-Entrants = 5.57, F(1, 92) = .9, NS).
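The simple-effects contrasts reported in this section (entrants vs. non-entrants within a given contest type) appear to be tested against the full model's error term; a rough approximation, shown only for illustration with hypothetical data, is to compare the two groups within each contest condition:

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "contest": rng.choice(["repair", "no_repair"], size=96),
    "entered": rng.choice([0, 1], size=96),
    "threat": rng.normal(6, 2, size=96),
})

# Within each contest type, compare entrants to non-entrants on perceived threat.
# (The reported contrasts use the pooled error term from the two-way model; these
# subgroup t-tests are only an approximate stand-in.)
for contest, group in df.groupby("contest"):
    res = stats.ttest_ind(group.loc[group["entered"] == 1, "threat"],
                          group.loc[group["entered"] == 0, "threat"])
    print(f"{contest}: t = {res.statistic:.2f}, p = {res.pvalue:.3f}")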
Fit of the Self-Designed Skins. As predicted by hypothesis 6, only a main effect of the entry decision emerged in this analysis. Entrants reported a higher perceived fit than non-entrants (M Entrants = 14.8 vs. M Non-Entrants = 11.7; F(1, 92) = 7.64, p < .01). Neither entrants nor non-entrants reported a perceived fit significantly different from that reported by the control group (M Control = 13.00; both Fs(1, 119) < 1.30, p > .10).
Evaluations of the Self-Designed Skins. As predicted by hypothesis 7a, a main effect of the entry decision emerged. As with fit, entrants reported higher evaluations than non-entrants (M Entrants = 29.8 vs. M Non-Entrants = 25.8; F(1, 92) = 16.13, p < .01). Unlike fit, however, this main effect was qualified by a significant interaction (F(1, 92) = 6.65, p < .01). Consistent with hypothesis 7b, the advantage was larger for entrants in the contest with a repair opportunity than for those in the contest without one (M Repair Entrants = 29.9 vs. M Repair Non-Entrants = 23.5, F(1, 92) = 12.01, p < .01; M No Repair Entrants = 29.6 vs. M No Repair Non-Entrants = 27.1, F(1, 92) = .38, NS). Importantly, when fit was subtracted from overall evaluations, the interaction remained significant (F(1, 92) = 4.50, p < .01), and the contrasts followed a similar pattern (M Repair Entrants = 15.5 vs. M Repair Non-Entrants = 12.2, F(1, 92) = 5.71, p < .01; M No Repair Entrants = 14.4 vs. M No Repair Non-Entrants = 14.5, F(1, 92) = .28, NS). We attribute this net effect to the influence of self-regard on evaluations. A comparison to the evaluations of the control group also provides evidence that the opportunity to enter a contest with a repair opportunity can significantly influence self-evaluations. Participants who entered such a contest reported higher evaluations than control participants (M Repair Entrants = 29.9 vs. M Control = 26.6, F(1, 119) = 6.95, p < .01), while those who chose not to enter reported lower evaluations (M Repair Non-Entrants = 23.5 vs. M Control = 26.6, F(1, 119) = 4.44, p < .05). No significant differences from the control group emerged for participants given the opportunity to enter a contest without a repair opportunity (see figure 3a).
---- Insert figure 3 about here ----
Willingness to Pay. A significant main effect of the entry decision (F(1, 92) = 5.63, p < .05) and a marginally significant interaction (F(1, 92) = 3.34, p < .10) emerged from this analysis. Consistent with hypothesis 8a, contest entrants reported a higher willingness to pay than non-entrants (M Entrants = $10.14 vs. M Non-Entrants = $7.50). Consistent with hypothesis 8b,
this effect was stronger when the contest offered a repair opportunity. In this case, entrants were willing to pay over $4.00 more for their designs than non-entrants (M Repair Entrants = $11.00 vs. M Repair Non-Entrants = $6.79; F(1, 92) = 7.44, p < .01). Considering that each participant’s coupon code carried a known face value of $10.00, the magnitude of this effect is rather surprising. No significant differences were apparent when the contest did not offer a repair opportunity (M No Repair Entrants = $9.00 vs. M No Repair Non-Entrants = $8.23, F(1, 92) = .42, NS). Only participants who chose to enter the contest with a repair opportunity were willing to pay an amount significantly different from that of the control group (M Repair Entrants = $11.00 vs. M Control = $7.52; F(1, 119) = 5.20, p < .05; see figure 3b).
Time. A main effect of contest entry emerged in this analysis, with entrants spending more than seven minutes longer on the task than non-entrants, consistent with hypothesis 9a (M Entrants = 41.9 vs. M Non-Entrants = 34.3; F(1, 84) = 7.47, p < .01). While the interaction was not significant, an inspection of the means reveals a pattern consistent with hypothesis 9b. Those entrants for whom the contest provided a means for repair spent over 11 minutes longer on the task than their non-entrant counterparts (M Repair Entrants = 43.8 vs. M Repair Non-Entrants = 32.3, F(1, 84) = 6.29, p < .01) and also significantly more time than those in the control group (M Control = 34.0, F(1, 109) = 5.67, p < .05). That premium fell to under four minutes for those in the contest without the repair opportunity (M No Repair Entrants = 39.7 vs. M No Repair Non-Entrants = 35.8, F(1, 84) = .99, NS) and was no different from the control (see figure 3c).
Judges’ Ratings of Performance. As predicted by hypothesis 10, only a main effect of the contest entry decision significantly influenced the judges’ evaluations. Judges reported higher evaluations of the designs produced by contest entrants than of those produced by non-entrants (M Entrants = 17.4 vs. M Non-Entrants = 14.2, F(1, 92) = 5.05, p < .05) and by those in the control group (M Entrants = 17.4 vs. M Control = 14.4, F(1, 119) = 3.72, p < .05).
Recall that a separate set of judges rated the perceived effort expended by each skin’s
designer. No explicit prediction was made for this measure, but it provided a way to triangulate
on the “effort” construct indicated by time spent on the task. As with time, only a main effect of
the contest entry decision emerged, with entrants judged as having expended more effort than
both non-entrants (M Entrants = 9.2 vs. M Non-Entrants = 6.9, F(1, 92) = 6.01, p < .05) and control
group participants (M Control = 7.2, F(1, 119) = 4.30, p < .05).
Satisfaction with the Skin. Surprisingly, only 61% of the participants picked up their skins, leaving a much smaller sample for this measure. To determine whether this decision was also influenced by the timing of the contest and the entry decision, a logistic regression was used with the two factors as predictors. A significant interaction emerged (χ² (1, 92) = 3.98, p < .05). Ninety percent of participants who entered the contest with the repair opportunity picked up their skins, compared to 50% of the non-entrants and 52% of the control group. In the contest without a repair opportunity, 68% of the entrants collected their skins compared to 54% of the non-entrants. Taking the opportunity to repair one’s self-regard seemingly increased
attachment to the self-designed product. Interestingly, this effect did not carry through to the
satisfaction ratings. Only the main effect of the contest entry decision influenced satisfaction,
with entrants reporting higher levels of satisfaction at pick-up (M Entrants = 21.7 vs. M Non-Entrants =
19.1, F(1, 55) = 4.64, p < .05). Entrants’ satisfaction ratings were no different from those
reported by the control group (M Control = 21.6).
Discussion
Theoretically, this study extends recent work in social comparison theory by
demonstrating that opportunities to repair one’s self-regard, when taken, do result in higher
evaluations of self-designed products. While Johnson and Stapel (2007) demonstrated the link
between a comparison threat, diminished self-evaluations, and subsequent effort in a repair task,
they neither provided participants with the option of undertaking the task nor measured self-evaluations following it. Thus, this study extends the limited work examining the connection
between behavioral outcomes and self-evaluations following a social comparison threat.
Managerially, this study provides more insight into the success of firms such as Threadless, which host design contests that reward consumers for their creativity and design ability. As the summary results in table 1 indicate, the mere announcement of a contest does little to enhance or detract from consumers’ self-design evaluations, experience, and satisfaction. However, when the contest allows consumers to work towards the goal of winning, contest entrants report higher self-evaluations, are willing to pay more for the product, and spend more time involved in the design task than they would have if no contest had been announced.
The results here reveal that participants facing an upward comparison target were more likely to enter the contest, but only when the contest provided an opportunity to repair their threatened self-regard. The heightened threat resulted from comparisons on dimensions important to the person; in this case, those dimensions were likely design skill and creative ability. This finding is therefore useful in segmenting the self-design market. Specifically, the people who enter contests offering repair opportunities are likely those for whom design skill and creative ability are important; coincidentally, these people are also likely to be attractive members of the target market. Contests without a repair opportunity may be unable to discriminate on these important dimensions.
GENERAL DISCUSSION
In 2006, TIME magazine named “you” its Person of the Year “for seizing the reins of the
global media, for founding and framing the new digital democracy, and for working for nothing
and beating the pros at their own game” (Grossman 2006). As consumers increasingly
assume roles traditionally performed by professionals, understanding the factors that influence
them is critical for managers whose goals are to identify these consumers, design marketing
strategies to communicate with them, and develop new interfaces to optimize their experiences.
This research focuses on one arena in which such a role shift is occurring: self-design.
Theoretical Contributions and Opportunities for Future Research
The vast majority of research in consumer behavior has examined how consumers
respond to products offered on a “take it or leave it” basis by the manufacturer. Self-design
changes the rules substantially, allowing consumers to play a much more active role in the
process. Together, these three studies demonstrate that superior fit alone does not explain
consumers’ dominant preferences for self-designed products. More subtle factors, such as social
comparisons to other designers, influence consumers’ evaluations of their customized products.
The first two studies indicate that upward comparisons to professionals tend to be
processed non-defensively, resulting in lower evaluations of self-designed products. Providing
guidance and prompting defensive processing are both effective ways to diminish the influence
of the negative information generated by these upward comparisons. The third study
demonstrates that when defensive processing is prompted (e.g., by an explicit competition), the
upward comparison can actually enhance evaluations of self-designed products when
accompanied by an opportunity to repair self-regard. It does so by increasing participation in the
repair opportunity, which subsequently leads to higher evaluations of self-designed products,
greater willingness to pay, and higher long-run satisfaction. Ratings from independent judges
confirmed these effects.
The theoretical opportunities in this growth area are abundant. A broader understanding
of consumers’ motivation to self-design is needed. Factors related to the product, to the brand,
and to the immediate situation are also likely to play a significant role in determining whether
consumers will choose to assume the designer role. Gaining this knowledge would further our
theoretical understanding of consumer creativity and would also provide much-needed
segmentation information for managers.
Further research is also needed to understand the influence that social factors and group dynamics have on both the design process itself and on subsequent evaluations of the customized product. Self-design communities are enjoying tremendous growth, and consumers working together play a significant role in both the design and the evaluation of the products sold in these commercial venues. Product-related and brand-related factors are also likely to be important aspects of self-design to examine in the future. Whether the product is intended for largely private or public consumption, whether the product is largely hedonic or functional, and the nature of the customizable components themselves may all significantly influence consumers’ motivation to self-design, their self-design experience, and their satisfaction with the outcome.
Managerial Implications
The findings presented here have implications for companies in the process of
determining whether and how to allow their customers to participate in self-design. As the
results demonstrate, consumers will compare their design skills to those of a relevant comparison
designer when evaluating their own designs. The extent to which the company can manage the
type of information that results from that comparison may influence consumers’ ultimate
satisfaction with both the self-design experience and the outcome. Our results show that companies may shape the nature of the comparison information by controlling either the comparison target or the guidance available, and that they may influence the way in which consumers process that information by facilitating defensive or non-defensive strategies.
Limitations
Because our studies were intentionally constrained to customization decisions requiring limited functional knowledge and technical expertise, the generalizability of the findings is limited. Future work could relax these constraints to better understand their influence on consumers’ decisions to customize, their perceptions of the process, and their ultimate evaluation of and satisfaction with the outcome. A comprehensive examination of consumers’ motivations to engage in self-design would also help overcome the limitations of the student sample. Overall, our hope is that the current work will generate additional interest and research to better understand why consumers are choosing to take on new roles in the value chain and, when they do, how those new roles affect both them and the broader market.
APPENDIX A:
The Default Backpack for Studies 1 & 2
Self-Design Task
Colors: Black, Red, Navy Blue, Green, Purple, Yellow, Orange, Pink, Brown, Violet, Light Pink, Sky Blue, Dark Green, Maroon, Cream, Periwinkle, Army Green, Light Blue, Gray, White
APPENDIX B:
Procedure for Study 2
APPENDIX C:
Self-Designed Skins (Study 3)
FIGURE 1:
The Influence of Comparison Target and Relevance on Evaluations (Study 2)
a) Evaluations of the Default Backpack, plotted by relevance of the comparison target (high vs. low relevance) for professional and amateur comparison targets.
b) Evaluations of the Self-Designed Backpacks, plotted by relevance of the comparison target (high vs. low relevance) for professional and amateur comparison targets.
(Plotted values: 27.6, 27.2, 24.7, and 24.1 in panel a; 38.1, 35.2, 34.8, and 31.8 in panel b.)
FIGURE 2:
The Influence of Social Comparison Target and Type of Contest
on the Contest Entry Decision (Study 3)
Percentage entering the contest, plotted by type of contest (repair opportunity vs. no repair opportunity) and comparison target (professional vs. amateur). With a repair opportunity: professional 62%, amateur 32%; without a repair opportunity: professional 33%, amateur 38%.
FIGURE 3:
The Influence of Type of Contest and Contest Entry Decision
on Evaluations, Willingness to Pay, and Time Spent on Self-Designed Products (Study 3)
a) Evaluations of the self-designed products: control = 26.6; repair entrants = 29.9; repair non-entrants = 23.5; no-repair entrants = 29.6; no-repair non-entrants = 27.1.
b) Willingness to pay for the self-designed products: control = $7.52; repair entrants = $11.00; repair non-entrants = $6.79; no-repair entrants = $9.00; no-repair non-entrants = $8.23.
c) Total time spent (in minutes) on the self-design task: control = 34.0; repair entrants = 43.8; repair non-entrants = 32.3; no-repair entrants = 39.7; no-repair non-entrants = 35.8.
TABLE 1
Study 3 Results:
Means and Standard Deviations by Condition (Collapsed Across Contest Entry Decision)
Dependent Measure             Control         Professional      Amateur           Professional      Amateur
                                              (With Repair)     (With Repair)     (No Repair)       (No Repair)
-------------------------------------------------------------------------------------------------------------
% Entering the Contest        n/a             62%a (.50)        32% (.48)         33% (.47)         38% (.49)
Perceived Threat              n/a             6.1 (2.4)         5.8 (1.9)         5.9 (1.9)         5.5 (2.1)
Fit                           13.0 (4.6)      12.2 (5.7)        12.1 (4.0)        13.8 (4.3)        12.8 (4.8)
Evaluation of Self-Design     26.6 (4.7)      27.5 (5.1)        25.6 (6.7)        27.8 (5.2)        27.5 (5.6)
Willingness to Pay            $7.52 (5.2)     $9.58 (6.7)       $8.09 (5.0)       $8.65 (4.9)       $7.95 (4.0)
Time (minutes)                34.0 (11.8)     41.9 (15.2)       33.8 (13.4)       38.9 (12.8)       35.4 (9.5)
Judged Evaluation             14.4 (5.1)      16.4 (5.7)        14.0 (6.1)        15.7 (5.4)        15.0 (7.4)
Judged Effort                 7.2 (3.5)       8.5 (3.8)         7.5 (3.9)         7.9 (3.4)         6.5 (3.5)
% Picking Up Their Product    52% (.51)       64% (.49)         68% (.48)         58% (.50)         64% (.49)
Satisfaction                  21.6 (4.4)      18.9b (2.7)       21.6 (4.2)        21.2 (5.3)        19.8 (4.9)

NOTE: Standard deviations appear in parentheses.
a Different from all other conditions, controlling for contest entry decision (p < .05).
b Different from all other conditions except amateur/no repair, controlling for contest entry decision (p < .01).
REFERENCES
Argo, Jennifer J., Katherine White, and Darren W. Dahl (2006), "Social Comparison Theory and
Deception in the Interpersonal Exchange of Consumption Information," Journal of
Consumer Research, 33 (1), 99-108.
Belk, Russell W. (1988), "Possessions and the Extended Self," Journal of Consumer Research,
15 (2), 139-168.
Berger, Jonah and Chip Heath (2007), "Where Consumers Diverge from Others: Identity
Signaling and Product Domains," Journal of Consumer Research, 34 (2), 121-34.
Blanton, Hart, Frederick X. Gibbons, Bram P. Buunk, and Hans Kuyper (1999), “When Better-Than-Others Compare Upward: Choice of Comparison and Comparative Evaluation as
Independent Predictors of Academic Performance,” Journal of Personality & Social
Psychology, 76 (3), 420-30.
Chafkin, Max (2008), "The Customer Is the Company," Inc. Magazine, June.
Csikszentmihalyi, Mihaly and T. J. Figurski (1982), “Self-Awareness and Aversive Experience in Everyday Life,” Journal of Personality, 50, 15-28.
Dahl, Darren W. and C. Page Moreau (2007), "Thinking Inside the Box: Why Consumers Enjoy
Constrained Creative Experiences," Journal of Marketing Research, 44 (3), 357-69.
Dellaert, Benedict G. C. and Stefan Stremersch (2005), "Marketing Mass-Customized Products:
Striking a Balance between Utility and Complexity," Journal of Marketing Research, 42
(2), 219-27.
Deng, Xiaoyan and J. Wesley Hutchinson (2008), "Just Do It Yourself: The Roles of Outcome
Accuracy, Process Affect, and Pride of Authorship in the Evaluation of Self-Designed
Products,” Unpublished Manuscript.
Festinger, Leon (1954), "A Theory of Social Comparison Processes," Human Relations, 7 (2),
117-40.
Franke, Nikolaus and Frank Piller (2004), "Value Creation by Toolkits for User Innovation and
Design: The Case of the Watch Market," Journal of Product Innovation Management, 21
(6), 401-15.
Franke, Nikolaus and Heribert Reisinger (2003), “Remaining Within-Cluster Variance:
A Meta-Analysis of the Dark Side of Cluster Analysis,” Working Paper, Vienna
University of Economics and Business Administration.
Grossman, Lev (2006), “Time’s Person of the Year: You,” TIME, December 13.
GTM White Paper (2007), “INTERACTivism™ How To Reach A Mass Market When
Individual Customization Rules,” April 12, 2007.
Guay, F., H. W. Marsh, and M. Boivin (2003), “Academic Self-Concept and Academic Achievement: A Developmental Perspective on Their Causal Ordering,” Journal of Educational Psychology, 95 (1), 124-36.
Heim, Allison, Hope Jensen Schau, and Linda L. Price (2008), “Threadless: Co-Creating a Brand
Through Distinction and Democracy,” Consumer Culture Theory Conference, June 19-22, Boston, MA.
Humphreys, Ashlee and Kent Grayson (2008), “The Intersecting Roles of Consumer and
Producer: Contemporary Criticisms and New Analytic Directions,” Sociology Compass,
2, 1-18.
Johnson, Camille S. and Diederik A. Stapel (2007), "No Pain, No Gain: The Conditions under
Which Upward Comparisons Lead to Better Performance," Journal of Personality &
Social Psychology, 92 (6), 1051-67.
Klein, William M. (1997), "Objective Standards Are Not Enough: Affective, Self-Evaluative,
and Behavioral Responses to Social Comparison Information," Journal of Personality
and Social Psychology, 72 (4), 763-74.
Kunda, Ziva (1990), "The Case for Motivated Reasoning," Psychological Bulletin, 108 (3), 480-98.
Lockwood, Penelope and Rebecca T. Pinkus (2008), “The Impact of Social Comparisons on
Motivation,” in Handbook of Motivation Science, ed. James Y. Shah and Wendi L.
Gardner, New York, NY: Guilford Press, 251-64.
Mussweiler, Thomas, Shira Gabriel, and Galen V. Bodenhausen (2000), "Shifting Social
Identities as a Strategy for Deflecting Threatening Social Comparisons," Journal of
Personality and Social Psychology, 79 (3), 398-409.
Mussweiler, Thomas and Katja Ruter (2003), "What Friends Are For! The Use of Routine
Standards in Social Comparison," Journal of Personality and Social Psychology, 85 (3),
467-81.
Mussweiler, Thomas, Katja Ruter, and Kai Epstude (2004), "The Man Who Wasn't There:
Subliminal Social Comparison Standards Influence Self-Evaluation," Journal of
Experimental Social Psychology, 40 (5), 689-96.
Norman, Donald A. (2004), Emotional Design: Why We Love (or Hate) Everyday Things,
New York, NY: Basic Books.
Ogawa, Susumu and Frank T. Piller (2006), "Reducing the Risks of New Product Development,"
MIT Sloan Management Review, 47 (2), 65-71.
Randall, Taylor, Christian Terwiesch, and Karl T. Ulrich (2005), "Principles for User Design of
Customized Products," California Management Review, 47 (4), 68-85.
Randall, Taylor, Christian Terwiesch, and Karl T. Ulrich (2007), "User Design of Customized
Products," Marketing Science, 26 (2), 268-80.
Scelfo, Julie (2009), “Fabric: You Design It, They Print It,” New York Times, January 8, D3.
Schwinghammer, Saskia A., Diederik A. Stapel, and Hart Blanton (2006), "Different Selves
Have Different Effects: Self-Activation and Defensive Social Comparison," Personality
and Social Psychology Bulletin, 32 (1), 27-39.
Simonson, Itamar (2005), "Determinants of Customers' Responses to Customized Offers:
Conceptual Framework and Research Propositions," Journal of Marketing, 69 (1), 32-45.
Simonson, Itamar (2008), "Will I Like a 'Medium' Pillow? Another Look at Constructed and Inherent Preferences," Journal of Consumer Psychology, 18 (3), 155-69.
Stapel, Diederik A. and Willem Koomen (2001), "I, We, and the Effects of Others on Me: How
Self-Construal Level Moderates Social Comparison Effects," Journal of Personality and
Social Psychology, 80 (5), 766-81.
Taylor, Shelley E. and Marci Lobel (1989), “Social Comparison Activity under Threat:
Downward Evaluation and Upward Contacts,” Psychological Review, 96 (4), 569-75.
Tesser, Abraham (1988), "Toward a Self-Evaluation Maintenance Model of Social Behavior," in
Advances in Experimental Social Psychology, Vol. 21, ed. Leonard Berkowitz, San
Diego, CA: Academic Press, 181-227.
Tesser, Abraham, Murray Millar, and Janet Moore (1988), "Some Affective Consequences of
Social Comparison and Reflection Processes: The Pain and Pleasure of Being Close,"
Journal of Personality and Social Psychology, 54 (1), 49-61.
Tian, Kelly Tepper, William O. Bearden, and Gary L. Hunter (2001), "Consumers' Need for
Uniqueness: Scale Development and Validation," Journal of Consumer Research, 28 (1),
50-66.
Toubia, Olivier (2008), “New Product Development,” Handbook of Technology Management,
forthcoming.
Wood, Joanne V. (1989), "Theory and Research Concerning Social Comparisons of Personal
Attributes,” Psychological Bulletin, 106 (2), 231-48.