Article in Journal of Consumer Research · June 2001 · DOI: 10.1086/321954
Reflections and Reviews
Opportunities for Improving Consumer
Research through Latent Variable Structural
Equation Modeling
SCOTT B. MACKENZIE*
This article discusses several advantages of latent variable structural equation
modeling (LVSEM), and the potential it has for solving some fundamental problems
hindering research in the field. The advantages highlighted include the ability to
control for measurement error; an enhanced ability to test the effects of experimental manipulations; the ability to test complex theoretical structures; the ability
to link micro and macro perspectives; and more powerful ways to assess measure
reliability and validity. My hope is to sensitize researchers to some of the key
limitations of currently used alternative methodologies, and demonstrate how
LVSEM can help to improve theory testing and development in our discipline.
More than 25 years ago, Jöreskog, Keesling, and Wiley
(working independently) made a major breakthrough
in the field of multivariate analysis when they developed a
general framework for the analysis of covariance structures
that integrated psychometric factor analytic models with
econometric structural equation models, and permitted latent
variables to be related to each other in a simultaneous equations fashion. A few years later, the impact of this breakthrough began to be felt in the field of consumer research,
spurred on by the development of the first computer program
to implement this general method (LISREL) and the publication of Bagozzi’s (1980) influential book Causal Models
in Marketing.
Although at least some of the initial interest in latent
variable structural equation modeling (LVSEM) was driven
by the myth that it had a unique capacity to extract information about causal relationships from correlational data,
its greatest value lies in its potential to improve theory development and testing in the field by changing the way we
think about things. Because LVSEM requires us to explicitly
specify measurement relationships as well as structural relationships, it forces us to think more carefully about the
relationships between our constructs and measures, and to
recognize these relationships as hypotheses in their own
right. Because it provides a ready means of estimating and
testing complex theoretical structures, it encourages us to
broaden the scope of our theoretical models by thinking in
terms of entire systems of conceptual relationships that better represent the complex environments to which we hope
our theories apply. And finally, by making it easier to rigorously test mediating processes, investigate the systematic
effects of nonhypothesized factors (e.g., methodological or
theoretical) on hypothesized relationships, and compare hierarchically related theoretical structures, it encourages us
to pit one theory/model/process against another to advance
theoretical knowledge in the field. Thus, as noted by Bagozzi
(1980), LVSEM offers the discipline a unified, scientifically
precise approach for developing and testing theory that integrates metatheoretical criteria with methodological considerations better than any other method currently available.
*Scott B. MacKenzie is the IU Foundation Professor and professor of
marketing at the Kelley School of Business at Indiana University, Bloomington, IN 47405 (mackenz@indiana.edu). He received a B.A. in psychology (1976), an M.B.A. in marketing (1978), and a Ph.D. in marketing
(1983) from the University of California, Los Angeles. He is a former
winner of the Maynard Award from the Journal of Marketing; has chaired
the American Marketing Association (AMA) Summer Educators’, AMA
Winter Educators’, and Society for Consumer Psychology conferences; and
currently serves on the editorial boards of the Journal of Consumer Research, Journal of Consumer Psychology, Journal of Marketing Research,
Journal of Marketing, and Journal of the Academy of Marketing Science.
© 2001 by JOURNAL OF CONSUMER RESEARCH, Inc. • Vol. 28 • June 2001
All rights reserved. 0093-5301/2002/2801-0011$03.00
But the diffusion of these ideas into the field has been
slow, and some of the most powerful capabilities of LVSEM
are not being used in consumer research. It has been more
than 20 years since LVSEM was first introduced to the field,
yet since then only about 6 percent of the papers published
in the Journal of Consumer Research have tested latent
variable structural equation models. Not only is this rate
lower than for Journal of Marketing or Journal of Marketing
Research, but it shows no signs of increasing. This low rate
of usage is both surprising and disturbing, because LVSEM
is particularly well suited for testing the kinds of complex
systems of conceptual relationships often specified by theories in our discipline. Although there are several potential
reasons why LVSEM is not used more widely in our discipline, one may be that researchers do not fully appreciate
its benefits. Consequently, this article discusses several major advantages of LVSEM, and the potential it offers for
advancing research in the field. My goal is to sensitize researchers to some of the key limitations of currently used
alternative methodologies and stimulate them to rethink how
they can improve theory testing and development in our
discipline via LVSEM.
ADVANTAGES OF LATENT VARIABLE
STRUCTURAL EQUATION MODELING
The Ability to Control for Measurement Error
Perhaps the most important advantage of LVSEM methods is the ability to take measurement error into account.
This is important because most measures used in consumer
research reflect not only the construct they are intended to
represent, but also random and systematic measurement error. Random error is due to the inherent difficulties in accurately measuring abstract constructs, and systematic error
can be due to contaminating/confounding factors (e.g., nonhypothesized constructs, social desirability, self-generated
validity, or implicit theories), common method factors (e.g.,
scale type, rater, or context), response biases (e.g., leniency,
yea-saying, or nay-saying), or anything other than the hypothesized construct that has a systematic effect on the construct measures. Both types of measurement error threaten
the validity of consumer research findings, and they are an
especially serious concern whenever the constructs of interest are abstract or difficult to measure—which is more
often than we tend to admit.
It is widely recognized that measurement error causes two
problems when examining the relationship between a predictor and a criterion variable in a simple regression. First,
measurement error in the predictor variable artificially attenuates the estimate of the slope of the relationship between
the predictor and criterion variable. Second, measurement
error in either the predictor or the criterion variable artificially reduces the proportion of variance in the criterion
variable accounted for by the predictor. Thus, measurement
error increases the probability of type II errors when testing
hypotheses and may be one of the main reasons why the
proportion of variance accounted for is usually quite low in
consumer research studies (see, e.g., Peterson, Albaum, and
Beltramini 1985).
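The attenuation mechanism is easy to demonstrate numerically. The following sketch (the reliability of .70, the true slope of 1.0, and all other values are illustrative assumptions, not figures from this article) regresses a criterion on a fallible measure of the predictor:

```python
import numpy as np

# Illustrative simulation of slope attenuation by random measurement error.
rng = np.random.default_rng(0)
n = 100_000
true_slope = 1.0
reliability = 0.70

x_true = rng.normal(size=n)                   # latent predictor, variance 1
error_var = (1 - reliability) / reliability   # chosen so var(true)/var(observed) = 0.70
x_obs = x_true + rng.normal(scale=np.sqrt(error_var), size=n)
y = true_slope * x_true + rng.normal(size=n)  # criterion depends on the *true* score

b_obs = np.polyfit(x_obs, y, 1)[0]            # OLS slope on the fallible measure
print(b_obs)                                  # ≈ true_slope * reliability = 0.70
```

The expected slope is the true slope times the predictor's reliability, so with reliability .70 the estimate shrinks from 1.0 toward .70, which is exactly the type II error mechanism described above.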
However, the problems caused by measurement error get
even worse when other predictor variables are added to the
model (e.g., in the case of multiple regression) and when
the model is placed into a larger theoretical context (e.g.,
in the case of multiple equation systems). Although the slope
coefficient is always attenuated by measurement error in a
simple regression, this generalization does not hold in multiple regression or multiple equation systems. Measurement
error in even a single predictor variable can inflate or deflate
any/all of the other regression coefficients in a multiple
regression equation (depending upon the reliability of the
measures, and the magnitudes and signs of the correlations
among the predictors). In fact, in multiple equation systems,
measurement error can inflate or deflate not only estimates
of the relations between the predictor and criterion constructs, but also estimates of the relations among the criterion
constructs, and the structural error terms. Thus, estimated
coefficients from multiple regression and multiple equation
models that ignore measurement error can be higher or lower
than the true coefficients, thus leading to both type I and
type II errors.
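A small population-level calculation makes the multiple-regression problem concrete. In this illustrative setup (all values assumed, not taken from the article), the first predictor is measured with reliability .60, the second is error-free, and the two true scores correlate .50; solving the normal equations shows the first coefficient attenuated and the second inflated, even though the second predictor contains no error at all:

```python
import numpy as np

# Population covariance algebra with assumed values: true coefficients are
# 1.0 and 0.5; predictor 1 has reliability .60; predictor 2 is error-free.
rel, rho = 0.60, 0.50
beta = np.array([1.0, 0.5])

S = np.array([[1 / rel, rho],        # covariance matrix of the observed predictors
              [rho,     1.0]])       # (unit-variance true scores; error inflates var(x1))
c = np.array([beta[0] + rho * beta[1],    # cov(x1, y): the error in x1 is unrelated to y
              rho * beta[0] + beta[1]])   # cov(x2, y)

b = np.linalg.solve(S, c)
print(b)   # ≈ [0.53, 0.74]: coefficient 1 attenuated, coefficient 2 inflated
```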
At a minimum, this suggests that the potential consequences of ignoring random or systematic measurement error can be quite serious. Just how serious is illustrated by
Cote and Buckley (1987), who calculated the average amount
of trait, method (i.e., systematic), and random error variance
present in 70 multitrait, multimethod studies. They found
that in marketing studies the measures averaged approximately 68 percent trait variance, 16 percent systematic measurement error, and 16 percent random measurement error;
and in the psychological/sociological research studies the
measures contained about 36 percent trait variance, 29 percent systematic measurement error, and 35 percent random
measurement error. The figures for a typical JCR study
would probably be somewhere in between. Thus, approximately one-third to two-thirds of the variance in a typical
consumer research measure is due to measurement error.
As noted by Cote and Buckley (1988), the impact of this
much measurement error on estimates of the relationships
between constructs can be easily calculated. Their analysis
indicates that on average, the true relationship between two
constructs is typically 2.4 times greater than the estimated
relationship between the constructs’ imperfect measures.
Obviously, an underestimation of this magnitude in a simple
regression would have a huge effect on the type II error
rate. Moreover, because the amount of measurement error
varies depending on the type of construct being studied
(perhaps due to the difficulty of measurement or abstractness), and by discipline (perhaps due to differences in the
emphasis on construct validation and measurement procedures), so will the extent of the bias. Thus, the bias for some
types of consumer research is even greater than the overall
average they cite. For example, it can be shown that the
bias factor for psychological/sociological research is 2.8, for
attitude research is 3.4, and for personality/individual difference research is 2.6. These bias estimates suggest that
the common practice of ignoring measurement error is having a substantial (but largely unnoticed) detrimental impact
on research in the field.
Finally, it is important to remember that the biasing effect
of measurement error can increase as well as decrease the
observed relationships. Table 1 uses Cote and Buckley’s
(1987) estimates of the average amount of trait, method,
and random error variance, and the average method intercorrelation, to calculate the impact of measurement error on
the observed correlation between measures of different types
of constructs (e.g., attitude, personality, aptitude, and job
performance/satisfaction). For example, the entry in the first
column of the first row indicates that even though two attitude constructs are perfectly correlated, the observed correlation between their measures will be only .524 because
of measurement error. Similarly, the entry in the last column
of the first row indicates that even though two attitude constructs are completely uncorrelated, the observed correlation
between their measures will be .226 because of measurement
error. Both of these numbers are depressing, but for different
reasons. The entries in the first column are depressing because they show that even though two traits are perfectly
correlated, typical levels of measurement error will cut the
observed correlation between their measures in half. The
last column is depressing because it shows that even when
two constructs are completely uncorrelated, measurement
error will inflate the observed correlation between their measures causing it to be greater than zero. Indeed, these numbers are rather scary because some of them are not very
different from the effect sizes often reported in JCR.
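The logic behind entries like these can be sketched with a simple composition formula: when two standardized measures each mix trait variance with correlated method variance, the observed correlation blends the true correlation with the method correlation. The variance shares below are illustrative, not Cote and Buckley's exact figures:

```python
# Observed correlation between two standardized measures, each composed of
# trait, method, and random-error variance. The shares (50 percent trait,
# 25 percent method) and the method correlation (.9) are assumed values.
trait_var, method_var, r_method = 0.50, 0.25, 0.90

def observed_r(r_true):
    # shared trait variance transmits the true correlation;
    # shared method variance adds a spurious component
    return trait_var * r_true + method_var * r_method

print(observed_r(1.0))   # ≈ 0.725: perfectly correlated traits look far weaker
print(observed_r(0.0))   # ≈ 0.225: uncorrelated traits look spuriously related
```

The same formula produces both pathologies at once: attenuation when the true correlation is high, and inflation when it is zero.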
Fortunately, the biasing effects of measurement error can
be controlled in several ways. Random error can be statistically controlled by developing multiple measures of constructs and using LVSEM to test the hypothesized relationships among them. If multiple measures cannot be obtained,
it is also possible to partially control for random error by
fixing the measurement error term at some reasonable value
based on prior research (e.g., a reliability estimate from
another study could be used if the measure is a scale score)
or theory (e.g., the error in a self-reported measure of gender
is likely to be smaller than the error in a self-reported measure of attractiveness). Systematic error can be controlled
through changes in the design of the study or by modeling
the source of the error through the addition of a first-order
method factor. With the exception of a few multitrait, multimethod studies, JCR articles have not attempted to statistically control for systematic error variance.
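When only a single measure and a prior reliability estimate are available, fixing the error variance at (1 − reliability) times the measure's variance is, for a correlation, equivalent to the classic correction for attenuation. The values below are illustrative:

```python
import math

# Classic disattenuation: divide the observed correlation by the square root
# of the product of the two reliabilities. All values are illustrative.
r_obs = 0.30
rel_x, rel_y = 0.70, 0.80   # e.g., reliability estimates from prior studies

r_corrected = r_obs / math.sqrt(rel_x * rel_y)
print(round(r_corrected, 3))   # ≈ 0.401
```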
Improved Experimental Research
Another advantage of structural equation modeling with
latent variables is that it has the potential to fundamentally
improve experimental research in the field. In part, this is
because measurement error can be taken into account. Just
because a study is done in a lab setting does not guarantee
that the measures are free from error. This is especially true
for the kinds of variables frequently examined in consumer
research experiments (e.g., beliefs, emotions, attitudes, satisfaction, brand loyalty, brand equity, materialism, ethnocentrism, need for cognition, involvement, and product knowledge).

TABLE 1
RELATION BETWEEN "TRUE" AND OBSERVED CORRELATION FOR "AVERAGE" MEASURES BY TYPE OF CONSTRUCT

Observed correlation (R²) at each "true" correlation (R²):

                                                           1.00 (1.00)   .50 (.25)    .00 (.00)
Attitude—attitude                                          .524 (.275)   .375 (.141)  .226 (.051)
Attitude—personality                                       .516 (.266)   .345 (.119)  .175 (.031)
Attitude—aptitude                                          .524 (.275)   .352 (.124)  .180 (.032)
Attitude—job performance/satisfaction                      .507 (.257)   .321 (.103)  .134 (.018)
Personality—personality                                    .526 (.277)   .331 (.110)  .135 (.018)
Personality—aptitude                                       .532 (.283)   .336 (.113)  .139 (.019)
Personality—job performance/satisfaction                   .529 (.280)   .316 (.100)  .103 (.011)
Aptitude—aptitude                                          .539 (.290)   .341 (.116)  .144 (.021)
Aptitude—job performance/satisfaction                      .534 (.285)   .320 (.102)  .106 (.011)
Job performance/satisfaction—job performance/satisfaction  .539 (.290)   .306 (.094)  .074 (.005)

NOTE.—This analysis assumes there are no trait × method interactions and is based on Cote and Buckley's (1987) estimates of the average trait variance, average method variance, and the average method correlations for the two types of constructs.
However, there are other important advantages for experimental research as well. One is that the general framework
is extremely flexible. Indeed, since LVSEM is a generalization
of both regression and factor analysis, it incorporates most
linear modeling methods (including ANOVA and ANCOVA)
as special cases. This means that models that relax some of
the traditional restrictions of ANOVA or ANCOVA can be
estimated. For example, LVSEM can be used to examine the
effect of a treatment manipulation on a dependent variable,
while controlling for covariates that are related to the treatment manipulation and for measurement error. Alternatively,
LVSEM can also be used to estimate carryover effects in
repeated measures designs in a straightforward fashion, while
also taking measurement error into account. These analytical
issues cannot be examined with ANOVA or ANCOVA because such designs violate the assumptions of those models.
Another key advantage is that LVSEM allows more rigorous tests of the hypothesized effects of experimental manipulations. Figure 1, panel A, illustrates this point. As
shown in the figure, most experimental manipulations are
intended to influence some conceptual variable that cannot
be measured or manipulated without error (see e.g., Cook
and Campbell 1979; Sawyer, Lynch, and Brinberg 1995). It
is hypothesized that the treatment manipulation will influence the dependent variable because of the impact of the
manipulation on this particular conceptual independent variable. In other words, the hypothesized (or intended) effect
of the manipulation is represented by paths g11 and b21 in
Figure 1. Path g11 represents the validity of the experimental manipulation, and path b21 represents the impact of the conceptual independent variable on the conceptual dependent variable.

FIGURE 1
TESTING THE HYPOTHESIZED EFFECTS OF EXPERIMENTAL MANIPULATIONS WITH LATENT VARIABLE STRUCTURAL EQUATION MODELING

Path g21 in the figure represents the fact that it is
possible for a manipulation to influence the dependent variable for unintended reasons, which have nothing to do with
the hypothesized effect of the manipulation. Confounds like
this sometimes result from the laudable desire to make the
stimulus manipulations realistic, and sometimes simply from
the fact that a manipulation is often a blunt instrument, and
it is difficult to create treatment conditions that influence
the conceptual independent variable without influencing
anything else.
Traditional ANOVA cannot separate these intended and
unintended effects, but LVSEM can. If the model in Figure
1, panel A, is estimated, the hypothesized effect of the manipulation (i.e., the indirect path g11b21) can be estimated
and tested (cf. Brown 1997) while controlling for the nonhypothesized effects of the manipulation (i.e., path g21). If
the indirect path g11b21 is significant and path g21 is not,
then the entire effect of the manipulation on the dependent
variable is for the hypothesized reason. If the indirect path
g11b21 is significant and the direct path g21 is too, then only
part of the effect of the manipulation on the dependent variable is for the hypothesized reason. In this event, the standardized path coefficients can be examined to determine the
relative strengths of the hypothesized and nonhypothesized
effects. And in the unhappy event that the indirect path g11b21
is nonsignificant while the direct path g21 is significant, it
would indicate that the manipulation influenced the dependent measure for reasons other than that which was hypothesized. It is important to note that the latter outcome
can occur even though the manipulation is shown to have
a significant effect on the manipulation check measures (i.e.,
path g11 is significant).
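The decomposition in panel A can be illustrated with a simplified regression-based sketch (all population path values here are hypothetical; a full LVSEM analysis would additionally correct for measurement error in the mediator and outcome):

```python
import numpy as np

# Simulate a manipulation whose effect on the DV runs partly through the
# intended conceptual IV (the indirect route g11 * b21) and partly through
# a confound (the direct route g21). All path values are hypothetical.
rng = np.random.default_rng(1)
n = 50_000
g11, b21, g21 = 0.8, 0.6, 0.3

T = rng.integers(0, 2, size=n).astype(float)   # treatment dummy
IV = g11 * T + rng.normal(size=n)              # conceptual independent variable
DV = b21 * IV + g21 * T + rng.normal(size=n)   # direct path g21 is the confound

g11_hat = np.polyfit(T, IV, 1)[0]              # manipulation-check regression
X = np.column_stack([np.ones(n), IV, T])       # DV regressed on IV and T
_, b21_hat, g21_hat = np.linalg.lstsq(X, DV, rcond=None)[0]

indirect = g11_hat * b21_hat   # hypothesized route, ≈ g11 * b21 = 0.48
direct = g21_hat               # nonhypothesized route, ≈ 0.30
print(indirect, direct)
```

Both routes are recovered separately, which is exactly the decomposition that a pooled ANOVA treatment effect cannot provide.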
Furthermore, this type of thinking can and should be extended to the case of multiple manipulations and to interactions between manipulations. In the case of multiple main
and interaction effects on the same conceptual independent
variable, the extension is relatively straightforward as shown
in Figure 1, panel B. In this figure, the hypothesized main
and interaction effects of the manipulations on the conceptual dependent variable are represented by the indirect paths
from the manipulation and interaction dummy variables to
the conceptual dependent variable that go through the conceptual independent construct (i.e., paths g11b21, g13b21, and
g12b21). Confounding effects would be represented in this
model by direct paths from the manipulation and interaction
dummy variables to the conceptual dependent variable (not
shown in Fig. 1, panel B).
In the case of multiple main and interaction effects on
different conceptual independent variables, the modeling
gets a bit more complicated. As shown in Figure 1, panel
C, the main effect of each manipulation on the conceptual
dependent measure should be modeled as being mediated
by its effect on the conceptual independent variable it was
intended to influence (paths g11b31 and g23b32). The interaction effect is captured by the indirect effects of the interaction dummy variable on the conceptual dependent variable (paths g12b31 and g22b32). Moreover, with this model,
not only can the hypothesized effects of the manipulations
and their interaction be tested, but so can two types of unintended effects. One type of confound is the effect of the
manipulation on the other conceptual independent variable
measured in the study. This type of confound would be
represented by a direct path from manipulation A to conceptual independent variable IVB and/or a direct path from
manipulation B to conceptual independent variable IVA (neither path is shown in Fig. 1, panel C), and it is the type of
confound that Perdue and Summers (1986) and Cook and
Campbell (1979) have cautioned researchers to check for.
As previously discussed, the nonhypothesized effects of any
unmeasured variables would be represented by direct paths
from the manipulations and interaction to the conceptual
dependent variable (not shown in Fig. 1, panel C).
However, it is important to recognize that if the interaction
is modeled in this way the interaction of the treatment manipulations on the conceptual dependent variable is driven
by the interaction of the treatment manipulations on the
conceptual independent variables. That is not quite the same
as being driven by the interaction between the conceptual
independent variables themselves. Figure 1, panel D, depicts
a way to model the interaction between the conceptual independent variables themselves. In this model, the hypothesized main effect of manipulation A is captured by the
indirect path g11b41, and the hypothesized main effect of
manipulation B is captured by the indirect path g23b42. The
IVA × IVB construct represents the interaction between the
conceptual independent variables, and its indicators are the
cross products of the measures of the conceptual independent variables. The effect of this latent variable interaction
on the conceptual dependent variable is represented by path
b43. Although constraints must be placed on the factor loadings and measurement error terms for the indicators of the
latent variable interaction, models like this can be estimated using LVSEM programs and maximum likelihood or
weighted least squares estimators; or regression programs
and two-stage least squares estimators (see Schumacker and
Marcoulides 1998).
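As a concrete illustration of the product-indicator idea, the indicators of the latent interaction can be built as cross products of the (mean-centered) indicators of IVA and IVB; the loading and error-variance constraints mentioned above would then be imposed in the LVSEM program. The data here are random placeholders, not real measures:

```python
import numpy as np

# Build cross-product indicators for a latent interaction construct.
rng = np.random.default_rng(2)
n = 1_000
a = rng.normal(size=(n, 2))   # two indicators of IVA (placeholder data)
b = rng.normal(size=(n, 2))   # two indicators of IVB (placeholder data)

a_c = a - a.mean(axis=0)      # mean-center before forming products
b_c = b - b.mean(axis=0)

# every pairwise cross product serves as an indicator of the IVA x IVB construct
products = np.column_stack([a_c[:, i] * b_c[:, j]
                            for i in range(2) for j in range(2)])
print(products.shape)   # (1000, 4)
```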
The bottom line is that, although the models shown in
Figure 1 more accurately represent what is going on in many
experiments, they nevertheless are rarely used. This should
change. Not only would modeling the effects of manipulations in this way permit more rigorous hypothesis tests,
but it would also allow (a) the testing of alternative explanations of the effects of the same manipulation (e.g., processing goals, ad format, etc.); (b) examinations of the relative magnitudes of the effects of alternative mediating
processes triggered by the same manipulation; and (c) improvements in the efficiency of manipulations, through estimates of the impact of each manipulation on hypothesized and nonhypothesized factors.
Enhanced Testing of Theoretical Structures
Another advantage of LVSEM is the ability to compare
complex theoretical models, involving whole systems of
conceptual relationships. When the competing models are
hierarchically related, a direct statistical test of the differences between the two theoretical structures is possible. But
even when they are not, other means of comparison are
available (Browne and Cudeck 1993). Latent variable structural equation modeling also allows researchers to conveniently compare the performance of a theoretical model across
multiple populations, contexts, or times, thus making it easier to test the boundary conditions of a theory, and to evaluate the generalizability of the hypothesized relationships
and/or parameter estimates. For example, comparing the performance of a model across treatment and control groups
can provide insight into the effects of a manipulation on the
means of the latent constructs and measures, or the slopes
of the relationships between the measures and latent constructs or among the latent constructs themselves (see, e.g.,
MacKenzie and Spreng 1992). Comparing the performance
of a model across developmental and validation samples
provides insight into the robustness of the developmental
sample parameter estimates (see e.g., MacKenzie, Lutz, and
Belch 1986). Comparing the performance of a model across
cultures provides insight into whether the estimates or the
relationships are culturally specific (see, e.g., Steenkamp and
Baumgartner 1998).
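For hierarchically related (nested) models, the direct test referred to above is the chi-square difference test: the difference between the two models' chi-square statistics is itself chi-square distributed, with degrees of freedom equal to the difference in their degrees of freedom. The fit statistics below are hypothetical:

```python
from scipy.stats import chi2

# Hypothetical fit statistics: a model with loadings constrained equal across
# groups, versus the unconstrained model within which it is nested.
chisq_restricted, df_restricted = 152.4, 60
chisq_full, df_full = 139.8, 55

delta = chisq_restricted - chisq_full   # ≈ 12.6
ddf = df_restricted - df_full           # 5
p = chi2.sf(delta, ddf)
print(round(p, 3))   # ≈ 0.027: the constraints significantly worsen fit at the .05 level
```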
The Ability to Link Micro and Macro
Perspectives
Another advantage of LVSEM is that it permits the estimation of multilevel models that integrate micro- and macrolevel perspectives on consumer behavior. Multilevel
models explicitly recognize that microlevel phenomena are
embedded in higher level macro contexts, and that macrolevel phenomena often emerge through the interaction and
dynamics of lower level micro elements (see Klein and Kozlowski 2000). Figure 2 illustrates some of the potential
inter- and intralevel relationships. Consumer researchers have tended to emphasize either a micro- or macrolevel perspective without recognizing the interaction between the two. The micro perspective is rooted in
psychological research that focuses on variations among individual characteristics and their effects on individual reactions (Fig. 2, path 1). This approach implicitly assumes
that a macrolevel focus on groups of individuals tends to
mask the variations in individual behavior that are of primary interest to psychologically oriented researchers. In
contrast, the macro perspective is rooted in anthropological
and sociological research that assumes that there are substantial regularities in social behavior that transcend the apparent differences among individual actors (Fig. 2, path 2).
According to this approach, it is possible to focus on aggregate or collective responses and to ignore individual variation, because it is assumed that situational and demographic factors will lead people to behave similarly.
Obviously, neither the micro nor macro perspective by
itself can adequately account for consumer behavior. The
macro perspective neglects the means by which individual
cognition, affect, behavior, and their interactions give rise
to higher level phenomena like group norms, cultural values,
segment preferences, market shares, and so on (Fig. 2, paths
3a and 4a). Here there is a danger of anthropomorphism
because reference groups, market segments, and subcultures
do not behave—people do. The micro perspective is guilty
of neglecting contextual, social, and embedding factors that
can determine or constrain individual differences (Fig. 2,
path 3b), individual behaviors (Fig. 2, path 5), or the effects
of individual differences on individual behaviors (Fig. 2,
path 6). It is also guilty of ignoring the question of how
individual behavior translates into macrolevel collective responses of interest to managers and public policy makers
(Fig. 2, path 4a).
Consequently, consumer research would benefit from the
ability to estimate multilevel models that link micro- and
macrolevel variables. This can be done using contextual
analysis, hierarchical linear modeling, or LVSEM. However,
contextual analysis and hierarchical linear modeling have
three key shortcomings. One is that they ignore measurement error. Another is that they cannot be easily extended
to multiequation systems at the individual (or group) level.
And both require the criterion variable to be measured at
the lowest level of interest to the researcher (although the
predictors can be at either this level or a higher level). This
means they are inappropriate for situations where researchers are interested in the joint effects of individual- and group-level variables on group-level phenomena.

FIGURE 2
MULTILEVEL MODEL LINKAGES
Fortunately, these problems can be alleviated through the
use of LVSEM. Although the technical details of how
LVSEM can be used to estimate multilevel models are complex and cannot be elaborated on here, several examples
can be found in the literature (e.g., Kaplan and Elliott 1997).
For the present purposes, the important point is that LVSEM
can be used to model the nonindependence caused by grouping factors and to estimate the effects of group level factors
on individual level parameters (both latent variable means
and structural relationships), while also taking measurement
error into account and permitting systems of equations to
be simultaneously estimated. None of the other commonly
used methods of estimating group level effects can do all
of this.
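The nonindependence that grouping creates can be quantified even before it is modeled, for example with the intraclass correlation (the share of total variance lying between groups). The simulation below uses assumed variance components; multilevel LVSEM goes further by explicitly modeling that between-group variance:

```python
import numpy as np

# Simulate individuals nested in groups with assumed variance components:
# 0.2 between groups, 0.8 within, so the intraclass correlation should be ~0.2.
rng = np.random.default_rng(3)
n_groups, m = 200, 25
group_effect = rng.normal(scale=np.sqrt(0.2), size=(n_groups, 1))
y = group_effect + rng.normal(scale=np.sqrt(0.8), size=(n_groups, m))

within = y.var(ddof=1, axis=1).mean()              # pooled within-group variance
between = y.mean(axis=1).var(ddof=1) - within / m  # variance of group means, corrected
icc = between / (between + within)
print(icc)   # ≈ 0.2
```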
This capability opens the door to investigating an interesting new set of cross-level phenomena. For example, multilevel models could be used to examine the effects of group
level variables like social norms and cultural values on individual beliefs, attitudes, and purchase intention, and on
the relationships among these constructs. Alternatively,
when studying advertising effects, a conceptual model could
be estimated for all the individuals who saw a particular
advertisement. If different groups of individuals saw different advertisements, one could then investigate whether
the means and/or path coefficients of this model are related
to characteristics of the advertisements, the media context
in which the ad was placed, and so on. In other words,
individuals could be grouped within advertisements or media contexts.
In addition, although it might not be initially obvious,
multilevel structures also occur in longitudinal studies when
a time series of measurements is taken on a number of
different individuals. In this instance, the repeated observations of the focal variable(s) can be thought of as being
grouped within individuals so the individual himself/herself
is the higher level grouping factor. This presents some interesting possibilities. For example, researchers could estimate a linear and quadratic trend in some variable of interest
(e.g., attention, customer satisfaction, attitude, attitude accessibility, brand perceptions) over time for each person,
and then investigate whether these trajectories are related
to individual differences (e.g., cognitive ability, affectivity,
product involvement, brand usage, need for cognition,
global or instrumental values). Alternatively, using this type
of design it might also be possible to estimate the effects
of (a) a person’s need for achievement, need for recognition,
vanity, public self-focus, or self-esteem on trends in his/her
attitudes toward consumption produced by repeated exposures to a particular type of television programming; or (b)
an individual’s need for stimulation, need for uniqueness,
affect intensity, or risk aversion on trends in his/her satisfaction with a product produced by repeated usage experiences. Examples of how to estimate growth curve models
like these can be found in the LVSEM research literature
(e.g., Duncan and Duncan 1995).
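A rough two-stage version of this idea fits a trend for each person and then relates the fitted slopes to a person-level variable; a growth-curve LVSEM would do both stages simultaneously, with measurement error handled. All names and values here are hypothetical:

```python
import numpy as np

# Stage 1: fit a linear trend per person across repeated measurements.
# Stage 2: correlate the fitted slopes with an individual difference.
rng = np.random.default_rng(4)
n_people, n_waves = 500, 6
t = np.arange(n_waves)

need_for_cognition = rng.normal(size=n_people)   # hypothetical person-level covariate
true_slope = 0.5 * need_for_cognition + rng.normal(scale=0.3, size=n_people)
y = true_slope[:, None] * t + rng.normal(size=(n_people, n_waves))

slope_hat = np.array([np.polyfit(t, y[i], 1)[0] for i in range(n_people)])
r = np.corrcoef(need_for_cognition, slope_hat)[0, 1]
print(r)   # positive: steeper trajectories go with higher need for cognition
```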
Better Assessment of Measures
A final advantage of LVSEM is that it has the potential
to improve scale development in the field by providing statistical tests of construct dimensionality, new indices of construct/item reliability, and more rigorous procedures for
evaluating discriminant validity. Indeed, it already has in
many respects, and it is now commonplace for studies developing new scales to report the results of a confirmatory
factor analysis. However, progress in this area has been
undermined by two problems. One problem is that researchers often fail to appreciate the theoretical implications of
their measurement model specifications. Even though
LVSEM can accommodate different types of measures (Bollen and Lennox 1991), often a reflective indicator model
(where causality flows from the construct to the measures)
is blindly applied to constructs for which a formative indicator model (where causality flows from the measures to
the construct) would be more appropriate. When this happens, the desire to achieve high levels of construct and item
reliability causes items to be dropped that are necessary to
adequately tap the domain of the construct. Thus, content
validity is needlessly sacrificed for reliability, simply because the researcher has failed to think about the appropriate
direction of causality between the constructs and measures.
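The practical consequence of this specification choice can be seen in a small simulation. Reflective items share a common cause and therefore intercorrelate; formative items (e.g., distinct facets that jointly define a construct) need not, so a high internal-consistency criterion is the wrong target for them. The data below are hypothetical, and Cronbach's alpha is computed from its standard formula.

```python
# Hypothetical illustration: internal consistency (Cronbach's alpha) is
# appropriate for reflective indicators but not for formative ones.
import numpy as np

rng = np.random.default_rng(2)
n = 1000

def cronbach_alpha(items):
    """items: n_obs x k matrix of item scores."""
    k = items.shape[1]
    total_var = items.sum(axis=1).var(ddof=1)
    item_vars = items.var(axis=0, ddof=1).sum()
    return k / (k - 1) * (1 - item_vars / total_var)

# Reflective: construct -> items, so items intercorrelate and alpha is high.
latent = rng.normal(size=(n, 1))
reflective = latent + rng.normal(scale=0.5, size=(n, 4))

# Formative: items -> construct, so items need not intercorrelate and
# alpha is near zero by design -- yet dropping any item would change the
# construct's content.
formative = rng.normal(size=(n, 4))

print(round(cronbach_alpha(reflective), 2))  # high
print(round(cronbach_alpha(formative), 2))   # near zero
```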
The other problem is that researchers often ignore what is
known about the reliability and validity of their measures
when testing their hypotheses. This happens whenever scale
scores are created by summing or averaging several imperfect measures and are used to represent the latent constructs
in the hypothesis tests. The problem with this practice is
that scale scores do not adequately represent the latent constructs because they ignore measurement error (Bollen and
Lennox 1991). As previously discussed, this leads to some
very thorny problems.
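The attenuation at issue is easy to demonstrate. In the hypothetical simulation below, two constructs correlate at .6, but because each is represented by an averaged scale score built from imperfect items, the observed correlation is biased toward zero, in line with the classic attenuation result that the observed correlation approximates the true correlation times the square root of the product of the two reliabilities.

```python
# Hypothetical illustration: scale scores built from imperfect items
# attenuate the estimated correlation between two latent constructs.
import numpy as np

rng = np.random.default_rng(3)
n = 20000
r_true = 0.6

# Two latent constructs correlated at r_true.
x = rng.normal(size=n)
y = r_true * x + np.sqrt(1 - r_true**2) * rng.normal(size=n)

# Each construct measured with 3 items containing random error.
def scale_score(latent, n_items=3, error_sd=1.0):
    items = latent[:, None] + rng.normal(scale=error_sd,
                                         size=(latent.size, n_items))
    return items.mean(axis=1)

r_obs = np.corrcoef(scale_score(x), scale_score(y))[0, 1]
print(round(r_obs, 2))  # noticeably below the true 0.6
```

With three items of this quality the reliability of each scale score is about .75, so the observed correlation runs near .45 rather than .6; an LVSEM with multiple indicators would recover the unattenuated estimate.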
CONCLUSIONS
A critical examination of the consumer research literature
reveals a few unhealthy trends. Too many studies overlook
measurement error, even though it depresses R² and increases the probability of type I and type II errors. Latent
variable structural equation modeling is almost never used
to analyze experimental data, even though it is better than
ANOVA in many instances. It is unusual to find direct tests
of the differences between hierarchically related models in
the literature, or rigorous statistical comparisons of the performance of a conceptual model across multiple populations,
contexts, or times. It is also rare to see conceptual models
that integrate micro- and macrolevel perspectives, even
though cross-level influences undoubtedly occur. And measurement model mis-specifications involving a reversal of
the true direction of causality between constructs and measures are all too common. Although these criticisms certainly do not apply to all consumer research, they nevertheless represent a fairly serious indictment of a great deal
of it.
Fortunately, LVSEM provides an opportunity to alleviate
these problems. Thus, it is surprising that it is not used more
often in consumer research. But what accounts for this neglect? Is it that our measures are perfectly reliable, our manipulations are uncontaminated by error or confounds, our
models do not need to be cross-validated, or that macrolevel
factors do not influence microlevel phenomena? I doubt
anyone believes that. Is it that LVSEM programs like LISREL, EQS, Amos, and Mplus are too cumbersome and difficult to use? Although this may once have been true, it is
not anymore. Recent versions of all these programs have
added sophisticated graphical user interfaces that permit researchers to specify their models literally in pictorial form,
and the programs themselves now generate the commands
necessary to connect the variables to the appropriate constructs, and estimate the specified relationships between the
constructs. (Amos is particularly good in this regard.) Or is
it that LVSEM is too sensitive (i.e., too likely to reject the
hypothesized model)? Once again, although this may once
have been true due to overreliance on the chi-square statistic
as the index of model fit, it is not true anymore. Recent
simulations by Hu and Bentler (1999) demonstrate that a
reasonable balance of type I and type II error rates can easily
be achieved when a combination of absolute and incremental
fit indices is used, and the cutoff criteria are adjusted as
they recommend. Or is LVSEM neglected simply because
a great deal of our empirical research comes from experiments, and ANOVA is thought to be the best way to analyze
experimental data? Hopefully, the preceding discussion has
demonstrated that this is not always true—especially when
the independent and dependent variables cannot be measured/manipulated without error and/or when causal relationships among the dependent variables are of interest.
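The combination-of-indices strategy mentioned above is straightforward to compute once a model's chi-square is in hand. The sketch below uses standard textbook formulas for one incremental index (CFI) and one absolute index (RMSEA); the chi-square values plugged in are hypothetical.

```python
# Hypothetical illustration of the combined-index strategy: a model whose
# chi-square is significant can still satisfy the Hu and Bentler (1999)
# cutoffs (CFI near .95 or above, RMSEA below about .06).
import math

def cfi(chisq_m, df_m, chisq_0, df_0):
    """Comparative fit index, relative to the independence (null) model."""
    d_m = max(chisq_m - df_m, 0.0)
    d_0 = max(chisq_0 - df_0, d_m)
    return 1.0 - d_m / d_0 if d_0 > 0 else 1.0

def rmsea(chisq, df, n):
    """Root mean square error of approximation."""
    return math.sqrt(max(chisq / df - 1.0, 0.0) / (n - 1))

# Hypothetical model: chi-square of 85 on 60 df (significant at p < .05),
# null-model chi-square of 900 on 78 df, sample size 250.
print(round(cfi(85.0, 60, 900.0, 78), 3))
print(round(rmsea(85.0, 60, 250), 3))
```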
Therefore, I would like to close by offering the following
suggestions for improving the conduct of consumer research
through LVSEM. First, model conceptual variables as latent
factors with multiple indicators whenever possible in order
to control for measurement error. Second, think carefully
about the appropriate direction of causality between each
construct and its measures, and make sure that the measurement model specification is consistent with it. Third,
measure the conceptual independent variables in experiments whenever they cannot be manipulated without error
or confounds, and use them as mediators in the tests of the
impact of the manipulations on the dependent variables.
Fourth, test your models against competing theoretical specifications, and evaluate their performance across samples,
situations, and times. And finally, seriously consider whether
macrolevel phenomena influence the microlevel phenomena
specified by your theories, and model their influences when
appropriate. Together, these recommendations are a constructive way to address some fundamental problems that
have weakened consumer research for far too long.
[David Glen Mick served as editor for this article.]
REFERENCES
Bagozzi, Richard P. (1980), Causal Models in Marketing, New
York: Wiley.
Bollen, Kenneth A. and Richard Lennox (1991), “Conventional
Wisdom on Measurement: A Structural Equation Perspective,” Psychological Bulletin, 110, 305–314.
Brown, Roger L. (1997), “Assessing Specific Mediational Effects
in Complex Theoretical Models,” Structural Equation Modeling, 4 (2), 142–156.
Browne, Michael W. and Robert Cudeck (1993), “Alternative Ways
of Assessing Model Fit,” in Testing Structural Equation Models, ed. Kenneth A. Bollen and J. Scott Long, Newbury Park,
CA: Sage.
Cook, Thomas D. and Donald T. Campbell (1979), Quasi-Experimentation: Design and Analysis Issues for Field Settings,
Boston: Houghton Mifflin.
Cote, Joseph A. and M. Ronald Buckley (1987), “Estimating Trait,
Method, and Error Variance: Generalizing across 70 Construct
Validation Studies,” Journal of Marketing Research, 24 (August), 315–318.
——— and M. Ronald Buckley (1988), “Measurement Error and
Theory Testing in Consumer Research: An Illustration of the
Importance of Construct Validation,” Journal of Consumer
Research, 14 (March), 579–582.
Duncan, Terry E. and Susan C. Duncan (1995), “Modeling the Processes of Development via Latent Variable Growth Curve Methodology,” Structural Equation Modeling, 2 (3), 187–213.
Hu, Li-tze and Peter M. Bentler (1999), “Cutoff Criteria for Fit
Indexes in Covariance Structure Analysis: Conventional Criteria versus New Alternatives,” Structural Equation Modeling, 6 (1), 1–55.
Kaplan, David and Pamela R. Elliott (1997), “A Didactic Example
of Multilevel Structural Equation Modeling Applicable to the
Study of Organizations,” Structural Equation Modeling, 4 (1),
1–24.
Klein, Katherine J. and Steve W. J. Kozlowski (2000), Multilevel Theory, Research and Methods in Organizations: Foundations, Extensions, and New Directions, San Francisco: Jossey-Bass.
MacKenzie, Scott B., Richard J. Lutz, and George E. Belch (1986),
“The Role of Attitude toward the Ad as a Mediator of Advertising Effectiveness: A Test of Competing Explanations,”
Journal of Marketing Research, 23 (May), 130–143.
——— and Richard A. Spreng (1992), “How Does Motivation Moderate the Impact of Central and Peripheral Processing on
Brand Attitudes and Intentions?” Journal of Consumer Research, 18 (March), 519–529.
Perdue, Barbara C. and John O. Summers (1986), “Checking the
Success of Manipulations in Marketing Experiments,” Journal of Marketing Research, 23 (November), 317–326.
Peterson, Robert A., Gerald Albaum, and Richard F. Beltramini
(1985), “A Meta-Analysis of Effect Sizes in Consumer Behavior Experiments,” Journal of Consumer Research, 12
(June), 97–103.
Sawyer, Alan G., John G. Lynch, Jr., and David L. Brinberg (1995),
“A Bayesian Analysis of the Information Value of Manipulation and Confounding Checks in Theory Tests,” Journal of
Consumer Research, 21 (March), 581–595.
Schumacker, Randall E. and George A. Marcoulides (1998), Interaction and Nonlinear Effects in Structural Equation Modeling, Mahwah, NJ: Erlbaum.
Steenkamp, Jan-Benedict E. M. and Hans Baumgartner (1998),
“Assessing Measurement Invariance in Cross-National Consumer Research,” Journal of Consumer Research, 25 (June),
78–90.