April 1999 // Volume 37 // Number 2 // Tools of the Trade // 2TOT3
Cronbach's Alpha: A Tool for Assessing the Reliability
of Scales
Abstract
Summated scales are often used in survey instruments to probe underlying constructs
that the researcher wants to measure. These may consist of indexed responses to
dichotomous or multi-point questionnaires, which are later summed to arrive at a
resultant score associated with a particular respondent. Usually, development of such
scales is not the end of the research itself, but rather a means to gather predictor
variables for use in objective models. However, the question of reliability arises as the
function of scales is stretched to encompass the realm of prediction. One of the most
popular reliability statistics in use today is Cronbach's alpha (Cronbach, 1951).
Cronbach's alpha determines the internal consistency or average correlation of items in
a survey instrument to gauge its reliability. This paper will illustrate the use of the
ALPHA option of the PROC CORR procedure from SAS(R) to assess and improve upon
the reliability of variables derived from summated scales.
J. Reynaldo A. Santos
Extension Information Technology
Texas Agricultural Extension Service
Texas A&M University
College Station, Texas
Internet address: j-santos@tamu.edu
Introduction
Reliability comes to the forefront when variables developed from summated scales are
used as predictor components in objective models. Since summated scales are an
assembly of interrelated items designed to measure underlying constructs, it is very
important to know whether the same set of items would elicit the same responses if the
same questions are recast and re-administered to the same respondents. Variables
derived from test instruments are declared to be reliable only when they provide stable
and consistent responses over repeated administrations of the test.
The ALPHA option in PROC CORR provides an effective tool for measuring Cronbach's
alpha, which is a numerical coefficient of reliability. Computation of alpha is based on
the reliability of a test relative to other tests with the same number of items that
measure the same construct of interest (Hatcher, 1994). This paper will illustrate the use of the
ALPHA option of the PROC CORR procedure from SAS(R) to assess and improve upon
the reliability of variables derived from summated scales.
Procedure
Sixteen questions using Likert-type scales (1 = strongly agree; 6 = strongly disagree)
from a national agricultural and food preference policy survey were administered
nationwide. Usable survey forms, totaling 1,111, were received and processed using the
PROC FACTOR and PROC CORR procedures of SAS. Three common factors were
extracted during factor analysis and were interpreted to represent "subsidy policy"
factors, "regulatory policy" factors, and "food safety policy" factors (Santos, Lippke, &
Pope, 1998).
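As a rough sketch of that extraction step, the PROC FACTOR call might have resembled the
following. The data set name, the item list, and the extraction and rotation options shown
here are illustrative assumptions, not the settings actually reported in Santos, Lippke, &
Pope (1998).

PROC FACTOR DATA=POLICY        /* hypothetical input data set             */
            METHOD=PRIN        /* extraction method (assumed)             */
            ROTATE=VARIMAX     /* orthogonal rotation (assumed)           */
            NFACTORS=3         /* retain the three common factors         */
            OUT=FSCORES;       /* write factor scores to a new data set   */
   VAR SB2 SB3 SB4 SB8 SF1 SF2 SG2;  /* subset of the 16 survey items     */
RUN;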
To make the demonstration of Cronbach's alpha possible, SB8, a variable previously
deleted during factor analysis, was restored to the data set. SB8 was used to
demonstrate how a poorly selected item on a summated scale can affect the resulting
value of alpha. It should be noted here that factor analysis is not required in the
determination of Cronbach's alpha.
After factor analysis, it is a common practice to attach a descriptive name to each
common factor once it is extracted and identified. The assigned name is indicative of
the predominant concern that each factor addresses. In SAS, a RENAME
FACTORn=descriptive-name statement would do the job. In this example, this can be
accomplished by
RENAME FACTOR1=SUBSIDY
FACTOR2=REGULATE
FACTOR3=FSAFETY;
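If the factor scores were written to a data set (for example, through the OUT= option of
PROC FACTOR), the RENAME statement could live in a short DATA step such as the sketch
below; the data set names here are hypothetical.

DATA POLICY2;                 /* hypothetical renamed data set       */
   SET FSCORES;               /* hypothetical factor-score data set  */
   RENAME FACTOR1=SUBSIDY
          FACTOR2=REGULATE
          FACTOR3=FSAFETY;
RUN;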
While labeling is not strictly necessary, it definitely makes for easy identification of
which construct is being used in a particular procedure. At this point, the named common factors can
now be used as independent or predictor variables. However, most experienced
researchers would insist on running a reliability test for all the factors before using them
in subsequent analyses.
Cronbach's Alpha: An Index of Reliability
If you were giving an evaluation survey, would it not be nice to know that the
instrument you are using will always elicit consistent and reliable responses even if
questions were replaced with other similar questions? When you have a variable
generated from such a set of questions that return a stable response, then your
variable is said to be reliable. Cronbach's alpha is an index of reliability associated with
the variation accounted for by the true score of the "underlying construct." A construct
is the hypothetical variable that is being measured (Hatcher, 1994).
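For reference, for a scale made up of k items the raw coefficient is computed from the
item variances and the variance of the total (summated) score:

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)

where \sigma^{2}_{Y_i} is the variance of item i and \sigma^{2}_{X} is the variance of the
total score obtained by summing the k items.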
The alpha coefficient ranges in value from 0 to 1 and may be used to describe the reliability
of factors extracted from dichotomous (that is, questions with two possible answers)
and/or multi-point formatted questionnaires or scales (i.e., rating scale: 1 = poor, 5 =
excellent). The higher the score, the more reliable the generated scale is. Nunnaly
(1978) has indicated 0.7 to be an acceptable reliability coefficient but lower thresholds
are sometimes used in the literature.
For this demonstration, the observed variables underlying the latent construct earlier
labeled "REGULATE," along with the reinstated SB8, were submitted to Cronbach's alpha
analysis. The following SAS statements initiated the procedure:
PROC CORR ALPHA NOMISS;
VAR SB2 SB3 SB4 SB8 SF1 SF2 SG2;
RUN;
Where:

SB2 ==> Continuation of conservation benefits
SB3 ==> Continuation of government regulation on water quality
SB4 ==> Require farmers to plant grass strips
SB8 ==> Require farmers to keep pesticide application records
SF1 ==> Storage & cooking instructions for meat products
SF2 ==> Strengthen food inspection
SG2 ==> More nutritional information on food label
The first statement invoked the PROC CORR procedure with the ALPHA option to perform
Cronbach's alpha analysis on all observations with no missing values (dictated by the
NOMISS option). The VAR statement lists all the variables to be processed in the
analysis. Incidentally, the listed variables, except SB8, were the ones
that loaded high (i.e., showed high positive correlation) in factor analysis. The output
from the analysis is shown in Table 1.
Table 1
Output of alpha analysis for the items included in the "REGULATE" construct

Correlation Analysis

Cronbach Coefficient Alpha
   for RAW variables         : 0.76729
   for STANDARDIZED variables: 0.77102

                    Raw Variables             Std. Variables
Deleted        Correlation                Correlation
Variable       with Total      Alpha      with Total      Alpha
----------------------------------------------------------------
SB2              0.365790    0.764471      0.358869    0.772209
SB3              0.356596    0.765262      0.350085    0.772623
SB4              0.444259    0.779964      0.434180    0.781626
SB8              0.185652    0.808962      0.176243    0.816080
SF1              0.426663    0.761443      0.443533    0.769178
SF2              0.401001    0.763201      0.418211    0.773390
SG2              0.419384    0.762229      0.434247    0.770623
The raw variable columns were used instead of the standardized columns because the
variances showed a limited spread (data not shown). Had there been a mixture of
dichotomous and multi-point scales in the survey, the variances would have been
relatively heterogeneous, in which case the use of standardized variables would have
been more appropriate. As it is, the procedure output shows an overall raw alpha of .77
(rounded from the .76729 at the top of the table), which is good considering that .70 is
the cutoff for an acceptable reliability coefficient.
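The standardized coefficient reported at the top of Table 1 is, equivalently, alpha
computed from the average inter-item correlation \bar{r} among the k standardized items,

\alpha_{standardized} = \frac{k \bar{r}}{1 + (k-1)\bar{r}},

which is why it is the better summary when the item variances differ widely.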
The printed output facilitates the identification of dispensable variables by listing
each variable in the first column alongside the alpha that would result if that variable
were deleted, shown in the same row of the third column. For this example, the table
indicates that if SB8 were deleted, the raw alpha would increase from the current .77 to
.81. Note that the same variable also has the lowest item-total correlation (.185652).
This indicates that SB8 is not measuring the same construct as the rest of the items in
the scale. With this process alone, not only was the author able to come up with a
reliability index for the "REGULATE" construct, but he also managed to improve on it.
What this means is that removing SB8 from the scale will make the construct more
reliable for use as a predictor variable.
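A follow-up run with SB8 dropped, which should reproduce the improved raw alpha of about
.81, and a summated-scale version of the refined score might look like the sketch below;
the data set names and the summed-score variable are illustrative assumptions.

/* Re-check reliability of the scale with SB8 removed */
PROC CORR ALPHA NOMISS;
   VAR SB2 SB3 SB4 SF1 SF2 SG2;
RUN;

/* Hypothetical DATA step forming a summated score from the retained items */
DATA POLICY3;
   SET POLICY;                     /* illustrative input data set */
   REG_SUM = SUM(OF SB2 SB3 SB4 SF1 SF2 SG2);
RUN;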
Conclusion
This paper has demonstrated the procedure for determining the reliability of summated
scales. It emphasized that reliability tests are especially important when derivative
variables are intended to be used for subsequent predictive analyses. If the scale shows
poor reliability, then individual items within the scale must be re-examined and modified
or completely changed as needed. One good method of screening for efficient items is
to run an exploratory factor analysis on all the items contained in the survey to weed
out those variables that failed to show high correlation. In fact, in this exercise, SB8
had been eliminated earlier when it showed low correlation during factor analysis. It was
intentionally reinstated to demonstrate how the ALPHA option in the PROC CORR procedure
would flag it for deletion and thereby generate an improved alpha.
References
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests.
Psychometrika, 16, 297-334.
Hatcher, L. (1994). A step-by-step approach to using the SAS(R) system for factor
analysis and structural equation modeling. Cary, NC: SAS Institute.
Nunnaly, J. (1978). Psychometric theory. New York: McGraw-Hill.
Santos, J. R. A., Lippke, L., & Pope, P. (1998). PROC FACTOR: A tool for extracting
hidden gems from a mountain of variables. Proceedings of the 23rd Annual SAS Users
Group International Conference. Cary, NC: SAS Institute Inc.
Acknowledgement
Data for this paper were derived from the "1994 National Agricultural and Food Policy
Preference Survey" conducted by Lawrence Lippke (Texas Agricultural Extension
Service) and Benny Lockett (Cooperative Extension Program, Prairie View A&M
University).
Trademark Information
SAS is a registered trademark of the SAS Institute Inc. in the USA and other countries.
(R) indicates USA registration.