Chad Loes
Ernest Pascarella
Paul Umbach
Effects of Diversity Experiences on
Critical Thinking Skills: Who Benefits?
The benefits of diversity experiences on a range of college outcomes
have been well documented (e.g., Adams & Zhou-McGovern, 1994;
Astin, 1993; Chang, 1999, 2001; Chang, Astin, & Kim, 2004; Chang,
Denson, Saenz, & Misa, 2006; Gurin, Dey, Hurtado, & Gurin, 2002;
Hu & Kuh, 2003; Hurtado, 2001; Jayakumar, 2008; Kuklinski, 2006;
Milem, 2003; Pascarella & Terenzini, 2005). However, for one particularly salient goal of higher education—the development of critical
thinking (Ennis, 1993; Giancarlo & Facione, 2001; Pascarella & Terenzini, 2005)—we are only beginning to understand the potential influence of involvement in diversity experiences during college. Gurin et
al. (2002) make a convincing argument for why exposure to diversity
experiences might foster the development of more complex forms of
thought, including the ability to think critically. Drawing on research
that speaks to the social aspects of cognitive development, they point
out that students are more likely to engage in effortful and complex modes of thought when they encounter novel situations that
challenge current and comfortable modes of thinking. This can often
happen in classroom settings, but it can also occur in other contexts
when students encounter others who are unfamiliar to them, when these
encounters challenge students to think or act in new ways, when people
The research on which this study was based was supported by a generous grant from
the Center of Inquiry in the Liberal Arts at Wabash College to the Center for Research
on Undergraduate Education at the University of Iowa.
Chad Loes is a professor at Mount Mercy University, Ernest Pascarella is the
Petersen Professor of higher education at The University of Iowa, and Paul Umbach is
a professor at North Carolina State University.
The Journal of Higher Education, Vol. 83 No. 1 (January/February 2012)
Copyright © 2012 by The Ohio State University
and relationships change and produce unpredictability, and when students encounter others who hold different expectations for them.
The body of research that focuses on the potential influence of diversity experiences on the development of the capacity to think critically
is quite small. In one of the earliest studies that attempted to estimate
the link between exposure to diversity experiences and growth in critical thinking skills, Dey (1991) analyzed multi-institutional, longitudinal data from the 1985–89 iteration of the Cooperative Institutional
Research Program (CIRP). Dey sought to identify the college experiences that influenced student self-reported growth in critical thinking
ability. With statistical controls in place for such confounding influences
as SAT verbal scores, secondary school experiences, intellectual self-esteem, self-rated mathematics ability, and institutional characteristics,
self-reported growth in critical thinking skills during college had a statistically significant, positive relationship with the frequency with which
a student was involved in discussions of racial-ethnic issues over the
preceding year. Similar results have been reported by other scholars using a different iteration of the longitudinal CIRP data than the one analyzed by Dey. Kim (1995, 1996, 2002) and Hurtado (2001) both
analyzed data from the 1987–91 CIRP sample. Both investigators statistically controlled for such alternative influences as institutional selectivity, secondary school achievement, academic self-concept, study
engagement, and leadership experiences during college. With such statistical controls in place, different measures of student engagement in
diversity activities (e.g., studying with someone from a different racial/
ethnic background, taking an ethnic studies course, attending a racial-cultural awareness workshop) tended to have statistically significant,
positive net relationships with student self-reported growth in critical
thinking skills (Hurtado, 2001) and on a composite scale combining student self-reported growth during college in both critical thinking skills
and problem solving skills (Kim, 1995, 1996, 2002). Essentially similar
results have been reported by Gurin (1999) using a related measure of
self-reported “complexity of thinking,” and Chang, Denson, Saenz, and
Misa (2006) with a self-reported measure of “cognitive development.”
The work of Dey (1991), Chang et al. (2006), Gurin (1999), Hurtado
(2001), and Kim (1995, 1996) is pioneering in that it alerted scholars
to the possibility that such a central intellectual goal of postsecondary education as improved critical thinking skills might be fostered by
a student’s exposure to diversity experiences. However, while student
self-reports can be revealing and important outcomes, objective standardized instruments that more directly assess critical thinking skills are
generally viewed as more psychometrically valid measures (Pascarella,
2001; Pike, 1996). Inquiry that attempts to estimate the impact of diversity experiences on critical thinking skills using objective standardized
measures is extremely limited. Terenzini and his colleagues (Terenzini,
Springer, Yeager, Pascarella, & Nora, 1994) analyzed data from the first
year of the National Study of Student Learning (NSSL), a longitudinal, 23-institution study that was conducted between 1992 and 1995.
Among the instruments used in the NSSL to measure student cognitive
growth during college was a 40-minute test of critical thinking skills
from the Collegiate Assessment of Academic Proficiency (CAAP). The
CAAP was developed by the American College Testing Program (ACT)
(1990). After introducing controls for a battery of potential confounding
influences (e.g., precollege critical thinking scores, student demographic
characteristics and aspirations, the characteristics of the institution attended, work responsibilities, patterns of coursework taken, and full- or part-time enrollment), Terenzini et al. found that attending a racial-cultural awareness workshop had a small but statistically significant
and positive relationship with first-year gains in tested critical thinking
skills.
An extension of the work of Terenzini et al. (1994), however, suggests that the influence of diversity experiences may be more complex than those findings indicated. Pascarella, Palmer, Moye, and Pierson (2001) reanalyzed the NSSL data
to include student growth in tested critical thinking skills over both the
first year of college and the first three years of college. They were specifically interested in estimating the net effects of specific diversity experiences on growth in critical thinking. Statistical controls were introduced for an extensive array of potential confounding variables, such
as precollege critical thinking scores, a measure of precollege academic
motivation, student demographic characteristics, institutional selectivity, and other college experiences, and separate regression estimates
were run for White students and students of color. Net of the controls,
Pascarella et al. found that the positive link between participation in
specific diversity activities (e.g., attending a racial-cultural awareness
workshop, making friends with someone from a different race), and critical thinking gains was significantly stronger for White students than for
their non-White counterparts.
The findings of Pascarella et al. (2001), which were based on data
collected 15 years ago, suggest the possibility that the nature of the impact of involvement in diversity experiences on the development of critical thinking skills may be conditional rather than general. That is, the
same benefits may not accrue equally to all students, but instead may
differ in magnitude (and perhaps even direction) for different kinds of
students. In short, the characteristics with which students enter postsecondary education may shape the cognitive benefits of involvement in
diversity activities or experiences in nontrivial ways.
The Pascarella et al. (2001) study was not the first or only investigation to yield results suggesting that the influence of diversity experiences on student development in college might vary for different kinds
of students. Work by Antonio, Chang, Hakuta, Kenny, Levin, and Milem
(2004), Chang (1999), and Gurin et al. (2002) has all suggested that the
effects of diversity experiences may be conditional rather than general.
Our review of the literature, however, yielded no investigation in the
ensuing decade and a half that attempted to empirically validate this line
of reasoning by replicating and extending the Pascarella et al. findings
with standardized measures of critical thinking. This is somewhat surprising given mounting evidence that growing diversity in the American
postsecondary student population is likely to give rise to an increased
number of conditional effects in college impact research (Pascarella,
2006; Pascarella & Terenzini, 2005).
The present study sought to replicate and extend the findings of Pascarella et al. (2001) with respect to the conditional effects of diversity
experiences on growth in critical thinking skills. To this end, we analyzed data from the initial year of the Wabash National Study of Liberal Arts Education (WNSLAE), a 19-institution, longitudinal investigation of the institutional characteristics and specific college experiences
that enhance growth in a range of liberal arts oriented outcomes, one of
which was a standardized measure of critical thinking skills. Specifically, it sought to determine if the effects on growth in critical thinking
skills of two dimensions of diversity involvement, classroom diversity
and interactional diversity, were general or conditional. Based on the
earlier findings of Pascarella et al., we anticipated that the net effects
of classroom diversity and interactional diversity would differ in magnitude for White students and students of color. We also anticipated that
these two dimensions of diversity involvement might have conditional
impacts on critical thinking gains for students who entered college with
different levels of precollege critical thinking or different levels of
tested precollege academic preparation.
The present study also sought to address two problematic methodological features in the Terenzini et al. (1994) and Pascarella et al.
(2001) investigations. First, both previous investigations used single-item measures of diversity experiences, which are of questionable reliability. The present study employed diversity experience scales that
combined multiple items. Second, the Terenzini et al. and Pascarella
et al. studies analyzed multi-institutional data but did not take into account the nesting or clustering effect: students within an institution tend to behave more similarly than students across institutions. Thus, the error
terms for the prediction model are correlated, which violates one of the
assumptions of Ordinary Least Squares regression and results in underestimated standard errors in regression estimates (Ethington, 1997;
Raudenbush & Bryk, 2001). Therefore, we accounted for the nested nature of the data by using appropriate regression procedures that adjust
for this clustering (Groves et al., 2004).
Research Methods
Samples
Institutional sample. The sample in the study consisted of incoming
first-year students at 19 four-year and two-year colleges and universities located in 11 different states from 4 general regions of the United
States: Northeast, Southeast, Midwest, and Pacific Coast. Institutions
were selected from more than 60 colleges and universities responding to
a national invitation to participate in the Wabash National Study of Liberal Arts Education (WNSLAE). Funded by the Center of Inquiry in the
Liberal Arts at Wabash College, the WNSLAE is a large, longitudinal
investigation of the effects of liberal arts colleges and liberal arts experiences on the cognitive and personal outcomes theoretically associated
with a liberal arts education. The institutions were selected to represent
differences in college and universities nationwide on a variety of characteristics including institutional type and control, size, location, and
patterns of student residence. However, because the study was primarily
concerned with the impacts of liberal arts colleges and liberal arts experiences, liberal arts colleges were purposefully over-represented.
Our selection technique produced a sample with a wide range of academic selectivity, from some of the most selective institutions in the
country to some that were essentially open admissions. There was also
substantial variability in undergraduate enrollment, from institutions
with entering classes between 3,000 and 6,000, to institutions with entering classes between 250 and 500. According to the 2007 Carnegie
Classification of Institutions, three of the participating institutions were
considered research universities, three were regional universities that
did not grant the doctorate, two were two-year community colleges, and
11 were liberal arts colleges.
Student sample. The individuals in the sample were first-year, full-time undergraduate students participating in the WNSLAE at each of
the 19 institutions in the study. The initial sample was selected in either
of two ways. First, for larger institutions, it was selected randomly from
the incoming first-year class at each institution. The only exception to
this was at the largest participating institution in the study, where the
sample was selected randomly from the incoming class in the College
of Arts and Sciences. Second, for a number of the smallest institutions
in the study—all liberal arts colleges—the sample was the entire incoming first-year class. The students in the sample were invited to participate in a national longitudinal study examining how a college education
affects students, with the goal of improving the undergraduate experience. They were informed that they would receive a monetary stipend
for their participation in each data collection, and were also assured in
writing that any information they provided would be kept in the strictest
confidence and never become part of their institutional records.
Data Collection
Initial data collection. The initial data collection was conducted in
the early fall of 2006 with 4,501 students from the 19 institutions. This
first data collection lasted between 90 and 100 minutes, and students were
paid a stipend of $50 each for their participation. The data collected included a WNSLAE precollege survey that sought information on student demographic characteristics, family background, high school experiences, political orientation, educational degree plans, and the like. Students also completed a series of instruments that measured dimensions
of intellectual and personal development theoretically associated with a
liberal arts education. One of these was the 40-minute critical thinking
test of the Collegiate Assessment of Academic Proficiency (CAAP). In
order to minimize the time required by each student in the data collection, and because another outcome measure was used which required
approximately the same administration time, the CAAP critical thinking
test was randomly assigned to half the sample at each institution.
Follow-up data collection. The follow-up data collection was conducted in spring 2007. This data collection took about two hours and
participating students were paid an additional stipend of $50 each. Two
types of data were collected. The first was based on questionnaire instruments that collected extensive information on students’ experience
of college. Two complementary instruments were used: the National
Survey of Student Engagement (NSSE) (Kuh, 2001) and the WNSLAE Student Experiences Survey (WSES). These instruments were
designed to capture student involvement in a broad variety of different
activities during college (e.g., coursework, clubs, studying, interactions
with other students, involvement in cultural/social activities, and the
like). An extensive part of the instruments focused on diversity topics
in coursework and the extent to which a student’s nonclassroom interactions and activities focused on diversity or diversity-related topics.
The second type of data collected consisted of follow-up (or posttest)
measures of instruments measuring dimensions of intellectual and personal development, including the CAAP critical thinking test, that were
first completed in the initial data collection. All students completed the
NSSE and WSES prior to completing the follow-up instruments assessing intellectual and personal development. Both the initial and follow-up data collections were administered by ACT (formerly
the American College Testing Program).
Of the original sample of 4,501 students who participated in the fall
2006 testing, 3,081 participated in the spring 2007 follow-up data collection, for a response rate of 68.5%. These 3,081 students represented
16.2% of the total population of incoming first-year students at the 19
participating institutions. Because the CAAP critical thinking test was
randomly assigned to about half the original sample of 4,501 students,
the sample we actually analyzed for this study was a subsample of the
3,081 students participating in the follow-up data collection. In the
CAAP subsample itself, complete and useable data were available for
1,354 students. To provide at least some adjustment for potential response bias by sex, race, academic ability, and institution in the sample
of students, a weighting algorithm was developed. Using information
provided by each institution on sex, race, and ACT score (or appropriate SAT equivalent or COMPASS score equivalent for community college students), follow-up participants in the subsample of 1,354 students
were weighted up to each institution’s incoming first-year undergraduate population by sex (male or female), race (Caucasian, African American/Black, Hispanic/Latino, Asian/Pacific Islander, or other), and ACT
(or equivalent score) quartile. While applying weights in this manner
has the effect of making the subsample of 1,354 students that we analyzed representative of the population from which it was drawn by sex,
race, and ACT (or equivalent) score, it cannot adjust completely for
non-response bias.
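A weighting algorithm of this kind amounts to simple post-stratification: each respondent is weighted by the ratio of the stratum's population share to its sample share. The sketch below is illustrative only; the strata labels and counts are made up, not the study's institutional data.

```python
import numpy as np

def poststratification_weights(sample_strata, population_counts):
    """Weight each sampled case by (population share of its stratum) /
    (sample share of its stratum). Weights then sum to the sample size,
    so significance tests use the correct n."""
    sample_strata = np.asarray(sample_strata)
    n = len(sample_strata)
    pop_total = sum(population_counts.values())
    weights = np.empty(n, dtype=float)
    for stratum, pop_count in population_counts.items():
        mask = sample_strata == stratum
        pop_share = pop_count / pop_total
        samp_share = mask.sum() / n
        weights[mask] = pop_share / samp_share
    return weights

# Hypothetical strata (sex x ACT quartile): "M_q2" is underrepresented
# in the sample relative to the incoming class, so it gets weight 2.0.
sample = ["F_q1"] * 30 + ["F_q2"] * 20 + ["M_q1"] * 40 + ["M_q2"] * 10
population = {"F_q1": 300, "F_q2": 300, "M_q1": 200, "M_q2": 200}
w = poststratification_weights(sample, population)
```

After weighting, each stratum's weighted share of the sample matches its share of the population, which is exactly the representativeness property described above.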
Variables
Dependent variable. The dependent variable was end of first-year
scores on the critical thinking test from the Collegiate Assessment of
Academic Proficiency (CAAP) developed by the American College
Testing Program (ACT). The critical thinking test is a 40-minute, 32-item instrument designed to measure a student's ability to clarify, analyze, evaluate, and extend arguments. The test consists of four passages
in a variety of formats (e.g., case studies, debates, dialogues, experimental results, statistical arguments, editorials). Each passage contains
a series of arguments that support a general conclusion and a set of
multiple-choice test items. Essentially, the test requires students to read
passages commonly found in higher education curricula. After reading
the passages, students are required to choose a multiple-choice response
that best supports a general conclusion about the series of arguments
presented in the passage. The test is divided into three sections: analysis
of elements of an argument, evaluation of an argument, and extension of
an argument. The internal consistency reliabilities for the CAAP critical thinking test range between 0.81 and 0.82 (ACT, 1991). It correlates
0.75 with the Watson-Glaser Critical Thinking Appraisal (Pascarella,
Bohr, Nora, & Terenzini, 1995).
Independent variables. There were two independent variables in the
study. The first was a scale measuring exposure to topics of diversity in
classes (classroom diversity), while the second was a scale that assessed
extent of participation in diversity-oriented experiences and discussions
with diverse peers (interactional diversity). The classroom diversity
scale consisted of three items that asked students the number of courses
taken during the first year that focused on diverse cultures and perspectives, issues of women/gender, and issues of equality/justice. The alpha,
internal consistency reliability for the scale was 0.68. The interactional
diversity scale consisted of nine items and had an alpha reliability of
0.80. Representative items included: how often a respondent had serious
conversations with students from a different race or ethnicity, how often
the respondent participated in a racial or cultural awareness workshop,
and how often the respondent had serious conversations with students
who were very different from the respondent in religious beliefs, political opinions, or personal values. Both scales were standardized across
the entire sample (N = 3,081). The specific items constituting each scale
are shown in Table 1. The classroom diversity and interactional diversity scales had a rather modest correlation of 0.29.
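The alpha reliabilities reported for the two scales follow Cronbach's standard internal-consistency formula. A minimal numpy sketch, using a made-up item matrix rather than the WNSLAE responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) response matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)   # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up responses: three perfectly consistent items, so alpha = 1.0
responses = [[1, 2, 3],
             [2, 3, 4],
             [3, 4, 5],
             [4, 5, 6]]
alpha = cronbach_alpha(responses)  # → 1.0
```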
Control variables. A particular methodological strength of the Wabash National Study of Liberal Arts Education is that it is longitudinal in design. This permitted us to introduce a wide range of statistical controls, not only for institutional characteristics and student background/precollege traits, but also for other student experiences during
the first year of college. Our control variables in the present study were
the following:
• The structural diversity of the institution attended (percent students of color)
• Institutional type
• Parental education
• Male (vs. female)
• Person of color (vs. White)
• Tested precollege academic preparation
• Precollege critical thinking score
• Lived on campus (vs. off campus)
• Liberal arts emphasis of first-year coursework
• Time spent preparing for class
Detailed operational definitions of all control variables are shown in
Table 1.
Data Analysis
The data analyses were carried out in two steps. Step one sought to
determine if there were any significant general net effects of the two
diversity experience scales on end of first-year critical thinking. We thus
regressed end of first-year critical thinking scores on the classroom diversity and interactional diversity scales plus all the control variables
listed previously (i.e., institutional structural diversity and type, parental
education, race, sex, precollege academic preparation and critical thinking scores, place of residence during college, the liberal arts emphases
of first year coursework, and study time). Individuals were the level of
analysis. However, instead of using Ordinary Least Squares regression,
we accounted for the nested nature of the data (i.e., random samples
of students at 19 institutions) by employing regression procedures that
adjust standard errors for the nesting or clustering effect (Groves et
al., 2004). Specifically, we employed the regression option (svy) in the
Stata software statistical package that takes into account the nesting or
clustering effect and computes more robust standard errors for individual predictors than Ordinary Least Squares. Because regression procedures adjusting for the nesting or clustering effect yield larger standard
errors than Ordinary Least Squares regression, and because we had a
very conservative prediction model that controlled not only for precollege critical thinking, but also for precollege ACT (or equivalent) score,
we used an alpha level of 0.10 to identify statistically reliable effects.
However, we consider effects detected only at p < 0.10 to be marginally significant and label them as such in our discussion.
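The cluster-adjusted regression in step one can be illustrated with a minimal cluster-robust (CR0) sandwich estimator. This is a sketch on simulated data, not the authors' Stata code, and the variable names are invented; with singleton clusters the estimator reduces to the familiar HC0 heteroskedasticity-robust form.

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares: coefficients and residuals."""
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    return beta, y - X @ beta

def cluster_robust_se(X, resid, clusters):
    """CR0 cluster-robust standard errors: bread = (X'X)^-1,
    meat = sum over clusters g of (X_g' e_g)(X_g' e_g)'."""
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(clusters):
        m = clusters == g
        s = X[m].T @ resid[m]          # score summed within the cluster
        meat += np.outer(s, s)
    return np.sqrt(np.diag(bread @ meat @ bread))

rng = np.random.default_rng(0)
n, n_clusters = 200, 19               # 19 clusters, echoing the 19 institutions
clusters = rng.integers(0, n_clusters, n)
x = rng.normal(size=n) + 0.5 * clusters  # regressor varies across clusters
# Outcome with a cluster-level error component, so errors correlate within clusters
y = 1.0 + 0.3 * x + rng.normal(size=n) + rng.normal(size=n_clusters)[clusters]
X = np.column_stack([np.ones(n), x])
beta, resid = ols(X, y)
se_clustered = cluster_robust_se(X, resid, clusters)
se_hc0 = cluster_robust_se(X, resid, np.arange(n))  # singleton clusters = HC0
```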
The second step of the analyses sought to determine if the net effects
of the two diversity scales on critical thinking were conditional on race,
tested precollege academic preparation, or level of precollege critical
thinking skills. To detect the presence of such conditional effects, we
created six cross-product terms, multiplying both diversity scales with
Table 1
Variable Definitions

Structural diversity of the institution attended: Percentage of students of color (non-White) at each individual institution (N = 19) in the WNSLAE sample

Institutional type: Research university, regional institution, community college, or liberal arts college; three dummy variables (1, 0) with liberal arts colleges always coded zero

Parental education: Average of mother's and father's formal education. Response options were: less than high school diploma; high school diploma or GED; some college, no degree; Associate's degree; Bachelor's degree; Master's degree; Law degree; Doctoral degree

Male: Coded: 1 = male, 0 = female

Person of color: Coded: 1 = person of color, 0 = White

Precollege critical thinking (fall 2006): A student's precollege score on the critical thinking module of the Collegiate Assessment of Academic Proficiency (CAAP)

Lived on campus: Coded: 1 = lived on campus, 0 = lived off campus

Liberal arts emphasis in coursework: The total number of courses during the first year of college taken in traditional liberal arts areas: "Fine Arts, Humanities, and Languages" (e.g., art, music, philosophy, foreign language, religion, history); "Mathematics/Statistics/Computer Science"; "Natural Sciences" (e.g., chemistry, physics); and "Social Science" (e.g., anthropology, economics, psychology, political science, sociology)

Time spent preparing for class: The average number of hours per week a student reported that he or she spent preparing for class

Classroom diversity: A three-item scale which assessed exposure to topics of diversity in first-year classes (alpha reliability = 0.68). The constituent items were:
• Number of courses taken in first year of college that focus on diverse cultures and perspectives
• Number of courses that focus on women/gender
• Number of courses that focus on equality/justice

Tested precollege academic preparation: A student's ACT score, SAT equivalent, or COMPASS equivalent score for community college students. The score was provided by each participating institution

Interactional diversity: A nine-item scale which assessed extent of participation in diversity-oriented experiences and discussions with diverse peers (alpha reliability = 0.80). The constituent items were:
• How often the respondent had serious discussions with student affairs professionals whose political, social, or religious opinions were different from own
• Extent to which the respondent's institution encourages contact among students from different economic, social, and racial or ethnic backgrounds
• How often the respondent had serious conversations with students from a different race or ethnicity
• How often the respondent had serious conversations with students who are very different from the respondent in religious beliefs, political opinions, or personal values
• How often the respondent participated in a racial or cultural awareness workshop during this academic year
• How often the respondent attended a debate or lecture on a current political/social issue
• How often the respondent had discussions regarding inter-group relations with diverse students while attending this college
• How often the respondent had meaningful and honest discussions about issues related to social justice with diverse students while attending this college
• How often the respondent shared personal feelings and problems with diverse students while attending this college

End of first-year critical thinking (spring 2007): A student's end of first-year score on the CAAP critical thinking module
race, tested precollege academic preparation, and precollege critical
thinking, respectively. These cross-product terms were then added to the
general effects model. A statistically significant increase in explained
variance (R2), over and above the general effects model, would indicate
the presence of conditional effects, which could then be examined. As
with our analyses in step one, we continued to use regression procedures that adjusted standard errors for the nesting effect. All analyses
in steps one and two described above are based on the weighted sample,
adjusted to the actual sample size for correct standard errors in significance tests.
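The cross-product (moderation) logic of step two can be sketched as follows. The data are simulated and the variable names are illustrative, not the WNSLAE variables; the key operations are forming the interaction term, refitting, and testing the increment in R² with a nested-model F statistic.

```python
import numpy as np

def r_squared(X, y):
    """R^2 from a least-squares fit of y on the columns of X."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(1)
n = 500
diversity = rng.normal(size=n)                  # standardized diversity scale
minority = rng.integers(0, 2, n).astype(float)  # person of color (1) vs. White (0)
pretest = rng.normal(size=n)
# Simulated outcome with a conditional (interaction) effect built in
posttest = (0.6 * pretest + 0.2 * diversity
            + 0.25 * diversity * minority + rng.normal(size=n))

ones = np.ones(n)
X_general = np.column_stack([ones, pretest, diversity, minority])
X_conditional = np.column_stack([X_general, diversity * minority])

r2_general = r_squared(X_general, posttest)
r2_conditional = r_squared(X_conditional, posttest)

# F-test for the R^2 increment:
# F = ((R2_full - R2_reduced)/q) / ((1 - R2_full)/(n - k_full))
q, k = 1, X_conditional.shape[1]
F = ((r2_conditional - r2_general) / q) / ((1 - r2_conditional) / (n - k))
```

A significant F indicates that the interaction adds explanatory power beyond the general effects model, i.e., that the effect of the diversity scale differs across groups.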
In all analyses of end of first-year critical thinking we had a statistical control for a precollege measure of critical thinking. This means
that, with the exception of precollege critical thinking, the estimated effects of all variables in our regression specifications are identical to
what they would be if we were predicting first-year gains or growth
in critical thinking. Put another way, in the presence of a control for a
pretest, the metric regression coefficients and significance tests for all
predictors in the equation other than the pretest are exactly the same,
irrespective of whether the dependent variable is a simple posttest or a
gain/growth score (i.e., posttest-pretest). As long as one has a pretest/
posttest design with longitudinal data one does not need a gain score
as the dependent variable to actually predict gains. This has been explained and demonstrated empirically by Pascarella, Wolniak, and
Pierson (2003). Consequently, in our analyses the estimated net effects
(and statistical significance) of the diversity experience scales on end of
first-year critical thinking are exactly the same as what they would have
been had we been predicting first-year gains or growth in critical thinking. Therefore, despite the fact that we were predicting end of first-year
critical thinking, it is quite reasonable to also interpret the results of our
analyses as an estimate of the effects of diversity experiences on first-year gains or growth in critical thinking skills (Pascarella, Wolniak, & Pierson, 2003).
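This algebraic equivalence is easy to verify numerically on simulated data: with the pretest among the predictors, regressing the posttest and regressing the gain score yield identical coefficients for every other predictor, while the pretest's own coefficient simply shifts by exactly one. The variable names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
pretest = rng.normal(size=n)
diversity = rng.normal(size=n)
posttest = 0.7 * pretest + 0.2 * diversity + rng.normal(size=n)
gain = posttest - pretest  # the gain/growth score

# Same design matrix (intercept, pretest, predictor) for both regressions
X = np.column_stack([np.ones(n), pretest, diversity])
beta_post = np.linalg.lstsq(X, posttest, rcond=None)[0]
beta_gain = np.linalg.lstsq(X, gain, rcond=None)[0]
# beta_post and beta_gain agree on the intercept and the diversity
# coefficient; only the pretest coefficient differs, by exactly 1.
```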
It could be argued that hierarchical linear modeling (HLM) might
have been another approach to analyzing our multi-institutional data.
However, two considerations militated against the use of HLM. First,
we had only 19 aggregates—far fewer than is typically recommended
for getting adequate statistical power with level-two (between-institution) variables (Ethington, 1997; Raudenbush & Bryk, 2001). More importantly, since our concern was with individual-level student experiences and not aggregate-level between-college effects, there was no compelling conceptual justification for using HLM. Furthermore, we were
using regression procedures which, like HLM, adjust standard errors for
the nesting or clustering effect within each institution and produce results essentially identical to those produced by HLM when individuals
are the unit of analysis. While we do include institutional-level variables in our analyses (i.e., structural diversity and institutional type),
this was primarily for control purposes. Our major concern remained
the estimation of influences on individual-level student experiences.
Finally, since the nature of the data we analyzed was correlational
rather than experimental, we relied on statistical controls to identify
the presence of potential causal influences of diversity experiences on
critical thinking. Throughout this paper we employ causal terms such as
“general effect” or “conditional effect.” These terms, however, should
be interpreted or understood in a statistical, rather than an experimental sense. A statistically significant effect or influence uncovered in our
analyses means that, given the alternative explanations for which we
have controlled statistically, one cannot reject the possibility of a causal
relationship between involvement in diversity experiences and first-year
development in critical thinking.
Results
The means and standard deviations for all variables are shown in
Table 2 (the intercorrelation matrix for all variables is shown in Appendix A). As Table 2 indicates, the sample was 20% students of color
and 80% White students. This relatively small percentage of students
of color in the sample may have attenuated scores on our measures of
diversity experiences and reduced the likelihood of uncovering a net
link between exposure to such experiences and the acquisition of critical
thinking skills—a potential limitation of the study. However, as Umbach and Kuh (2006) point out, simply having relatively small numbers
of students of color does not prevent high levels of interaction across
race. The intercorrelation matrix (Appendix A) suggests that there is very little multicollinearity among the predictor variables. With the exception of the correlation between tested precollege academic preparation and precollege critical thinking (r = 0.71) and the correlation between attendance at a community college and living on campus (r = -0.62), all the correlations among the predictor variables were 0.44 or lower.
Moreover, since the results discussed below indicate that both tested precollege academic preparation and precollege critical thinking had consistent and substantial positive net effects on end of first-year critical thinking, it would appear that, despite their high correlation, these two predictors account for different portions of the variance in the dependent measure.
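A collinearity screen of the kind just reported can be sketched by computing pairwise Pearson correlations and flagging pairs above a cutoff; the variables, values, and 0.7 threshold below are purely illustrative:

```python
import math

def pearson_r(a, b):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a)
                           * sum((y - mb) ** 2 for y in b))

def flag_collinear(predictors, threshold=0.5):
    """Return (name1, name2, r) for every pair whose |r| exceeds threshold."""
    names = list(predictors)
    flagged = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            r = pearson_r(predictors[names[i]], predictors[names[j]])
            if abs(r) > threshold:
                flagged.append((names[i], names[j], round(r, 2)))
    return flagged

# Hypothetical predictors: ACT scores track precollege critical thinking closely.
data = {
    "act": [20, 24, 28, 31, 18, 26, 29, 22],
    "precollege_ct": [58, 61, 66, 69, 55, 63, 68, 59],
    "parental_ed": [3, 6, 2, 4, 1, 5, 4, 3],
}
pairs = flag_collinear(data, threshold=0.7)
```

Here only the ACT/precollege critical thinking pair clears the threshold, mirroring the pattern the authors report for their own predictor set.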
14
The Journal of Higher Education
The regression estimates are shown in Table 3. Column 1 in Table
3 summarizes the general effects model (step one in the analyses). As
column 1 indicates, the entire regression model explained slightly more
than 71% of the variance in first-year critical thinking skills. However,
in the presence of statistical controls for all other variables in the model,
neither classroom diversity nor interactional diversity had more than a
chance estimated impact on critical thinking skills. Thus, our findings
fail to offer support for any net general effect of diversity experiences
on first-year development in critical thinking.
Table 2
Means and Standard Deviations for All Variables (N = 1,354)

Variable                                             Mean    SD
Structural diversity of the institution attended    17.91   16.46
Research university                                  0.35    0.48
Regional university                                  0.24    0.43
Community college                                    0.18    0.39
Parental education                                   4.27    1.47
Male                                                 0.46    0.50
Person of color                                      0.20    0.40
Tested precollege academic preparation              24.73    4.90
Precollege critical thinking                        62.41    5.30
Lived on campus                                      0.74    0.44
Liberal arts emphasis in coursework                  6.15    2.19
Time spent preparing for class                       4.32    1.62
Classroom diversity                                 -0.10    0.72
Interactional diversity                             -0.15    0.63
End of first-year critical thinking                 62.63    5.80

Note. The mean scores for the classroom diversity and interactional diversity scales are slightly negative because each scale was standardized with the mean set at zero across the entire follow-up sample (N = 3,081). The mean score of the two diversity scales for the subsample of 1,354 students with complete data who completed the CAAP critical thinking test was slightly lower than the average for the entire sample.

Table 3
Regression Estimates for General and Conditional Effects on End of First-Year Critical Thinking (a)

Columns: (1) General Effects Model (N = 1,354); Conditional Effects Models: (2) Low Precollege Academic Preparation (b) (N = 613); (3) High Precollege Academic Preparation (c) (N = 741); (4) White Students (N = 1,083); (5) Students of Color (N = 271).

Predictors entered: structural diversity of the institution attended; research university, regional university, and community college (each vs. liberal arts college); parental education; male; person of color; tested precollege academic preparation; precollege critical thinking; lived on campus; liberal arts emphasis of coursework; time spent preparing for class; classroom diversity; interactional diversity.

[The individual coefficient cells of Table 3 were scrambled in extraction and could not be reliably recovered; only the entries confirmed in the text are reproduced below.]

Classroom diversity: nonsignificant in column (1)
Interactional diversity: (1) nonsignificant; (2) 0.925*** d (0.106); (3) -0.370 d (-0.042); (4) 0.530* e (0.061); (5) -0.178 e (-0.020)
R2: (1) 0.714***; (2) 0.584***; (3) 0.551***; (4) 0.717***; (5) 0.705***

a. The first number is the metric, or unstandardized, regression coefficient; the number in parentheses is the standardized coefficient, or β.
b. ACT or ACT equivalent ≤ 26.
c. ACT or ACT equivalent > 26.
d, e. Coefficients with the same superscript are significantly different in magnitude at p < 0.05.
*p < 0.10. **p < 0.05. ***p < 0.01.

The addition of the cross-product terms for classroom diversity × race, classroom diversity × precollege academic preparation, and classroom diversity × precollege critical thinking to the general effects equation was associated with an increase in R2 of about 0.01%, which was nonsignificant at the 0.05 level. However, the addition of the same three cross-product terms involving the interactional diversity scale was associated with an increase in R2 of about 1.0%, which was statistically significant at p < 0.001. Two of the three individual conditional effects involving interactional diversity were statistically significant: interactional diversity × tested precollege academic preparation (t = 2.92, p < 0.01), and interactional diversity × race (t = 2.31, p < 0.05).
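The hierarchical step just described (adding cross-product terms and testing the change in R2) can be mimicked on simulated data. This is an illustrative sketch, not the study's actual estimation; the data and names are hypothetical:

```python
import random

def ols_r2(X, y):
    """R-squared from an OLS fit of y on the columns of X (intercept added),
    using normal equations solved by Gaussian elimination."""
    n, k = len(X), len(X[0]) + 1
    Z = [[1.0] + row for row in X]
    A = [[sum(Z[i][p] * Z[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]
    b = [sum(Z[i][p] * y[i] for i in range(n)) for p in range(k)]
    # Gaussian elimination with partial pivoting, then back-substitution.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    yhat = [sum(z * w for z, w in zip(Z[i], beta)) for i in range(n)]
    ybar = sum(y) / n
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Simulated conditional effect: diversity helps group 0 but not group 1.
random.seed(7)
X_main, X_full, y = [], [], []
for _ in range(400):
    div = random.gauss(0, 1)
    grp = random.choice([0, 1])
    y.append(0.8 * div * (1 - grp) + random.gauss(0, 1))
    X_main.append([div, grp])                 # main-effects model
    X_full.append([div, grp, div * grp])      # adds the cross-product term

delta_r2 = ols_r2(X_full, y) - ols_r2(X_main, y)
```

When a genuine conditional effect is present, the cross-product term yields a clearly positive R2 increment; a formal test would compare that increment against an F distribution.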
In order to determine the nature of the significant conditional effects
involving interactional diversity, we divided the sample at approximately the median into subsamples of “high” (ACT or ACT equivalent > 26, N = 741) and “low” (ACT or ACT equivalent ≤ 26, N = 613)
tested precollege academic preparation, and into subsamples of White
students (N = 1,083) and students of color (N = 271), and then reestimated the general effects model for each subsample. The nature of the
conditional effects was determined by comparing the magnitudes of the
regression coefficients between the subsamples.
Columns 2 through 5 in Table 3 summarize the conditional effects
models. These conditional effects models were the separate general effects equations rerun for the four specific subsamples: “low” and “high”
tested precollege academic preparation, and White students and students
of color. Columns 2 and 3 summarize the separate regression equations for the "low" precollege academic preparation (ACT or ACT equivalent ≤ 26) and "high" precollege academic preparation (ACT or ACT equivalent > 26) subsamples. As the two equations clearly show, the estimated net effect of interactional diversity on end of first-year critical thinking differed substantially between the "low" and "high" precollege academic preparation subsamples. For the "low" academic preparation subsample, interactional diversity had a statistically significant, positive net effect on first-year critical thinking (b = 0.925, p < 0.01, β = 0.106). For the "high" academic preparation subsample, however, the effect of interactional diversity on critical thinking was negative but statistically nonsignificant (b = -0.370, p > 0.10, β = -0.042). Thus, interactional diversity had its strongest positive effect on the development of critical thinking skills in the first year of college for students with relatively low levels of tested precollege academic preparation; as the level of precollege academic preparation increased, the positive impact of interactional diversity on critical thinking diminished.
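Probing a significant cross-product term by re-estimating the model within subsamples, as was done here, can be sketched as follows (simulated data; the ACT split point mirrors the text, everything else is hypothetical):

```python
import random

def slope(x, y):
    """OLS slope of y on x (simple regression, closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx

# Simulate: diversity engagement benefits low-ACT students, not high-ACT ones.
random.seed(3)
records = []
for _ in range(600):
    act = random.randint(15, 36)
    div = random.gauss(0, 1)
    effect = 0.9 if act <= 26 else 0.0
    ct = 60 + effect * div + random.gauss(0, 2)
    records.append((act, div, ct))

# Split at the hypothetical ACT = 26 cut and re-estimate within each subsample.
low = [(d, c) for a, d, c in records if a <= 26]
high = [(d, c) for a, d, c in records if a > 26]
b_low = slope([d for d, _ in low], [c for _, c in low])
b_high = slope([d for d, _ in high], [c for _, c in high])
```

Comparing the subsample slopes then reveals the direction of the conditional effect, just as comparing the coefficients in columns 2 and 3 does.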
Columns 4 and 5 in Table 3 summarize the separate regression estimates for White students and students of color, respectively. As the two
equations demonstrate, there were pronounced differences in the estimated net effects of interactional diversity for White students and students of color. For White students, interactional diversity had a marginally significant and positive net effect (b = 0.530, p < 0.10, β = 0.061)
on first-year critical thinking. For students of color, however, the net effect of interactional diversity on critical thinking was slightly negative,
though statistically nonsignificant (b = -0.178, p > 0.10, β = -0.020).
This finding is generally consistent with the earlier finding of Pascarella
et al. (2001) that diversity experiences have a stronger positive influence on the development of critical thinking for White students than for
students of color.
Summary and Discussion
This study sought to extend the findings of previous research (Pascarella et al., 2001) which suggested that the effects of diversity experiences on one salient dimension of cognitive growth during college,
the development of critical thinking skills, do not accrue equally to all
students. To this end, we analyzed the first-year data from the Wabash
National Study of Liberal Arts Education (WNSLAE)—a 19-institution
longitudinal investigation of the college experiences fostering growth in
student cognitive and personal development. Specifically, we estimated
the net effects of student exposure to classroom and interactional diversity experiences on end-of-first-year scores on a standardized measure of critical thinking skills. We introduced statistical controls for an
extensive array of potential confounding influences. These included
institutional characteristics, student demographic and family variables,
precollege critical thinking scores and tested academic preparation, and
first-year college experiences such as residence arrangement, the liberal
arts emphasis of coursework, and time spent preparing for class. In the
presence of these controls, measures of student exposure to diversity
topics in the classroom and student involvement in interactional diversity experiences had no statistically reliable general effects on the development of first-year critical thinking.
The failure to uncover significant general effects of diversity experiences on critical thinking, however, masked important conditional effects of interactional diversity based on student demographic and precollege characteristics. Specifically, the net effects on critical thinking
of student involvement in interactional diversity activities differed significantly in magnitude (and direction) for White students versus students of color, and for students who entered college with different levels of tested academic preparation (i.e., ACT or ACT equivalent score).
Engagement in interactional diversity activities (e.g., interactions with
students of a different race, attending a racial/cultural awareness workshop, and the like) had a marginally significant positive net influence
on the development of critical thinking skills for White students, but
a slightly negative, though nonsignificant, effect for students of color.
This finding essentially replicates the earlier evidence reported by
Pascarella et al. (2001), based on data from the 1992–95 National Study
of Student Learning, which led them to posit that the cognitive benefits
linked to involvement in diversity experiences during college may be
more likely to accrue to White students than to their minority counterparts. The finding is similar to that reported by Gurin (1999) and Gurin et al. (2002) using a measure of "thinking complexity" as the dependent variable, although it does not appear that Gurin actually tested the conditional effect between White and African American students for statistical significance. It is also consistent with Jayakumar's (2008) recent findings concerning the particular salience of interactions with diverse peers during college for the post-college cross-cultural competencies of White students.
The argument advanced by Gurin et al. (2002) that student cognitive growth may be stimulated by meaningful encounters with new and unfamiliar racial groups provides a useful point of departure for understanding why we found that exposure to diversity experiences contributes more to the development of critical thinking for White students than for students of color. Approximately 80% of the White students in
our sample reported attending secondary schools that were composed
“totally” or “mostly” of White students. This was in sharp contrast to
students of color in the sample—only 30% of whom who attended secondary schools that were composed “totally” or “mostly” of students of
color. (Indeed, 42% of the students of color attended secondary schools
that were “totally” or “mostly” composed of White students.) Thus, as
opposed to their peers who were students of color, for the vast majority of White students in our sample the initial year of college may have
been the first real opportunity they had to encounter substantial numbers
of fellow students from different racial and cultural backgrounds. Consequently, the novelty and challenge of interpersonal encounters with
diversity may have had a greater impact on White students than on
students of color, who were more likely to be familiar with a racially
diverse academic environment.
Because only 20% of our sample was non-White, we did not have sufficient numbers within racial subgroups (e.g., African American, Latino, Asian) in the students of color category to conduct a more fine-grained analysis of the conditional effect involving race—a limitation of the study. It may well be, of course, that the impacts of diversity experiences on the acquisition of critical thinking skills are discernibly different for the racial subgroups within the larger category of students of color.
The second conditional effect we uncovered was even more pronounced than that involving interactional diversity and race. Students
who entered college with relatively low levels of tested academic preparation (i.e., ACT or ACT equivalent) derived substantially greater critical thinking benefits from engagement in interactional diversity activities than did their counterparts with relatively high levels of precollege
academic preparation. This suggests that involvement in interactional
diversity experiences may have a compensatory influence on the development of critical thinking skills during the first year of college. That
is, in terms of facilitating growth in first-year critical thinking skills,
such diversity experiences may be most beneficial for students who are
relatively less prepared to acquire critical thinking skills from formal
academic experiences when they enter postsecondary education. Conversely, for students who are relatively well prepared for the academic
demands of college, involvement in diversity experiences may have
little impact on the development of critical thinking skills. It should be
pointed out, however, that this conditional effect is based on a single
sample and awaits independent replication.
The above conditional effect should not be confused with a regression artifact—the likelihood that students with low levels of academic preparation (a measure that correlated 0.71 with precollege critical thinking) would demonstrate greater gains in first-year critical thinking than their counterparts with high levels of precollege academic preparation. Rather, our
conditional effect suggests that the positive influence of involvement
in interactional diversity experiences on critical thinking is more pronounced for less academically prepared students than for their more academically well prepared peers. It is possible, however, that an artificial
ceiling effect might have contributed to the conditional effect involving
precollege academic preparation and interactional diversity. Students
with high levels of precollege academic preparation (i.e., ACT or equivalent score > 26) might make smaller gains in first-year critical thinking than their less well-prepared counterparts (i.e., ACT or equivalent score ≤ 26) because they entered college with higher levels of critical thinking skills to begin with. This would act to attenuate the variance in end of first-year critical thinking scores for those with ACT (or equivalent) scores > 26 (Hays, 1973). (For example, a substantial number of students in
this “high” precollege academic preparation group could get exactly the
same score on the end of first-year critical thinking test because they
answered all the test questions correctly.) In turn, the attenuated variance in end of first-year critical thinking for those with ACT scores >
26 might then lead to an artificially smaller net positive association between interactional diversity and end of first-year critical thinking for
that group of students than for their less academically well-prepared
counterparts.
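The ceiling-effect mechanism described in this paragraph is easy to demonstrate by simulation: capping scores at a test maximum shrinks the variance of the high-scoring group and attenuates the estimated slope. A hypothetical sketch (the ceiling value and all data are invented):

```python
import random

def variance(v):
    """Sample variance (n - 1 denominator)."""
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

def slope(x, y):
    """OLS slope of y on x (simple regression, closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx

random.seed(11)
CEILING = 72  # hypothetical maximum score on the test

# Well-prepared students start near the ceiling, so their gains get truncated.
div = [random.gauss(0, 1) for _ in range(1000)]
raw = [68 + 1.5 * d + random.gauss(0, 3) for d in div]
capped = [min(s, CEILING) for s in raw]

b_raw, b_capped = slope(div, raw), slope(div, capped)
```

Capping compresses the top of the distribution, so both the outcome variance and the diversity-outcome slope shrink, which is exactly the artifact the authors then rule out.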
In our analyses, however, we found no substantial evidence of attenuated variance in end of first-year critical thinking for the more academically well prepared subsample of students. The actual variances
in end of first-year critical thinking scores for those with ACT scores ≤ 26 (variance = 23.62, SD = 4.86) and those with ACT scores > 26 (variance = 18.43, SD = 4.29) differed only by chance (F = 1.28, p >
0.10). Indeed, as shown in Table 3, the overall explained variances for
the “low” and “high” academic preparation groups were almost identical (0.584 and 0.551, respectively). Furthermore, for the “high” precollege academic preparation subsample the interactional diversity scale
did not have a smaller positive effect on end of first-year critical thinking than it did for the “low” academic preparation subsample. Rather,
it actually had a small negative effect (-0.370). Such evidence suggests
that the significant conditional effect we uncovered involving level of
tested precollege academic preparation and interactional diversity experiences is not totally explainable by attenuated variance in the dependent variable attributable to a ceiling effect for the most academically
well-prepared students. However, irrespective of whether one attributes the significant conditional effect to a ceiling effect for the most academically well-prepared students, it does not change the fact that for students with "low" precollege academic preparation (ACT or equivalent ≤ 26) interactional diversity did, in fact, have a significant, positive net influence (b = 0.925, p < 0.01) on critical thinking scores at the end of the first year of college.
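The variance-homogeneity check reported in this paragraph can be reproduced arithmetically from the published figures: each SD squares to its variance (to rounding), and the F ratio is the larger variance over the smaller:

```python
import math

# Reported subgroup statistics from the text (ACT <= 26 vs. ACT > 26).
var_low, sd_low = 23.62, 4.86
var_high, sd_high = 18.43, 4.29

# The SDs are consistent with the variances to rounding precision.
assert math.isclose(sd_low ** 2, var_low, rel_tol=0.01)
assert math.isclose(sd_high ** 2, var_high, rel_tol=0.01)

# F ratio for equality of variances: larger variance over the smaller.
f_ratio = var_low / var_high  # ≈ 1.28, matching the reported F = 1.28
```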
Taken in total, our findings clearly underscore the potential complexity involved in mapping and understanding the various cognitive impacts of diversity experiences. In this sense, they complement a small but
growing body of evidence suggesting that the effects associated with
diversity and diversity experiences might well be conditional rather than
general (e.g., Antonio et al., 2004; Chang, 1999, 2001; Gurin, 1999;
Gurin et al., 2002). It may well be that diversity experiences have few
important general cognitive effects for all students. Rather, the cognitive
benefits of diversity experiences may be substantially shaped by the different characteristics of the students who engage in them. From a methodological standpoint, this suggests the importance of estimating conditional effects in any future research that seeks to thoroughly understand
the cognitive benefits of engagement in diversity experiences. Focusing
only on general effects may mask the cognitive benefits of such experiences for important subgroups of students.
Finally, it is worth considering why the only significant cognitive effects of diversity experiences we uncovered involved student involvement in interactional diversity rather than exposure to diversity topics in
the classroom. One possible explanation is that our scale of classroom
diversity was simply too superficial to capture genuine classroom exposure to diversity experiences. Our classroom diversity scale measured
only the extent to which coursework taken included diversity and diversity-related topics. What may be more important in terms of influencing the development of critical thinking skills is the extent to which diversity experiences are actually woven into the pedagogical approaches
used in courses (Gurin, 1999; Mayhew, Wolniak, & Pascarella, 2008).
At the same time, the fact that interactional diversity (which measured
diversity experiences that might often occur outside the classroom) had
significant positive effects on growth in first-year critical thinking skills
for important student subgroups perhaps underscores the likelihood
that much cognitive growth in college is socially based (Pascarella &
Terenzini, 2005). In some contexts, for some students, what happens in
nonclassroom settings may be as important as what actually occurs in
classes. Moreover, the cognitive impact of what happens in these nonclassroom settings may manifest itself as early as the first year of postsecondary education.
Policy Implications
Our findings reinforce the argument that engagement in diversity experiences may have important implications for the intellectual development of substantial numbers of students during the first year of college.
Thus, an institutional policy based on programmatic efforts to weave
exposure to diverse individuals, ideas, and perspectives into students’
lives may serve to enhance the intellectual mission of a college. Such an
institutional policy appears to be increasingly justifiable by the evidence
rather than simply by rhetoric. That said, our findings also add to a body
of literature and research evidence suggesting that exposure to diversity
should not be regarded as a programmatic “silver bullet” equally influential for all students. From the standpoint of the development of critical thinking skills some types of students appear to benefit more from
engagement in diversity experiences than do others. Consistent with
prior evidence, the results of this study suggest that White students may
be particularly open to the positive influence of diversity experiences
on critical thinking. Similarly, diversity experiences may be particularly
beneficial for students who are the least well prepared academically for
college. Therefore, while diversity programming may be implemented
broadly, it may be particularly important to target these specific subgroups of students.
References
Adams, M., & Zhou-McGovern, Y. (1994). The sociomoral development of undergraduates in a “social diversity” course: Theory, research, and instructional applications.
Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.
American College Testing Program (ACT). (1990). Report on the technical characteristics
of CAAP; Pilot year 1: 1988–89. Iowa City, IA: Author.
Antonio, A., Chang, M. J., Hakuta, K., Kenny, D., Levin, S., & Milem, J. (2004). Effects
of racial diversity on complex thinking in college students. Psychological Science, 15,
507–510.
Astin, A. W. (1993). What matters in college? Four critical years revisited. San Francisco:
Jossey-Bass.
Chang, M. J. (1999). Does racial diversity matter?: The educational impact of a racially
diverse undergraduate population. Journal of College Student Development, 40(4),
377–395.
Chang, M. J. (2001). Is it more than about getting along?: The broader educational implications of reducing students' racial biases. Journal of College Student Development,
42(2), 93–105.
Chang, M. J., Astin, A. W., & Kim, D. (2004). Cross-racial interaction among undergraduates: Some consequences, causes, and patterns. Research in Higher Education, 45(5),
529–553.
Chang, M. J., Denson, N., Saenz, V., & Misa, K. (2006). The educational benefits of sustaining cross-racial interaction among undergraduates. The Journal of Higher Education, 77, 430–455.
Dey, E. (1991). Community service and critical thinking: An exploratory analysis of collegiate influences. Paper presented at the conference on Setting the Agenda for an Effective Research Strategy for Combining Service and Learning in the 1990s, Wingspread
Conference Center, Racine, WI.
Ennis, R. H. (1993). Critical thinking assessment. Theory into Practice, 32(3), 179–186.
Ethington, C. (1997). A hierarchical linear modeling approach to studying college effects.
In J. Smart (Ed.), Higher education: Handbook of theory and research (Vol. 12, pp.
165–194). New York: Agathon Press.
Giancarlo, C. A., & Facione, P. A. (2001). A look across four years at the disposition
toward critical thinking among undergraduate students. The Journal of General Education, 50(1), 29–55.
Groves, R., Fowler, F., Couper, M., Lepkowski, J., Singer, E., & Tourangeau, R. (2004).
Survey methodology. Hoboken, NJ: Wiley-Interscience.
Gurin, P. (1999). Expert report of Patricia Gurin. In University of Michigan (Ed.), The
compelling need for diversity in education, Gratz et al., v Bollinger et al., No. 97-75237
and Grutter et al. v Bollinger et al., No. 97-75928. Ann Arbor: University of Michigan.
Retrieved from http://www.umich.edu/~urel/admissions/legal/expert/gurintoc.html
Gurin, P., Dey, E., Hurtado, S., & Gurin, G. (2002). Diversity and higher education: Theory and impact on educational outcomes. Harvard Educational Review, 72(3), 330–366.
Hays, W. (1973). Statistics for the social sciences. New York: Holt, Rinehart and Winston.
Hu, S., & Kuh, G. D. (2003). Diversity experiences and college student learning and personal development. Journal of College Student Development, 44(3), 320–334.
Hurtado, S. (2001). Linking diversity and educational purpose: How diversity affects the
classroom environment and student development. In G. Orfield (Ed.), Diversity challenged: Evidence on the impact of affirmative action (pp. 187–203). Cambridge, MA:
Harvard Educational Publishing Group.
Jayakumar, U. M. (2008). Can higher education meet the needs of an increasingly diverse
and global society? Campus diversity and cross-cultural workforce competencies. Harvard Educational Review, 78(4), 615–651.
Kim, M. (1995). Organizational effectiveness of women-only colleges: The impact of college environment on students’ intellectual and ethical development (Unpublished doctoral dissertation). University of California, Los Angeles.
Kim, M. (1996, April). The effectiveness of women-only colleges for intellectual development. Paper presented at the annual meeting of the American Educational Research
Association, New York.
Kim, M. M. (2002). Cultivating intellectual development: Comparing women-only colleges and coeducational colleges for educational effectiveness. Research in Higher Education, 43(4), 447–481.
Kuh, G. (2001). The National Survey of Student Engagement: Conceptual framework and
overview of psychometric properties. Bloomington: Indiana University, Center for Postsecondary Research.
Kuklinski, J. H. (2006). The scientific study of campus diversity and students’ educational
outcomes. Public Opinion Quarterly, 70(1), 99–120.
Mayhew, M., Wolniak, G., & Pascarella, E. (2008). How educational practices affect the
development of life-long learning orientations in traditionally-aged undergraduate students. Research in Higher Education, 49(4), 337–356.
Milem, J. (2003). The educational benefits of diversity: Evidence from multiple sectors.
In M. J. Chang, D. Witt, J. Jones, & K. Hakuta (Eds.), Compelling interest: Examining
the evidence on racial dynamics in colleges and universities. Stanford, CA: Stanford
University Press.
Pascarella, E. (2001). Using student self-reported gains to estimate college impact: A cautionary tale. Journal of College Student Development, 42, 488–492.
Pascarella, E., Bohr, L., Nora, A., & Terenzini, P. (1995). Cognitive effects of 2-year and
4-year colleges: New evidence. Educational Evaluation and Policy Analysis, 17(1),
83–96.
Pascarella, E., Palmer, E., Moye, M., & Pierson, C. (2001). Do diversity experiences influence the development of critical thinking? Journal of College Student Development, 42,
257–271.
Pascarella, E., & Terenzini, P. (2005). How college affects students: A third decade of
research. San Francisco: Jossey-Bass.
Pascarella, E., Wolniak, G., & Pierson, C. (2003). Explaining student growth in college
when you don’t think you are. Journal of College Student Development, 44, 122–126.
Pascarella, E. T. (2006). How college affects students: Ten directions for future research.
Journal of College Student Development, 47(5), 508–520.
Pascarella, E. T., & Terenzini, P. T. (1991). How college affects students: Findings and
insights from twenty years of research. San Francisco: Jossey-Bass.
Pike, G. R. (1996). Limitations of using students’ self-reports of academic development
as proxies for traditional achievement measures. Research in Higher Education, 37(1),
89–114.
Raudenbush, S., & Bryk, A. (2001). Hierarchical linear models: Applications and data
analysis methods. Thousand Oaks, CA: Sage Publications.
Terenzini, P., Springer, L., Yaeger, P., Pascarella, E., & Nora, A. (1994). The multiple influences on students’ critical thinking skills. Paper presented at the annual meeting of the
Association for the Study of Higher Education, Orlando, FL.
Umbach, P., & Kuh, G. (2006). Student experiences with diversity at liberal arts colleges:
Another claim for distinctiveness. Journal of Higher Education, 77, 169–192.
Appendix A
Correlations among All Variables

                                   1       2       3       4       5       6       7       8       9      10      11      12      13      14      15
 1. Inst. Structural Diversity   1.000
 2. Inst. Type: Research U.      0.097   1.000
 3. Inst. Type: Regional U.      0.245  -0.274   1.000
 4. Inst. Type: Com. Coll.      -0.127  -0.135  -0.098   1.000
 5. Parental Education          -0.102   0.187  -0.097  -0.199   1.000
 6. Male                        -0.049   0.024  -0.053  -0.003   0.105   1.000
 7. Person of Color              0.382   0.035   0.086  -0.089  -0.215  -0.061   1.000
 8. Tested Academic Prep.       -0.124   0.375  -0.204  -0.297   0.407   0.111  -0.219   1.000
 9. Critical Thinking T-1       -0.122   0.299  -0.175  -0.145   0.332   0.088  -0.251   0.712   1.000
10. Live On Campus              -0.125   0.125  -0.093  -0.615   0.195  -0.019  -0.037   0.313   0.177   1.000
11. Liberal Arts Classes        -0.122   0.298  -0.172  -0.217   0.239   0.042  -0.095   0.442   0.391   0.211   1.000
12. Class Prep. Time            -0.077   0.161  -0.118  -0.111   0.103  -0.142  -0.043   0.198   0.137   0.152   0.170   1.000
13. Classroom Diversity          0.043  -0.074  -0.097  -0.089  -0.011  -0.102   0.086  -0.028  -0.034   0.050   0.041   0.046   1.000
14. Interact. Diversity          0.143   0.057  -0.106  -0.125   0.000  -0.034   0.233   0.049   0.052   0.136   0.131   0.109   0.291   1.000
15. Critical Thinking T-2       -0.135   0.294  -0.190  -0.153   0.305   0.067  -0.238   0.693   0.785   0.186   0.380   0.153  -0.023   0.036   1.000

Note. Critical Thinking T-1 is the Precollege Critical Thinking Score; Critical Thinking T-2 is the End of First-Year Critical Thinking Score. [The matrix was reassembled from a column-scrambled extraction; the anchor correlations cited in the text (0.712 between tested academic preparation and precollege critical thinking, and -0.615 between community college attendance and living on campus) appear in their reconstructed positions.]