Assessing the Academic Competence of
College Students:
Validation of a Self-Report Measure of Skills
and Enablers
Stephen N. Elliott
Wisconsin Center for Education Research
University of Wisconsin-Madison
James C. DiPerna
Lehigh University
Abstract
This study investigated the criterion-related validity of the Academic Competence Evaluation Scales-College (ACES-College), a self-report measure of academic competence. A nationally representative
sample of 250 students attending two- and four-year colleges completed the ACES-College and provided
GPA information and their ACT or SAT scores. A subsample of 31 students self-identified as having been
formally classified with a learning disability. Students’ scores on the ACES provided
evidence to partially support the prediction that the ACES was moderately correlated with their GPAs and
college admission test scores. In addition, strong evidence was found in the form of a significant MANOVA
and classification analyses for the prediction that ACES scores may be used to reliably differentiate a large
percentage of students with learning disabilities from students without a disability. These findings provide
good evidence for the criterion-related validity of scores from the ACES-College. The results are discussed
in terms of previous research on the ACES and the potential for the ACES to be used to facilitate
assessment and intervention services for college students experiencing learning difficulties.
This study examined the validity of a self-report instrument designed to facilitate a meaningful
assessment of college students’ academic competence and the strategies they use to improve academic
functioning. The research reported here is part of a program of research on the assessment of behaviors
that contribute to the academic competence of students from kindergarten through college (DiPerna, 1997,
1999; DiPerna & Elliott, 1999). Besides advancing a clearer understanding of the construct of academic
competence, the research has resulted in the publication of the Academic Competence Evaluation Scales-College, or ACES-College (DiPerna & Elliott, 2001). This article reports the criterion-related validity of
the ACES-College for a sample of students representative of the college population and including students
with learning disabilities.
Definition and Model of Academic Competence
Academic competence, as measured by the ACES-College, is defined as a multidimensional
construct composed of the skills, attitudes, and behaviors of a learner that contribute to academic success.
As such, academic competence includes many of the critical skills — reading, writing, calculating, solving
problems, attending, questioning, and studying — needed for academic success (DiPerna, 1997; DiPerna
& Elliott, 1999). An examination of the empirical literature indicates that researchers have defined this
construct somewhat inconsistently. For example, several researchers have not explicitly defined the
construct or have used “academic competence” interchangeably with terms such as “academic
performance” and “academic ability” (e.g., Henggeler, Cohen, Edwards, Summerville, & Ray, 1991;
Rotheram, 1987).
The results of our research have indicated that the skills, attitudes, and behaviors contributing to
academic competence fall into one of two domains: academic skills or academic enablers (DiPerna &
Elliott, 2000). Academic skills are the basic and complex skills that are a central part of academic curricula
at the elementary and secondary levels of education, and they play a critical role in allowing students to
learn content specific knowledge at the post-secondary level. Academic enablers are attitudes and
behaviors that allow a student to benefit from instruction. Figure 1 provides a simplified picture of the
potentially complex relationships among instruction, academic enablers, academic skills, and academic
competence.
Academic Competence and College Students with Learning Disabilities
Although we were unable to identify previous studies that explicitly measured the academic
competence of college students with learning disabilities, researchers have explored specific academic,
cognitive, or enabling behaviors in the college learning disability population. Hughes and Osgood Smith
(1990) reviewed more than 100 articles related to students with disabilities in college. Fewer than one third
of the articles reported actual data regarding the cognitive or academic performance of college students
with disabilities. Although many of the data-based articles had methodological limitations (e.g., small
sample sizes, outcome measures with limited validity evidence for the college population), the authors were
able to make some inferences regarding the skills and abilities of college students with learning disabilities.
Specifically, college students with learning disabilities appeared to demonstrate similar levels of overall
intellectual functioning (as measured by standardized intelligence tests) to their peers without disabilities.
In addition, the students with disabilities frequently demonstrated meaningful differences from their peers
without disabilities in academic skills, especially in the areas of reading, writing, mathematics, and foreign
language.
Wilczenski and Gillespie-Silver (1992) reviewed the academic performance of students with
learning disabilities and students without disabilities at a large public university in the Northeast. As
expected, students with learning disabilities exhibited lower scholastic aptitude and achievement (SAT,
high school rank) than their peers without disabilities. In addition, they demonstrated lower achievement
(overall grade point average) than their peers without disabilities during the first year of college.
More recently, Dunn (1995) explored the skills and abilities of three groups of college students:
students with low achievement, students with documented learning disabilities, and students with self-identified learning disabilities. Dunn used a direct assessment (Scholastic Abilities Test for Adults) to
assess student participants’ cognitive and academic skills. In addition, Dunn used a questionnaire to
measure students’ self-perceptions of their academic (e.g., reading, writing), cognitive (e.g., memory,
attention), and enabling skills (e.g., study and social behaviors). Results indicated that students in the low
achievement group generally demonstrated a higher level of skills than the other two student groups.
Although few significant differences were found between the students with self-identified disabilities and
students with documented disabilities, the latter demonstrated significantly lower performance in written
language skills. Interestingly, Dunn also found that observed differences among the three groups of
students were consistent across the direct and the self-report assessments. This finding suggests that student
self-ratings can provide useful information for differentiating students with low achievement from students
with disabilities.
Rationale
Many practitioners working with college students experiencing academic difficulty have not had a
systematic method for assessing a student’s academic competence and, when necessary, conceptualizing
interventions to improve it. Currently, only a limited number of diagnostic achievement tests have
normative data for a college population (e.g., Woodcock-Johnson III Tests of Achievement; Woodcock,
McGrew, & Mather, 2001; Wechsler Individual Achievement Test-2; Psychological Corporation, 2001).
Although evidence is available to support the reliability and validity of these achievement batteries for
measuring college students’ academic skills, they do not provide information about nonacademic behaviors
that facilitate academic success. In addition, these assessments require a significant amount of time for
administration and interpretation because they are primarily intended for diagnostic rather than screening
purposes.
Conversely, a limited number of rating scales have been developed that assess specific types of
academic enablers, such as motivation or study skills. Two such scales are the Learning and Study
Strategies Inventory (LASSI; Weinstein, Schulte, & Palmer, 1987) and the Motivated Strategies for
Learning Questionnaire (MSLQ; Pintrich, McKeachie, & Smith, 1989). Both of these self-report
instruments assess a variety of study and learning strategies that have been shown to impact academic
performance; however, neither explicitly assesses the respondent’s academic skills. The ACES-College
was designed to address the limitations of existing measures by providing practitioners with an
efficient screening tool to assess a student’s academic skills and academic enablers. In addition, the ACES-College was developed to facilitate identification of possible intervention strategies by directly engaging
the student who is experiencing the learning difficulty.
In the remainder of this article, we examine the initial criterion-related validity evidence for the self-ratings of college students’ academic skills and academic enablers via the ACES-College. Specifically, we
present data that address questions about (a) the relation of scores on the ACES to students’ GPA and SAT
or ACT scores and (b) the ability of ACES scores to differentiate students with learning disabilities from
those without such disabilities.
Method
Participants
The sample consisted of 140 females and 110 males who had been in college for one or more years.
The proportions of Caucasian, African-American, Hispanic, and other racial/ethnic groups in the
standardization sample approximate the race/ethnic-group proportions of college students reported in the
October 1998 Census data (see Table 1). The students represented 39
colleges in four geographic regions of the country (as specified in the U.S. Census data) where advising
staff volunteered to serve as data collection site coordinators. Site coordinators were paid for their services
and solicited volunteer participants. A random subsample of participants was selected from each pool of
volunteers at an institution. While the sample only approximated the U.S. Census, it is important to
remember that many college students attend colleges or universities outside their region. The sample was
divided into two-year and four-year colleges or universities, both private and public.
Table 1
Demographic Characteristics of the ACES-College Sample

                              ACES-College Sample     1998 U.S. Census
                              (N = 250)               Projections
Gender
  Female                      56%                     56%
  Male                        44%                     44%
Race/Ethnicity
  Caucasian                   68%                     71%
  African American            15%                     13%
  Hispanic                     9%                      9%
  Other                        7%                      7%
Region
  Northeast                   16%                     23%
  North Central               25%                     20%
  South                       44%                     31%
  West                        15%                     25%
College Type
  2-year                      32%                     29%
  4-year                      68%                     71%
Disability Status
  No Disability               88%                     ---
  Learning Disability         12%                     ---
Of the 250 students who participated, 31 indicated they had been identified in high school or earlier
with a learning disability. In the majority of cases, the self-identified disability was confirmed against
student records by the site coordinator. However, it was not possible to determine how many participants
may have failed to identify themselves as having a disability given the confidentiality of this information.
Overall, 12% of the sample consisted of students with diagnosed learning disabilities. We oversampled
students with disabilities to ensure adequate numbers for statistical power. In typical random samples of
college students, approximately 6% of students report having a disability; 29% of them specify that they
have a learning disability (Horn & Berktold, 1999). In sum, our sample in many ways
approximates the college student population in the United States.
Materials
ACES-College. The ACES-College is a 108-item questionnaire which includes three separate scales
(Academic Skills - 30 items, Academic Enablers - 36 items, and Learning Strategies - 48 items). In
addition, each scale consists of multiple subscales.
Specifically, the Academic Skills scale includes three subscales: Reading/Writing,
Mathematics/Science, and Critical Thinking. The Reading/Writing subscale, which consists of skills
necessary for generating and understanding written language, includes items ranging from reading
comprehension to written communication. The Mathematics/Science subscale primarily reflects skills
related to the use and application of numbers and scientific concepts. Thus it includes items reflecting
measurement, computation, and problem solving. Finally, the Critical Thinking subscale assesses higher-order thinking skills, and includes items reflecting analysis, synthesis, and investigation.
Items on the Academic Skills Scale require criterion-referenced ratings based on students’
perception of grade-level expectations at their institution. Students provide ratings on a 5-point scale
ranging from Far Below to Far Above grade-level expectations. Each skill also is rated on how important
the skill is for functioning in classes. These ratings use a 3-point scale ranging from Not Important to
Critical. (The importance ratings are used to prioritize target skills for intervention.)
The Academic Enablers scale consists of four subscales: Interpersonal Skills, Motivation, Study
Skills, and Engagement. The Interpersonal Skills subscale measures communication and cooperation
behaviors necessary to interact successfully with other students. It consists of items in three domains: social
interaction, work interaction, and responsive behavior. The Motivation subscale assesses a student’s
initiative and persistence regarding academic subjects, and includes items that reflect responsibility,
preference for challenging tasks, and goal-directed behavior. The Study Skills subscale reflects behaviors
and skills that facilitate learning new information. The items on this subscale primarily fall within three
domains: work preparation, work completion, and work review. Finally, the Engagement subscale assesses
a student’s level of active participation during class; items on this subscale reflect asking questions,
volunteering answers, or assuming leadership in groups.
The Academic Enablers Scale uses frequency ratings based on students’ perception of how often they
exhibit a given enabling behavior. Students provide ratings on a 5-point scale ranging from Never to Almost
Always. Similar to the Academic Skills Scale, each enabler also is rated on how important the skill is for
functioning in classes.
Finally, the Learning Strategies Scale on the ACES-College is used to facilitate intervention
planning. Therefore, items on this scale are based on a summary of effective teaching and learning research
(Christenson, Rounds, & Gorney, 1992; Elliott, Kratochwill, Littlefield Cook, & Travers, 2000) and reflect
seven categories of learning strategies: expectations for learning and achievement, time management and
organization, maximizing learning during instruction, learning resources, homework and assignments, self-monitoring and evaluation, and rewards and consequences. These items are not scored; instead, they require
the respondent to select from three alternatives (Not Helpful, Somewhat Helpful, or Very Helpful) to
indicate if a strategy would be helpful for improving their own learning. Because these items are for
intervention planning purposes only and are not scored, they were not included in the analyses for this
article.
Grade point averages (GPA). Two measures of GPA were collected as part of the study. The first
required students to report their GPA from the most recently completed academic semester. The second
GPA measure required students to report their cumulative GPA throughout their post-secondary education.
Standardized tests of aptitude and achievement. Students were asked to provide their scores on
any standardized college admissions tests (i.e., SAT or ACT assessment) that they had taken prior to
enrolling in their postsecondary institution.
Procedure
Student participants were recruited through site coordinators at postsecondary institutions
throughout the United States. After consenting to participate, each student completed a questionnaire
requesting demographic information (race, gender, age, disability status, etc.) as well as current and
previous grades and standardized test scores. Each student then completed the standardization version of
the ACES Academic Skills and Academic Enablers Scales.
Predictions and Data Analyses
As noted, this study was designed to examine the criterion-related validity of the ACES-College and
specifically addressed questions about (a) the relation of ACES scores to students’ GPA and SAT or ACT
scores and (b) the ability of ACES scores to differentiate students with learning disabilities from those
without such disabilities. Dependent variables were students’ grade-level expectation ratings of Academic
Skills and frequency ratings of Academic Enablers.
Prediction #1. We predicted the ACES-College would correlate moderately (.30 < r < .60) with both
students’ GPAs and SAT or ACT scores. These predicted relationships were tested using correlation
analyses. The relationship between test scores and other observable variables is an important source of
validity evidence. Frequently, these observable variables are measured by other tests that have been
established to measure similar or different, but related, constructs. In the development of the ACES, we
were interested in understanding the relationships between academic competence and constructs such as
academic achievement (as measured by grades and standardized tests) and aptitude. This type of evidence
addresses questions about the degree to which relationships are consistent with the theory underlying the
proposed test interpretations.
Prediction #2. We also predicted that ACES scores for students with learning disabilities would be
significantly lower (p < .05) than the scores for students not identified as having a disability. This
predicted difference in ACES scores between groups of students was tested using a MANOVA and
followed up with a discriminant-function analysis to determine the classification accuracy of the ACES.
Mathematically, a MANOVA and a discriminant-function analysis are the same, but the emphases often
differ. The major question in the multivariate analysis of variance was whether group membership (i.e.,
students with learning disabilities and students without a disability) was associated with reliable mean
differences in combined dependent variable scores, whereas in the discriminant-function analysis, the
question was whether predictors (i.e., ACES scores) can be combined to predict group membership
reliably. The end goal of the discriminant-function analysis was to classify participants into one of the two
groups of students.
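The two-group classification procedure described above can be sketched with scikit-learn’s linear discriminant analysis. The subscale scores and group labels below are simulated for illustration only; they are not the study’s data, and the group sizes and score ranges are rough assumptions based on the sample description.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Simulate 7 ACES subscale scores for two groups, with somewhat lower
# mean scores for the learning-disability (LD) group. These values are
# hypothetical, not taken from the article.
n_nd, n_ld = 119, 31
X_nd = rng.normal(loc=35.0, scale=6.0, size=(n_nd, 7))
X_ld = rng.normal(loc=30.0, scale=6.0, size=(n_ld, 7))
X = np.vstack([X_nd, X_ld])
y = np.array([0] * n_nd + [1] * n_ld)  # 0 = no disability, 1 = LD

# Fit the discriminant function and classify each case into a group.
lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
pred = lda.predict(X)

# Per-group correct-classification rates (the rows of a Table-5-style matrix).
acc_nd = (pred[y == 0] == 0).mean()
acc_ld = (pred[y == 1] == 1).mean()
print(f"No disability correctly classified: {acc_nd:.2%}")
print(f"LD correctly classified: {acc_ld:.2%}")
```

The same fitted model answers both questions posed above: the diagonal of the resulting classification table gives the proportion correctly classified, and the off-diagonal cells show how misclassifications were distributed.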
Given the goal of group classification, one can ask two questions: What proportion of cases was correctly
classified? And when classification errors occurred, how were cases misclassified? Before looking at the results of
classification analyses based on discriminant functions with ACES subscale scores, it is important to
remember that the ACES was not developed for disability classification purposes. Instead, it was designed
to be a reliable measure of many of the skills and behaviors indicative of academically competent students.
Yet the ACES-College should be reasonably good at differentiating students with an academic difficulty
from students without such difficulties.
Results
Prediction #1: Relationships Between ACES and Academic Indicators
The prediction concerning the relationship between ACES-College scores and students’ GPAs and
SAT or ACT scores was partially supported. The correlational evidence that supports this first prediction
follows.
Grade point average (GPA). Correlations between ACES-College scores, last semester GPA, and
overall GPA are displayed in Table 2. As shown, correlations between the Academic Skills scores (total
and subscales) with last semester GPA were relatively low in magnitude, ranging from .16 to .21.
Correlations between Academic Skills scores and overall GPA were slightly higher in magnitude, ranging
from .23 to .27. With the exception of the Interpersonal Skills subscale, the Academic Enablers scores
(total and subscales) generally demonstrated stronger relationships with both GPA indices than the
Academic Skills scores. Correlations between last semester GPA and Academic Enablers scores ranged
from .02 to .38 and were consistently higher than the correlations between Academic Enablers scores and
overall GPA, which ranged from .02 to .31.
SAT and ACT scores. Correlations between ACES scales and subscales and students’ scores on the
SAT or ACT are displayed in Table 3. As illustrated, the Academic Skills Scale and most of the subscales
demonstrated low to moderate correlations with SAT scores, ranging from .08 to .42. As expected, the
Reading/Writing subscale demonstrated the largest correlation (.42) with SAT Verbal scores and the
Math/Science Subscale demonstrated the largest correlation (.38) with the SAT Math Score. The Academic
Skills Scale and all its subscales demonstrated significant low to moderate relationships with the ACT
Composite score. The higher correlations between the ACES Academic Skills scores and the ACT
Composite score are possibly due to the fact that the ACT Composite, like the ACES, samples a broad
range of academic skills (English, Reading, Mathematics, and Science Reasoning) whereas the SAT
measures Verbal and Mathematics skills only. The pattern of correlations across these measures indicates
that the ACES Academic Skills subscales measure distinct but related skills in reading and writing,
mathematics and science, and critical thinking.
Table 2
Correlations Between the ACES-College, Semester GPA, and Overall GPA

                                          GPA
                              Last Semester       Overall
Academic Skills Total             .16a              .23
  Reading/Writing                 .17a              .26
  Math/Science                    .18a              .25
  Critical Thinking               .21               .27
Academic Enablers Total           .38               .31
  Interpersonal Skills            .02b              .02b
  Motivation                      .37               .28
  Study Skills                    .36               .30
  Engagement                      .17a              .10b

Note. Grade level was partialled out of the correlations. All correlations significant at the
p < .01 level unless indicated otherwise. N’s ranged from 174 to 205.
a p < .05. b Nonsignificant.
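Partialling a covariate out of a correlation, as done with grade level in Table 2, amounts to correlating the residuals of each variable after regressing it on the covariate. The sketch below illustrates this with simulated values (not study data); the variable names and effect sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
grade = rng.integers(1, 5, size=n).astype(float)   # covariate (e.g., grade level)
skills = 0.5 * grade + rng.normal(size=n)          # simulated ACES-type score
gpa = 0.4 * grade + rng.normal(size=n)             # simulated criterion

def residualize(y, covariate):
    """Return residuals of y after regressing it on the covariate."""
    X = np.column_stack([np.ones_like(covariate), covariate])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Partial correlation = correlation between the two residual series.
r_partial = np.corrcoef(residualize(skills, grade),
                        residualize(gpa, grade))[0, 1]
r_raw = np.corrcoef(skills, gpa)[0, 1]
print(f"raw r = {r_raw:.2f}, partial r = {r_partial:.2f}")
```

Because both simulated variables here share variance only through the covariate, the partial correlation shrinks toward zero relative to the raw correlation, which is the logic behind removing grade level before interpreting the ACES-GPA relationships.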
Correlations between the ACES Academic Enablers scale and subscales with SAT (Verbal, Math,
and Combined) and ACT Composite consistently were not statistically significant. As such, these
correlations cannot reliably be distinguished from a value of 0 (i.e., no relationship). The one significant
correlation occurred between the Motivation Subscale and SAT Verbal Scale, and was low in magnitude
(.31). This overall pattern of correlations indicates that the ACES Academic Enablers scale measures
something different from academic aptitude as measured by these standardized tests.
Prediction #2. Differences in ACES Scores for Known Groups of Students
Statistical evidence supported the prediction that the ACES-College can effectively identify and
differentiate students with learning disabilities from students without a disability. Table 4 provides the
mean raw score ratings for students with and without disabilities on each of the ACES-College scales and
subscales. An examination of the data suggests that the students with a learning disability exhibited mean
academic skills significantly below those of their peers without disabilities.
Table 3
Correlations Between the ACES-College, SAT Scores, and ACT Composite Score

                                       SATa                       ACTb
                            Verbal     Math      Combined      Composite
Academic Skills Total        .27*      .23*        .22*          .45***
  Reading/Writing            .42***    .23*        .28***        .47***
  Math/Science               .16       .38***      .25*          .41***
  Critical Thinking          .24*      .08         .14           .28*
Academic Enablers Total      .09      -.07         .16           .14
  Interpersonal Skills      -.02      -.13        -.02          -.01
  Motivation                 .31***    .08         .18           .18
  Study Skills               .19      -.10         .14           .14
  Engagement                 .11      -.06         .16           .01

a N’s ranged from 77 to 97. b N’s ranged from 59 to 63.
*p < .05. ***p < .001.
In contrast, these two groups of students exhibited similar mean levels of academic enabling behaviors. The
observed means were tested using a multivariate analysis of variance (MANOVA). The multivariate F-ratio resulting from this analysis was statistically significant [F (7, 142) = 9.00, p < .001]. Subsequent
univariate analyses for each of the ACES Academic Skills subscales [Reading/Writing: F (1, 148) = 54.40;
Math/Science: F (1, 148) = 15.85; Critical Thinking: F (1, 148) = 18.36] also were statistically significant
at the p < .001 level. This statistical evidence supports the observation that the ACES-College is sensitive
to many of the skills and behaviors that differentiate students with and without academic skill difficulties.
Table 4
Means and Standard Deviations for Scores on the ACES-College by Disability Status

                                           Disability Status
                                   No Disabilitya    Learning Disabilityb
Academic Skills Scale Total        109.46 (16.90)      90.22 (19.20)
  Reading/Writing (10 items)        36.82 (6.74)       27.56 (6.02)
  Math/Science (10 items)           34.97 (6.56)       29.91 (7.18)
  Critical Thinking (10 items)      37.35 (6.31)       32.07 (7.81)
Academic Enablers Scale Total      148.33 (17.82)     143.05 (18.21)
  Interpersonal Skills (8 items)    33.00 (4.21)       32.16 (4.07)
  Motivation (10 items)             41.93 (5.88)       40.52 (5.67)
  Study Skills (10 items)           42.10 (6.34)       41.94 (6.49)
  Engagement (8 items)              30.54 (6.19)       28.86 (6.00)

Note. Standard deviations are in parentheses.
a N’s ranged from 275 to 302. b N’s ranged from 36 to 50.
In addition to conducting a MANOVA for educational status, we conducted a MANOVA with ACES
subscales as dependent variables and College Level (2-year vs. 4-year school), Gender, and Minority Status
as independent variables. The effects for College Level, F(7, 169) = 1.32, p = .243, Minority Status, F(7,
169) = 1.22, p = .296, and a three-way interaction, F(28, 688) = 0.97, p = .517, all were nonsignificant;
however, the effect due to gender was significant, F(7, 169) = 3.82, p = .0007, with female students on
average scoring higher than their male peers. Follow-up univariate ANOVAs of the gender effect indicated
that most of the differences occurred on the Academic Enablers subscales. Although these differences
were statistically significant, they were not large enough from a practical perspective to warrant
identification of different norms by gender for the Academic Enablers subscales.
A discriminant-function analysis represents the final evidence concerning test-criterion relationships
for the ACES-College. Classification analyses resulting from the combination of subscale scores on the
ACES-College are summarized in Table 5. The results indicate that the ACES-College, on average across
grades 13-16, classified nearly 76% of the students assessed accurately into one of two groups: (a) students
without disabilities or (b) students identified as having a learning disability.
Discussion
The research reported in this article focused on the validity of the ACES-College, a recently developed
self-report measure of academic competence. Previous research has demonstrated that academic competence as measured by the
ACES is a multidimensional construct comprised of a skills domain and an enabler domain (DiPerna &
Elliott, 2000). The present study used a representative sample of college students’ ratings on the ACES to
explore the instrument’s relationship with commonly used indicators (i.e., GPA) and predictors (i.e., SAT
and ACT scores) of academic functioning. In addition, to further advance an understanding of the ACES-College, we tested its ability to meaningfully differentiate between students known to have a learning
disability and students who had not been identified with any disability.
Table 5
Classification Analysis for Educational Status Groups Using the ACES-College

                                      Predicted Group Membership
                                General Education    Learning Disability
Actual Group Membership     n      n        %           n        %
General Education          119    92      77.31        27      22.69
Learning Disability         31     8      25.81        23      74.19

Overall percentage of correctly classified cases: 75.75
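The overall figure reported in Table 5 (75.75%) appears to be the unweighted mean of the two per-group correct-classification rates rather than the raw pooled hit rate, which would be 115/150. This interpretation is an inference from the numbers, not stated in the article; the short check below verifies the arithmetic.

```python
# Cell counts taken directly from Table 5.
correct_ge, n_ge = 92, 119   # general education, correctly classified
correct_ld, n_ld = 23, 31    # learning disability, correctly classified

rate_ge = 100 * correct_ge / n_ge              # 77.31%
rate_ld = 100 * correct_ld / n_ld              # 74.19%
balanced = (rate_ge + rate_ld) / 2             # unweighted mean of the rates
pooled = 100 * (correct_ge + correct_ld) / (n_ge + n_ld)  # raw hit rate

print(f"per-group rates: {rate_ge:.2f}%, {rate_ld:.2f}%")
print(f"unweighted mean: {balanced:.2f}%   pooled: {pooled:.2f}%")
```

The unweighted mean (75.75%) matches Table 5 exactly, while the pooled rate (76.67%) does not, which is why the former reading seems more likely.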
This study is part of a broader program of research conducted to establish validity evidence for the
resulting test scores as required for any new assessment tool (American Educational Research Association,
1999). Test developers and users historically have referred to the type of evidence we collected as
concurrent validity, convergent and discriminant validity, predictive validity, or criterion-related validity.
Regardless of what it is called, this type of evidence is important for understanding what a test measures
and whether it relates to other variables as expected based on the theoretical model that guided its
development and use.
The pattern of convergent correlations between ACES scales and subscales and GPA provided
evidence that the ACES-College measures skill and behavior domains that are related to students’ actual
performance in their courses, although the correlations are somewhat lower than those obtained in previous
research using a K-12 version of the teacher ACES (DiPerna & Elliott, 1999, 2000). There are several
plausible explanations for these findings. One explanation is the restricted range of the sample. College
students, even those who are struggling, demonstrated a level of academic proficiency prior to entering
college. Thus, the college population generally represents a smaller and higher-achieving sample than the
general population. Although every effort was made during ACES standardization to collect a diverse
college sample with regard to demographic variables as well as academic achievement, the college
population is a unique (and restricted) sample by definition. A second explanation is that GPA data were
self-reported and, although students had no incentive to misreport GPA, inaccuracies inevitably occurred as
a result of having to report these values from memory. Third, given the target population for ACES use
(i.e., students experiencing academic difficulty), the instrument emphasizes fundamental skills and
behaviors that allow students to learn and demonstrate knowledge (e.g., reading, writing, math, thinking)
rather than assessing content knowledge, which is arguably the primary attribute assessed in many college
courses. Although future research is necessary to identify which (or what combination) of these hypotheses
is accurate, the obtained correlations provide evidence that the ACES measures skills and behaviors related
to current and overall academic achievement at the collegiate level.
In contrast, the Academic Enablers scales are clearly measuring constructs different from aptitude or
previous achievement, but they are measuring behaviors and skills related to current academic
achievement. The Interpersonal Skills subscale did not demonstrate significant relationships with any of the
external validity measures. Although interpersonal skills or prosocial behavior has been demonstrated to be
a significant predictor of current academic achievement for students in grades K-12 (DiPerna 1999;
Malecki & Elliott 2002; Wentzel, 1993), this relationship has not been adequately explored in
postsecondary education. At a practical level, interpersonal skills are necessary to function successfully in a
college environment (both academically and socially); however, relative to elementary and secondary
classrooms, these skills are de-emphasized in many courses at the college level. We decided to retain the
Interpersonal Skills subscale on the ACES-College given the empirical evidence based on the internal
structure of the measure as well as the critical role these skills play in functioning within a college
environment (DiPerna, 2001).
The inclusion of a sample of students with learning disabilities in this study was central to questions
about the criterion-related validity and the utility of the ACES-College. Although our sampling approach
was not entirely random and it was not possible to verify the disability status of all participants, the
resulting self-identified sample appeared to be representative of the U.S. population of college students
with learning disabilities. As expected, we found that students with learning disabilities self-reported levels
of academic skills that were on average significantly lower than those of peers without disabilities. The
academic skill differences across the two known groups are consistent with expectations based on the
definition of a learning disability as well as previous findings with the teacher and student versions of the
ACES (DiPerna & Elliott, 2000). Although the academic enablers findings are inconsistent with previous
research using the ACES-Teacher and Student versions at the secondary level (DiPerna & Elliott, 2000),
the results make conceptual sense. That is, students with disabilities who have well-developed enablers
(similar to those of other students who pursue postsecondary education) appear to be most likely to enter
and stay in college. To be successful at the college level, students with disabilities need academically
enabling behaviors that are similar to those of their college-bound peers.
Although the ACES-College was not designed primarily as an instrument for classification
purposes, it nonetheless correctly classified a high percentage of students into their known groups.
Practitioners and researchers should never use a single assessment
instrument to make important educational decisions, such as a disability classification. Yet, the ACES-College has the potential to be an efficient supplement to more direct measures of academic skills. In
addition, it has the added benefit of actively involving the student in analyzing his or her own academic
skills, enabling behaviors, and learning strategies.
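To make the logic of a known-groups classification analysis concrete, the sketch below classifies simulated students into LD and non-LD groups from subscale-style scores. This is purely illustrative: the group means, the simulated data, and the use of a simple nearest-centroid rule are assumptions for demonstration, not the analysis or values reported in this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical standardized scores on four self-report subscales for two
# known groups: students with learning disabilities (LD) and without.
# The sample sizes echo the study (31 LD, 250 total); the means do not.
ld = rng.normal(loc=-0.6, scale=1.0, size=(31, 4))       # LD group
non_ld = rng.normal(loc=0.3, scale=1.0, size=(219, 4))   # comparison group

X = np.vstack([ld, non_ld])
y = np.array([1] * len(ld) + [0] * len(non_ld))  # 1 = LD, 0 = non-LD

# Nearest-centroid rule: assign each student to the group whose mean
# profile is closest in Euclidean distance -- a simple stand-in for the
# discriminant-style classification analyses used in validity studies.
centroids = np.array([X[y == g].mean(axis=0) for g in (0, 1)])
dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
predicted = dists.argmin(axis=1)

accuracy = (predicted == y).mean()
print(f"overall classification accuracy: {accuracy:.2f}")
```

The percentage of students placed back into their known group (the printed accuracy) is the quantity such analyses report; well-separated group profiles yield a high hit rate.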
Collectively, there is substantial evidence to support the conclusion that scores from the ACES-College have moderate relationships with external criteria commonly used to measure students’ academic
functioning. Specifically, this evidence suggests that scores on the ACES-College (a) share statistically
significant variance with scores from individual standardized tests of achievement, and (b) differentiate
between students who have learning disabilities and students who have no identified academic difficulties.
Continued research efforts are needed to document the validity of assessments used to make important
educational decisions. Future research with the ACES-College will examine the self-reported use of
learning strategies by students experiencing academic difficulty and interventions that have been found
effective for increasing skills, enablers, and strategies of all students. The goal of all this work is to better
understand what it takes for students to achieve a high level of academic competence throughout their
formal educational experiences.
References
American Educational Research Association. (1999). Standards for educational and psychological testing.
Washington, DC: Author.
Christenson, S.L., Rounds, T., & Gorney, D. (1992). Family factors and student achievement: An avenue
to increase students’ success. School Psychology Quarterly, 7, 178-206.
DiPerna, J.C. (1997). Academic competence: The construct and its measurement via teacher ratings.
Unpublished master’s thesis, University of Wisconsin.
DiPerna, J.C. (1999). Testing student models of academic achievement. Unpublished doctoral dissertation,
University of Wisconsin.
DiPerna, J.C. (2001). Validity evidence for the content and structure of the Academic Competence
Evaluation Scales – College Edition. Manuscript submitted for publication.
DiPerna, J.C., & Elliott, S.N. (1999). The development and validation of the Academic Competence
Evaluation Scale. Journal of Psychoeducational Assessment, 17, 207-225.
DiPerna, J.C., & Elliott, S.N. (2000). The Academic Competence Evaluation Scales (ACES K-12). San
Antonio, TX: The Psychological Corporation.
DiPerna, J.C., & Elliott, S.N. (2001). The Academic Competence Evaluation Scales (ACES College). San
Antonio, TX: The Psychological Corporation.
Dunn, C. (1995). A comparison of three groups of academically at-risk college students. Journal of College
Student Development, 36, 270-279.
Elliott, S.N., Kratochwill, T.R., Littlefield Cook, J., & Travers, J. (2000). Educational psychology:
Effective teaching, effective learning (3rd ed.). Boston: McGraw-Hill.
Henggeler, S.W., Cohen, R., Edwards, J.J., Summerville, M.B., & Ray, G.E. (1991). Family stress as a link
in the association between television viewing and achievement. Child Study Journal, 21, 1-10.
Horn, L., & Berktold, J. (1999). Students with disabilities in postsecondary education: A profile of
preparation, participation, and outcomes. Washington, DC: U.S. Department of Education, National
Center for Education Statistics.
Hughes, C. A., & Osgood Smith, J. (1990). Cognitive and academic performance of college students with
learning disabilities: A synthesis of the literature. Learning Disability Quarterly, 13, 66-79.
Malecki, C.K., & Elliott, S.N. (2002). Children’s social behaviors as predictors of academic achievement: A
longitudinal analysis. School Psychology Quarterly, 17(1), 1-23.
Pintrich, P. R., McKeachie, W. J., & Smith, D. (1989). The Motivated Strategies for Learning
Questionnaire (MSLQ). Ann Arbor, MI: National Center for Research to Improve Post-Secondary
Teaching and Learning.
Psychological Corporation. (2001). Wechsler Individual Achievement Test (2nd ed.). San Antonio, TX:
Author.
Rotheram, M.J. (1987). Children’s social and academic competence. Journal of Educational Research, 80,
206-211.
Weinstein, C. E., Schulte, C., & Palmer, D. H. (1987). Learning and Study Strategies Inventory.
Clearwater, FL: Hemisphere.
Wentzel, K.R. (1993). Does being good make the grade? Social behavior and academic competence in
middle school. Journal of Educational Psychology, 85, 357-364.
Wilczenski, F. L., & Gillespie-Silver, P. (1992). Challenging the norm: Academic performance of
university students with learning disabilities. Journal of College Student Development, 33, 197-202.
Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock-Johnson III Tests of Achievement.
Itasca, IL: Riverside Publishing.
About the Authors
Stephen N. Elliott received his doctorate at Arizona State University in 1980 and is currently a
professor of Educational Psychology and Associate Director of the Wisconsin Center for Education
Research at UW-Madison. He is a faculty member in the School Psychology Program and currently directs
four federal grants concerning testing accommodations and services for students with disabilities and their
teachers.
James C. DiPerna received his doctorate at the University of Wisconsin-Madison and is currently an
assistant professor in the School Psychology Program at Lehigh University. His research interests focus on
the assessment of academic skills and enablers for students K-college and providing intervention services
for at-risk learners.