The Cattell-Horn-Carroll Theory of
Cognitive Development: Justification
for the continued use of cognitive
instrumentation in psychological
assessment.
John M. Garruto, MS, NCSP
School Psychologist
Frederick Leighton Elementary School
Oswego City School District
Oswego, NY
©2005
IDEA WARS

It is an age of controversy between the cognitive processing
factions, the RTI factions, and the traditional assessment factions.
After a silent and cold war between all theorists, the federal
government has come out with a law that clearly favors the RTI
faction. The traditional factions and the cognitive processing
factions have been silently accepting, yet planning how their
paradigms can be pushed through, despite discouragement from
these paradigms by the federal law.

Meanwhile, there are a few rebel school psychologists who believe
that best practice can be instituted by combining both the RTI
paradigm and the cognitive processing paradigm. Can these few
school psychologists stand tall against overwhelming odds to
provide best practice for their clients?

(Try to picture the preceding text boxes leaving the screen through
the top.)
By the end of this presentation,
you should be able to:
Identify the limitations of the cognitive-achievement
discrepancy approach to learning disability
evaluations.
Discuss the promise and the pitfalls to the uses of
informal assessment when working with students
who have learning concerns.
Discuss how the Cattell-Horn-Carroll (CHC) theory
and cross battery assessment fit into a problem
solving model for identifying learning disabilities and
for intervention for students with various learning
concerns.
The traditional approach
to learning disability
identification
Historically, eligibility for learning disabilities was
determined by use of a discrepancy model. How the
discrepancy was established varied from practitioner to
practitioner, but one of these methods was often used:
Use of Normal Curve Equivalents
Simple-Difference Method
Predicted-Achievement Method
Use of Normal Curve Equivalents
Pros:
Allowed practitioner to
find a 50% discrepancy
between cognitive and
achievement.
Easy method to use
(divide the NCE of the
cognitive score by two; an
achievement NCE below
that value is considered
discrepant.)
Cons:
Assumes a 1:1
relationship between
cognitive and
achievement measures.
Easier to identify lower
functioning students,
harder to identify higher
functioning students
(cognitively).
See Mark Penalty
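The arithmetic of the 50% NCE rule can be sketched in a few lines (a hypothetical illustration, not part of the original presentation; the scores are invented):

```python
def nce_discrepant(cognitive_nce: float, achievement_nce: float) -> bool:
    """Apply the 50% NCE rule: an achievement NCE below half the
    cognitive NCE is flagged as discrepant."""
    return achievement_nce < cognitive_nce / 2

# Hypothetical scores: cognitive NCE of 60, achievement NCE of 25.
print(nce_discrepant(60.0, 25.0))  # True: 25 < 30, flagged as discrepant
print(nce_discrepant(60.0, 35.0))  # False: 35 >= 30, not flagged
```

Note how the rule hard-codes the 1:1 assumption criticized above: the cognitive score alone sets the cutoff, so a high-functioning student needs a much larger absolute gap to be flagged.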
Use of the simple-difference method:
Pros:
Established statistics for
determining statistical
significance.
Easy to use (subtract
cognitive from
achievement and find
corresponding alpha
level.)
Cons:
Does not take into
account natural
regression to the mean.
Alpha levels are
misunderstood: they
indicate the likelihood
that the difference is not
due to chance, not how
prevalent the difference
is in the population.
See Mark Penalty
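The mechanics of testing a simple difference for significance can be sketched with the standard error of the difference between two scores (a minimal illustration; the standard deviation, reliabilities, and scores are assumed values — in practice the critical values come from the test manuals):

```python
import math

def critical_difference(sd: float, rxx: float, ryy: float, z: float = 1.96) -> float:
    """Critical value for a statistically significant difference between
    two standard scores, based on the standard error of the difference."""
    se_diff = sd * math.sqrt(2 - rxx - ryy)
    return z * se_diff

# Hypothetical values: SD = 15, reliabilities .90 and .85, alpha = .05.
crit = critical_difference(15.0, 0.90, 0.85)   # 1.96 * 7.5 = 14.7
diff = 105 - 82                                # cognitive minus achievement
print(diff > crit)                             # True: the 23-point gap is significant
```

This is exactly the con noted above: the test says the gap is unlikely to be chance, but says nothing about how common a 23-point gap is in the population.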
Use of the predicted achievement
method
Pros:
Uses regression to
the mean as a way
to predict individual
achievement.
Cons:
If you do not use
data in the manual,
you must know how
to predict
achievement.
See Mark Penalty
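The regression logic behind predicted achievement can be sketched with the standard prediction formula, predicted = mean + r × (observed − mean), in standard-score units (a minimal illustration; the .60 correlation is an assumed value, and in practice the test manual's prediction tables would be used):

```python
def predicted_achievement(iq: float, r_xy: float, mean: float = 100.0) -> float:
    """Regression-based prediction: the predicted achievement score is
    pulled toward the mean in proportion to the IQ-achievement correlation."""
    return mean + r_xy * (iq - mean)

# Hypothetical correlation of .60 between cognitive and achievement measures.
print(predicted_achievement(130.0, 0.60))  # about 118, not 130: regression toward the mean
print(predicted_achievement(70.0, 0.60))   # about 82, not 70
```

Because the prediction regresses toward the mean, a student with an IQ of 130 is expected to achieve around 118, not 130 — the flaw the simple-difference method ignores.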
So what is this Mark
Penalty?
The Mark Penalty:
“Mark 4:25: "For he that hath, to him shall be given: and
he that hath not, from him shall be taken even that which he
hath." The Mark Penalty is incurred when a student's
disability (e.g., visual impairment, hearing loss, or learning
disability basic process disorder) is allowed to depress not
only measures of academic achievement, but also estimates
of the student's intelligence so that the misguided examiner
or benighted team concludes that there is no significant
difference between the student's academic achievement and
the level of achievement that would be predicted from the
student's score on the intelligence test. The same disability is
depressing both the student's actual achievement and the
erroneous estimate of the student's intellectual ability.”
Taken from the Dumont-Willis webpage:
http://alpha.fdu.edu/psychology/FLOW_CHART.htm
As can be seen in the quote below from IDEA 1997, such a practice is NOT LEGAL.
Taken from 34 C.F.R. § 300.532, Evaluation procedures:
Tests are selected and administered so as best to ensure that
if a test is administered to a child with impaired sensory,
manual, or speaking skills, the test results accurately reflect
the child’s aptitude or achievement level or whatever other
factors the test purports to measure, rather than reflecting the
child’s impaired sensory, manual, or speaking skills (unless
those skills are the factors the test purports to measure).
Taken from
http://framework.esc18.net/documents/34CFR300/500/300.532.htm
In addition to the preceding factors, the traditional
approach does not necessarily link assessment to
intervention. It is primarily focused on eligibility. Such a
conceptualization can be problematic because it focuses on
scores, not necessarily on processes. A discrepancy itself
does not highlight a disorder in one or more of the basic
psychological processes.
Recap on traditional assessment
Normal curve equivalents, the simple-difference
method, and predicted achievement method, are
some ways of establishing a discrepancy.
None of these approaches takes into account the “Mark
Penalty,” and all are contrary to § 300.532 “Evaluation
Procedures” of the C.F.R.
Assessments may be focused on eligibility only and
not on the intervention planning process.
The Response to Intervention model has been indicated as
an option by the 2004 Reauthorization of the Individuals with
Disabilities Education Act to circumvent the problems
identified with the traditional approach.
The theme of this approach is that special education should be
the last option visited: a student is eligible only if he or she
cannot receive a Free and Appropriate Public Education without
special education supports.
Characteristics of the Response to
Intervention paradigm
Decreased emphasis on norm-referenced or
individualized assessment (may not translate to what
is happening in the classroom)
Increased opportunities for indirect intervention
(consultation)
Increased opportunities for direct intervention
(counseling)
Increased use of alternative local assessments (such
as DIBELS, curriculum based assessments, or
curriculum based measurement.)
Positive points to this paradigm
Progress monitoring is easier (can administer
multiple assessments and document quality
of interventions).
Onus is on using empirically based
interventions, not minor informal changes.
Student can be compared to peers in their
local cohort, as opposed to a national
representation.
Are there cons to this framework? YES!
Assessments are outcome-driven, not
process driven (You only see the products as
a result of cognitive processes.)
More difficult to determine to what extent
facilitators and inhibitors end and real
learning problems begin.
Difficult to determine etiology of the referral
concern.
Recap on response to intervention:
Decrease emphasis on normed assessment.
Increased opportunities for other school
psychologist roles.
Can monitor progress by administering multiple
assessments.
Results focus on outcomes, not process.
Difficult to determine a disorder in the basic
psychological processes; the presence of an LD is
simply assumed.
The multi-tiered model advocates a comprehensive
implementation of the problem solving model. However, few
writings on utilizing a response to intervention framework give
suggestions about alternatives to the “referral” stage. Specific
theoretical models are not endorsed. There are problems with this
framework.
Consider:
•If the definition of learning disabled is still a disorder in one or
more of the basic psychological processes, how will these
deficits be defined if we go right to identification without
formal assessment?
•Will there be a temptation to use the traditional approach at the
final tier, an approach that we already know is problematic?
This is why a solid theoretical approach is endorsed. An
approach that fits this model is…
The Cattell-Horn
Carroll (CHC)
Theory of
Cognitive
Development
The Cattell-Horn-Carroll Theory of cognitive development is a synthesis
of the models by Raymond Cattell, John Horn, and John Carroll.
Cattell postulated that there were two overall abilities people have:
Crystallized intelligence and fluid intelligence. Crystallized
intelligence reflected abilities that were relatively static (such as
learned information) while fluid intelligence was more related to novel
problem solving.
John Horn expanded this model by adding seven to nine (depending
on your theoretical orientation) broad abilities. They include:
John Horn’s broad abilities
Crystallized Intelligence (Gc)
Fluid Reasoning (Gf)
Auditory Processing (Ga)
Processing Speed (Gs)
Visual-Spatial Processing (Gv)
Short-Term Memory (Gsm)
Reading/Writing Ability (Grw)
Long-Term Retrieval (Glr)
Quantitative Reasoning (Gq)
After analyzing past years of data, John Carroll came up with his own set
of broad abilities. He also offered a three-stratum theory of cognitive
development.
•Stratum III represents ‘g’ or overall intelligence.
•Stratum II represents the broad abilities previously discussed.
•Stratum I represents the narrow abilities grouped under the broad
abilities (for example, the broad construct of Gsm includes the narrow
abilities of working memory and memory span.)
John Horn and John Carroll agreed to synthesize their theories
(Carroll’s three stratum theory with Horn’s broad abilities). The
result is the CHC theory of cognitive development.
Following the evolution of CHC theory, cross battery assessment
emerged as a way to assess students. The principle is to select
tests from varied batteries which best match the referral concern.
The examiner then puts the scores into a cross-battery template,
where the stratum II and I levels are aggregated (the subtests were
matched with their stratum I and II counterparts by an expert
consensus study.) Those scores are then averaged.
Although combining subtests and averaging may not seem like
best practice, it has been noted to be an empirically defensible
practice (See the FAQ section in http://www.crossbattery.com)
Why cross battery assessment? Why not
just give the whole WJ-III?
Assessments should be based on the referral
concern.
The WJ-III is a good test, but other tests
measure other constructs better (for example,
WISC-IV is a great measure of Gc, but has no
measures of Ga or Glr).
Highlighting cognitive profiles, as opposed to
broad ability composites can better reflect a
child’s strengths and weaknesses.
Why bother with this theory of assessment if
the problem solving literature is not strongly
linking assessment with intervention?
The research DOES link cognitive constructs with
academic deficiencies.
A thorough assessment of cognitive abilities can be
helpful in terms of determining to what extent other
facilitators and inhibitors may be hindering progress
of students (e.g. attention and focus, motivation,
ecological factors (instruction)).
Why bother with this theory of assessment if
the problem solving literature is not strongly
linking assessment with intervention? (Part
Deux)
Consider the problem solving model. Accurately
identifying the problem is important. Think of
functional behavioral assessments: it is the
hypothesis that is the crux of the FBA (which fits into
the problem solving model.)
School psychologists are experts at understanding
information processing. Individualized assessment
can confirm those “disorders in the basic
psychological processes.”
In addition to the preceding justifications for cognitive assessment,
consider this quote by Hale and Fiorello (2004) on aptitude-treatment
interactions:
Because most ATI research occurred when investigators had poor assessment
instruments and a limited understanding of brain functions, early AT research failures
have been attributed to a variety of reasons. Many cognitive constructs were poorly
defined or poorly measured (Ysseldyke & Salvia, 1974). Often, heterogeneous groups
were simply divided at the median to define “high” and “low” groups. Treatments were
poorly defined or implemented without integrity checks (Reynolds, 1988). (p. 40)
Hale and Fiorello then go on to say:
As Branden and Kratochwill (1997) have noted, however, the fact that ATIs weren’t
established in the past doesn’t mean that they can’t be established in the future,
especially at the single-subject level of analysis. Changing the focus from the content of
test items (e.g., auditory, visual) to the underlying psychological processes (Reynolds,
Kamphaus, Rosenthal, & Hiemenz, 1997) may be key to understanding the true nature
of brain-behavior relationships for individual children. (p. 40)
Not convinced?
Just kidding
In a case study noted in Hale and Fiorello (2004), they reflect on
a child who was identified as learning disabled in the second
grade. Only anecdotal reports were used at reevaluation and he
was subsequently brought in for a private evaluation. The
clinician noticed a decline in overall functioning and that his
profile had been markedly different. After a subsequent referral
to a neurologist, it was found out that the child had a brain
tumor. (p.21)
Someone already did this
PowerPoint?
Two days before presenting this PowerPoint, I
discovered that someone had done a very similar
presentation at NASP. Jim Hanson (2005) of Portland
public schools spoke of synthesizing the two models.
Here is something from his presentation that is
noteworthy:
Cognitive Processing
Is it ethical NOT to use available technology that better addresses
instructional variables, results in earlier service delivery, and might
improve outcomes for students?
RTI
Is it ethical to CLASSIFY a student as having a specific learning
disability on the “assumption” or “inference” that a child has a
“within-child” neurological difference?
Why use CHC as part of the
Problem Solving Model?
It is an empirically based theory, solidified through
comprehensive research of previous evaluations.
The cognitive constructs HAVE been related to
certain academic difficulties.
It allows for confirmation of a disorder in one or more
of the psychological processes.
The research suggests there are still possibilities for
linking assessment with intervention.
Implications for practitioners and
training programs:
Promoting an RTI paradigm in schools pushes us to
put the focus on intervention, not diagnosis or
problem admiration.
Using a solid theoretical base for cognitive
assessment is justified as part of the continuum for a
problem solving model in identifying learning
disabilities.
Thank you for coming, participating, and putting up with
my pop-culture addiction!
References
Deno, S. L. (2003). Problem Solving as "Best Practice". Best Practices in School Psychology IV. A. Thomas and J. Grimes. Bethesda, National
Association of School Psychologists. 1: 37-55.
Dumont, R. P. & Willis, J. O. (2001). CHC Theory. Retrieved July 5, 2005, from http://alpha.fdu.edu/psychology/chc_theory.htm.
Elliot, C. D. (1990). Differential Ability Scales. San Antonio, TX: The Psychological Corporation.
Evans, J. J., R. G. Floyd, et al. (2002). "The Relations Between Measures of Cattell-Horn-Carroll (CHC) Cognitive Abilities and Reading
Achievement During Childhood and Adolescence." School Psychology Review 31(2): 246-263.
Flanagan, D. P. (2000). "Improving the validity of test interpretation through Wechsler-based Gf-Gc cross-battery assessment." NYS
Psychologist 12(1): 38-42.
Flanagan, D. P. and S. O. Ortiz (2003). Best Practices in Intellectual Assessment: Future Directions. Best Practices in School Psychology IV. A.
Thomas and J. Grimes. Bethesda, National Association of School Psychologists. 2: 1351-1372.
Flanagan, D. P., S. O. Ortiz, et al. (2002). The Achievement Test Desk Reference. Boston, Allyn & Bacon.
Floyd, R. G. (2003). "Relations Between Measures of Cattell-Horn-Carroll (CHC) Cognitive Abilities and Mathematics Achievement Across the
School-Age Years." Psychology in the Schools 40(2): 155-172.
Hale, J. B. and C. A. Fiorello (2004). School Neuropsychology: A Practitioner's Handbook. New York, The Guilford Press.
McGrew, K. and D. P. Flanagan (1997). "Beyond g: The impact of Gf-Gc specific cognitive abilities research on the future use and interpretation
of intelligence tests in the schools." School Psychology Review 26(2): 189-201.
Reschly, D. J. and J. P. Grimes (2003). Best Practices in Intellectual Assessment. Best Practices in School Psychology IV. A. Thomas and J.
Grimes. Bethesda, National Association of School Psychologists. 2: 1337-1350.
Reschly, D. J. and J. E. Ysseldyke (2003). Paradigm Shift: The Past is Not the Future. Best Practices in School Psychology IV. A. Thomas and J.
Grimes. Bethesda, National Association of School Psychologists. 1: 3-20.
Sattler, J. M. (2001). Assessment of Children: Cognitive Applications. San Diego, CA: Author.
Tilly, W. D. (2003). Best Practices in School Psychology as a Problem-Solving Enterprise. Best Practices in School Psychology IV. A. Thomas
and J. Grimes. Bethesda, National Association of School Psychologists. 1: 21-36.
United States Department of Education Office of Special Education and Rehabilitative Services (2005). Proposed Rules. Retrieved July 5, 2005,
from http://www.nasponline.org/advocacy/05-11804.pdf
Wagner, R. K., Torgesen, J. K., & Rashotte, C. A. (1999). Comprehensive Test of Phonological Processing. Austin, TX: PRO-ED.
Watkins, M. W., Kush, J. C., & Glutting, J. J. (1998). Discriminant and predictive validity of the WISC-III ACID profile among children with
learning disabilities. Psychology in the Schools, 34(4), 309-319.
Willis, J. O. & Dumont, R. P. (1998). Guide to Identification of Learning Disabilities (1998 New York State Ed., p. 104). Acton, MA: Copley.