
TECHNICAL ASSISTANCE TO SCHOOL DISTRICTS
Identification of Students with Learning
Disabilities under the IDEA 2004
Oregon Response to Intervention
Office of Student Learning & Partnerships
September 2005
OREGON DEPARTMENT OF EDUCATION
Public Service Building
255 Capitol Street NE
Salem, Oregon 97310-0203
Phone: (503) 378-3569
www.ode.state.or.us
This paper could not have been developed without the assistance of the special education staff
from Tigard-Tualatin School District. The Oregon Department of Education appreciates their
willingness to share their skills and knowledge to benefit Oregon students with disabilities and
their families.
It is the policy of the State Board of Education and a priority of the Oregon Department of
Education that there will be no discrimination or harassment on the grounds of race, color, sex,
marital status, religion, national origin, age or disability in any educational programs, activities or
employment. Persons having questions about equal opportunity and nondiscrimination should
contact the State Superintendent of Public Instruction at the Oregon Department of Education.
Identification of Students with Learning
Disabilities under the IDEA 2004
Oregon Response to Intervention
This report is posted on the ODE website at:
http://www.ode.state.or.us/search/results/?id=319.
(Effective September 28, 2005)
The Oregon Department of Education hereby gives permission to copy any or all
of this document for educational purposes.
OrRTI Guidance
Table of Contents
Introduction
Section One, Response to Intervention
Section Two, Implementing Response to Intervention (RTI)
Section Three, Perspectives on Evaluation Models
Appendix
     References
     Response to Intervention Readiness Checklist
Introduction
Every day Oregon educators make decisions about children that are of life long
importance. Among the most profound of these is the conclusion that a child’s
educational struggles are the result of a disability. Educators engage in this
difficult task because they know that, despite the dangers inherent in labeling
students, important benefits may follow. When the decision is accurate, it can
help parents and children understand the source of difficulties. It opens the door
to resources, assistance, and accommodations.
Deciding a child does not have a disability is equally important. That conclusion
says to general educators that they can effectively educate the student. It tells
parents and students that success is attainable through hard work, practice, and
engaged instruction, without special education services.
It is critical that schools make these decisions based on the best information
possible. For the majority of children in special education, those identified as
having a learning disability (LD), this decision has been made in a climate of
uncertainty. For decades the field of learning disabilities has struggled with
identification issues both in practice and in the law. However, with the 2004
reauthorization of the Individuals with Disabilities Education Act (IDEA 2004), the
climate is changing.
When IDEA was reauthorized in 1997, the U.S. Department of Education Office
of Special Education Programs (OSEP) began a process to “carefully review
research findings, expert opinion, and practical knowledge … to determine
whether changes should be proposed to the procedures for evaluating children
suspected of having a specific learning disability” (Federal Register, 1999, p.
12541). This review resulted in a “Learning Disabilities Summit.” At this summit,
a series of white papers presented relevant developments in the LD field and
provided empirical validation for the use of alternatives to traditional discrepancy
models. Following the summit, a series of meetings was conducted to gain
consensus in the field regarding issues around LD. The following are consensus
statements from the 2002 Learning Disabilities Roundtable report that apply to
LD identification and were influential in the 2004 reauthorization process:

•  Identification should include a student-centered, comprehensive evaluation and problem solving approach that ensures students who have a specific learning disability are efficiently identified.

•  Decisions regarding eligibility for special education services must draw from information collected from a comprehensive individual evaluation using multiple methods and sources of relevant information.

•  Decisions on eligibility must be made through an interdisciplinary team, using informed clinical judgment, directed by relevant data, and based on student needs and strengths.

•  The ability-achievement discrepancy formula should not be used for determining eligibility.

•  Regular education must assume active responsibility for delivery of high quality instruction, research-based interventions, and prompt identification of individuals at risk while collaborating with special education and related services personnel.

•  Based on an individualized evaluation and continuous progress monitoring, a student who has been identified as having a specific learning disability may need different levels of special education and related services under IDEA at various times during the school experience.

(Source: Specific Learning Disabilities: Finding Common Ground; pp. 29-30)
IDEA 2004 represents consensus on at least three points regarding LD
identification. These points are: (1) the field should move away from the use of
aptitude achievement discrepancy models, (2) there needs to be rapid
development of alternative methods of identifying students with learning
disabilities, and (3) a response to intervention (RTI) model is the most credible
available method to replace discrepancy. RTI systematizes the clinical judgment,
problem solving, and regular education interventions recommended in the
consensus statements above. In RTI, students are provided with carefully
designed interventions that are research based, and their response to those
interventions is carefully tracked. This information is analyzed and used as one
component in determining whether a child has a learning disability.
IDEA 2004 includes two important innovations designed to promote change:
1. States may not require school districts to use a severe discrepancy
formula in eligibility determination, and
2. Districts may use an alternative process, including a “response to
intervention” (RTI) method described in IDEA 2004, as part of eligibility
decisions.
This document provides information to assist school districts in designing and
adopting an RTI approach that best fits the district, is technically sound, and is
sustainable. It also reviews current information regarding the use of other
evaluation approaches. Whatever model the district uses to implement RTI, such
an adoption will affect more than a district’s special education and evaluation
departments. RTI requires a way of thinking about instruction, academic
achievement, and individual differences that makes it impossible to implement
without fully involving general education.
It is important that practitioners know why the authors of IDEA 2004 decided to
include an alternative to the discrepancy approach. Adopting an alternative
requires that individuals release long held beliefs and practices and involves
substantial effort and resources. Federal requirements call for states to adopt
criteria for LD eligibility that districts will be required to use. This paper will
provide a basis for Oregon’s criteria and a better understanding of the basis for
the criteria for relevant stakeholders. The development and implementation of an
RTI model has been identified as an area of focus in Oregon’s System
Performance Review & Improvement system (SPR&I).
This paper contains three sections: “Response to Intervention,” “Implementing
Response to Intervention,” and “Perspectives on Evaluation Models.”
“Response to Intervention” provides detail about various models that have been
proposed, described, and implemented and strengths and challenges inherent in
each. “Implementing Response to Intervention” is a practical guide to developing
and sustaining RTI in a school district. “Perspectives on Evaluation Models”
reviews background information regarding research in discrepancy and
processing models of LD evaluation.
Section One
Response to Intervention
IDEA 2004 allows the use of a student’s “response to scientific, research-based
intervention” (20 U.S.C. 1414(b)(6)(A)) as part of an evaluation. Response to
intervention (RTI) functions as an alternative for learning disability (LD)
evaluations within the general evaluation requirements of IDEA 2004. The
statute continues to include requirements that apply to all disability categories,
such as the use of validated, nonbiased methods, and evaluation in all
suspected areas of difficulty. IDEA 2004 adds a new concept in eligibility that
prohibits children from being found eligible for special education if they have not
received instruction in reading that includes the five essential components of
reading instruction identified by the Reading First Program. These requirements
are those recognized by the National Reading Panel: phonemic awareness,
phonics, reading fluency (including oral reading skills), vocabulary development,
and reading comprehension strategies. RTI is included under this general
umbrella. By using RTI, it is possible to identify students early, reduce referral
bias, and test various theories for why a child is failing. It was included in the law
specifically to offer an alternative to discrepancy models.
RTI is not a new approach. It is recognizable under other names such as
dynamic assessment, diagnostic teaching, and precision teaching. Those terms,
however, have been applied to approaches used to maximize student progress
through sensitive measurement of the effects of instruction. RTI applies similar
methods to draw conclusions and make LD classification decisions about
students. The underlying assumption is that using RTI will identify children
whose intrinsic difficulties make them the most difficult to teach. Engaging a
student in a dynamic process like RTI provides an opportunity to assess various
hypotheses about the causes of a child’s difficulties, such as motivation or
constitutional factors like attention.
Capacities Required to Adopt RTI
Several organizational approaches are available for implementing RTI. These
models generally encompass the following four system requirements (Gresham, 2002; Vaughn, 2002):
1. Measurement of academic growth
2. Use of validated interventions
3. Capability of distinguishing between:
a. Performance deficits and skill deficits and
b. Instructional problems and individual learning problems
4. Ability to determine the effects of interventions and make decisions about
cutoff criteria
These requirements imply both technical and practical capacity that must be
considered when an RTI system is developed or adopted.
Measurement of Academic Growth
Fuchs and Fuchs (1998) introduced the important concept that a student, in order
to be considered to have a learning disability, must be dually discrepant. It has
been demonstrated that, in order for a student to be reliably classified as having
LD, low achievement must be accompanied by slow progress. Using low
achievement alone results in group membership that will change substantially over time, with students moving into and out of the group (Francis et al., 2005). RTI decisions must be made both on the basis of a student's relatively low achievement and on the student's slow slope of progress.
This criterion can be met by use of a well documented approach referred to as
curriculum based measurement (Fuchs and Fuchs, 1998). Curriculum based
measurement uses “critical indicators” of growth such as oral reading fluency,
correct word sequences written, and rate of correct calculations. These
measures may be normed on a local sample (Shinn, 1988) or on the results of
large scale studies. Alternatively, typical peers may be sampled as a direct
comparison group during the assessment phase (Fuchs and Fuchs, 1998).
CBMs have been established as valid, easy to use, and economical. They can
also be used as frequently as daily without threatening their sensitivity.
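As an illustration of how the dual discrepancy idea described above might be checked against CBM data, the sketch below compares a student's current level and growth slope with those of local peers. This is a minimal sketch in Python; the cutoff values (roughly the 10th percentile for level, half the median peer slope for growth) and all data are hypothetical placeholders, not established criteria.

```python
def dually_discrepant(student_level, student_slope, peer_levels, peer_slopes,
                      level_percentile=10, slope_fraction=0.5):
    """Return True if a student shows both low achievement and slow growth.

    student_level: current CBM score (e.g., words read correctly per minute)
    student_slope: growth per week estimated from progress monitoring data
    peer_levels, peer_slopes: the same quantities for typical local peers
    The cutoffs are illustrative; districts set their own decision rules.
    """
    # Low achievement: at or below roughly the chosen percentile of peers
    ordered = sorted(peer_levels)
    cut_index = max(0, int(len(ordered) * level_percentile / 100) - 1)
    low_achievement = student_level <= ordered[cut_index]

    # Slow progress: growing at less than a fraction of the median peer slope
    median_slope = sorted(peer_slopes)[len(peer_slopes) // 2]
    slow_progress = student_slope < slope_fraction * median_slope

    return low_achievement and slow_progress

# Hypothetical example: 20 peers with screening levels and weekly growth rates
peer_levels = [55, 60, 62, 48, 70, 66, 58, 73, 61, 64,
               52, 69, 75, 57, 63, 68, 59, 71, 65, 54]
peer_slopes = [1.2, 1.0, 1.4, 0.9, 1.3, 1.1, 1.0, 1.5, 1.2, 1.1,
               0.8, 1.3, 1.6, 1.0, 1.2, 1.4, 1.1, 1.5, 1.3, 0.9]
print(dually_discrepant(student_level=35, student_slope=0.3,
                        peer_levels=peer_levels, peer_slopes=peer_slopes))
```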
To aid in the early identification of students who are not progressing as expected, Good and Kaminski (1996) have developed a number of
“indicators” of early literacy development as predictors of later reading
proficiency. These measures, included in the Dynamic Indicators of Basic Early
Literacy Skills (DIBELS), provide a tool to overcome the important challenge of
early identification of children with potential reading problems. The DIBELS
system allows for careful tracking of students on the development of early skills
related to phonological awareness, alphabetic understanding, and fluency. The
DIBELS system includes a series of benchmarks that assist in the “sorting” of
students into tiered groups that have increasing levels of risk. A further benefit of
using the DIBELS system is that oral reading fluency measures extend through
sixth grade and provide normative data to thousands of school districts in the
United States.
The DIBELS measures are provided to Oregon schools at no cost through the
Oregon Reading First Project. Further information about DIBELS may be found
at http://dibels.uoregon.edu/techreports.
It should be noted that RTI research and model implementation generally
focuses on elementary aged children. The measures that are available are most
appropriately used with younger students and, as students mature, factors such
as motivation and behavior make interpretation of students’ performance
increasingly complex. This is true of traditional testing paradigms as well. RTI
models frequently combine response to intervention with hypothesis testing or
problem solving approaches, both of which become increasingly important for
older students. For students in late elementary and secondary schools, careful
review of students’ histories is very important.
Use Validated Interventions
General education is the first intervention. Many authors (Kame’enui and
Simmons, 2002) conceptualize the first phase of “intervention” to be at the
general education basic or core curriculum level. From this perspective, use of a
research based core curriculum is a necessary precondition for adopting RTI.
Such curricula provide development in the instructional components identified as
essential by the National Reading Panel: phonemic awareness, systematic
phonics, fluency, vocabulary, and text comprehension. A number of published
curricula have been aligned with these instructional components. An issue for
consideration in an RTI model is that there should be a mechanism in place for
judging the fidelity of implementation of any identified curriculum.
Interventions are supported by research. With respect to more intensive
individual interventions, the body of literature on validated procedures is growing.
Gresham (2002) reviewed the current body of literature and reached the
following conclusions:
1. The concept of a validated intervention protocol is supported by research.
2. A combination of “Direct Instruction” and “Strategy Instruction” is the most
productive in effecting growth.
The use of validated instructional protocols presumes that the school has
identified sets of instructional interventions, usually of increasing intensity, that
have been demonstrated to be effective. These interventions are varied by
curriculum focus, group size, frequency, duration, and motivational conditions.
Often, these variables are modified in relation to student characteristics.
Using Direct Instruction and Strategy Instruction means the school has adopted
interventions that are based on these well established instructional approaches.
Direct instruction models are characterized by relatively more teacher directed
instruction and less independent seatwork. Information is carefully structured
and sequenced and is actively presented so as to maximize student attention,
involvement, and practice. Consistent procedures for cueing, prompting, and
correction are utilized within the context of on-going assessment of student
understanding (Huitt, 1996). In direct instruction models, student mastery is
carefully defined and achieved before moving to the next step in the instructional
sequence.
Strategy instruction employs specific, highly elaborated instruction in text
comprehension. Validated models of strategy instruction use instructional
techniques that are consistent with direct instruction. Modeling and planned
generalization of skills are typical instructional steps in strategy instruction.
Several authors (Pressley, 2000) have described instructional sequences
appropriate to strategy instruction.
Distinguish Between Types of Learning and Performance Problems
Gresham (2002) describes relevant research demonstrating that it is possible to determine whether a student's problems are performance problems (can do, but doesn't) or instructional problems (wasn't taught or wasn't available for teaching). In the case of performance problems, an intervention might alter the motivational conditions of a task (the contingencies associated with it). Howell and
Nolet (2000) detail a number of ways to alter instructional conditions to assess
the effects of motivation and other variables on acquisition of knowledge. In the
case of instructional problems, the larger instructional context might be analyzed
more thoroughly—for example, assessing the growth trends of all children in a
class or grade level—to determine if there is a curriculum and instruction problem
contributing to skill acquisition for the group as a whole.
These distinctions are often made within the context of what is termed a “problem
solving” approach. In this approach, hypotheses are developed that “compete”
with the explanation that a child has a disability. These hypotheses are tested by
first providing interventions that address the problem identified and then
evaluating the student’s progress. For example, changing schools frequently is
often a contributor to students’ struggles. In a problem solving approach, the
student might be provided with a moderately intense reading program at an
appropriate curricular level. By tracking the student’s progress carefully, the
team might conclude that the student’s strong response is an indicator that
interrupted instruction, rather than the existence of a learning disability, is a
reasonable explanation for the student’s academic struggles.
Determine the Effects of Instruction and Make Decisions about Cutoff Criteria
When beginning to use RTI, the first question practitioners often ask is: “How
much progress is enough?” The second question, closely following, is “When is
an intervention special education?”
Success in answering the first question is predicated on the ability to sensitively
measure growth and to know what benchmark the student is working toward.
Research in applied behavior analysis and curriculum based measurement
inform the practices necessary to track the effects of interventions. A regular,
reliable progress monitoring tool, such as an oral reading fluency measure, must
be adopted. The school must also know what is expected of the typically
progressing student. Data must be plotted and reviewed, and decisions must be
made when the data are examined. Most systems develop decision making
rules to guide this process. For example, a decision making rule might be
“Change the intervention after one week of data points that do not meet the
student’s aim line.”
The second question, “When is an intervention special education?” is one that
involves a system level decision as well as a clinical decision. First, a socially
determined cut off for “functional” performance must be established. This is
usually defined as being in the “average” range on a normal distribution.
Secondly, an informed group of professionals needs to evaluate the intensity of
the intervention provided and either test or make a professional judgment about
the effect of removing an intensive intervention.
Response to Intervention Models
A number of models have been utilized to implement RTI. Various authors have
labeled their approaches as being of a specific type—such as those described
below—but in reality the approaches share similarities. For example, the
“problem solving” model is used at certain points in the “tiered model.” Both the
problem solving and the tiered models may involve direct teacher referral to
teams that may result in a form of what has been thought of as pre referral
intervention.
Problem Solving or Hypothesis Testing
Problem solving approaches typically involve a team of teachers who engage in
a systematic process of problem identification and intervention implementation.
The underlying assumption in problem solving is that the presence of a disability
is the least likely and therefore least common explanation for failure.
In problem solving approaches, teams of teachers and other specialists will
typically review a student’s history and known attributes in an attempt to identify
issues other than disability that would explain the student’s failure. Problems a
teacher or team investigates could include interrupted school experiences
caused by frequent moving or illness, lack of student “availability” for instruction
due to trauma or behavioral challenges, inadequate previous instruction, or the
presence of other disabilities.
Marston (2002) describes the widespread use of the problem solving approach in
the Minneapolis Public School system. This system uses problem solving within
the context of “Intervention Assistance Teams.” In Minneapolis, general
education teachers are trained to identify problems, design interventions, and
determine whether their interventions are effective. If they are not, the Intervention Assistance Team assists in developing and providing other
interventions. If those interventions fail, the student is referred to a “Student
Support Team” for a special education evaluation. This system relies heavily on
the capacity of its general and special education teachers to use curriculum
based measures (CBM) to track student progress and has well defined
procedures for moving students through levels of intervention.
In order for a problem solving approach to meet the general IDEA 2004
evaluation requirements, meet RTI requirements, and overcome shortcomings
associated with more traditional assessment models, schools must adopt the
following system components:
1. Use of decision rules to prompt referral
2. Adopted standards for intervention design
3. Uniform progress monitoring procedures
4. Decision rules for judging effectiveness of interventions
The relative strengths and challenges inherent in a “pure” problem solving
approach are summarized below:
Strengths

•  Problem solving approaches address the “exclusionary” requirements of LD evaluation.
•  A problem solving approach fits easily into systems that many schools already have in place, such as teacher assistance teams.
•  Problem solving approaches may not require the adoption of extensive new assessment technology.
•  Educationally relevant information may be gathered throughout the process.

Challenges

•  Identified problems are often ones that cannot be directly addressed by schools.
•  Academic problems induced by external factors such as lack of preschool experience or behavior difficulties may coexist with learning disabilities.
•  Problem solving models alone do not address problems like referral bias.
Pre Referral Approaches
Pre referral models were conceived in the 1980s as a method of addressing
over-identification in special education through prevention of inappropriate
referrals. Essentially, this model systematizes requirements that general
education teachers modify instructional and classroom management approaches
in order to better meet the needs of diverse learners. The assumption is that
poor academic performance is often the reflection of students’ unmet
instructional or curricular needs rather than an intrinsic disability. Through pre
referral, teachers are guided to differentiate instruction in order to maximize the
number of students who benefit from the general education program.
The most typical pre referral models have at their heart a teacher assistance
team, known by a variety of names including care teams, student study teams,
and student assistance teams. The team processes cases of students who are
identified by their teachers as struggling. The team may design specific
interventions or make suggestions to the teacher for possible interventions. If
positive results are documented, no referral is made to special education. If,
however, a lack of improvement is noted, the student is referred for a special
education evaluation.
Major shortcomings of the pre referral model for use in RTI include referral bias
and negative perceptions of the process among classroom teachers. Factors
such as teachers’ years of training and experience and the socioeconomic status
of students have been shown to influence which students are identified as
struggling (Drame, 2002). Referrals may be based as much on how
overwhelmed teachers are feeling at any given moment as on a student’s level of
skill development or performance. Additionally, the pre referral process can be
viewed as a series of hoops through which a teacher must jump before being
“allowed” to make a special education referral rather than as a meaningful
avenue for addressing students’ needs (Slonski-Fowler & Truscott, 2004).
Perhaps the most significant drawback of a pre referral model is that the teacher
must deal with each struggling student individually. Given that up to 20% of
students are likely to have significant difficulty learning to read (Shaywitz, 2004),
this approach makes it difficult to provide meaningful resources to all students.
While it is likely that students with the most apparent and immediate needs will
be referred for interventions, intervening with students one by one forces
teachers into educational triage. Meanwhile, students with marginal problems
will continue to struggle and perhaps fall further behind (Gerber & Semmel, 1984;
Gresham, MacMillan & Bocian, 1997).
Pre referral models may be more or less prescriptive with respect to decision
rules for identifying students, intervention design, and progress monitoring.
Many of the model’s weaknesses can be addressed by adopting standard
practices that are designed and monitored by the teacher assistance team.
Adopting procedures to ensure uniformity in decision making is critical to
utilization of a pre referral model for RTI. Without specific system components, a
pre referral system will not meet the general IDEA 2004 evaluation requirements
or the RTI requirements, and will fail to remedy the shortcomings of traditional
assessment paradigms. Specifically, the following would be required:
1. Use of decision rules to prompt referral
2. Adopted standards for intervention design
3. Uniform progress monitoring procedures
4. Decision rules for judging the effectiveness of interventions
Pre referral strengths and challenges include:
Strengths

•  Many school districts currently have pre referral systems in place.
•  Pre referral utilizes a team approach to identifying students.
•  Pre referral provides for systematic response to students' difficulties before evaluation.
•  Pre referral has the potential to build capacity for individual teachers to differentiate instruction for struggling learners.
•  It may be combined with typical components of other models, such as problem solving approaches.
•  Educationally relevant information is gathered throughout the process.

Challenges

•  Pre referral does not inherently address the problem of referral bias, as it depends on idiosyncratic responses of teachers to academic difficulty.
•  Traditionally, pre referral models do not use a prescribed intervention protocol.
•  The reporting of effects of intervention is often anecdotal and lacks a standard format for data presentation.
•  Students are dealt with “one at a time,” which may delay intervention to students with less severe deficits.
Tiered Intervention
The three-tiered model is based on literature in the area of public health (Caplan
& Grunebaum, 1967) and Positive Behavior Support (Walker et al., 1996). Using
the public health analogy, systematic practices for healthy individuals (strong and
normally developing readers) and those at risk of developing health conditions
(students showing early signs of struggling) will prevent severe problems from
developing and will also allow for identification of individuals with the potential to
develop severe problems.
The underlying assumption of this prevention oriented approach is that
approximately 80% of students will benefit from implementation of a research-based core curriculum program that is being delivered with a high degree of
fidelity. This level of “intervention” is referred to as “primary” or “Tier I.” An
estimated 15% of students will need additional intervention support beyond the
core curriculum (“secondary” or “Tier II”), and about 5% who have not responded to primary and secondary efforts may require more intensive individualized support (“tertiary” or “Tier III” level).
This approach requires the use of a universal screening program. The three-tiered model has been implemented successfully in Oregon as the Effective
Behavior and Instructional Support system (Sadler, 2002). In this model, teams
of teachers examine a standard set of data that is gathered on a periodic
schedule. Students are sorted into groups that are provided with increasingly
intensive interventions depending on their achievement and response to
intervention. Movement through the tiers is a dynamic process, with students
entering and exiting according to their progress data.
In this model, it is assumed that students who do not respond to the most
intensive intervention are likely to have a learning disability. Frequently, the
tiered approach is combined with more traditional assessment models or with
problem solving procedures before a student is determined to have a disability.
This approach requires “blurring” of the lines between general and special
education, as well as close cooperation or merging of compensatory education
services and services for English language learners.
Relative strengths and challenges of the tiered model include:
Strengths

•  All struggling students are identified. Prevention and early identification are possible.
•  Students may be “sorted” into levels of severity and interventions may be tailored to each group.
•  Decision making is based on standardized progress monitoring information.
•  Intervention decisions can be standardized.
•  It may be combined with typical components of other models, such as problem solving approaches.
•  Educationally relevant information is gathered throughout the process.

Challenges

•  Resources must be committed for universal screening.
•  It requires skill grouping across classes to provide interventions of sufficient intensity.
•  Ensuring that every child at risk is identified and provided intervention requires establishment of broad groupings, which may result in allocation of resources to children who are not actually in need.
•  The most suitable screening and progress monitoring tools are available in reading. Tools in other areas are not as well established.
IDEA 2004 Evaluation Requirements and Traditional Assessment Practices
Regulatory Requirements
IDEA 2004 requires careful attention to how special education evaluations are
conducted. The statute places emphasis on linking student assessment to
student instruction through the use of RTI. It is important to remember that RTI is
allowed as a component of evaluations for LD, but only one component. Districts
must be aware of and ensure that their procedures address the following
requirements:
A “full and individual initial evaluation” shall be conducted. . . “to determine whether the child is a child with a disability. . . and to determine the educational needs of the child” (20 U.S.C. 1414 (a)(1)(A) and (C)(i)(I) and (II)). These
requirements and those discussed below obligate teams to consider all aspects
of a child’s functioning. While RTI may address whether a child responds to
intervention, it may not adequately describe the effects of emotional and
behavioral problems or language and cultural factors on performance. It is
important that, at a well articulated point in any process, formal and individual
evaluation is conducted and that areas in addition to those addressed in the RTI
process are considered.
An initial evaluation must be conducted within “60 school days” of obtaining parental consent (OAR 581-015-0072). . . and “The agency proposing to conduct an initial evaluation. . . shall obtain informed consent from the parent of such child before conducting the evaluation” (20 U.S.C. 1414 (a)(1)(D)). These
requirements mean that the process for RTI must be carefully tracked. It must be
clear to teams that there is a specific point at which the intervention process is
part of a special education evaluation. Parental consent must be obtained at that
point, and the parent must understand that the procedure being implemented will
contribute to a decision about whether the student has a learning disability and is
eligible for special education.
Evaluation procedures must: …“use a variety of assessment tools and strategies
to gather relevant functional, developmental, and academic information, including
information from the parent” and may “not use any single measure or
assessment as the sole criterion.” The procedures must include the use of
“technically sound instruments that assess the relative contribution of cognitive
and behavioral factors” (20 U.S.C. 1414 (b)(2)(A), (B), and (C)). Further, (3)(A)(i-v)
continue the requirements that nonbiased assessment procedures are used and
that procedures are administered by qualified, trained, and knowledgeable
personnel. (3)(B) reiterates that the child must be “assessed in all areas of
suspected disability.” These requirements make it clear that a single form of
assessment—RTI—may not be used to either find children eligible or define all of
their educational needs. Teams must continue to consider whether a student is
most appropriately identified as a child with LD as opposed to another disability,
such as emotional disturbance. They must also design individual evaluations
that are tailored to children’s presenting issues.
(3)(A)(i) and (ii) and (5)(C) require that assessments conducted “are selected and
administered so as not to be discriminatory on a racial or cultural basis” and are
“provided and administered in the language and form most likely to yield accurate
information on what the child knows and can do academically, developmentally
and functionally”. . . and that a child may not be determined eligible for special education if the “determinant factor for such determination is. . . limited English proficiency.” The effects of second language acquisition and cultural
variations must be considered for English language learners and interventions
that are designed for those students must be appropriate. The procedures used
in RTI are aligned with recommended best practices for ELL students (need a
reference here), but it is important that the design of interventions and
interpretation of ELL students’ responses to interventions be informed by
individuals who are very knowledgeable about the education of ELL students.
The Role of Intelligence Testing in RTI
The general IDEA 2004 evaluation requirements dictate that teams make
individual decisions about the scope of evaluation and specific evaluation tools
that are used for a student. However, the question of whether or not to give an
IQ test to a student is of great interest to professionals as they contemplate this
change.
The final section of this paper reviews many of the problems related to using
intelligence testing with students who have or are suspected of having learning
disabilities, and readers are encouraged to review that information when making
policy decisions about the use of IQ tests. Generally, teams are encouraged to
use IQ tests with discretion. Certainly they must be used if the team suspects
mental retardation. There is substantial overlap between skills measured on IQ
tests and academic skills, and poor achievement affects students’ performance
on components of IQ tests. In short, there is limited value in profiling a student’s
cognitive skills in order to inform instruction (Stanovich, 2005).
Summary
RTI models have the capacity to improve outcomes for and provide support to
students who are both low achieving and LD. They do, however, require
substantial cooperation between regular and special education. They also
require that procedures be used within general education to impact the general
education curriculum and teacher practices. Widespread progress monitoring of
all students, systematic intervening within general education, and collegial
problem solving are hallmarks of RTI.
IDEA 2004 anticipates the resource requirements of such an approach, allowing
for use of up to 15% of Part B funds for “early intervening” services for students
who are not yet identified as having a disability. Districts can both adopt a new
diagnostic approach and benefit children before their low performance becomes
an intractable achievement deficit that may be accompanied by low motivation
and behavioral problems. However, districts must carefully determine which
model of RTI is most appropriate within their system and implement standardized
procedures aligned to state criteria.
Section Two
Implementing Response to Intervention (RTI)
Response to intervention assessment requires changes in the ways resources
are used and a very close relationship between general and special education.
General educators need to understand the approach and why all of their students
need to be closely monitored—especially in the development of early academic
skills. Special educators must understand the limitations of traditional
assessment systems and adopt highly prescribed and systematic interventions.
Most importantly, general and special educators need to work together to
implement and maintain the system. This chapter details the following system
requirements for RTI:
1. Leadership
2. Teaming
3. Use of a research based core reading curriculum
4. Valid screening or identification procedures and decision rules
5. Adopted intervention protocols and progress monitoring
6. Policy and procedure development including special education procedures
7. Capacity building
Leadership
Moving from a discrepancy approach to RTI requires on-going support to teams
and to individuals. A leadership team at the district level will help schools move
forward and sustain new practices. This team needs to be able to:

•  Provide expertise when problems are encountered or practices are questioned.
•  Provide training related to LD identification including traditional practices and the rationale for RTI.
•  Identify the need for and provide support to teams with respect to research based interventions and progress monitoring methods.
•  Help obtain and commit resources for screening, assessment and interventions.
•  Interpret new information in the field regarding LD.
•  Judge the fidelity of implementation of components of RTI and troubleshoot.
•  Plan to sustain the system.
Horner and Sugai (2000) have worked extensively with school wide systems that
address behavior supports through team processes, and they emphasize the
importance of the school principal having a primary role on any such team. The
philosophical and instructional leadership provided by the principal is essential to
a team’s ability to establish its mission, overcome difficulties and sustain its work
over time.
Teaming
Teaming is an essential component of an RTI system. As described throughout
this paper, RTI requires cooperation among special education, general
education, and compensatory programs such as Title I or Title III (English
language learners/ELL). Considerations to take into account include team
membership, team structures, and teamwork.
Team Membership
Experience in implementing Effective Behavior and Instructional Supports
(Sadler, personal communication) dictates that decisions about team
membership be considered carefully. Generally, the team must have one or
more members who:

•  Have the authority to allocate school resources and assign work (administrative support)
•  Can provide leadership for the team, organize and implement agendas, monitor role clarity and fidelity
•  Are able to effect changes in the general education instructional program for groups of students (such as skill grouping)
•  Can organize and present screening data
•  Are able to plan for and provide research based individualized interventions (such as a small group working on decoding multi-syllabic words)
•  Can set goals for students, plan for progress monitoring, plot data, and interpret data to determine the effectiveness of interventions
•  Are able to train classroom teachers and paraprofessionals to progress monitor and provide interventions
•  Represent the involvement of special education, ELL, Title, and other support programs
Team Structures
Schools may find that more than one team best serves their needs. For
example, initial data analysis and planning may be accomplished through a
grade level team. At that level, a group of teachers might find that fewer than
80% of their students are meeting expectations and decide to investigate ways to
strengthen their instructional program. If the core program is meeting the needs
of at least 80% of the students, the teachers may decide to strengthen instruction
for students who are marginally below expectations through skill grouping and
differentiating instruction across classes. This level of team must have
measurement, progress monitoring, and administrative resources available.
Another level of the team might meet to plan interventions for students who are
not making expected progress in the programs designed by the grade level
teams. This team must maintain strong ties to the general education classroom and must have all of the capacities listed above.
Figure 1: Two General Education Teams Manage RTI Prior to Referral

[Diagram] On-going data gathering and analysis occur at the group and individual level. Decision rules drive decisions about including students in interventions and referral decisions. Standard formats are used for data presentation and analysis.

•  Grade Level Team: evaluates effectiveness of the core program and plans initial group interventions.
•  Centralized Team: plans targeted small group and individual interventions.
•  Evaluation Team: plans and conducts RTI related specifically to eligibility.

Teams move students through interventions that increase in intensity based on individual student outcomes.
Some schools use one central team to perform all functions of data analysis and
intervention planning, at the classroom, small group, and individual level. Using
a single level team requires a substantial commitment of time and resources to
both conducting the work of the team and maintaining the “health” of team
functioning.
Teamwork
Teaming in order to analyze student progress using standardized data and
decision making means that on occasion individuals will need to engage in
difficult communication about student outcomes, fidelity of implementation of
curriculum, or special education decisions. Successful teams establish working
agreements about conducting meetings, decision making, interactions, and roles.
Examples of such agreements are:

•  Team members will encourage each other to express their opinions.
•  Decision rules will be used to guide the team's work.
•  Once a decision is made, team members will support that decision.
•  Meetings will start and end on time.
•  Team members will bring necessary information to the meetings.
•  The agenda will be followed.
•  Decisions will be made through a consensus process.
•  The principal makes final decisions about allocation of resources.
Teams should revisit their working agreements periodically to ensure they
continue to be relevant and are being implemented.
Planning is Essential
As the team is formed, a year long plan of work should be established. There will
be a cycle of reviewing school wide data, group intervention data, and individual
intervention data that requires projecting agendas for meetings and planning to
organize information to be considered. A typical calendar used by the Oregon
EBIS (Sadler, 2002) model follows:
Figure 2: Example of a Yearly RTI Team Plan of Work

Beginning of year:  Define group membership, establish group norms, and plot work for the year. Review current school wide data on academic achievement, behavior, attendance, etc.

Quarterly:  Gather and review data (based on screening or in class assessments). Review membership in groups considered at benchmark or at risk.

Weekly:  Review data on students who are in intensive interventions. Change membership of intervention groups or change individual interventions.

End of year:  Review data on all students based on final quarterly and statewide assessments. Identify students for extended year opportunities. Identify students for immediate intervention or evaluations to be implemented in September.
Use of a Research-Based Core Curriculum
IDEA 2004 requires teams to determine that students are not eligible for special education if their difficulties are attributable to lack of instruction in the essential components of reading instruction as identified in No Child Left Behind (phonemic awareness, alphabetic principle, fluency, vocabulary, and comprehension). Also, when implementing RTI, teams must have confidence that
the general core curriculum provides students with an appropriate opportunity to
learn. Effective core curricula are expected to provide sufficient instruction so
that at least 80% of students meet expectations without additional support.
Information on research-based core curricula may be found at Oregon’s Reading
First Project website: http://oregonreadingfirst.uoregon.edu./instruction.
Reading is set apart as especially important because the majority of students
with learning disabilities are identified with problems learning to read. Other
academic areas are supported by less research, but curricula and instruction may be validated by meeting these guidelines:
1. The curriculum that is being used has been analyzed and is aligned with
benchmarks.
2. Instruction is intense, regular, and differentiated to meet the skill needs of
individual students.
3. At least 80% of students are meeting expectations such as benchmarks.
It is particularly important to examine the “80%” criterion. This expectation is a
general guideline, and teams should adjust that expectation to a higher level if
the general achievement in the school is typically higher than 80%. In some
schools, the expectation is more appropriately 85% or even 90%. Performance
in each classroom is expected to be close to the school average.
While the criterion may be adjusted upward, it should not be adjusted downward.
It should be assumed that, if 80% of students in a district, school, or classroom
are not meeting benchmarks, the problem is with either the content of the core
curriculum, or intensity and frequency of instruction.
Figure 3 is a conceptual representation of the 80% rule applied, with increasingly
intense interventions.
Figure 3: Conceptual Depiction of the 80% Rule

•  All students are provided with research based instruction. If fewer than 80% are meeting benchmark, core practices are evaluated.
•  About 15% of students are provided with research based interventions of moderate intensity.
•  About 5% of students are provided with research based intensive interventions.
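As a concrete illustration of how a team might apply the 80% guideline to screening data, the following minimal sketch computes the percentage of a group at or above benchmark and flags whether core practices, rather than individual students, should be examined first. The benchmark value and scores are hypothetical; the 80% threshold is the general guideline discussed above, adjusted upward where appropriate but never downward.

```python
def evaluate_core_program(scores, benchmark, expected_proportion=0.80):
    """Check whether the core program appears healthy for this group.

    scores: screening scores for a class, grade, or school
            (e.g., oral reading fluency, words read correctly per minute)
    benchmark: the score considered "at benchmark" for this grade and season
    expected_proportion: the 80% guideline; may be adjusted upward for
        typically higher-achieving schools, but not downward.
    """
    meeting = [score for score in scores if score >= benchmark]
    proportion = len(meeting) / len(scores)
    return proportion, proportion >= expected_proportion

# Hypothetical winter screening for one classroom
scores = [72, 55, 90, 40, 81, 66, 23, 77, 95, 58, 61, 84, 30, 70, 88]
proportion, core_is_healthy = evaluate_core_program(scores, benchmark=68)
print(f"{proportion:.0%} at benchmark; evaluate core practices: {not core_is_healthy}")
```

If fewer than the expected proportion of students meet benchmark, the first question concerns the content, intensity, and fidelity of core instruction rather than any individual student.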
Valid Screening or Identification Procedures and Decision Rules
School personnel need to know when a student is at risk of failure in a core
academic subject. Whether a district adopts a problem solving, pre referral, or
tiered model of RTI, teachers or teams need to know when to select a student for
intervention. This requires that valid data are examined on a regular schedule
and that students are selected for intervention on this basis. Data that may
inform these decisions include:
1. Dynamic Indicators of Basic Early Literacy Skills
2. Curriculum Based Measures
3. Fluency measures with norms that are local or based on national studies
4. Statewide assessments
5. Locally developed measures that can be interpreted on consistent,
objective criteria
6. Behavior and attendance data that can be interpreted on consistent,
objective criteria
7. Teacher concern
Using clearly defined criteria and decision rules—for example, “Students who are
in the lowest 10% of the class will be selected for interventions”—helps to
overcome problems such as referral bias, lack of teacher experience, or differences in
teachers’ level of tolerance with respect to students who are struggling. These
decision rules are those that are used to select students for intervention.
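To illustrate how a selection rule such as “students who are in the lowest 10% of the class will be selected for interventions” might be applied uniformly to screening data, the following is a minimal sketch; the student names, scores, and percentages are hypothetical placeholders set by a team's adopted decision rules.

```python
def select_for_intervention(screening_scores, proportion=0.10):
    """Select the lowest-scoring students for intervention.

    screening_scores: dict mapping student name to screening score
    proportion: fraction of the group to select (e.g., the lowest 10%);
        an illustrative value set by the team's adopted decision rules.
    """
    ranked = sorted(screening_scores.items(), key=lambda item: item[1])
    count = max(1, round(len(ranked) * proportion))
    return [name for name, _ in ranked[:count]]

# Hypothetical screening data (words read correctly per minute)
scores = {"A": 62, "B": 35, "C": 88, "D": 41, "E": 74, "F": 29, "G": 80, "H": 57}
print(select_for_intervention(scores))        # lowest 10% of the group
print(select_for_intervention(scores, 0.20))  # or a lowest-20% rule
```

Applying the same rule to every classroom removes individual judgment about which students are "struggling enough" to be selected.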
Decision rules are also necessary to ensure that students who are not
responding adequately move to more intensive or appropriate interventions, and
that a decision is made to complete the special education referral process when
needed. An example of this kind of decision rule is “Change the intervention
when the student does not meet the aim line for three consecutive data points.”
These types of rules are those that govern the intervention process.
It is very important that data are reviewed on ALL students in the school, on an
ongoing basis. Even though students may be placed in special education, ELL,
or Title I, their achievement and progress are part of the total system and must be continuously tracked.
Adopted Intervention Protocols and Progress Monitoring
Intervention Protocols
There is some concern that RTI based decisions will lack uniformity. This
concern is based at least partially on the difficulty in documenting what
interventions have been provided and their levels of intensity and duration.
Those concerns may be addressed by carefully standardizing interventions.
Gresham (2002) reviewed research in reading interventions and determined that
a combination of Direct Instruction and Strategy Instruction produces the greatest
effects for struggling students. Other authorities (National Reading Panel, 2000;
Shaywitz, 2004) add Fluency Instruction as an important component of
interventions for many students. Examples of curricula that may be used for
interventions may be found at http://oregonreadingfirst.uoregon.edu./instruction.
Interventions become increasingly intense when students do not respond
adequately. Intensity is achieved by changing group size, expertise of the
teacher, duration or frequency of lessons, or motivation.
As interventions become more intense, it is difficult to provide both general
education curriculum and interventions to a student. It is recommended that
when there simply isn't enough time in the school day to provide an intensive intervention, programming such as an extended school day be considered. Another
approach is to provide the systematic basic skills instruction and practice in the
intervention setting and include students in language and comprehension
instruction in the general education classroom. For example, in math the student
would be included in the general education classroom for conceptual,
vocabulary, and math reasoning instruction. A menu of interventions should be
developed that matches area of deficit, curricula, and intensity to specific student
profiles.
The following steps may be used in designing an intervention (a brief sketch follows the list):
1. Identify students with similar skill deficits (e.g., math fact fluency).
2. Specify each child’s deficit level (e.g., writes 10 correct facts per minute).
3. Identify a curriculum that is specific to the skill deficit(s).
4. Identify an instructor who has been trained to use the curriculum.
5. Decide how long the intervention will progress before review.
6. Develop a progress monitoring chart for each student that includes a
clearly marked benchmark and aim line.
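A minimal sketch of how the six steps above might be recorded as a single intervention plan follows. Every name, program, and value shown is a hypothetical placeholder; actual plans would follow the district's adopted protocols and menu of interventions.

```python
# One intervention group, recorded according to the six design steps.
# All names and values are hypothetical placeholders.
intervention_plan = {
    "skill_deficit": "math fact fluency",                     # step 1
    "students": {                                             # step 2: deficit levels
        "Student A": {"correct_facts_per_minute": 10},
        "Student B": {"correct_facts_per_minute": 12},
    },
    "curriculum": "adopted math fluency program",             # step 3
    "instructor": "interventionist trained in the program",   # step 4
    "weeks_before_review": 6,                                 # step 5
    "progress_chart": {                                       # step 6
        "benchmark": 30,        # correct facts per minute
        "aim_line_start": 10,   # drawn from current level to the benchmark
    },
}

for name, level in intervention_plan["students"].items():
    print(name, level["correct_facts_per_minute"], "correct facts per minute")
```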
Progress Monitoring
Progress monitoring is assessment of students’ academic performance on a
regular basis in order to determine whether children are benefiting from
instruction and to build more effective programs for those who are not. Standard
methods of progress monitoring prevent inconsistency in decision making and
eligibility decisions. Progress monitoring for these purposes must include clear
benchmarks for performance and reliable, easy to administer measures such as
curriculum based measures (CBMs).
Progress monitoring involves the following steps (a computational sketch follows the list):
1. Establish a benchmark for performance and plot it on a chart (e.g., “read
orally at grade level 40 words per minute by June”). It must be plotted at
the projected end of the instructional period, such as the end of the school
year.
2. Establish the student’s current level of performance (e.g., “20 words per
minute”).
3. Draw an aim line from the student’s current level to the performance
benchmark. This is a picture of the slope of progress required to meet the
benchmark.
4. Monitor the student’s progress at equal intervals (e.g., every third
instructional day). Plot the data.
5. Analyze the data on a regular basis, applying decision rules (e.g., “the
intervention will be changed after 3 data points that are below the aim
line”).
6. Draw a trend line to validate that the student’s progress is adequate to
meet the goal over time.
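The steps above lend themselves to a simple computational form. The sketch below illustrates steps 1 through 5, assuming a hypothetical goal of 40 words per minute by week 30, a starting level of 20, and a “three consecutive data points below the aim line” decision rule like the examples used in this paper.

```python
def aim_line(start_score, goal_score, total_weeks):
    """Return a function giving the expected score for a given week.

    The aim line runs from the student's current level (week 0) to the
    performance benchmark at the end of the instructional period.
    """
    slope = (goal_score - start_score) / total_weeks
    return lambda week: start_score + slope * week

def needs_intervention_change(observed, expected, consecutive_needed=3):
    """Decision rule: change the intervention when the most recent
    `consecutive_needed` data points all fall below the aim line."""
    if len(observed) < consecutive_needed:
        return False
    recent = list(zip(observed, expected))[-consecutive_needed:]
    return all(score < target for score, target in recent)

# Hypothetical student: goal of 40 wpm by week 30, starting at 20 wpm,
# monitored weekly for the first six weeks of the intervention.
expected_at = aim_line(start_score=20, goal_score=40, total_weeks=30)
weeks = [1, 2, 3, 4, 5, 6]
observed = [21, 22, 21, 22, 22, 23]
expected = [expected_at(week) for week in weeks]
print(needs_intervention_change(observed, expected))  # True -> change intervention
```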
Figure 4: An Example Progress Monitoring Chart with Aim Line

[Chart: weekly scores (WIF: correctly read words per minute) plotted against weeks of instruction. The X marks the end-of-term performance goal; a goal line is drawn from the median of the first three scores to the performance goal.]

Source: National Center on Student Progress Monitoring
website: www.studentprogress.org/library/training.asp
Determining Trends
It is very important that data be analyzed sufficiently to determine whether
changes in instruction are required for the student to meet the performance
benchmark. This analysis is enhanced when data are graphed. Trend lines,
graphic indications of a student’s overall slope of progress, are necessary to
determine whether progress is sufficient to meet the goal. There are several
technical approaches to determining trend lines, among which is the Tukey
Method (illustrated in Figure 5).
Robust progress monitoring procedures such as graphing results and using trend
lines are required in order to apply consistent decision rules. Examples of
decision rules are found in the “Policy and Procedures” section of this chapter.
An excellent resource for learning about progress monitoring and establishing
goals may be found at the website for the National Center on Student Progress
Monitoring, found at www.studentprogress.org.
Figure 5: Developing a Trend Line Using the Tukey Method
[Chart: WIF (correctly read words per minute) plotted against weeks of instruction (1-14), with two “X” marks and the trend line drawn through them.]
Step 1: Divide the data points into three equal sections by drawing two vertical lines. (If the points divide unevenly, group them approximately.)
Step 2: In the first and third sections, find the median data-point and median instructional week. Locate the place on the graph where the two values intersect and mark with an “X.”
Step 3: Draw a line through the two “X’s,” extending to the margins of the graph. This represents the trend-line or line of improvement.
(Hutton, Dubes, & Muir, 1992)
Source: National Center on Student Progress Monitoring
website: www.studentprogress.org/library/training.asp
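The three steps in Figure 5 amount to a simple calculation. The sketch below (Python; the weekly scores are hypothetical) finds the two median points and the slope of the resulting trend line.

from statistics import median

# Hypothetical weekly WIF scores (correctly read words per minute).
weeks  = list(range(1, 13))               # instructional weeks 1-12
scores = [22, 25, 24, 28, 30, 29, 33, 35, 34, 38, 40, 41]

# Step 1: divide the data points into three roughly equal sections.
third = len(scores) // 3
points = list(zip(weeks, scores))
first, last = points[:third], points[-third:]

# Step 2: in the first and third sections, find the median week and median score.
x1, y1 = median(w for w, _ in first), median(s for _, s in first)
x2, y2 = median(w for w, _ in last),  median(s for _, s in last)

# Step 3: the line through the two median points is the trend line.
slope = (y2 - y1) / (x2 - x1)
print(f"Trend line passes through ({x1}, {y1}) and ({x2}, {y2}); "
      f"slope = {slope:.2f} words per minute per week")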
Policy and Procedure Development including Special Education Procedures
Adopting RTI
This paper has emphasized that RTI is a system that affects both general and
special education. Districts that have successfully implemented these
approaches experience substantial system-wide benefits for all children.
However, obtaining “buy-in” and cooperation for use of resources is essential.
Administrative support from the top down and teacher support from the bottom
up are vital to success and sustainability.
Initially, RTI will require extra resources for training and time for teams to work
together. Support services such as ELL and Title I may need to be reorganized.
Funds may need to be set aside to provide interventions. Commitment and
planning need to be in place before RTI is implemented.
IDEA 2004 offers districts the opportunity to support the RTI process by using
IDEA funds for “early intervening services.” These are coordinated services that
are preventative in nature and function within the general education context.
Defining and Adopting Procedures
When moving to an RTI approach, a set of fluid activities (data review, intervention implementation, and analysis) is used much as traditional testing instruments have been used. These activities may be difficult for some teams to track. Individuals must conduct them in standardized ways, documenting their work and following standardized decision-making guidelines. This prevents arbitrary decision making and ensures that students move through the system and are considered for evaluation and eligibility in a timely manner.
Decision Rules
This essential procedural component has been referenced several times. Examples of the decision rules used in the Effective Behavior and Instructional Support (EBIS) model (Sadler, 2002) are:
1. Organize the lowest 20% of students in the group (class, grade level, or
school) to receive interventions.
2. Students in group interventions are monitored weekly.
3. Students in individual interventions are monitored at least once weekly.
4. Change interventions when 3 consecutive data points do not meet the
student’s goal line.
5. Move students to an individual intervention after two unsuccessful group
interventions.
6. Refer a student for special education after one unsuccessful individual
intervention.
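Decision rules of this kind can be applied mechanically once progress data are charted. The sketch below (Python; the scores and goal-line values are hypothetical, and only rule 4 is implemented as a function) flags when three consecutive data points fall below the student’s goal line.

def needs_change(scores_with_goals, run_length=3):
    """Return True when `run_length` consecutive data points fall below the
    student's goal (aim) line -- the example decision rule 4 above."""
    below = 0
    for score, goal in scores_with_goals:
        below = below + 1 if score < goal else 0
        if below >= run_length:
            return True
    return False

# Hypothetical weekly (score, goal-line value) pairs for one student.
student_data = [(20, 21), (22, 22.5), (21, 24), (22, 25.5), (23, 27)]
print(needs_change(student_data))   # True -> change the intervention

# The remaining rules are simple counters: after two unsuccessful group
# interventions, move to an individual intervention; after one unsuccessful
# individual intervention, refer for a special education evaluation.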
Parental Notice and Consent
Procedures must establish clearly when and how parents are involved in the RTI
process. Questions to answer ahead of time include: “When are parents invited
to team meetings?” “When are parents provided with procedural safeguards?”
“When is parental consent required?” In the EBIS system, parents are notified of
any individual intervention. Since it has been found that a second individual intervention typically is provided to students who are or will be referred for special education, it is at this point that a special education referral is made and consent for evaluation is obtained.
Special Education Procedures
As described in the general section on RTI, all of the evaluation requirements for
special education remain in effect when implementing this approach. This
means that teams need to be very clear about an evaluation planning process
that regards the use of response to intervention as one component of a full and
individual evaluation. Districts need to develop well defined procedures and
ensure teams are trained before implementing RTI.
Figure 6: Example of a Student’s Movement through RTI with Procedures Noted
[Flowchart; procedural notes from the figure: parents notified by classroom teacher; formal notice, with parents invited to participate in the process; evaluation planning meeting with parents, procedural safeguards provided, parent consent obtained, and the 60 day timeline starts.]
Evaluation Planning and Eligibility Determination
Using a response to intervention model (RTI) to decide a student has LD involves
systematic application of professional judgment. Through the review of data
obtained from multiple sources, consideration of those data within specific
contexts, and thoughtful discourse, a team is prepared to make an eligibility
decision. More complex than relying on a simple numerical formula, this process
produces decisions that reflect the benefit of collective knowledge, expertise, and
insight of the team.
Teams should think of the eligibility determination in terms of a conversation
through which issues are viewed from several perspectives, competing factors
are weighed and sorted, and the most likely explanation for a child’s lack of
achievement is found. Remember, the key issue teams are investigating is
whether students evidence a dual discrepancy, that is, low achievement as
well as poor progress when provided with intervention.
Use the following five questions to guide the conversation, and ultimately, the
decision and report of findings:
1. Is our information complete?
Teams must keep in mind that RTI is just one component of a comprehensive
evaluation under IDEA 2004. In addition to the specific information regarding
the student’s response to intervention, the team also must gather the
following information:

File review: Thorough review of a student’s file provides critical
information within the RTI model, especially as students move through
the grades. Frequent moves between schools and excessive
absences should be noted, as should participation in special programs
such as Title I or ELL. Comments on report cards can provide a
surprisingly helpful picture for the team, especially in creating a
historical record of the child’s achievement in the current area of
concern. Note: While the file review is required for the eligibility
determination, it is most helpful to complete it prior to the special
education referral. Information gleaned from a student’s file is
instrumental in identifying needs and developing interventions to
address those needs. It is strongly recommended that teams plot
the student’s achievement, absenteeism, and other important
factors so that there is a chronological history by school year. By
doing so, teams may be able to see relationships between
achievement and events in the student’s life. This information
addresses the exclusionary factors (see below) and helps the team
consider other explanations for the student’s difficulties. Consider the
following example:
o Tasha is an ELL student who moved to the United States when
she was four years old. Now in fifth grade, her low achievement
is a concern to her teacher. She is noted to have good English
social language skills. When the team conducts a file review,
they find that Tasha’s family moved from town to town until she
was in second grade. She spent second, third and fourth grade
at the same school. The team examines her academic growth
on statewide assessments and notes that, from second to third
and third to fourth grades, Tasha’s RIT scores increased almost
1.5 times expected growth, with minimal ELL services. Tasha’s
file review suggests she is progressing in a way not typical of
children with LD.

Observation: Consider the child’s behavior and the characteristics of the environment while the child is working in the area of difficulty within the regular classroom.

Assessments in other areas of concern: The team must address all
areas of concern raised by parents and staff in the initial referral and
during the evaluation planning process. While this doesn’t mean that
formal assessments will be completed in response to every issue
raised, it is necessary to document consideration of each. If other
disability categories are considered by the team, all of the required
elements for those categories must be completed.

Assessments to determine the impact of the disability and educational
need: Progress monitoring data gathered through the intervention
process is the primary assessment in this area, with published
achievement tests serving to complete the picture.
Though not required elements of evaluation, teams may determine that the following information is needed on a case-by-case basis:
 Developmental history
 Assessment of intellectual ability
 Medical statement
2. Does the student have very low skills?
Teams consider data from multiple sources, including CBMs, DIBELS,
Statewide Assessments, published achievement tests, and work samples.
Viewing the student’s skill development within the context of age level
expectations is key – are these skills very low in comparison to other students
this age? Reporting results in standard scores, percentiles, and POP scores
(statewide assessments) provides this important context.
3. Does the student fail to learn at a sufficient rate in response to
intensive, research-based instruction?
Careful consideration of the child’s progress over time, relative to expected
progress, is required. The team reviews the intensity and duration of
instruction in relationship to the skills gained. It is helpful to find a way to
assign “weight” to both the intervention and the progress, perhaps visualizing
each as rocks on a scale. Consider these examples:
Example A: A fourth grade student receives 10 minutes per day of
fluency instruction that is in addition to the core reading program. Oral
Reading Fluency grows from 90 to 103 words per minute during a three
week period. The grade level target is 120 words per minute, and the
student’s progress is clearly following the aim line to hit the target. This
intervention, relatively “light”, could be pictured as a small rock on one
side of a scale. The progress, fairly “heavy”, could be pictured as a
medium rock on the other side of the scale. In this case, the student is
responding well to the instruction, with progress deemed to “outweigh” the
intervention.
Example B: A first grade student receives 45 minutes per day of decoding
and fluency instruction that is in addition to the core reading program.
Oral Reading Fluency grows from 6 to 10 words per minute in an 8 week
period, while other children in the intervention group have gained an
average of 22 words per minute during this same time. It is clear that this
child is not on track to hit the expected level of performance by the end of
first grade. In this case, the intervention is judged to be quite “heavy” and
can be visualized as a large rock on the scale. The child’s progress, in
contrast, is “light”, best visualized as a small pebble on the opposite side
of the scale. This child is resistant to instruction, failing to learn at a
sufficient rate.
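Although the “weight” of an intervention remains a matter of professional judgment, the progress side of the scale can be expressed as a rate. The sketch below (Python) restates Examples A and B as weekly growth rates, using the numbers given above plus one hypothetical value, the twelve weeks assumed to remain before the fluency target is due.

def weekly_growth(start, end, weeks):
    """Observed growth in words per minute per week."""
    return (end - start) / weeks

def needed_growth(current, target, weeks_remaining):
    """Growth per week required to reach the target on time."""
    return (target - current) / weeks_remaining

# Example A: 10 minutes/day of fluency work; 90 -> 103 wpm over 3 weeks,
# with 12 hypothetical weeks left to reach the grade-level target of 120.
a_actual = weekly_growth(90, 103, 3)          # ~4.3 wpm per week gained
a_needed = needed_growth(103, 120, 12)        # ~1.4 wpm per week required

# Example B: 45 minutes/day of decoding and fluency work; 6 -> 10 wpm over
# 8 weeks, while intervention-group peers gained about 22 wpm in that time.
b_actual = weekly_growth(6, 10, 8)            # 0.5 wpm per week gained
b_peers  = weekly_growth(0, 22, 8)            # 2.75 wpm per week for peers

print(f"Example A: {a_actual:.1f} wpm/week gained vs {a_needed:.1f} needed")
print(f"Example B: {b_actual:.1f} wpm/week gained vs {b_peers:.2f} for peers")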
Keep in mind that the answers to questions 2 and 3 must both be “yes” in
order to determine that a student has a learning disability. Students with
learning disabilities are those who, despite intervention too intense to be
considered general education, demonstrate very low skills. The percentile
used as a guideline for “how low” may vary depending on local norms, but will
likely fall between the 20th and 30th percentiles. While this may seem like relatively high performance compared with more traditional approaches that require a student to be at or below the 16th percentile, remember that our purpose is
to catch students before they fall far behind, and to maintain their skills in a
functional range. This is why the concept of a “weighty” intervention
contrasted with a “light” response is so important. Additionally, in cases of
students receiving intensive intervention and achievement above this “very
low” range, teams may document that without continued intervention the
student’s achievement will fall below the targeted percentile.
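Stated another way, the dual discrepancy is a conjunction of two conditions. The sketch below (Python; the 25th-percentile cutoff and the student data are hypothetical, with the cutoff drawn from the 20th-to-30th percentile range discussed above) shows the check a team is making before its guided conversation even begins.

def dual_discrepancy(percentile_rank, actual_growth, needed_growth,
                     low_skill_cutoff=25):
    """Both conditions must hold: very low skills AND insufficient progress
    despite intensive intervention. The 25th-percentile cutoff is a
    hypothetical local value within the range discussed above."""
    very_low_skills = percentile_rank <= low_skill_cutoff
    poor_progress = actual_growth < needed_growth
    return very_low_skills and poor_progress

# Hypothetical student: 12th percentile, gaining 0.5 wpm per week when
# 2.0 wpm per week is needed to stay on the aim line.
print(dual_discrepancy(12, 0.5, 2.0))   # True -> both conditions are met;
                                        # the team still weighs all other evidence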
4. Do we have conflicting data and, if so, how do we make sense of it?
Sometimes, teams will have some data sources that reflect very low skills and
others that indicate better achievement. In these cases, the team must
decide which sources are most valid and then justify the decision. This
process involves consideration of the demands of each assessment,
including content, speed, and fluency. For example, a child may score better
on an untimed test of decoding skills in which self-corrections are counted as
correct (such as many published achievement tests), than on a timed test of
decoding skills (such as Nonsense Word Fluency in DIBELS). In this case, a
team may decide that the timed assessment is a more compelling indication
of the child’s skill development because it reflects the level of automaticity of
decoding, a key factor in reading development. In general, lower scores can
be considered valid if they reflect performance on a test that is more
comprehensive or involves more complex demands than the other
assessment used.
5. Are there other explanations for the student’s low skills and lack of
progress?
IDEA 2004 carries forward “exclusionary factors” to be considered in eligibility
determinations. That is, has the team considered competing explanations for
the student’s lack of achievement? Best practice dictates that these issues
be explored early in the intervention process. Specific factors to consider
include:

A lack of appropriate instruction in the area of concern: Are high levels
of mobility and/or absenteeism the underlying cause of the student’s
low skills? Has the student been enrolled in classrooms or schools in
which research-based curricula weren’t used or quality of instruction is
in doubt? Awkward as these issues may be for teams to discuss
openly, they must be addressed and documented.

The existence of a sensory problem or another disability: What are the
results of current vision and hearing screenings, and is there a history
of problems in either area? Are there suspicions of other disabilities
and, if so, has the team documented consideration of each? In some
cases, teams must carefully sort and analyze information in order to
rule out behavioral or health issues or mental retardation as the source
of difficulty.

Limited English Proficiency: When a learning disability is suspected in
a student for whom English is a second language, it is imperative that
the team include an ELL staff member familiar with the child. In
addition to the required components of a learning disabilities
evaluation, the team should develop a language profile for this child
that includes:
o Current levels of oral, reading, and writing proficiency in both
the primary language and English
o The number of years the student has lived in the U.S.
o The student’s educational history in the country of origin
o Literacy development of other children in the family
o The primary language in the home
o The parents’ literacy profile
Deliberations of the team will center on this child’s academic
achievement in comparison to other ELL students with similar
language profiles. In the judgment of the team, does the student’s ELL
status fully explain the low skills?

Environmental or Economic Disadvantage: It is important for the team
to thoroughly explore family stressors that may be impacting academic
achievement. Factors to consider include frequent moves,
homelessness, divorce, unemployment, and extended illnesses or
death in the family. Additionally, document whether the child has had
access to enriching experiences such as opportunity for conversation
with adults, pre-school, and books in the home.
The final determination of eligibility will come out of this guided conversation.
Team members will recognize students who, despite intensive and systematic
instruction, fail to gain basic skills at a satisfactory rate. Some find this process intimidating at first, as it requires drawing conclusions based on professional judgment. Keep in mind, however, that these judgments are based on
expertise coupled with hard data. Made with the benefit of a team approach,
these are decisions that make sense in light of current research, educational
need, and practice.
Written Report
IDEA 2004 requires that teams produce a written report to document the
eligibility determination process under an RTI approach. Much more than merely reporting scores on tests, this report serves to document the thinking that led to the eligibility decision. Districts are encouraged to adopt a standard template for
LD Eligibility Reports and require its use in all cases. In training staff to adopt a
uniform approach to LD Eligibility Reports, it is helpful to provide a checklist for
staff to use in evaluating reports they produce. Key components of the report
template and checklist should be the same.
Checklist for Learning Disabilities Eligibility Reports
The following elements are required for each section of LD Eligibility Reports. If information is missing, or questions are unanswered, the team is not ready to make an eligibility determination.

Section 1: Background Information
Source(s): Cumulative file; Individual Problem Solving Worksheet; Report cards; Parent interview
 Name
 Age
 Birth date
 School
 Grade
 Reason for the referral
 Brief description of the student’s language background, back to birth
 Statement detailing parent concerns and perspective, including background of disabilities, especially in areas related to current difficulties
 Summary of the student’s history and current status with special programs such as Title I and ELL
Section 2: Students who qualify for special education as having learning disabilities have very low skills.
Source(s): Cumulative file; Individual Problem Solving Worksheet; Report cards; State/District Assessment Results; Individual Achievement Test Results; Work samples; Teacher reports
 List state/district assessments (including DIBELS)
   o These are presented as #’s.
   o Test scores over time with comparison of actual growth to expected growth
   o RIT scores and corresponding percentiles
   o State how the scores compare to the norm group (POP score) and benchmarks
 List individual achievement test results in Standard Scores (SS) by subtest
   o Include SS for tests given in the past
   o For any subtests with SS below 90, describe the specific skill deficits that contribute to the low score
 Analyze the historical data.
   - Have scores always been low?
     o If not, a learning disability is unlikely.
   - Are scores relatively low?
     o Has the student had intensive assistance to maintain skills at that level?
 Are the state/district assessments and individual achievement tests consistent?
   o If not, get one more piece of information about the skills in question.
   o Confirm results with reports from teachers, which must be consistent.
 If inconsistent results are reported, decide which is valid and justify the decision.
   o Consider the demands of each assessment (content, speed, fluency)
   o Lower scores may be considered valid if they reflect performance on a test that is more comprehensive or involves more complex demands than other assessments used.
 Finish with a summary statement about the student’s low skills.
Section 3: The student has been provided the opportunity to learn the skills.
Source(s): Cumulative file; Individual Problem Solving Worksheet; Student Intervention Profile; Parent Interview; Report cards; Teacher interviews
 Document the level of instructional stability throughout the student’s educational experience
   o Mobility (# of schools attended)
   o Attendance (over the years)
   o Reason(s) for excessive absences
   o Cumulative effect of absences (“Missed 20 days per year for 4 years, equating to one semester missed”)
 Describe key characteristics of the instruction the student has received in the area of concern
   o Research based? Mention specific curricula, if known.
   o Amount/intensity?
   o Training of instructor (certified? IA?)
   o Size of group

Section 4: The student does not have another disability or sensory problem.
Source(s): Cumulative file; Individual Problem Solving Worksheet; Parent Interview; Report cards; Teacher interviews; Assessment results; Minutes of Evaluation Planning
 Report results of current vision and hearing screenings.
 Report historical difficulties with vision and hearing as reported by parent (infections, tubes, surgeries)
 Have there ever been suspicions of other disabilities? If so, what was done about them?
 Consider:
   o ADHD
   o Asperger’s
   o OHI
   o Communication
   o Emotional Disturbance
   o Mental Retardation
 If an IQ test was given, this is the place to note statistically unusual performance.
 Report results of evaluations done regarding any areas of concern raised at the time of referral or during the evaluation.
   o This is the place to explain, if you decided not to assess those areas, why you didn’t.
Section 5: The student’s problem is not the result of cultural factors or environmental or economic disadvantage.
Source(s): Cumulative file; Individual Problem Solving Worksheet; Parent Interview
 Describe the student’s school history starting with preschool.
 Describe pertinent information about family literacy levels.
 Describe pertinent information about the family’s social history that could account for stressors, such as:
   o Frequent moves
   o Homelessness
   o Divorce
   o Unemployment
   o Extended illnesses or deaths in the family

Section 6: The student’s problem is not the result of limited English proficiency.
Source(s): Cumulative file; Individual Problem Solving Worksheet; Parent Interview; LAS scores
 Identify the student’s primary and secondary languages.
 Report current levels of:
   o Primary Language Oral Proficiency
   o Primary Language Writing Proficiency
   o Primary Language Reading Proficiency
   o English Oral Proficiency
   o English Writing Proficiency
   o English Reading Proficiency
 How many years has the student lived in the US?
And
 What is the home language?
And
 What is the parents’ literacy proficiency?
And
 What is a typical academic profile for a student with this language and family history?

Section 7: Students with learning disabilities have academic skill deficits that are resistant to well-planned and implemented research based interventions.
Source(s): Progress monitoring data; Intervention group records
 Relate interventions to the areas of concern as identified in the initial referral.
 State what the baseline skill level was (a number), and how that relates to the general population.
 State what the intervention was and the basis upon which it was chosen.
 Include:
   o Specific curriculum/method used
   o How much time and for how long, as well as a qualitative statement about the “size” of this intervention
   o Level and type of reinforcement used
 Describe the student’s response to the intervention (a number), measured with the same assessment as the baseline data.
   o How does this relate to the general population?
   o How does this relate to progress of intervention group peers?
 Does this progress support a picture of a skill deficit that is resistant to instruction?

Section 8: Is there sufficient evidence to support the conclusion that this student is eligible for special education as a student with a learning disability?
Source(s): Every component of the report in sections 1-8; Discussion of the team
 This is the place that the “basis for determination” is stated clearly.
Capacity Building
It is imperative that a core group of district staff deeply understands current
research on the identification of learning disabilities. The leadership team
referred to earlier might serve to teach broader groups of teachers, specialists,
and administrators. This team will need time to study and master issues around
LD identification, reading instruction, and progress monitoring.
Special educators must grasp the foundational underpinnings of RTI and the research from which the approach developed. Additionally, this group of staff will need training specifically targeting the transition from a discrepancy model to RTI – why is practice changing? This group’s level of understanding must be sufficient to support implementation of the RTI approach as well as to explain the approach to colleagues and parents. Many practitioners will need to review and discuss this information several times before they are comfortable with the core concepts.
General educators require training on the same topics, though understanding
need not be as comprehensive. The focus of training for this group is on the
importance of early skill development and monitoring of that development, the
redefined view of general education as providing first level interventions, and the
need for collaboration.
For some team members, this will require a fundamental change in the way part
of their job is accomplished. For example, when IQ testing is not a routine part of
each LD evaluation, school psychologists will play a very different role on a team.
Their time may be used to conduct more targeted evaluations of attention or behavioral characteristics, to provide consultation, or to assist with additional progress monitoring.
Where teaming is not a norm, deliberate planning for team formation and
functioning is required. When a group is adopting both new practices and new
ways of doing work, individuals may experience significant stress. A professional
facilitator can help a team define its mission, establish norms, and improve skills
like conflict management and negotiation.
Section Three
Perspectives on Evaluation Models
How do you know if a student has a learning disability? This question has
intrigued and plagued educators, psychologists, and families for a quarter of a
century. Numerous definitions have been promoted, with most of them having at
their core the notions of specific and discrete processing difficulties that in turn
cause unexpected underachievement. Educators, clinicians, and lawmakers
have struggled, however, with finding a valid and reliable method of setting
criteria that differentiate children with LD from other children who are low
achieving. Following is a summary of some of the issues pertinent to the
decision to include a new approach to LD evaluation in IDEA 2004.
1. What has the law required for LD eligibility?
Although various definitions for LD have been promoted since the 1970’s, the
codified definition of LD has remained essentially unchanged since 1977,
when P.L. 94-142 was implemented.
The term “specific learning disability” means a disorder in one or more of the
psychological processes involved in understanding or in using language,
spoken or written, which may manifest itself in an imperfect ability to listen,
speak, read, write, spell, or to do mathematical calculations. The term
includes such conditions as perceptual handicaps, brain injury, minimal
brain dysfunction, dyslexia and developmental aphasia. The term does not
include children who have learning disabilities which are primarily the result
of visual, hearing, or motor handicaps, or mental retardation, or emotional
disturbance, or of environmental, cultural, or economic disadvantage.
(USOE, 1977, p. 65083)
This definition has persisted, with minor organization and wording changes,
through subsequent authorizations of IDEA. It is in the implementation of the
definition through establishing eligibility criteria in federal regulations that the
construct of discrepancy has been used:
(a) A team may determine that a child has a specific learning disability if:
(1) The child does not achieve commensurate with his or her age and
ability levels in one or more of the areas listed in paragraph (a) (2)
of this section, when provided with learning experiences
appropriate for the child’s age and ability levels; and
(2) The team finds that the child has a severe discrepancy between
achievement and intellectual ability in one or more of the following
areas:
(i) Oral expression;
(ii) Listening comprehension;
(iii) Written expression;
(iv) Basic reading skill;
(v) Reading comprehension;
(vi) Mathematics calculation; or
(vii) Mathematics reasoning
(USOE, 1977, p. 65083)
Neither the Federal definition nor the criteria provide guidance on
implementing the important exclusionary factors. Further, it is left to states to
determine how to measure the severe discrepancy. The methods utilized by
states include variations of simple discrepancy formulas in which IQ and
achievement standard scores are compared, regression formulas which
remedy measurement problems that exist due to correlations between IQ and
achievement, differences in standard scores on academic achievement
measures, percentage discrepancy, and professional judgment. Fourteen
states, including Oregon, have left the method of determining severe
discrepancy to individual school districts (Reschly et al., 2003).
Oregon’s implementation of LD identification is located in administrative rules
that describe evaluation procedures and criteria. Like all disability categories
in Oregon, the regulations set out a definition which is consistent with federal
law, a set of required evaluation activities and tools, and criteria that must be
met if a team is to find a student eligible. Oregon’s criteria state that an
eligibility team must determine that:
A. The child does not achieve commensurate with his or her age and
ability level in one or more of the areas listed (below). . . when
provided with learning experiences appropriate for the child’s age and
ability levels;
B. The child has a severe discrepancy between intellectual ability and
achievement in one or more of the following areas:
(i) Oral expression;
(ii) Listening comprehension;
(iii) Written expression;
(iv) Basic reading skills;
(v) Reading comprehension;
(vi) Mathematics calculation (when appropriate, includes general
readiness skills); or
(vii) Mathematics reasoning; and
C. The child’s severe discrepancy between ability and achievement is not
primarily the result of:
(i.) A visual, hearing, or motor impairment;
(ii) Mental retardation;
(iii) Emotional disturbance; or
(iv) Environmental, cultural, or economic disadvantage.
As in all other categories, the team must also determine that the disability has
an adverse impact on the child’s educational performance (K-12) or the
child’s developmental progress (age 3 to school age).
2. What events led to changes in LD identification in IDEA 2004?
Through decades of educational practice, it has become generally accepted that a “severe discrepancy” is in fact a learning disability, or at least a proxy for a learning disability and its underlying processing disorders. It is now widely acknowledged that there is not a scientific basis for the use of a measured IQ-achievement discrepancy as either a defining characteristic of or a marker for LD. Though numerous authorities (Fletcher et al., 1998; Lyon et al., 2001; Stanovich, 2005) have identified problems with discrepancy models, the approach has persisted as the most widely used diagnostic concept. In the 1997 reauthorization process, the concern with discrepancy approaches came to a head and the U.S. Office of Special Education Programs (OSEP) committed
to a vigorous program of examining and summarizing evidence around LD
identification. That effort resulted in the Learning Disabilities Summit
(referred to in the introduction to this document), as well as subsequent
roundtable meetings involving representatives of major professional
organizations. While preparing for the 2004 IDEA reauthorization, OSEP
conducted the 2002 Learning Disabilities Roundtable to generate a series of
consensus statements about the field of learning disabilities. With respect to
the use of discrepancy formulas, the members stated:
Roundtable participants agree there is no evidence that ability-achievement
discrepancy formulas can be applied in a consistent and educationally
meaningful (i.e., reliable and valid) manner. They believe SLD eligibility
should not be operationalized using ability-achievement discrepancy
formulas (pg. 8).
Other points of consensus from the Roundtable include:
Identification should include a student-centered, comprehensive
evaluation and problem-solving approach that ensures students who have
a specific learning disability are efficiently identified (pg. 6).
Decisions on eligibility must be made through an interdisciplinary team,
using informed clinical judgment, directed by relevant data, and based on
student needs and strengths (pg. 29).
3. What are major issues related to the use of the concept of
discrepancy? Why change?
Issue #1: Discrepancy models fail to differentiate between children who
have LD and those who have academic achievement problems related to
poor instruction, lack of experience, or other problems. It is generally
agreed that the model of achievement-ability discrepancy that has been
employed was influenced by research conducted by Rutter and Yule (1975)
(Reschly, 2003). This research found two groups of low achieving readers,
one with discrepancies and one without. It was this finding that formed the
basis for the idea that a discrepancy was meaningful for both classification
and treatment purposes. Later analyses of this research, and attempts to
replicate it, have failed to produce support for the “two group” model for
either purpose. In fact, it is now accepted that reading occurs in a normal
distribution and that students with dyslexia or severe reading problems
represent the lower end of that distribution (Fletcher et al., 2002). For a
thorough discussion of this important issue, see Fletcher et al., 1998.
Issue #2: Discrepancy models discriminate against certain groups of
students: students outside of “mainstream” culture and students who are in
the upper and lower ranges of IQ. Due to psychometric problems,
discrepancy approaches tend to under-identify children at the lower end of
the IQ range, and over-identify children at the upper end. This problem has
been addressed by various formulas that correct for the regression to the
mean that occurs when two correlated measures are used. However, using
regression formulas does not address issues such as language and cultural
bias in IQ tests, nor does it improve the classification function of a
discrepancy model (Stuebing et al., 2002).
Issue #3: Discrepancy models do not effectively predict which students will
benefit from or respond differentially to instruction. The research around
this issue has examined both progress and absolute outcomes for children
with and without discrepancy, and has not supported the notion that the two groups will respond differentially to instruction (Stanovich, 2005). Poor
readers with discrepancies and poor readers without discrepancies perform
similarly on skills considered to be important to the development of reading
skills (Gresham, 2002).
Issue #4: The use of discrepancy models requires children to fail for a
substantial period of time—usually years—before they are far enough
behind to exhibit a discrepancy. In order for children to exhibit a
discrepancy, two tests need to be administered—an IQ test, such as the
Wechsler Intelligence Scale for Children, and an achievement test, such as the Woodcock-Johnson Tests of Achievement. Because of limitations of
achievement and IQ testing, discrepancies often do not “appear” until late
second, third, or even fourth grade. Educators and parents have
experienced the frustration of knowing a child’s skills are not adequate and
not typical of the child’s overall functioning, and being told to “wait a year” to
re-refer the child. While waiting for a discrepancy to appear, other
persistent problems associated with school failure develop such as poor
self concept, compromised motivation, vocabulary deficits, and deficits
associated with limited access to written content.
Of all the methodological problems associated with discrepancy formulas, this delay is the one that is most problematic for parents and practitioners; so problematic that by the late 1990’s the discrepancy approach was referred to as the “wait and fail” approach by federal officials (Lyon, 2002).
4. If authorities believe underlying processing disorders are the cause
of learning disabilities, why doesn’t IDEA 2004 include a model
based on measuring processing problems?
It is a relatively common practice for LD assessment to include descriptions of
“processing” or “patterns of cognitive ability.” Frequently, the conclusions that
are made are based on a student’s performance on subtests of intelligence
measures, memory tests, and language evaluations. While interesting results
may sometimes be produced, drawing conclusions about the presence of a
disability based on such results is not substantiated by research (Torgesen, 2002; Fletcher et al., 1998).
Assessment of processing deficits in order to diagnose LD has a history even
longer than that of discrepancy approaches. Indeed, frustration with the
reliability and validity of processing assessment contributed to the proposal to
use the severe discrepancy in LD criteria. (Hallahan and Mercer, 2002) The
result was the inclusion of the concept of processing deficits in the federal
definition of LD, but no criteria related to processing. There are clear
advantages to this approach that make continued focus on processing
variables attractive for both research and practice. Of particular importance is the concept that, if direct assessment of intrinsic processing were possible, then early and intensive preventive education that avoids the associated pitfalls of school failure might also be possible.
Research in processing measurement continues at the neurological, cognitive
or psychological, and task oriented level. Various authors continue to
examine variables such as rapid naming (Cardoso-Martins and Pennington,
2004), working memory (Swanson, 1999), and temporal processing (Tallal,
1996). Unfortunately, difficulties inherent in this research have limited its
practical application. Torgesen (2002) summarizes these difficulties:
Any deficit in academic outcome or performance that fits the definition of a
learning disability always involves a complex mixture of a processing
weakness (or weaknesses) present at some point in development
(perhaps not even concurrently present), an instructional context in which
the processing weakness operates, the child’s motivational and emotional
reaction to the learning difficulties caused by the processing weakness,
and the domain-specific knowledge acquired to support performance on
the task. (p. 589)
In other words, it is not possible to separate out all of the complicated factors
that contribute to a child’s performance on tasks and make the assumption
that an intrinsic cognitive process is being measured. While there may be
promising research underway, a methodology for discrete diagnosis or
classification based on processing differences is unavailable and certainly
could not be included in LD criteria. At this time, it is probably appropriate to
follow the advice of McGrady (2002) and continue a research program for
assessment of intrinsic processes independent of school practice.
5. Are there other indicators of LD that are more valid and reliable?
Generally, attempts to reliably define and measure psychological processing
difficulties have yielded limited results that render them without practical
application. However, related to this research, certain skills have been
identified as robust predictors of academic performance. These skills may be
characterized as “critical indicators” or “marker variables.” When embracing
this approach, one accepts that the indicator may represent both
constitutional and learned skills, and that the variable represents an important
capability. Using this approach, researchers have identified measures of
phonological awareness and early literacy knowledge such as letter sound
relationships as powerful early predictors of later reading performance.
(Good and Kaminski, 2002) Similarly, fluent reading of connected text
continues to be highly correlated with growth in both word reading and
comprehension, and represents meaningful ways to screen and progress
monitor in reading (Fuchs and Fuchs, 1998). Using this approach provides a method of screening to identify students with potentially persistent academic problems and of assessing them further.
Fortunately, these variables have been identified for the most prevalent of
school identified learning disabilities, those in the area of reading. Similar
measures for domains such as math reasoning, calculation, and written
language have not been as thoroughly investigated.
Use of these indicators is a key practice that underlies the response to
intervention (RTI) approach. Since they are valid measures of current
performance and good predictors of later performance, they can be used to
prevent the most serious of problems with discrepancy models—the problem
of waiting for children to fail before they receive help.
REFERENCES
American Institutes for Research. (2002, July). Specific learning disabilities: Finding common ground. Washington, DC: U.S. Department of Education.
Caplan, G., & Grunebaum, H. (1967). Perspectives on primary prevention. A
review. Arch Gen Psychiatry, 17, 331-346.
Cardoso-Martins, C., & Pennington, B. F. (2004). The relationship between
phonemic awareness and rapid serial naming skills and literacy
acquisition: The role of developmental period and reading ability. Scientific
Studies of Reading, 8(1), 27-52.
Drame, E. R. (2002). Sociocultural context effects on teacher’s readiness to refer
for learning disabilities. Council for Exceptional Children, 69(1), 41-53.
Fletcher, J. M., Francis, D. J., Shaywitz, S. E., Lyon, G. R., Foorman, B. R.,
Steubing, K. K., & Shaywitz, B. A. (1998). Intelligent testing and the
discrepancy model for children with learning disabilities. Learning
Disabilities Research and Practice, 13(4), 186-203.
Fletcher, J. M., Lyon, G. R., Barnes, M., Steubing, K. K. Francis, D. J., Olson, R.
K., Shaywitz, S. E., & Shaywitz, B. A. (2002). Classification of learning
disabilities: An evidence-based evaluation. In Bradley, R., Danielson, L., &
Hallahan, D. P. (Eds.). Identification of Learning Disabilities: Research to
Practice. Mahwah, New Jersey: Lawrence Erlbaum Associates,
Publishers.
Francis, D. J., Fletcher, J. M., Stuebing, K. K., Lyon, G. R., Shaywitz, B. A., &
Shaywitz, S. E. (2005). Psychometric approaches to the identification of
LD: IQ and achievement scores are not sufficient. Journal of Learning
Disabilities, 38(2), 98-108.
Fuchs, L. S., & Fuchs, D. (1998). Treatment validity: A unifying concept for
reconceptualizing the identification of learning disabilities. Learning
Disabilities Research & Practice, 13(4), 204-219.
Gerber, M. M., & Semmel, M. I. (1984). Teacher as imperfect test:
Reconceptualizing the referral process. Educational Psychologist, 19,
1-12.
Good, R. H., & Kaminski, R. A. (1996). Assessment for instructional decisions:
Towards a proactive/prevention model of decision-making for early literacy
skills. School Psychology Quarterly, 11, 326-336.
Good, R. H., Kaminski, R. A., Smith, S., Simmons, D., Kame’enui, E., & Wallin, J.
(2003). Reviewing outcomes: Using DIBELS to evaluate a school’s core
curriculum and system of additional intervention in kindergarten. In S. R.
Vaughn & K. L. Briggs (Eds.), Reading in the classroom: Systems for
observing teaching and learning. Baltimore: Paul H. Brookes.
Gresham, F. M. (2002). Responsiveness to intervention: An alternative approach
to the identification of learning disabilities. In Bradley, R., Danielson, L., &
Hallahan, D. P. (Eds.). Identification of Learning Disabilities: Research to
Practice. Mahwah, New Jersey: Lawrence Erlbaum Associates,
Publishers.
Gresham, F. M., MacMillan, D., & Bocian, K. (1997). Teachers as “tests”:
Differential validity of teacher judgments in identifying students at-risk for
learning difficulties. School Psychology Review, 26, 47-60.
Hallahan, D. P., & Mercer, C. D. (2002). Learning disabilities: Historical
perspectives. In Bradley, R., Danielson, L., & Hallahan, D. P. (Eds.).
Identification of Learning Disabilities: Research to Practice. Mahwah, New
Jersey: Lawrence Erlbaum Associates, Publishers.
Horner, R. H., Sugai G., & Horner, H. F. (2000). A school-wide approach to
student discipline. The School Administrator, 2(57), 20-23.
Howell, K. W., & Nolet, V. (2000). Curriculum-based evaluation: Teaching and
decision-making (3rd ed.). Belmont, CA: Wadsworth/Thomas Learning.
Huitt, W. (1996). Summary of principles of direct instruction. Educational
Psychology Interactive. Valdosta, GA: Valdosta State University. Retrieved
July 6, 2005, from http://chiron.valdosta.edu/whuitt/col/instruct/dirprn.html.
Individuals with Disabilities Education Improvement Act of 2004, Pub. L. No. 108-446.
Kame’enui, E. J., & Simmons, D. C. (2002). From an “exploded view” of
beginning reading toward a schoolwide beginning reading model: Getting
to scale in complex host environments. In Bradley, R., Danielson, L., &
Hallahan, D. P. (Eds.). Identification of Learning Disabilities: Research to
Practice. Mahwah, New Jersey: Lawrence Erlbaum Associates,
Publishers.
Lyon, G. R. (2002, June 6). Learning disabilities and early intervention strategies:
How to reform the special education referral and identification process.
Hearing before the Committee on Education and the Workforce. Retrieved
August 3, 2005 from
http://edworkforce.house.gov/hearings/107th/edr/idea6602/lyon.htm.
Lyon, G. R., Fletcher, J. M., Shaywitz, S. E., Shaywitz, B. A., Torgeson, J. K.,
Wood, F.B., Schulte, A., & Olson, R. (2001). Rethinking learning
disabilities. In C. E. Finn Jr., A. J. Rotherham, & C. R. Hokanson Jr.
(Eds.), Rethinking special education for a new century (pp. 259-287).
Washington, DC: Progressive Policy Institute & The Thomas B. Fordham
Foundation.
Marston, D., Canter, A., Lau, M., & Muyskens, P. (2002, June). Problem solving:
Implementation and evaluation in Minneapolis schools. NASP
Communiqué, 30(8). Retrieved July 21, 2005, from
http://www.nasponline.org/publications/cq308minneapolis.html
McGrady, H. J. (2002). A commentary on “empirical and theoretical support for
direct diagnosis of learning disabilities by assessment of intrinsic
processing weaknesses.” In Bradley, R., Danielson, L., & Hallahan, D. P.
(Eds.). Identification of Learning Disabilities: Research to Practice.
Mahwah, New Jersey: Lawrence Erlbaum Associates, Publishers.
National Reading Panel. (2000). Teaching children to read: An evidence-based
assessment of the scientific research literature on reading and its
implications for reading instruction. United States: National Institutes of
Health.
Oregon Administrative Rules. (2005). Division 15, Special Education, Department of Education.
Pressley, M. (2000). Comprehension instruction: What makes sense now, what might make sense soon. In Kamil, M. L., Mosenthal, P. B., Pearson, P. D., & Barr, R. (Eds.) Handbook of Reading Research: Volume III. Mahwah, New Jersey: Lawrence Erlbaum Associates, Publishers.
Reschly, D. J., Hosp, J. L., & Schmied, C. M. (2003). And miles to go…: State SLD requirements and authoritative recommendations. Retrieved July 21,
2005, from Vanderbilt University, National Research Center on Learning
Disabilities Web site: http://www.nrcld.org/research/states
Sadler, C. (2002, October). Building capacity in school-wide PBS. Conference
conducted at the School-wide Positive Behavior Support: An
Implementer’s Forum on Systems Change, Naperville, Illinois.
Shaywitz, S. (2004). Overcoming Dyslexia: A New and Complete Science-Based Program for Reading Problems at Any Level. New York: Alfred A. Knopf.
Shinn, M. (1988). Development of curriculum-based local norms for use in special
education decision-making. School Psychology Review, 17(1), 61-80.
Slonski-Fowler, K., & Truscott, S. D. (2004). General education teachers’
perceptions of the prereferral intervention team process. Journal of
Educational and Psychological Consultation, 15(1), 1-39.
Stanovich, K. E. (2005). The future of a mistake: Will discrepancy measurement
continue to make the learning disabilities field a pseudoscience? Learning
Disabilities Quarterly, 28(2), 103-106.
Stuebing, K. K., Fletcher, J. M., LeDoux, J. M., Lyon, G. R., Shaywitz, S. E., &
Shaywitz B. A. (2002). Validity of IQ-discrepancy classification of learning
disabilities: A meta-analysis. American Educational Research Journal,
39(2), 469-518.
Swanson, H. L. (1999). What develops in working memory? A life span
perspective. Developmental Psychology, 35, 986-1000.
Tallal, P., Miller, S. L., Bedi, G., Byma, G., Wang, X., Nagarjan, S. S., Schreiner,
C., et. al. (1996). Language comprehension in language-learning impaired
children improved with acoustically modified speech. Science, 271, 81-84.
Torgesen, J. K. (2002). Empirical and theoretical support for direct diagnosis of
learning disabilities by assessment of intrinsic processing weaknesses. In
Bradley, R., Danielson, L., & Hallahan, D. P. (Eds.). Identification of
Learning Disabilities: Research to Practice. Mahwah, New Jersey:
Lawrence Erlbaum Associates, Publishers.
U.S. Office of Education. (1977). Assistance to states for education of
handicapped children: Procedures for evaluating specific learning
disabilities. Federal Register, 42(250), 65082-65085.
U.S. Office of Education. (1999). Assistance to states for the education of
children with disabilities and the early intervention program for infants
and toddlers with disabilities. Federal Register, 64(48), 12505-12554.
Vaughn, S. (2002). Using response to treatment for identifying students with
learning disabilities. In Bradley, R., Danielson, L., & Hallahan, D. P.
(Eds.). Identification of Learning Disabilities: Research to Practice.
Mahwah, New Jersey: Lawrence Erlbaum Associates, Publishers.
Walker, H. M., Horner, R. H., Sugai, G., Bullis, M., Sprague, J. R., Bricker, D., &
Kaufman, M. J. (1996). Integrated approaches to preventing anti-social
behavior patterns among school-age children and youth. Journal of
Emotional and Behavioral Disorders, 4(4), 194-209.
OREGON DEPARTMENT OF EDUCATION
OFFICE OF STUDENT LEARNING & PARTNERSHIPS
Response to Intervention
Readiness Checklist 8/11/05
Is your district ready to adopt a Response to Intervention (RTI) model for the identification
of learning disabilities? This checklist is provided to assist in answering that question.
Designed to guide a group of district leaders through an appraisal of prerequisite
requirements, the completed Checklist must be submitted to the Oregon Department of
Education (ODE) by all districts no later than November 15, 2005.
Key systemic areas addressed through the checklist are:
 Leadership
 Teaming
 Curriculum
 Screening
District Name: ___________________________________ Date: _______________

Primary Contact for Learning Disabled (LD) Identification Issues:
Name/Title: ____________________________________
E-mail: __________
Phone: ____________________

Staff Completing the Checklist:
Name/Title: ____________________________________
Name/Title: ____________________________________
Name/Title: ____________________________________
Name/Title: ____________________________________
Name/Title: ____________________________________
Name/Title: ____________________________________
Leadership
(Mark each item: Established, Willing to Implement, or No.)
 District level support at the highest levels, including agreement to adopt an RTI model and allocate required resources
 Understanding of and commitment to a long term change process (3 or more years)
 Long term commitment of resources (staff, time and materials) for screening, assessment, and interventions
 District leadership team with basic knowledge of the research relative to RTI and the desire to learn more
 Expertise at the district level with respect to research based practices for academics and behavior
Narrative: For “Established” items, include in the space below specific information related to the involvement of the School Board, Central Office Administrators, and Principals. (Use additional space as necessary.)
Narrative: For “Willing to Implement” items, describe current conditions that would support change in each area. (Use additional space as necessary.)
Teaming
(Mark each item: Established, Willing to Implement, or No.)
 Commitment to collaborative teaming (general and special education) at both the district and school levels
 Principal leadership and staff (general and special education) willing to participate at each school
 Willingness for general education, special education, and compensatory programs to work together at both the district and school levels
Narrative: For “Established” items, include in the space below specific information related to teaming structures currently in place at the district and school levels and specific initiatives that involve collaboration between general education, special education and compensatory programs. (Use additional space as necessary.)
Narrative: For “Willing to Implement” items, describe current conditions that would support change in each area. (Use additional space as necessary.)
Curriculum
(Mark each item: Established, Willing to Implement, or No.)
 Use of a research validated core reading program at each elementary school
 Use of or ability to acquire supplemental intervention materials
 Capacity to provide ongoing training and support to ensure fidelity of implementation
Narrative: For “Established” items, list in the space below the core reading program(s) adopted by the district, any supplemental intervention materials currently in use, and systems in place to provide training related to their implementation. (Use additional space as necessary.)
Narrative: For “Willing to Implement” items, describe current conditions that would support change in each area. Include possible options for funding additional curricular materials that may be necessary. (Use additional space as necessary.)
Screening
(Mark each item: Established, Willing to Implement, or No.)
 School-wide structures in place to facilitate systematic review of and programming to enhance student performance (examples: EBS, PBS, or EBIS)*
 Established student level data collection and management system that is tied to the content (example: DIBELS)*
 Capacity to implement progress monitoring
Narrative: For “Established” items, describe in the space below the data collection and management system used by the district, including details about the current progress monitoring system and calendar. (Use additional space as necessary.)
Narrative: For “Willing to Implement” items, describe current conditions that would support change in each area. (Use additional space as necessary.)
*Examples:
EBS = Effective Behavioral Support
PBS = Positive Behavioral Support
EBIS = Effective Behavioral and Instructional Support
DIBELS = Dynamic Indicators of Basic Early Literacy Skills
Summary Statement (check one):
_____ Yes, after review of our responses on the completed Readiness Checklist, our
district is applying to be included in the OrRTI Guided Development Project beginning in
January 2006. Our completed Readiness Checklist, submitted prior to
November 15, 2005, serves as our application to ODE.
_____ After review of our responses on the completed Readiness Checklist, our district
is moving ahead with implementation of the Response to Intervention approach to LD
Identification. While we are not applying to be part of the OrRTI Guided Development
Project, we understand that implementation of the RTI model will be a focus of audits
included as part of the Systems Performance Review & Improvement (SPR&I) process.
Our completed Readiness Checklist, submitted prior to November 15, 2005, serves as
our notification to ODE.
_____ No, after review of our responses on the completed Readiness Checklist, our district has identified system requirements which need to be in place before we adopt the Response to Intervention approach to LD Identification.
Superintendent Signature
Date
Director of Special Education Signature
Date
Return to: Tricia Clair, Coordinator, Statewide Monitoring System
Oregon Department of Education – OSL&P
255 Capitol St. NE
Salem, Oregon 97310-0205
Completed Checklists must be submitted to ODE no later than November 15, 2005.