RTI Implementer Series:
Module 2: Progress Monitoring
Training Manual
July 2012
National Center on Response to Intervention
http://www.rti4success.org
About the National Center on Response to Intervention
Through funding from the U.S. Department of Education’s Office of
Special Education Programs, the American Institutes for Research and
researchers from Vanderbilt University and the University of Kansas
have established the National Center on Response to Intervention.
The Center provides technical assistance to states and districts and
builds the capacity of states to assist districts in implementing
proven response to intervention frameworks.
National Center on Response to Intervention
http://www.rti4success.org
This document was produced under U.S. Department of Education, Office of Special Education
Programs Grant No. H326E070004 to the American Institutes for Research. Grace Zamora Durán
and Tina Diamond served as the OSEP project officers. The views expressed herein do not necessarily
represent the positions or policies of the Department of Education. No official endorsement by the
U.S. Department of Education of any product, commodity, service or enterprise mentioned in this
publication is intended or should be inferred. This product is public domain. Authorization to
reproduce it in whole or in part is granted. While permission to reprint this publication is not
necessary, the citation should be: National Center on Response to Intervention (July 2012).
RTI Implementer Series: Module 2: Progress Monitoring—Training Manual. Washington, DC:
U.S. Department of Education, Office of Special Education Programs, National Center on
Response to Intervention.
Contents
Introduction
    Module 1: Screening
    Module 2: Progress Monitoring
    Module 3: Multi-Level Prevention System
What Is RTI?
    Screening
    Progress Monitoring
    Multi-Level Prevention System
    Data-Based Decision Making
What Is Progress Monitoring?
    Progress Monitoring Assessments
Selecting a Progress Monitoring Tool
What Is Curriculum-Based Measurement (CBM)?
Graphing and Progress Monitoring
    Calculating Slope
    Goal Setting
Frequency of Progress Monitoring
Instructional Decision Making
    Consecutive Data Point Analysis
    Trend Line Analysis
Frequently Asked Questions
References
Appendix A: NCRTI Progress Monitoring—Glossary of Terms
Appendix B: Handouts
    Setting Goals With End-of-Year Benchmarking Handout (Gunnar)
    Setting Goals With National Norms Handout (Jane)
    Setting Goals With Intra-Individual Framework Handout (Cecelia)
    Practicing the Tukey Method Handout
    Practicing the Tukey Method and Calculating Slope Handout
    Calculating Slope and Determining Responsiveness in Primary Prevention Handout (Arthur)
    Calculating Slope and Determining Responsiveness in Secondary Prevention Handout (David)
    Calculating Slope and Determining Responsiveness to Secondary Prevention Handout (Martha)
Appendix C: RTI Case Study
    Bear Lake School
    Primary Prevention
    Secondary Prevention
    Tertiary Prevention
    Nina
Appendix D: Progress Monitoring Graph Template
Appendix E: Additional Research on Progress Monitoring
    Progress Monitoring
    Progress Monitoring—Math
    Progress Monitoring—Reading
    Progress Monitoring—Writing
    Progress Monitoring—English Language Learners
Appendix F: Websites With Additional Information
This manual is not designed to replace high-quality, ongoing professional development. It
should be used as a supplemental resource to the Module 2: Progress Monitoring Training
PowerPoint Presentation. Please contact your state education agency for available training
opportunities and technical assistance or contact the National Center on Response to
Intervention (http://www.rti4success.org) for more information.
Introduction
The National Center on Response to Intervention (NCRTI) developed three training
modules for beginning implementers of Response to Intervention (RTI). These
modules are intended to provide foundational knowledge about the essential
components of RTI and to build an understanding about the importance of RTI
implementation. The modules were designed to be delivered in the following
sequence: Screening, Progress Monitoring, and Multi-Level Prevention System.
The fourth essential component, Data-Based Decision Making, is embedded
throughout the three modules.
This training is intended for teams in initial planning or implementation of a school
or districtwide RTI framework. The training provides school and district teams an
overview of the essential components of RTI, opportunities to analyze school
and district RTI data, activities so they can apply new knowledge, and team
planning time.
The RTI Implementer Series should be delivered by a trained, knowledgeable
professional. This training series is designed to be a component of comprehensive
professional development that includes supplemental coaching and ongoing
support. The Training Facilitator’s Guide is a companion to all the training modules
that is designed to assist facilitators in delivering training modules from the
National Center on Response to Intervention. The Training Facilitator’s Guide can
be found at http://www.rti4success.org. Each training module includes the following training materials:
● PowerPoint Presentations that include slides and speaker’s notes
● Handouts (embedded in Training Manual)
● Videos (embedded in PowerPoint slides)
● Training Manual
Module 1: Screening
Participants will become familiar with the essential components of an RTI framework: screening, progress monitoring, the multi-level prevention system, and
data-based decision making. Participants will gain the necessary skills to use
screening data to identify students at risk, conduct basic data analysis using
screening data, and establish a screening process.
Module 2: Progress Monitoring
Participants will gain the necessary skills to use progress monitoring data to select
progress monitoring tools, evaluate and make decisions about instruction, establish
data decision rules, set goals, and establish an effective progress monitoring
system.
Module 3: Multi-Level Prevention System
Participants will review how screening and progress monitoring data can assist in
decisions at all levels, including school, grade, class, and student. Participants will
gain skills to select evidence-based practices, make decisions about movement
between levels of prevention, and establish a multi-level prevention system.
What Is RTI?
NCRTI offers a definition of RTI that reflects what is currently known from research
and evidence-based practice:
Response to intervention integrates assessment and intervention within a
multi-level prevention system to maximize student achievement and to reduce
behavioral problems. With RTI, schools use data to identify students at risk for
poor learning outcomes, monitor student progress, provide evidence-based
interventions and adjust the intensity and nature of those interventions
depending on a student’s responsiveness, and identify students with learning
disabilities or other disabilities (NCRTI, 2010).
NCRTI believes that rigorous implementation of RTI includes a combination of
high-quality and culturally and linguistically responsive instruction, assessment,
and evidence-based intervention. Further, NCRTI believes that comprehensive RTI
implementation will contribute to more meaningful identification of learning and
behavioral problems, improve instructional quality, provide all students with the
best opportunities to succeed in school, and assist with the identification of
learning disabilities and other disabilities.
This manual and the associated training are based on NCRTI’s four essential components of RTI:
● Screening
● Progress monitoring
● School-wide, multi-level instructional and behavioral system for preventing school failure
● Data-based decision making for instruction, movement within the multi-level system, and disability identification (in accordance with state law)
Exhibit 1 represents the relationship among the essential components of RTI.
Data-based decision making is the essence of good RTI practice; it is essential for
the other three components: screening, progress monitoring, and the multi-level
prevention system. All components must be implemented using culturally responsive and evidence-based practices.
Exhibit 1. Essential Components of RTI
Screening
Struggling students are identified by implementing a two-stage screening process.
The first stage, universal screening, is a brief assessment for all students conducted
at the beginning of the school year; however, some schools and districts use
universal screening two or three times during the school year. For students whose
score is below the cut score on the universal screen, a second stage of screening is
then conducted to more accurately predict which students are truly at risk for poor
learning outcomes. This second stage involves additional, more in-depth testing or
short-term progress monitoring to confirm a student’s at-risk status. Screening
tools must be reliable, valid, and demonstrate diagnostic accuracy for predicting
which students will develop learning or behavioral difficulties.
Progress Monitoring
Progress monitoring assesses student performance over time, quantifies student
rates of improvement or responsiveness to instruction, evaluates instructional
effectiveness, and, for students who are least responsive to effective instruction,
formulates effective individualized programs. Progress monitoring tools must
accurately represent students’ academic development and must be useful for
instructional planning and assessing student learning. In addition, in the tertiary
level of prevention, educators use progress monitoring to compare a student’s
expected and actual rates of learning. If a student is not achieving at the expected
rate of learning, the educator experiments with instructional components in an
attempt to improve the rate of learning.
Multi-Level Prevention System
Classroom instructors are encouraged to use research-based curricula in all subjects. When a student is identified via screening as requiring additional intervention, evidence-based interventions of moderate intensity are provided. These
interventions, which are in addition to the core primary instruction, typically
involve small-group instruction to address specific identified problems. These
evidence-based interventions are well defined in terms of duration, frequency, and
the length of the sessions, and the intervention is conducted as it was in the
research studies. Students who respond adequately to secondary prevention
return to the primary level of prevention (the core curriculum) with ongoing
progress monitoring. Students who show minimal response to the secondary level
of prevention move to the tertiary level of prevention, where more intensive and
individualized supports are provided. All instructional and behavioral interventions
should be selected with attention to their evidence of effectiveness and with
sensitivity to culturally and linguistically diverse students.
Data-Based Decision Making
Screening and progress monitoring data can be aggregated and used to compare
and contrast the adequacy of the core curriculum as well as the effectiveness of different instructional and behavioral strategies for various groups of students within
a school. For example, if 60 percent of the students in a particular grade score
below the cut score on a screening test at the beginning of the year, school personnel might consider the appropriateness of the core curriculum or whether differentiated learning activities need to be added to better meet the needs of the students in that grade.
What Is Progress Monitoring?
Research has demonstrated that when teachers use progress monitoring, specifically curriculum-based measures (CBMs), to inform their instructional decision
making, students learn more, teacher decision making improves, and students are
more aware of their own performance (Stecker, Fuchs, & Fuchs, 2005; Fuchs,
Fuchs, Karns, Hamlett, & Katzaroff, 1999). Research focused on CBMs conducted over
the past 30 years has also shown CBMs to be reliable and valid (Foegen, Jiban, &
Deno, 2007; Stecker, Fuchs, & Fuchs, 2005; Fuchs, Fuchs, Compton, Bryant, Hamlett,
& Seethaler, 2007; Zumeta, Compton, & Fuchs, 2012). The purpose of progress
monitoring is to monitor students’ response to primary, secondary, and tertiary
instruction. Progress monitoring is not limited to those students identified for
supplemental instruction. The data can also be used to:
1. Estimate rates of improvement, which allows for comparisons across peers, classes, subgroups, and schools.
2. Identify students who are not demonstrating or making adequate progress so that instructional changes can be made.
3. Compare the efficiency or efficacy of different forms of instruction—in other words, determine which instructional approach or intervention led to the greatest growth among students. (This comparison can occur at the student, class, grade, or school level.)
Since screening tools cannot identify students as at risk for poor learning outcomes
with 100 percent accuracy, progress monitoring can be used as a second step in the
screening process in order to verify the results of screening. This may include
students who are just above or just below the cut-off score.
Progress monitoring tools, just like screening tools, should be brief, reliable, valid,
and evidence-based. Different progress monitoring tools may be used to capture
different learning outcomes. Unlike screening, which occurs two to three times
during the year, progress monitoring can be used anytime throughout the year.
With progress monitoring, students are given standardized probes at regular
intervals (weekly, bi-weekly, or monthly) to produce accurate and meaningful
results that teachers can use to quantify short- and long-term student gains toward
end-of-year goals. When and how frequently progress monitoring occurs is dependent on the sensitivity of the tools used and the typical rate of growth for the
student. Progress monitoring tools should be administered at least monthly,
though more-frequent data collection is recommended given the amount of data
needed for making decisions with confidence (six to nine data points for many
tools) (Christ & Silberglitt, 2007).
Progress Monitoring Assessments
In selecting appropriate progress monitoring assessments, it is important to
remember that there are three types of assessments that are used in an RTI
framework: summative, diagnostic, and formative (See Module 1: Screening for
more information). Progress monitoring assessments are formative assessments.
With formative assessment, student progress is systematically assessed to provide
continuous feedback to both the student and the teacher concerning learning
successes and failures. They can be used to identify students who are not responsive to instruction or interventions (screening) and to understand rates of student
improvement (progress monitoring). They can also be used to make curriculum and
instructional decisions, to evaluate program effectiveness, to proactively allocate
resources, and to compare the efficacy of instruction and interventions. Progress
monitoring tools should be brief assessments of direct student performance. While
formative assessments can be both formal and informal measures of student
progress, formal or standardized progress monitoring assessments provide data to
support the conclusions made from the progress monitoring test used. The data for
these formal assessments are mathematically computed and summarized. Scores
such as percentiles, stanines, or standard scores are most commonly given from
this type of assessment.
There are two common types of progress monitoring assessments: mastery measures and general outcome measures (GOMs).
Mastery Measures
Mastery measures determine the mastery of a series of short-term instructional
objectives. For example, a student may master multi-digit addition and then master
multi-digit subtraction. To use mastery measures, teachers must determine a
sensible instructional sequence and often design criterion-referenced testing
procedures to match each step in that instructional sequence. The hierarchy of
skills used in mastery measurement is logical, not empirical. This means that while
it may seem logical to teach addition first and then subtraction second, there is no
evidence base for the sequence. While there are some mastery measures that have
been assessed for technical rigor (see NCRTI Progress Monitoring Tools Chart for
examples), many are teacher-made tests. Teacher-made tests present concerns
given the unknown reliability and validity of these measures.
Mastery measures can be beneficial in assessing whether a student can learn target
skills in isolation and can help teachers to make decisions about changing target
skill instruction. Because mastery measures are based on mastering one skill before
moving on to the next skill, the assessment does not reflect maintenance or
generalization. It becomes impossible to know if, after teaching one skill, the
student still remembers how to perform the previously learned skill. In addition,
how a student does on a mastery measure assessment does not indicate how he or
she will do on standardized tests because the number of objectives mastered does
not relate well to performance on criterion measures.
General Outcome Measures
General outcome measures (GOMs) are indicators of general skill success and
reflect overall competence in the annual curriculum. They describe students’
growth and development over time, capturing both their “current status” and their “rate of
development.” Common characteristics of GOMs are that they are simple and
efficient, are sensitive to improvement, provide performance data to guide and
inform a variety of educational decisions, and provide national/local norms that
allow for cross-comparisons of data.
Additional information about mastery measures, GOMs, and other forms of
assessment can be found in Module 1 focused on screening.
Selecting a Progress Monitoring Tool
In addition to determining the type of formative assessment, mastery measure, or
general outcome measure, schools and districts must select the appropriate tool.
NCRTI has developed a Progress Monitoring Tools Chart that provides relevant
information for selecting both mastery measures and general outcome measures.
Each year, NCRTI has a call for tool developers to submit their tools for review.
A technical review committee (TRC), made up of experts in the field, reviews the
tools for technical rigor. The NCRTI Progress Monitoring Tools Chart is not an
exhaustive list of all available progress monitoring measures, as vendors or tool
developers must submit their tool in order for it to be reviewed. Learn more
about the tools available by visiting the Progress Monitoring Tools Chart at
http://www.rti4success.org/progressMonitoringTools.
The tools chart provides information on the technical rigor of the tools, their
implementation requirements, and the data that support each tool. To learn about
the information the tools chart provides and the suggested steps for review, see
the User Guide at http://www.rti4success.org/progressMonitoringTools.
The six recommended steps included in the User Guide are (1) gather a team,
(2) determine your needs, (3) determine your priorities, (4) familiarize yourself with
the content and language of the tools chart, (5) review the ratings and implementation
data, and (6) ask for more information. Similar to screening, establishing a progress
monitoring process begins with identifying the needs, priorities and resources of the
district or school and then selecting a progress monitoring tool that matches those
needs and resources. Prior to tool selection, teams must consider why progress
monitoring is being conducted, what they hope to learn from the progress
monitoring data, and how the results will be used. It is important to note that
schools and districts should accurately identify their needs but might be unable to
address all of the needs due to the available resources.
Once a tool is selected, districts and schools need to continuously evaluate whether the progress monitoring tool matches their needs and resources and provides
the data needed to inform their decisions.
What Is Curriculum-Based
Measurement (CBM)?
CBM, a commonly used GOM, is used to assess students’ academic competence at
one point in time (as in screening or determining final status following intervention) and to monitor student progress in core academic areas (as in progress
monitoring). CBM, which is supported by more than 30 years of research, is used
across the United States. It demonstrates strong reliability, validity, instructional
utility, and alternate forms of equivalent difficulty. Using CBM produces accurate,
meaningful information about students’ academic levels and their rates of improvement, and CBM results correspond well to high-stakes tests. When teachers
use CBM to inform instructional decisions, students’ achievement improves (Stecker,
Fuchs, & Fuchs, 2005; Fuchs, Fuchs, Hamlett & Stecker, 1991; Fuchs, Fuchs, Karns,
Hamlett & Katzroff, 1999).
In this manual, progress monitoring will be operationalized through the use of
curriculum-based measurement (CBM).
● CBM benchmarks will be used for identifying students suspected to be at risk.
● CBM slope will be used to confirm or disconfirm actual risk status by quantifying short-term response to general education primary prevention across 6–10 weeks.
● CBM slope and final status will be used to define responsiveness to secondary preventative intervention.
● CBM slope and final status will be used to:
   a. Set clear and ambitious goals
   b. Inductively formulate effective individualized programs
   c. Assess responsiveness to tertiary prevention to formulate decisions about when students should return to less intensive levels of the prevention system
Graphing and Progress Monitoring
To monitor progress, each student suspected of being at risk is administered one
CBM alternate form on a regular basis (weekly, bi-weekly, or monthly), and the
student’s scores are charted on a graph. With CBM graphs, the rate at which
students develop academic performance over time can be quantified. Increasing
scores indicate the student is responding to the instructional program. Flat or
decreasing scores indicate the student is not responding to the instructional
program, and a change to the student’s instruction needs to take place.
Graphing CBM scores can be done on teacher-made graphs, through computer
applications such as Excel, or through current data systems. Teachers create
individual student graphs to interpret the CBM scores of every student and see
progress or lack thereof. Alternatively, teachers can use software to handle graph
creation and data analysis. When developmentally appropriate, teachers can also
involve students in measuring their own progress.
Teachers should create a master CBM graph in which the vertical axis accommodates the range of scores from zero to the highest possible CBM score (See Appendix D for a blank sample). On the horizontal axis, the number of weeks of instruction is listed (Exhibit 2). Note that the graphs in this manual include 14 or 20 weeks
of instruction. The number of weeks of instruction will vary based on the student
and school. These examples provide only a snapshot of a progress monitoring
graph as examples, not a graph for the entire school year. Once the teacher creates
the master graph, it can be copied and used as a template for every student. If
teachers use existing software systems, they input the required data (e.g., number
of weeks) and the system will create the graph.
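For illustration only, a graph like the one described above could be produced with a few lines of Python using matplotlib; the weekly scores in this sketch are hypothetical, not from the manual:

```python
# A minimal sketch of a CBM progress monitoring graph (hypothetical data).
import matplotlib.pyplot as plt

weeks = list(range(1, 9))                   # weeks of instruction (horizontal axis)
scores = [34, 32, 36, 40, 44, 48, 50, 54]   # hypothetical weekly CBM scores

plt.plot(weeks, scores, marker="o")         # connect each data point with a line
plt.xlabel("Weeks of Instruction")
plt.ylabel("CBM Score")
plt.ylim(0, 60)                             # vertical axis from zero to the highest possible score
plt.title("CBM Progress Monitoring Graph")
plt.show()
```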
Every time a CBM probe is administered, the teacher scores the probe and then
records the score on a CBM graph (Exhibit 3). A line can be drawn connecting each
data point.

Exhibit 2. Sample CBM Template

Exhibit 3. Sample CBM Graph

Calculating Slope
Calculating the slope of a CBM graph helps determine student growth during
primary, secondary, and tertiary prevention. First, graph the CBM
scores (Exhibit 4). Then, draw a trend line using a procedure called the Tukey
method and calculate the slope of the trend line. Follow these steps for the Tukey
method (Fuchs & Fuchs, 2007).
1. Divide the data points into three equal sections by drawing two vertical lines. (If the points divide unevenly, group them approximately.)
2. In the first and third sections, find the median data point and the median CBM week. Locate the place on the graph where the two values intersect and mark it with an X.
3. Draw a line through the two Xs.

The slope is calculated by first subtracting the median point in the first section from the median point in the third section. Then, divide this difference by the number of weeks of instruction. If data are collected on a weekly basis, the number of weeks of instruction is the number of data points minus one.

slope = (third median – first median) ÷ number of weeks of instruction
For example, in Exhibit 4, the third median data point is 50, and the first median data
point is 34. The number of data points is 8, and the data were collected on a weekly
basis, so you would subtract one to get the total number of weeks of instruction, 7.
So, (50 – 34) ÷ 7 = 2.3. The slope of this graph is 2.3.

Exhibit 4. Drawing a Trend Line Using the Tukey Method
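The Tukey slope calculation is easy to script. The Python sketch below assumes weekly data collection and splits the scores into three roughly equal sections; the score list is hypothetical, chosen so its outer-section medians match the Exhibit 4 example (34 and 50):

```python
from statistics import median

def tukey_slope(scores):
    """Estimate weekly growth from weekly CBM scores using the Tukey method.

    Splits the scores into three roughly equal sections, finds the median of
    the first and third sections, and divides the difference by the number of
    weeks of instruction (data points minus one, for weekly collection).
    """
    k = round(len(scores) / 3)              # approximate size of the outer sections
    first_median = median(scores[:k])
    third_median = median(scores[-k:])
    weeks_of_instruction = len(scores) - 1  # weekly collection
    return (third_median - first_median) / weeks_of_instruction

# Hypothetical weekly scores whose section medians match Exhibit 4 (34 and 50)
scores = [32, 34, 36, 40, 44, 48, 50, 52]
print(round(tukey_slope(scores), 1))        # (50 - 34) / 7 = 2.3
```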
The next few exhibits show how CBM scores are graphed and how decisions
concerning RTI can be made using the graphs. The Practicing the Tukey Method
Handout and the Practicing the Tukey Method and Calculating Slope Handout
provide opportunities to practice using the Tukey method to calculate slope.
Exhibit 5 shows a graph for Sarah, a first-grade student. Sarah was suspected of
being at risk for reading difficulties after scoring below the CBM Word Identification
Fluency (WIF) screening cut-off. Her progress in primary prevention was monitored
for eight weeks. Sarah’s progress on the number of words read correctly looks like
it’s increasing, and the slope is calculated to quantify the weekly increase and to
confirm or disconfirm at-risk status.
Sarah’s slope is (16 – 3) ÷ 7 = 1.9. Research suggests that the first-grade cut-off for
adequate growth in general education is 1.8. Sarah’s slope indicates that she is benefiting from the instruction provided in primary prevention, and she does not need
secondary prevention at this time. Her progress should continue to be monitored
in primary prevention to ensure that she is making adequate progress without
supplemental supports.
Exhibit 5. Sarah’s Progress on Words Read Correctly—Primary Prevention
Look at Exhibit 6. Jessica is also a first-grade student who was suspected of being at
risk for reading difficulties when she scored below the CBM Word Identification Fluency screening cut-off point in September. After collecting eight data points on a
weekly basis, Jessica’s scores on the number of words read correctly are not
increasing. Jessica’ slope is (6 – 6) ÷ 7 = 0. Her slope is not above the first-grade
cut-off of 1.8 for adequate progress in general education. Jessica needs secondary
intervention at this time.
Exhibit 6. Jessica’s Progress on Words Read Correctly—Primary Prevention
Exhibit 7 shows Jessica’s graph after twelve weeks of secondary prevention. The
dotted line on the graph is drawn at the point that Jessica left primary prevention and
entered secondary prevention. Over these 12 data points collected across the twelve
weeks that Jessica was in secondary prevention, it appears her scores are increasing.
Jessica’s slope is calculated as (28 – 6) ÷ 11 = 2.0. Her slope is above the first-grade
cut-off of 1.8 for growth in secondary prevention. Jessica can exit secondary
prevention at this time. Jessica’s progress should continue to be monitored in
primary prevention to ensure that she is making adequate progress without
supplemental supports she received in secondary prevention.
14
Training Manual
Exhibit 7. Jessica’s Progress on Words Read Correctly—
Secondary Prevention
Practice calculating the slope and using the data to make decisions about students’
response to primary, secondary, or tertiary instruction using the Calculating Slope
and Determining Responsiveness in Primary Prevention Handout (Arthur), Calculating Slope and Determining Responsiveness in Secondary Prevention Handout
(David), and the Calculating Slope and Determining Responsiveness to Secondary
Prevention Handout (Martha).
Goal Setting
There are three options for setting goals:
Option 1: End-of-Year Benchmarks
The first option is end-of-year benchmarking. For typically developing students at
the grade level where the student is being monitored, identify the end-of-year CBM
benchmark (Exhibit 8). This is the end-of-year performance goal. The benchmark is
represented on the graph by an X at the date marking the end of the year. A goal
line is then drawn between the baseline score, which is plotted on the graph at the
end of the baseline data collection period, and the end-of-year performance goal.
Exhibit 8. Typical End-of-Year Benchmarks in Reading and Math

Grade        | Reading                             | Computation | Concepts and Applications
Kindergarten | 40 sounds/minute (LSF)              | —           | —
Grade 1      | 60 words/minute (WIF)               | 20 digits   | 20 points
Grade 2      | 75 words/minute (PRF)               | 20 digits   | 20 points
Grade 3      | 100 words/minute (PRF)              | 30 digits   | 30 points
Grade 4      | 20 replacements/2.5 minutes (Maze)  | 40 digits   | 30 points
Grade 5      | 25 replacements/2.5 minutes (Maze)  | 30 digits   | 15 points
Grade 6      | 30 replacements/2.5 minutes (Maze)  | 35 digits   | 15 points
Exhibit 9 shows a sample graph for a third-grade student working on CBM Computation. The end-of-year benchmark of 30 digits is marked with an X, and a goal line is
drawn between the baseline score, which is plotted on the graph at the end of the
baseline data collection period, and the end-of-year performance goal. The Setting
Goals with End-of-Year Benchmarking Handout (Gunnar) provides an opportunity
to practice end-of-year benchmarking.
Exhibit 9. Sample Graph with End-of-Year Benchmark
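For illustration, the weekly growth implied by an end-of-year goal line can be computed directly. In this sketch, the baseline of 12 digits and the 20 remaining weeks are hypothetical; the 30-digit benchmark comes from Exhibit 8:

```python
def goal_line_slope(baseline_score, weeks_remaining, benchmark):
    """Weekly growth implied by a goal line drawn from the baseline score
    to the end-of-year benchmark (Option 1)."""
    return (benchmark - baseline_score) / weeks_remaining

# Hypothetical third grader: baseline of 12 digits, 20 weeks left,
# end-of-year Computation benchmark of 30 digits (Exhibit 8)
print(goal_line_slope(12, 20, 30))  # 0.9 digits per week
```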
Option 2: Rate of Improvement
The second option for setting goals is using national norms of improvement. For
typically developing students at the grade level where the student is being monitored, identify the average rate of weekly increase from a national norm chart
(Exhibit 10).
Exhibit 10. Sample CBM Reading and Math Norms for Student Growth (Slope)

Grade        | Reading—Slope | Computation CBM—Slope for Digits Correct | Concepts and Applications CBM—Slope for Points
Kindergarten | 1.0 (LSF)     | —                                        | —
Grade 1      | 1.8 (WIF)     | 0.35                                     | No data available
Grade 2      | 1.5 (PRF)     | 0.30                                     | 0.40
Grade 3      | 1.0 (PRF)     | 0.30                                     | 0.60
Grade 4      | 0.40 (Maze)   | 0.70                                     | 0.70
Grade 5      | 0.40 (Maze)   | 0.70                                     | 0.70
Grade 6      | 0.40 (Maze)   | 0.40                                     | 0.70
For example, a fourth-grade student’s average score from his first three CBM
Computation probes is 14. The norm for fourth-grade students is 0.70. To set an
ambitious goal for the student, multiply the weekly rate of growth by the number
of weeks left until the end of the year. If there are 16 weeks left, then multiply 16
by 0.70: 16 x 0.70 = 11.2. Add 11.2 to the baseline average of 14 (11.2 + 14 = 25.2).
This sum (25.2) is the end-of-year performance goal. On the student’s graph, 25.2
would be plotted and a goal line would be drawn. The Setting Goals with National
Norms Handout (Jane) provides an opportunity to practice setting goals based on
national norms.
Rate of Growth (National or Local Norm) × Number of Weeks of Instruction +
Student’s Baseline Score = Student’s Goal
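The formula translates directly into a short function. This sketch simply re-runs the fourth-grade Computation example above (baseline average of 14, norm of 0.70 digits per week from Exhibit 10, 16 weeks remaining):

```python
def norm_based_goal(baseline_score, weekly_norm, weeks_remaining):
    """End-of-year goal from a national or local norm for weekly growth (Option 2)."""
    return baseline_score + weekly_norm * weeks_remaining

# The manual's example: fourth-grade Computation, baseline average of 14,
# norm of 0.70 digits per week (Exhibit 10), 16 weeks left in the year
print(round(norm_based_goal(14, 0.70, 16), 1))  # 25.2
```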
Option 3: Intra-Individual Framework
The third option for setting goals is an intra-individual framework. To use this
option, identify the weekly rate of improvement (slope) for the target student
under baseline conditions, using at least eight CBM data points. Multiply this slope
by 1.5; the multiplier ensures that the student is not merely maintaining the baseline
rate of growth but increasing it by at least half, which helps close the performance
gap. Take this product and multiply it by the number of weeks until the end of the
year. Add the result to the student’s baseline score (the mean of the three most
recent data points). This sum is the end-of-year goal.
For example, a student’s first eight CBM scores were 2, 3, 5, 5, 5, 6, 7, and 4 and
they were collected on a weekly basis. To calculate the weekly rate of improvement
(slope), find the difference between the median of the last three data points and
the median of the first three data points. In this instance, that is
6 – 3 = 3. Since eight scores have been collected on a weekly basis, divide the
difference by the number of data points minus 1 to determine the number of
weeks of instruction, which is 7: (6 – 3) ÷ 7 = 0.43. Therefore, 0.43 represents the
average per week rate of improvement.
The average per week rate of improvement, 0.43, is multiplied by 1.5 (the desired
improvement rate): 0.43 × 1.5 = 0.645. Next, 0.645 is multiplied by the number of
weeks until the end of the year. If there are 14 weeks left until the end of the year:
0.645 × 14 = 9.03. Then, take the mean of the three most recent data points, which
is (6 + 7 + 4) ÷ 3, or 5.67. The sum of 9.03 plus the baseline score of 5.67 is the
end-of-year performance goal: 9.03 + 5.67 = 14.7, which rounds up to 15. On the
student’s graph, 15 would be plotted and a goal line would be drawn. The Setting
Goals with Intra-Individual Framework Handout (Cecelia) provides an opportunity to
practice setting goals through the intra-individual framework.
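The full Option 3 calculation can be packaged as one short function. This sketch assumes weekly data collection and reuses the example scores above; because it avoids intermediate rounding, it prints approximately 14.7:

```python
from statistics import mean, median

def intra_individual_goal(scores, weeks_remaining, multiplier=1.5):
    """End-of-year goal under the intra-individual framework (Option 3).

    The baseline slope is the median of the last three scores minus the
    median of the first three, divided by the weeks of instruction; the goal
    assumes the student grows at 1.5 times that baseline rate for the
    remaining weeks, starting from the mean of the three most recent scores.
    """
    weeks_of_instruction = len(scores) - 1   # weekly data collection
    slope = (median(scores[-3:]) - median(scores[:3])) / weeks_of_instruction
    baseline = mean(scores[-3:])             # mean of the three most recent points
    return baseline + multiplier * slope * weeks_remaining

# The manual's example: eight weekly scores, 14 weeks left in the year
scores = [2, 3, 5, 5, 5, 6, 7, 4]
print(round(intra_individual_goal(scores, 14), 1))  # about 14.7
```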
Regardless of the method, clear and ambitious goals need to be established, and
effective individualized programs need to be designed and implemented to help
students meet those goals.
Frequency of Progress Monitoring
Progress monitoring can be used anytime throughout the school year. Monitoring
should occur at regular intervals, but the frequency of the interval can vary (e.g.,
weekly, bi-weekly, or monthly). At a minimum, progress monitoring tools should be
administered monthly. While the recommended number of data points needed to
make a decision varies slightly, with Shinn, Good, and Stein (1989) suggesting the need
for at least seven to 10 data points and Christ and Silberglitt (2007) recommending
between six and nine data points, as the number of data points increases, the
effects of measurement error on the trend line decrease. While it may be ideal to
monitor students more frequently, the sensitivity of the selected progress monitoring tool may dictate the frequency with which the tool can be administered. Some
tools are sensitive enough to be used weekly or more frequently, while others are
only sensitive enough to be used once or twice a month.
Instructional Decision Making
Once goals are set and supplemental programs are implemented, it is important to
monitor student progress. CBM data can be used to judge the adequacy of student
progress and the need to change instructional programs. Standard decision rules
guide decisions about the adequacy of student progress and the need to revise
goals and instructional programs. Two common approaches are analyzing the four
most recent consecutive data points and analyzing the trend line.
Decision rules based on the most recent four consecutive scores:

● If the most recent four consecutive CBM scores are above the goal line, the student’s end-of-year performance goal needs to be increased.
● If the most recent four consecutive CBM scores are below the goal line, the teacher needs to revise the instructional program.

Decision rules based on the trend line:

● If the student’s trend line is steeper than the goal line, the student’s end-of-year performance goal needs to be increased.
● If the student’s trend line is flatter than the goal line, the teacher needs to revise the instructional program.
● If the student’s trend line and goal line are the same, no changes need to be made.
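Both sets of decision rules can be expressed as short checks. The sketch below is a minimal illustration, not part of the manual; the function names and sample numbers are hypothetical, and the goal-line values would be read off the student's goal line at the same weeks as the scores:

```python
def four_point_rule(scores, goal_line):
    """Apply the four-point rule to a student's scores and the goal line's
    values at the same weeks (both ordered oldest to newest)."""
    recent = list(zip(scores[-4:], goal_line[-4:]))
    if all(score > goal for score, goal in recent):
        return "raise the end-of-year performance goal"
    if all(score < goal for score, goal in recent):
        return "revise the instructional program"
    return "keep the current goal and program; continue monitoring"

def trend_line_rule(trend_slope, goal_slope):
    """Compare the slope of the Tukey trend line with the goal line's slope."""
    if trend_slope > goal_slope:
        return "raise the end-of-year performance goal"
    if trend_slope < goal_slope:
        return "revise the instructional program"
    return "no change needed"

# Hypothetical data: the four most recent scores all fall below the goal line
print(four_point_rule([10, 11, 12, 12, 13], [8.0, 11.5, 13.0, 14.5, 16.0]))
# -> revise the instructional program
```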
Consecutive Data Point Analysis
In Exhibit 11, the most recent four scores are above the goal line. Therefore, the
student’s end-of-year performance goal needs to be adjusted. The teacher increases the desired rate (or goal) to boost the actual rate of student progress.
Exhibit 11. Four Consecutive Scores Above Goal Line
The point of the goal increase is notated on the graph as a dotted vertical line. This
allows teachers to visually note when the student’s goal or level of instruction was
changed. The teacher reevaluates the student’s graph in another seven to eight
data points.
In Exhibit 12, the most recent four scores are below the goal line. Therefore, the
teacher needs to change the student’s instructional program. The end-of-year
performance goal and goal line never decrease; they can only increase. The instructional program should be tailored to bring a student’s scores up so they match or
surpass the goal line.
The teacher draws a dotted vertical line when making an instructional change. This
allows teachers to visually note when changes to the student’s instructional
program were made. The teacher reevaluates the student’s graph in another seven
to eight data points to determine whether the change was effective.
Exhibit 12. Four Consecutive Scores Below Goal Line
Trend Line Analysis
In Exhibit 13, the trend line is steeper than the goal line. Therefore, the student’s
end-of-year performance goal needs to be adjusted. The teacher increases the
desired rate (or goal) to boost the actual rate of student progress. The new goal
line can be an extension of the trend line.
The point of the goal increase is notated on the graph as a dotted vertical line. This
allows teachers to visually note when the student’s goal was changed. The teacher
reevaluates the student’s graph in another seven to eight data points.
Exhibit 13. Trend Line Above Goal Line
In Exhibit 14, the trend line is flatter than the performance goal line. The teacher
needs to change the student’s instructional program. Again, the end-of-year performance goal and goal line are never decreased. A trend line below the goal line
indicates that student progress is inadequate to reach the end-of-year performance
goal. The instructional program should be tailored to bring a student’s scores up.
Exhibit 14. Trend Line Below Goal Line
The point of the instructional change is represented on the graph as a dotted
vertical line. This allows teachers to visually note when the student’s instructional
program was changed. The teacher reevaluates the student’s graph in another
seven to eight data points.
In Exhibit 15, the trend line matches the goal line, so no change is currently needed
for the student.
Exhibit 15. Trend Line Matches Goal Line
The teacher reevaluates the student’s graph in another seven to eight data points
to determine whether an end-of-year performance goal or instructional change
needs to take place.
Frequently Asked Questions
What is at the heart of RTI?
The purpose of RTI is to provide all students with the best opportunities to succeed
in school, identify students with learning or behavioral problems, and ensure that
they receive appropriate instruction and related supports. The goals of RTI are as
follows:
● Integrate all the resources to minimize risk for the long-term negative consequences associated with poor learning or behavioral outcomes
● Strengthen the process of appropriate disability identification
Should we use progress monitoring with all students?
Since screening tools tend to overidentify students as at risk for poor learning
outcomes, progress monitoring is used to verify the results of screening. This could
include students who are just above or just below the cut-off score. Once nonresponders are identified through the screening process and verified through progress monitoring, the focus shifts to those students identified as at risk for poor
learning outcomes. While most progress monitoring focuses on students in secondary or tertiary interventions, it might be necessary to monitor some students
participating in core instruction.
How do I know if kids are benefiting/responding to the interventions?
Progress monitoring is used to assess students’ performance over time, to quantify
student rates of improvement or responsiveness to instruction, to evaluate instructional effectiveness, and, for students who are least responsive to effective instruction, to formulate effective intervention programs. Progress monitoring data are
used to determine when a student has or has not responded to instruction at any
level of the prevention system. There are several approaches to interpreting data.
Some sites follow the Four Point Rule, in which educators make decisions regarding
interventions based on the most recent four student assessment scores, or data
points. Other sites make intervention decisions based on trend lines of student
assessments.
Can students move back and forth between levels of the prevention system?
Yes, students should move back and forth across the levels of the prevention
system based on their success (response) or difficulty (minimal response) at the
level where they are receiving intervention, i.e., according to their documented
progress based on the data. Also, students can receive intervention in one academic area at the secondary or tertiary level of the prevention system while receiving
instruction in another academic area in primary prevention.
Can the same tool be used for screening and progress monitoring?
Some tools can be used for both screening and progress monitoring. On the
Center’s Screening Tools Chart and Progress Monitoring Tools Chart you can see
that some tools appear on both charts. In these cases, they have been evaluated
under both sets of standards. Since the goals of screening and progress monitoring
are different, it is important to look at the ratings that a tool has received in both
charts in order to see if it fits your needs. If a tool is only listed on one chart, you
can contact the vendor to find out more information on their approach and the
tool’s evidence base for both forms of assessment.
What is the difference between progress monitoring assessments and
state assessments?
Standardized tests of achievement, or high-stakes tests, are summative assessments typically given once a year and provide an indication of student performance
relative to peers at the state or national level. These tests are assessments of
learning and measures of what students have learned over a period of time. The
assessments are typically used for accountability, resource allocation, and
measures of skill mastery. They are often time-consuming and are not valid for
individual student decision making. Conversely, progress monitoring assessments
are formative assessments that occur during instruction and are brief, efficient
measures of students’ performance on an ongoing basis. With formative assessment, student progress is systematically assessed to provide continuous feedback
to both the student and the teacher concerning learning successes and failures.
These assessments are used to inform instruction and can be used to identify
students who are not responsive to instruction or interventions (screening), to
understand rates of student improvement (progress monitoring), to make curriculum and instructional decisions, to evaluate program effectiveness, to proactively
allocate resources, and to compare the efficacy of instruction and interventions.
How frequently should I use progress monitoring?
Progress monitoring can be used anytime throughout the school year. Monitoring
should occur at regular intervals, but the frequency of the interval can vary (e.g.,
weekly, bi-weekly, or monthly). At a minimum, progress monitoring tools should be
administered monthly. While the recommended number of data points needed to
make a decision varies slightly by researcher, with Shinn, Good, and Stein (1989) suggesting the need for at least seven to 10 data points and Christ and Silberglitt (2007)
recommending between six and nine data points, as the number of data points
increases, the effects of measurement error on the trend line decrease. While it
may be ideal to monitor students frequently, the sensitivity of the tool that is
selected may dictate the frequency with which the tool can be administered. Some
tools are more sensitive than others, so they can be used more frequently. The
Progress Monitoring Tools Chart provides information on each tool.
Are there other names for progress monitoring?
Progress monitoring is a relatively new term. Other terms you may be more
familiar with are Curriculum-Based Measurement and Curriculum-Based Assessment. Whatever method you decide to use, it is most important that you ensure it
is a scientifically based practice that is supported by significant research.
How do you set an appropriate goal for a student?
The practice of goal setting should be a logical process where it is clear why and
how the goal was set, how long there is to attain the goal, and what the student is
expected to do when the goal is met. Goals can be set using a number of different
practices. These include benchmarks or target scores, rates of improvement based
on national norms, and rates of improvement based on individual or local norms.
For more information on goal setting, see the IRIS Center module Classroom
Assessment (Part 2): Evaluating Reading Progress at
http://iris.peabody.vanderbilt.edu/rpm/chalcycle.htm. See the section on
“Perspectives and Resources” for
specific guidance around goal setting.
What is CBM?
CBM, or Curriculum-Based Measurement, is an approach to measurement that is
used to screen students or to monitor student progress in mathematics, reading,
writing, and spelling. With CBM, teachers and schools can assess individual responsiveness to instruction. When a student proves unresponsive to the instructional
program, CBM signals the teacher/school to revise that program. Each CBM test is
an alternate form of equivalent difficulty. Each test samples the year-long curriculum in exactly the same way using prescriptive methods for constructing the tests.
In fact, CBM is usually conducted with “generic” tests, designed to mirror popular
curricula. CBM is highly prescriptive and standardized, which increases the reliability and validity of scores. CBM provides teachers with a standardized set of materials that has been researched to produce meaningful and accurate information.
CBM makes no assumptions about instructional hierarchy for determining measurement. In other words, CBM fits with any instructional approach. Also, CBM
incorporates automatic tests of retention and generalization. Therefore, the
teacher is constantly able to assess whether the student is retaining what was
taught earlier in the year.
On the Progress Monitoring Tools Chart, there are both General Outcome
Measures and Mastery Measures listed—what is the difference?
Mastery measures and General Outcome Measures (GOMs) are both forms of
formative assessments. Mastery measures determine the mastery of a series of
short-term instructional objectives. By focusing on a single skill, practitioners can
assess whether a student can learn target skills in isolation. For example, a student
may master multi-digit addition and then master multi-digit subtraction. Teachers
can use the information from the ongoing progress monitoring data to make
decisions about changing target skill instruction. To use mastery measures, teachers must determine a sensible instructional sequence and often design criterionreferenced testing procedures to match each step in that instructional sequence.
While teacher-made tests, which are often used as mastery measures, present
concerns given the unknown reliability and validity of these measures, there are a
number of mastery measure tools that have been reviewed for technical rigor. See
the Progress Monitoring Tools Chart at http://www.rti4success.org/progressMonitoringTools for examples. The hierarchy of skills used in mastery measurement is
logical, not empirical. This means that while it may seem logical to teach addition
first and then subtraction, there is no evidence base for the sequence. Because
mastery measures are based on mastering one skill before moving on to the next
skill, the assessment does not reflect maintenance or generalization. It becomes
impossible to know if, after teaching one skill, the student still remembers how to
perform the previously learned skill. In addition, how a student does on a mastery
measure assessment does not indicate how he or she will do on standardized tests
because the number of objectives mastered does not relate well to performance on
criterion measures. General outcome measures (GOMs) do not have the limitations
of mastery measures. They are indicators of general skill success and reflect overall
competence in the annual curriculum. They describe students’ growth and development over time or both their “current status” and their “rate of development.”
Common characteristics of GOMs are that they are simple and efficient, are sensitive to improvement, provide performance data to guide and inform a variety of
educational decisions, and provide national/local norms to allow for cross comparisons of data.
References
Bangert-Drowns, R. L., Kulik, J. A., & Kulik, C. C. (1991). Effects of frequent classroom testing. Journal of Educational Research, 85(2), 89–99.
Christ, T. J., & Silberglitt, B. (2007). Estimates of the standard error of measurement
for curriculum-based measures of oral reading fluency. School Psychology
Review, 36, 130–146.
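Foegen, A., Jiban, C., & Deno, S. (2007). Progress monitoring measures in mathematics: A review of the literature. The Journal of Special Education, 41(2), 121–139.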
Fuchs, L. S., & Fuchs, D. (2007). Using CBM for progress monitoring in reading. Retrieved
from http://www.studentprogress.org/summer_institute/2007/Intro%20reading/
IntroReading_Manual_2007.pdf
Fuchs, L. S., Fuchs, D., Compton, D. L., Bryant, J. D., Hamlett, C. L., & Seethaler, P. M.
(2007). Mathematics screening and progress monitoring at first grade: Implications for responsiveness to intervention. Exceptional Children, 73(3), 311–330.
Fuchs, L. S., Fuchs, D., Karns, K., Hamlett, C. L., & Katzaroff, M. (1999). Mathematics
performance assessment in the classroom: Effects on teacher planning and
student learning. American Educational Research Journal, 36, 609–646.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., & Stecker, P. M. (1991). Effects of curriculumbased measurement and consultation on teacher planning and student
achievement in mathematics operations. American Educational Research
Journal, 28, 617–641.
Individuals with Disabilities Education Improvement Act of 2004, 34 Code of
Federal Regulations §§ 300.307, 300.309, and 300.311.
National Center on Response to Intervention (March 2010). Essential Components
of RTI — A Closer Look at Response to Intervention. Washington, DC:
U.S. Department of Education, Office of Special Education Programs,
National Center on Response to Intervention.
Shinn, M. R., Good, R. H., & Stein, S. (1989). Summarizing trend in student achievement: A comparison of models. School Psychology Review, 18, 356–370.

Stecker, P. M., Fuchs, L. S., & Fuchs, D. (2005). Using curriculum-based measurement to improve student achievement: Review of research. Psychology in the Schools, 42, 795–819.
Wayman, M. M., Wallace, T., Wiley, H. I., Tichá, R., & Espin, C. A. (2007). Literature
synthesis on curriculum-based measurement in reading. The Journal of Special
Education, 41(2), 85–120.
Zumeta, R. O., Compton, D. L., & Fuchs, L. S. (2012). Using word identification
fluency to monitor first-grade reading development. Exceptional Children,
78(2), 201–220.
Appendix A: NCRTI Progress Monitoring—Glossary of Terms
Alternate forms
Alternate forms are parallel versions of the measure within a grade level, of comparable
difficulty (or, with Item Response Theory-based items, of comparable ability invariance).
Benchmark
A benchmark is an established level of performance on a test. A benchmark can be
used for screening if it predicts important outcomes in the future. Alternatively, a
benchmark can be used as a cut-score that designates proficiency or mastery of
skills.
Coefficient alpha
Coefficient alpha is a measure of the internal consistency of items within a measure. Values of alpha coefficients can range from 0 to 1.0. Alpha coefficients that
are closer to 1.0 indicate items are more likely to be measuring the same thing.
Criterion validity
Criterion validity indexes how well one measure correlates with another measure
purported to represent a similar underlying construct. It can be concurrent or
predictive.
Content validity
Content validity relies on expert judgment to assess how well items measure the
universe they are intended to measure.
Criterion measure
A criterion measure is the measure against which criterion validity is judged.
Cross-validation
Cross-validation is the process of validating the results of one study by performing
the same analysis with another sample under similar conditions.
Direct evidence
Direct evidence is a term used on the Center’s tools charts to refer to data from a
study based on the tool submitted for evaluation.
Disaggregated data
Disaggregated data is a term used on the Center’s tools charts to indicate that a
tool reports information separately for specific sub-populations (e.g., race, economic status, or special education status).
End-of-year benchmarks
End-of-year benchmarks specify, by grade level, the level of performance expected at the end of the school year.
General outcome measure (GOM)
A GOM is a measure that reflects overall competence in the annual curriculum.
Generalizability
Generalizability is the extent to which results generated on a sample are pertinent
to a larger population. A tool is considered more generalizable if studies have been
conducted on large representative samples.
Growth
Growth refers to the slope of improvement or the average weekly increase in
scores by grade level.
Indirect evidence
Indirect evidence is a term used on the Center’s tools charts to refer to data from
studies conducted using other tools that have similar test construction principles.
Inter-scorer agreement
Inter-scorer agreement is the extent to which raters judge items in the same way.
Kappa
Kappa is an index that compares the agreement against what might be expected by
chance. Kappa can be thought of as the chance-corrected proportional agreement.
Possible values range from +1 (perfect agreement) through 0 (no agreement above that expected by chance) to -1 (complete disagreement).
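For reference, a standard general formulation (from the measurement literature, not specific to the Center’s tools charts) is

kappa = (p_o - p_e) ÷ (1 - p_e)

where p_o is the observed proportion of agreement and p_e is the proportion of agreement expected by chance.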
Mastery measurement (MM)
MM indexes a student’s successive mastery of a hierarchy of objectives.
Norms
Norms are standards of test performance derived by administering the test to a
large representative sample of students. Individual student results are compared to
the established norms.
Pass/fail decisions
Pass/fail decisions are the metric in which mastery measurement scores are
reported.
Performance level score
A performance level score is the score (often the average or median of two or three scores) that indicates the student’s level of performance.
Predictive criterion validity
Predictive validity indexes how well a measure predicts future performance on a
highly valued outcome.
Progress monitoring
Progress monitoring is repeated measurement of academic performance used to
inform instruction of individual students in general and special education in Grades
K–8. It is conducted at least monthly to (a) estimate rates of improvement,
(b) identify students who are not demonstrating adequate progress and/or
(c) compare the efficacy of different forms of instruction to design more effective,
individualized instruction.
Rate of improvement
Rates of improvement specify the slopes of improvement or average weekly
increases, based on a line of best fit through the student’s scores.
Reliability
Reliability is the extent to which scores are accurate and consistent.
Response to Intervention (RTI)
RTI integrates assessment and intervention within a multi-level prevention system
to maximize student achievement and to reduce behavior problems. With RTI,
schools identify students at risk for poor learning outcomes, monitor student
progress, provide evidence-based interventions and adjust the intensity and nature
of those interventions depending on a student’s responsiveness, and identify
students with learning disabilities.
Sensitivity
Sensitivity is the extent to which a measure reveals improvement over time, when
improvement actually occurs.
Skill sequence
The skill sequence is the series of objectives that correspond to the instructional
hierarchy through which mastery is assessed.
Specificity
Specificity is the extent to which a screening measure accurately identifies students
not at risk for the outcome of interest.
Split-half reliability
Split-half reliability indexes a test’s internal reliability by correlating scores from
one half of items with scores on the other half of items.
Standard error of the mean (SEM)
The standard error of the mean (SEM) is the standard deviation of the sample
mean estimate of a population mean.
Technical adequacy
Technical adequacy implies that psychometric properties such as validity and
reliability meet strong standards.
Test-retest reliability
Test-retest reliability is the consistency with which an assessment tool indexes
student performance from one administration to the next.
Validity
Validity is the extent to which scores represent the underlying construct.
Appendix B: Handouts
Setting Goals With End-of-Year Benchmarking Handout (Gunnar)
This is Gunnar’s CBM Computation graph. He is a fourth-grade student. Use end-of-year benchmarks to
calculate Gunnar’s end-of-year goal.
[Graph: Gunnar’s CBM Computation scores, Digits Correct (0–50) across Weeks of Instruction (1–14)]
Follow these steps to determine the end-of-year goal:
1. Identify the appropriate grade-level benchmark from the chart below.
2. Mark the benchmark on the student graph with an X.
3. Draw a goal line from the baseline score to the X. The baseline score is the mean of the three most recent data points (7, 9, 10).
A worked sketch of this arithmetic follows the chart.
This chart provides the end-of-year benchmarks:

Grade          Reading                              Computation   Concepts and Applications
Kindergarten   40 sounds/minute (LSF)               —             —
Grade 1        60 words/minute (WIF)                20 digits     20 points
Grade 2        75 words/minute (PRF)                20 digits     20 points
Grade 3        100 words/minute (PRF)               30 digits     30 points
Grade 4        20 replacements/2.5 minutes (Maze)   40 digits     30 points
Grade 5        25 replacements/2.5 minutes (Maze)   30 digits     15 points
Grade 6        30 replacements/2.5 minutes (Maze)   35 digits     15 points

Note: These figures may change pending additional RTI research.
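The goal-setting arithmetic above is small enough to sketch in a few lines of code. The fragment below is illustrative only (it is not part of the original handout): the helper name is invented, the 40-digit benchmark comes from the Grade 4 computation column above, and the 22 weeks remaining is an assumed value for the example.

```python
# Illustrative sketch of end-of-year benchmark goal setting (not from the handout).
def benchmark_goal(recent_scores, benchmark):
    """Baseline is the mean of the most recent data points; the goal is the benchmark."""
    baseline = sum(recent_scores) / len(recent_scores)
    return baseline, benchmark

baseline, goal = benchmark_goal([7, 9, 10], benchmark=40)  # Gunnar, Grade 4 computation
weeks_remaining = 22                                       # ASSUMED for illustration
weekly_rise = (goal - baseline) / weeks_remaining          # slope of the goal line
print(f"baseline {baseline:.1f} digits, goal {goal}, goal line rises {weekly_rise:.2f}/week")
```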
Setting Goals With National Norms Handout (Jane)
This is Jane’s graph. Jane is a second-grade student. Her progress for CBM Computation is shown in the
graph below. Use national norms to calculate Jane’s goal at the end of the year.
[Graph: Jane’s CBM Computation scores, Digits Correct (0–50) across Weeks of Instruction (1–20)]
Data points in order (12, 10, 12)
Follow these steps for using national norms for weekly rate of improvement:
1. Calculate the average of the student’s first three scores (baseline) (scores: 12, 10, 12).
2. Find the appropriate norm in the table.
3. Multiply the norm by the number of weeks left in the year.
4. Add the result to the baseline.
5. Mark the goal on the student graph with an X.
6. Draw a goal line from the baseline to the X.
A worked sketch of this arithmetic follows the note below.
This chart provides the national norms for weekly rate of improvement (slope):
Note: These figures may change pending additional RTI research.
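The sketch below illustrates only the arithmetic of steps 1–4. The weekly norm of 0.30 and the 17 weeks remaining are assumed placeholder values for illustration; use the actual values from the norms chart.

```python
# Illustrative sketch of goal setting with a national norm (placeholder values).
baseline_scores = [12, 10, 12]   # Jane's first three scores (step 1)
weekly_norm = 0.30               # ASSUMED placeholder; take the real value from the norms chart
weeks_left = 17                  # ASSUMED; weeks remaining in the school year

baseline = sum(baseline_scores) / len(baseline_scores)  # step 1
goal = baseline + weekly_norm * weeks_left              # steps 3 and 4
print(f"baseline {baseline:.1f}, end-of-year goal {goal:.1f} digits correct")
```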
Setting Goals With Intra-Individual Framework Handout (Cecelia)
This is Cecelia’s graph. Use the intra-individual framework to calculate Cecelia’s end-of-year goal. Steps for
calculating the goal can be found below the graph.
Follow these steps for the intra-individual framework:
1. Identify the weekly rate of improvement (slope) using at least eight data points. (Slope = 1.0)
2. Multiply the slope by 1.5.
3. Multiply (slope × 1.5) by the number of weeks until the end of the year (12 remaining weeks).
4. Add the result to the student’s baseline score. The baseline score is the mean of the three most recent data points (15, 18, 20).
5. Mark the goal on the student graph with an X.
6. Draw a goal line from the baseline to the X.
A worked sketch of this arithmetic follows.
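The fragment below works through the computational steps with the values given in the handout; the code itself is illustrative, not part of the original.

```python
# Intra-individual framework, with the handout's values.
slope = 1.0                          # step 1: weekly rate of improvement
baseline = sum([15, 18, 20]) / 3     # step 4: mean of the three most recent points
weeks_remaining = 12                 # step 3: weeks until the end of the year

goal = baseline + (slope * 1.5) * weeks_remaining  # steps 2-4
print(f"end-of-year goal: {goal:.1f}")             # 17.7 + 18.0 = 35.7
```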
Practicing the Tukey Method Handout
Below is a graph of student’s progress for primary prevention. Use the Tukey Method to draw the trend line
for these data points. The steps can be found below the graph.
[Graph: Words Read Correctly (0–100) across Weeks of Primary Prevention (1–14)]
Step 1: Divide the data points into three equal sections by
drawing two vertical lines. (If the points divide unevenly,
then group them approximately.)
Step 2: In the first and third sections, find the median data
point and median instructional week. Locate the place on
the graph where the two values intersect and mark that spot
with an X.
Step 3: Draw a line through the two Xs and extend the line
to the margins of the graph. This represents the trend line
or line of improvement.
Practicing the Tukey Method and Calculating Slope Handout
Below is a graph of a student’s progress across nine weeks of primary prevention. Use the Tukey Method to
draw the trend line and calculate the slope.
[Graph: Words Read Correctly (0–100) across Weeks of Primary Prevention (1–14)]
Data points in order (20, 19, 20, 24, 25, 28, 40, 41, 40)
Step 1: Divide the data points into three equal sections
by drawing two vertical lines. (If the points divide
unevenly, then group them approximately.)
Step 2: In the first and third sections, find the median
data point and median instructional week. Locate the
place on the graph where the two values intersect and
mark that spot with an X.
Calculating Slope
Slope = (third median point − first median point) ÷ (number of weeks of instruction)
Step 3: Draw a line through the two Xs and extend that line to the margins of the graph. This represents the trend line or line of improvement.
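The median-finding and division above can be scripted in a few lines. This fragment is illustrative (the helper name is invented); it splits the scores into approximate thirds and applies the manual’s slope formula, dividing by the number of weeks of instruction.

```python
from statistics import median

def tukey_slope(scores):
    """Tukey method slope: (third-section median - first-section median) / weeks."""
    third = len(scores) // 3                 # approximate thirds, as in Step 1
    first_median = median(scores[:third])    # Step 2, first section
    third_median = median(scores[-third:])   # Step 2, third section
    return (third_median - first_median) / len(scores)

scores = [20, 19, 20, 24, 25, 28, 40, 41, 40]  # data points from this handout
print(round(tukey_slope(scores), 2))           # (40 - 20) / 9 = 2.22
```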
Calculating Slope and Determining Responsiveness in Primary
Prevention Handout (Arthur)
This is Arthur’s CBM Computation graph. He is a second-grade student. Calculate Arthur’s slope and use the
chart below to determine his responsiveness to primary prevention.
[Graph: Arthur’s CBM Computation scores, Problems Correct in 3 Minutes (0–25) across Weeks of Instruction (1–14)]
Data points in order (8, 5, 7, 9, 8, 6, 8, 7, 6)
This chart provides the slope cut-offs for students in primary prevention. Students above the cut-off are
responsive to primary prevention. Students below the cut-off are unresponsive to primary prevention.
What about Arthur?
Grade          Reading Slope   Math Computation Slope   Math Concepts and Applications Slope
Kindergarten   1.0 (LSF)       —                        —
Grade 1        1.8 (WIF)       0.35                     No data available
Grade 2        1.5 (PRF)       0.30                     0.40
Grade 3        1.0 (PRF)       0.30                     0.60
Grade 4        0.40 (Maze)     0.70                     0.70
Grade 5        0.40 (Maze)     0.70                     0.70
Grade 6        0.40 (Maze)     0.40                     0.70

Note: These figures may change pending additional RTI research.
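Combining the Tukey slope with the chart above gives a minimal responsiveness check for Arthur. The code is illustrative only; 0.30 is the Grade 2 math computation cut-off from the chart.

```python
from statistics import median

def tukey_slope(scores):
    third = len(scores) // 3
    return (median(scores[-third:]) - median(scores[:third])) / len(scores)

arthur = [8, 5, 7, 9, 8, 6, 8, 7, 6]   # data points from this handout
cutoff = 0.30                          # Grade 2 math computation slope cut-off
slope = tukey_slope(arthur)            # section medians are 7 and 7, so slope = 0.0
print("responsive" if slope >= cutoff else "unresponsive")  # unresponsive
```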
Calculating Slope and Determining Responsiveness in Secondary
Prevention Handout (David)
This is David’s CBM Passage Reading Fluency graph. He is a third-grade student. Calculate his slope and use
the chart below to determine David’s responsiveness to secondary prevention.
[Graph: David’s CBM Passage Reading Fluency scores, Words Read Correctly (0–100) across Weeks of Instruction (1–14)]
Data points in order (20, 24, 25, 32, 37, 39, 47, 54, 65)
This chart provides the slope and end level cut-offs for students in secondary prevention. Students above
the cut-off are responsive to secondary prevention. Students below the cut-off are unresponsive to
secondary prevention. What about David?
Grade          CBM Probe                     Slope   End Level
Kindergarten   Letter Sound Fluency          1.0     < 30
Grade 1        Word Identification Fluency   1.8     < 30
Grade 2        Passage Reading Fluency       0.50    < 60
Grade 3        Passage Reading Fluency       1.00    < 70
Grade 4        Maze Fluency                  0.40    < 25
Grade 5        Maze Fluency                  0.40    < 25
Grade 6        Maze Fluency                  0.40    < 25

Note: These figures may change pending additional RTI research.
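Secondary prevention adds an end-level criterion alongside slope. The handout does not spell out how the two criteria combine; the sketch below follows the reading in the Appendix C case study, which treats meeting either criterion as a positive response. The code is illustrative only.

```python
from statistics import median

def tukey_slope(scores):
    third = len(scores) // 3
    return (median(scores[-third:]) - median(scores[:third])) / len(scores)

david = [20, 24, 25, 32, 37, 39, 47, 54, 65]  # data points from this handout
slope = tukey_slope(david)                    # (54 - 24) / 9 = 3.33
meets_slope = slope >= 1.00                   # Grade 3 PRF slope cut-off
meets_end_level = david[-1] >= 70             # Grade 3 PRF end-level cut-off: ended at 65
print("responsive" if (meets_slope or meets_end_level) else "unresponsive")
```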
Calculating Slope and Determining Responsiveness to Secondary
Prevention Handout (Martha)
This is Martha’s CBM Concepts and Applications graph. She is a third-grade student. Calculate her slope and
use the chart below to determine Martha’s responsiveness to secondary prevention.
[Graph: Martha’s CBM Concepts and Applications scores, Points Correct (0–40) across Weeks of Instruction (1–14)]
Data points in order (8, 6, 8, 10, 8, 12, 11, 10, 12)
This chart provides the slope and end level cut-offs for students in secondary prevention. Students above
the cut-off are responsive to secondary prevention. Students below the cut-off are unresponsive to
secondary prevention. What about Martha?
           Computation            Concepts and Applications
Grade      Slope   < End Level    Slope               < End Level
Grade 1    0.35    < 20 digits    No data available   No data available
Grade 2    0.30    < 20 digits    0.40                < 20 points
Grade 3    0.30    < 20 digits    0.60                < 20 points
Grade 4    0.70    < 20 digits    0.70                < 20 points
Grade 5    0.70    < 20 digits    0.70                < 20 points
Grade 6    0.40    < 20 digits    0.70                < 20 points

Note: These figures may change pending additional RTI research.
Appendix C: RTI Case Study
RTI Case Study: Bear Lake School—Nina
Read the first part of this case study to get a background on how RTI is implemented at Bear Lake
School. Think about the questions included throughout the narrative. Then consider the example of
Nina, a second-grade student.
Bear Lake School
Bear Lake uses the widely researched three-tier RTI model. Nina is a second-grade student who is
struggling with math in the general education classroom. All the second-grade teachers use a strong
research-based math program. Implementation fidelity of the math program is very high. Last year, only
5 percent of second-grade students failed to achieve end-of-year CBM Computation benchmarks.
Primary Prevention
Bear Lake School uses CBM Computation as its RTI measure. All second-grade students are screened in
September. The cut-off for students suspected to be at risk for math failure on CBM Computation is 10.
QUESTION: Look at Exhibit 1. Based on these CBM Computation scores, which students
in Mr. Bingham’s class are suspected to be at risk for math failure?
Exhibit 1. CBM Computation Scores for Mr. Bingham’s Class

Student      CBM Score   Student    CBM Score
Marcie       13          Cheyenne   13
Anthony      12          Marianne   18
Deterrious   15          Kevin      19
Amy          18          Dax        13
Matthew      11          Ethan      6
Calliope     16          Colleen    21
Noah         25          Grace      14
Nina         8           Cyrus      20
ANSWER: Nina and Ethan scored below 10, so they are suspected to be at risk for math failure.
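The screening decision itself is a simple comparison against the cut-off. A minimal sketch follows (the dictionary holds only a few of the scores from Exhibit 1; the names and structure are illustrative, not from the case study):

```python
# Flag students scoring below the CBM Computation cut-off of 10.
CUTOFF = 10
scores = {"Marcie": 13, "Anthony": 12, "Matthew": 11, "Ethan": 6, "Nina": 8}
at_risk = sorted(name for name, score in scores.items() if score < CUTOFF)
print(at_risk)  # ['Ethan', 'Nina']
```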
At Bear Lake School, the students suspected to be at risk are monitored for seven weeks to check their response to primary prevention. During these seven weeks, the suspected at-risk students are administered the second-grade CBM Computation weekly.
Secondary Prevention
Bear Lake uses a standard tutoring program for secondary prevention. Tutors instruct students for 16 weeks in a small-group setting. Student groups work with a tutor three times a week for
30 minutes a session. Tutoring sessions focus on number concepts, basic math facts, addition and
subtraction of two-digit numbers, word-problem solving, and missing addends.
QUESTIONS: Who administers the tutoring sessions? What type of activities will ensure
the tutoring program is implemented correctly?
ANSWERS: Trained paraprofessionals serve as the tutors for the secondary intervention.
To make sure the tutoring program is implemented correctly, tutors should meet on a
weekly basis to troubleshoot tutoring problems and examine student CBM Computation
graphs.
During tutoring, Bear Lake staff measures at-risk students weekly using alternate forms of CBM
Computation. Student scores are graphed, and slopes are calculated at the end of secondary
prevention.
QUESTION: Look at Exhibit 2. What cut-off points should Bear Lake use during
secondary prevention?
Exhibit 2. Quantifying Response to Secondary Prevention

           Math Computation       Concepts and Applications
Grade      Slope   < End Level    Slope               < End Level
Grade 1    0.35    < 20 digits    No data available   No data available
Grade 2    0.30    < 20 digits    0.40                < 20 problems
Grade 3    0.30    < 20 digits    0.60                < 20 problems
Grade 4    0.70    < 20 digits    0.70                < 20 problems
Grade 5    0.70    < 20 digits    0.70                < 20 problems
Grade 6    0.40    < 20 digits    0.70                < 20 problems
ANSWER: For second-grade students assessed on CBM Computation, Bear Lake could use two cut-off points: Students whose slope improvement is greater than 0.30 or whose end-level score is at least 20 are responsive to secondary prevention tutoring. Students whose slope improvement is less than 0.30 and whose end-level score is below 20 are classified as unresponsive to secondary prevention tutoring.
Students who are unresponsive to secondary prevention may need tertiary prevention (due to their lack
of growth in response to a research-validated standard treatment to which the vast majority of students
can be expected to respond).
Tertiary Prevention
At Bear Lake, special education teachers and intervention specialists use progress monitoring to develop
appropriate goals and intensive, individualized instructional programs. Bear Lake’s tertiary prevention is
a flexible service: it permits exit and reentry as student needs change in relation to the demands of the
general education curriculum.
Nina
On the September screening, Nina’s average score across two CBM Computation forms was 8.0. As
discussed before, this score was below the cut-off for students suspected of being at risk for math
failure. Nina’s performance was monitored using CBM Computation for seven weeks to gauge response
to primary prevention.
QUESTION: Look at the graph in Exhibit 3. What is Nina’s CBM slope at the end of seven
weeks of primary prevention?
Exhibit 3. Nina’s CBM Computation Graph in Primary Prevention
[Graph: Nina’s Digits Correct in 3 Minutes (0–25) across Weeks of Instruction (1–14), with the two trend-line Xs marked]
Data points in order (8, 7, 8, 8, 9, 8, 8)
ANSWER: At the end of seven weeks, Nina’s CBM Computation slope was
(8 – 8) ÷ 7 = 0.0. This fell well below the 0.30 criterion for positive response.
QUESTION: So, what should happen to Nina?
ANSWER: With a slope of less than 0.30, Nina was deemed unresponsive to primary
prevention, so she should transition to secondary prevention tutoring.
Secondary prevention was conducted three times a week for 16 weeks. CBM Computation data were
collected weekly over the course of tutoring. Exhibit 4 shows Nina’s progress over the
16 weeks.
Exhibit 4. Nina’s CBM Computation Graph in Secondary Prevention
Data points in order (8, 7, 8, 9, 11, 13, 12, 12, 13, 12, 15, 14, 14, 15, 15, 16)
QUESTION: Based on this graph, what is Nina’s slope during secondary prevention? What
decisions should be made about Nina?
ANSWER: Nina’s slope over secondary prevention tutoring was (14 – 7) ÷ 15 = 0.46. This slope
exceeded the secondary prevention cut-off of 0.30 for positive response. Nina has been
responsive to secondary prevention and would return to primary prevention with weekly
progress monitoring to monitor her progress in primary prevention.
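A compact sketch of the decision rule applied to Nina at both stages, using the slopes computed above (0.30 is the Grade 2 computation cut-off; the function name is invented for illustration):

```python
def decision(slope, cutoff=0.30):
    return "responsive" if slope > cutoff else "unresponsive"

print(decision((8 - 8) / 7))    # primary prevention: 0.0 -> unresponsive
print(decision((14 - 7) / 15))  # secondary prevention: about 0.47 -> responsive
```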
Appendix D: Progress Monitoring Graph Template
Name: ________________________________________________ Goal: ___________________________________________________________
[Blank graph template: vertical axis scaled from 0 to 150 in increments of 5; horizontal axis labeled Weeks, numbered 1–36]
Notes/comments:
Appendix E: Additional Research on Progress Monitoring
Research on progress monitoring and CBM has been conducted for over 30 years. Below is a list of articles that can be used as a reference if you would like additional information on progress monitoring and CBM. The list is grouped into general information on progress monitoring, progress monitoring for math, progress monitoring for reading, progress monitoring for writing, and progress monitoring for English Language Learners. This research list was last updated November 2011.
Progress Monitoring
Allinder, R. M. (1996). When some is not better than none: Effects of differential
implementation of curriculum-based measurement. Exceptional Children, 62,
525–535.
Davis, L. B., & Fuchs, L. S. (1995). ‘Will CBM help me learn?’ Students’ perception
of the benefits of curriculum-based measurement. Education & Treatment of
Children, 18(1), 19–32.
Deno, S. L., Reschly, A. L., Lembke, E. S., Magnusson, D., Callender, S. A., Windram, H., & Stachel, N. (2009). Developing a school-wide progress-monitoring system. Psychology in the Schools, 46(1), 44–55.
Fuchs, L. S., & Deno, S. L. (1991). Paradigmatic distinctions between instructionally
relevant measurement models. Exceptional Children, 57, 488–501.
Fuchs, L. S., & Fuchs, D. (1986). Effects of systematic formative evaluation:
A meta-analysis. Exceptional Children, 53, 199–208.
Fuchs, L. S., & Fuchs, D. (1996). Combining performance assessment and curriculum-based measurement to strengthen instructional planning. Learning
Disabilities Research and Practice, 11, 183–192.
Fuchs, L. S., & Fuchs, D. (1998). Treatment validity: A unifying concept for
reconceptualizing the identification of learning disabilities. Learning Disabilities
Research & Practice, 13(4), 204–219.
Fuchs, L. S., & Fuchs, D. (2002). Curriculum-based measurement: Describing
competence, enhancing outcomes, evaluating treatment effects, and identifying treatment nonresponders. Peabody Journal of Education, 77, 64–84.
Fuchs, L. S., & Fuchs, D. (2004). Determining adequate yearly progress from kindergarten through grade 6 with curriculum-based measurement. Assessment for Effective Intervention, 29(4), 25–37.
Fuchs, L. S., Deno, S. L., & Mirkin, P. K. (1984). Effects of frequent curriculum-based
measurement of evaluation on pedagogy, student achievement, and student
awareness of learning. American Educational Research Journal, 21, 449–460.
Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (1989a). Effects of alternative goal structures
within curriculum-based measurement. Exceptional Children, 55, 429–438.
Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (1989b). Effects of instrumental use of
curriculum-based measurement to enhance instructional programs. Remedial
and Special Education, 10, 43–52.
Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (1993). Technological advances linking the
assessment of students’ academic proficiency to instructional planning. Journal
of Special Education Technology, 12, 49–62.
Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (1994). Strengthening the connection
between assessment and instructional planning with expert systems.
Exceptional Children, 61, 138–146.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative
evaluation of academic progress: How much growth can we expect? School
Psychology Review, 22, 27–48.
Fuchs, L. S., Fuchs, D., Karns, K., Hamlett, C. L., Dutka, S., & Katzaroff, M. (2000). The
importance of providing background information on the structure and scoring
of performance assessments. Applied Measurement in Education, 13, 83–121.
Fuchs, L. S., Fuchs, D., Karns, K., Hamlett, C. L., Katzaroff, M., & Dutka, S. (1997).
Effects of task-focused goals on low-achieving students with and without
learning disabilities. American Educational Research Journal, 34, 513–544.
Fuchs, L. S., Seethaler, P. M., Fuchs, D., & Hamlett, C. L. (2008). Using curriculum-based measurement to identify the 2% population. Journal of Disability Policy
Studies, 19(3), 153–161.
Germann, G., & Tindal, G. (1985). An application of curriculum-based assessment:
The use of direct and repeated measurement. Exceptional Children, 52,
244–265.
Hutton, J. B., Dubes, R., & Muir, S. (1992). Estimating trend progress in monitoring
data: A comparison of simple line-fitting methods. School Psychology Review,
21, 300–312.
Mathes, P. G., Fuchs, D., Roberts, P. H., & Fuchs, L. S. (1998). Preparing students
with special needs for reintegration: Curriculum-based measurement’s impact
on transenvironmental programming. Journal of Learning Disabilities, 31(6), 615–624.
Mellard, D., McKnight, M., & Woods, K. (2009). Response to Intervention Screening
and Progress-Monitoring Practices in 41 Local Schools. Learning Disabilities
Research & Practice, 24(4), 186–195.
Stecker, P. M., & Fuchs, L. S. (2000). Effecting superior achievement using curriculum-based measurement: The importance of individual progress monitoring.
Learning Disabilities Research and Practice, 15, 128–134.
Stecker, P. M., Fuchs, L. S., & Fuchs, D. (2005). Using curriculum-based measurement to improve student achievement: Review of research. Psychology in the
Schools, 42, 795–819.
Wesson, C., Deno, S. L., Mirkin, P. K., Sevcik, B., Skiba, R., King, P. P., Tindal, G. A., & Maruyama, G. (1988). A causal analysis of the relationships among ongoing measurement and evaluation, structure of instruction, and student achievement. The Journal of Special Education, 22, 330–343.
Progress Monitoring—Math
Bryant, D. P., Bryant, B. R., Gersten, R., Scammacca, N., & Chavez, M. M. (2008).
Mathematics intervention for first- and second-grade students with mathematics difficulties: The effects of tier 2 intervention delivered as booster lessons.
Remedial and Special Education, 29(1), 20–32.
Calhoon, M. B., & Fuchs, L. S. (2003). The Effects of peer-assisted learning strategies
and curriculum-based measurement on the mathematics performance of
secondary students with disabilities. Remedial and Special Education, 24(4),
235–245.
Chard, D. J., Clarke, B., Baker, S., Otterstedt, J., Braun, D., & Katz, R. (2005). Using
measures of number sense to screen for difficulties in mathematics: Preliminary findings. Assessment for Effective Intervention, 30(2), 3–14.
Christ, T. J., Johnson-Gros, K. N., & Hintze, J. M. (2005). An examination of alternate
assessment durations when assessing multiple-skill computational fluency: The
generalizability and dependability of curriculum-based outcomes within the
context of educational decisions. Psychology in the Schools, 42, 615–622.
Christ, T. J., & Vining, O. (2006). Curriculum-Based Measurement Procedures to
Develop Multiple-Skill Mathematics Computation Probes: Evaluation of Random and Stratified Stimulus-Set Arrangements. School Psychology Review,
35(3), 387–400.
Clarke, B., Baker, S., Smolkowski, K., & Chard, D. J. (2008). An analysis of early
numeracy curriculum-based measurement: Examining the role of growth in
student outcomes. Remedial and Special Education, 29(1), 46–57.
Clarke, B., & Shinn, M. R. (2004). A preliminary investigation into the identification
and development of early mathematics curriculum-based measurement.
School Psychology Review, 33, 234–248.
Espin, C. A., Deno, S. L., Maruyama, G., & Cohen, C. (April 1989). The basic academic
skills sample (BASS): An instrument for the screening and identification of children
at risk for failure in regular education classrooms. Paper presented at annual
meeting of American Educational Research Association, San Francisco, CA.
Foegen, A. (2000). Technical adequacy of general outcome measures for middle
school mathematics. Diagnostique, 25(3), 175–203.
Foegen, A. (2008). Progress monitoring in middle school mathematics: options and
issues. Remedial and Special Education, 29(4), 195–207.
Foegen, A., & Deno, S. L. (2001). Identifying growth indicators for low-achieving
students in middle school mathematics. Journal of Special Education, 35(1), 4–16.
Foegen, A., Jiban, C., & Deno, S. (2007). Progress monitoring measures in mathematics: A review of the literature. The Journal of Special Education, 41(2),
121–139.
Fuchs, L. S., Fuchs, D. L., Compton, D. L., Bryant, J. D., Hamlett, C. L., &
Seethaler, P. M. (2007). Mathematics screening and progress monitoring at first
grade: Implications for responsiveness to intervention. Exceptional Children,
73(3), 311–330.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., & Stecker, P. M. (1991). Effects of curriculum-based measurement and consultation on teacher planning and student
achievement in mathematics operations. American Educational Research
Journal, 28, 617–641.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Phillips, N. B., Karns, K., & Dutka, S. (1997).
Enhancing students’ helping behavior during peer-mediated instruction with
conceptual mathematical explanations. Elementary School Journal, 97,
223–250.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Thompson, A., Roberts, P. H., Kubek, P., &
Stecker, P. S. (1994). Technical features of a mathematics concepts and applications curriculum-based measurement system. Diagnostique, 19, 23–49.
Fuchs, L. S., Fuchs, D., Karns, K., Hamlett, C. L., & Katzaroff, M. (1999). Mathematics
performance assessment in the classroom: Effects on teacher planning and
student learning. American Educational Research Journal, 36, 609–646.
Fuchs, L. S., Fuchs, D., & Zumeta, R. O. (2008). A curricular-sampling approach to
progress monitoring: Mathematics concepts and applications. Assessment for
Effective Intervention, 33(4), 225–233.
Fuchs, L. S., Hamlett, C. L., & Fuchs, D. (1998). Monitoring basic skills progress:
Basic math computation (2nd ed.). Austin, TX: PRO-ED.
Fuchs, L. S., Hamlett, C. L., & Fuchs, D. (1998). Monitoring basic skills progress:
Basic math concepts and applications. Austin, TX: PRO-ED.
Helwig, R., Anderson, L., & Tindal, G. (2002). Using a concept-grounded, curriculum-based measure in mathematics to predict statewide test scores for middle
school students with LD. The Journal of Special Education, 36(2), 102–112.
Jitendra, A. K., Sczesniak, E., & Deatline-Buchman, A. (2005). An exploratory
validation of curriculum-based mathematical word-problem-solving tasks as
indicators of mathematics proficiency for third graders. School Psychology
Review, 34(3), 358–371.
Leh, J. M., Jitendra, A. K., Caskie, G. I. L., & Griffin, C. C. (2007). An evaluation
of curriculum-based measurement of mathematics word problem-solving
measures for monitoring third-grade students’ mathematics competence.
Assessment for Effective Intervention, 32(2), 90–99.
Lembke, E. S., Foegen, A., Whittaker, T. A., & Hampton, D. (2008). Establishing
technically adequate measures of progress in early numeracy. Assessment for
Effective Intervention, 33(4), 206–214.
Shapiro, E. S., Edwards, L., & Zigmond, N. (2005). Progress monitoring of mathematics among students with learning disabilities. Assessment for Effective
Intervention, 30(2), 15–32.
Stecker, P. M., & Fuchs, L. S. (2000). Effecting superior achievement using curriculum-based measurement: The importance of individual progress monitoring.
Learning Disabilities Research and Practice, 15, 128–134.
Thurber, R. S., Shinn, M. R., & Smolkowski, K. (2002). What is measured in mathematics tests? Construct validity of curriculum-based mathematics measures.
School Psychology Review, 31, 498–513.
VanDerHeyden, A. M., Witt, J. C., & Naquin, G. (2003). Development and validation
of a process for screening referrals to special education. School Psychology
Review, 32(2), 204–227.
VanDerHeyden, A. M., Witt, J. C., Naquin, G., & Noell, G. (2001). The reliability and
validity of curriculum-based measurement readiness probes for kindergarten
students. School Psychology Review, 30, 368–382.
Ysseldyke, J., & Bolt, D. M. (2007). Effect of Technology-Enhanced Continuous Progress
Monitoring on Math Achievement. School Psychology Review, 36(3), 453–467.
Ysseldyke, J., & Tardrew, S. (2007). Use of a Progress Monitoring System to Enable
Teachers to Differentiate Mathematics Instruction. Journal of Applied School
Psychology, 24(1), 1–28.
Progress Monitoring—Reading
Ball, C., & Gettinger, M. (2009). Monitoring Children’s Growth in Early Literacy
Skills: Effects of Feedback on Performance and Classroom Environments.
Education and Treatment of Children, 32(2), 189–212.
Brown-Chidsey, R., Johnson, P., & Fernstrom, R. (2005). Comparison of grade-level
controlled and literature-based maze CBM reading passages. School Psychology
Review, 34, 387–394.
Burns, M. K. (2007). Reading at the instructional level with children identified as
learning disabled: Potential implications for response-to-intervention. School
Psychology Quarterly, 22(3), 297–313.
Burns, M. K., Scholin, S. E., Kosciolek, S., & Livingston, J. (2010). Reliability of
Decision-Making Frameworks for Response to Intervention for Reading. Journal
of Psychoeducational Assessment, 28(2), 102–114.
Capizzi, A. M., & Fuchs, L. S. (2005). Effects of curriculum-based measurement with
and without diagnostic feedback on teacher planning. Remedial and Special
Education, 26, 159–174.
Christ, T. J. (2006). Short-Term Estimates of Growth Using Curriculum-Based Measurement of Oral Reading Fluency: Estimating Standard Error of the Slope to
Construct Confidence Intervals. School Psychology Review, 35(1), 128–133.
Deno, S. L., Fuchs, L. S., Marston, D., & Shinn, M. (2001). Using curriculum-based
measurement to establish growth standards for students with learning disabilities. School Psychology Review, 30(4), 507–524.
Fewster, S., & MacMillan, P. D. (2002). School-based evidence for the validity of
curriculum-based measurement of reading and writing. Remedial and Special
Education, 23, 149–156.
Fuchs, L. S., & Deno, S. L. (1994). Must instructionally useful performance assessment be based in the curriculum? Exceptional Children, 61, 15–24.
Fuchs, L. S., & Fuchs, D. (1992). Identifying a measure for monitoring student reading progress. School Psychology Review, 21, 45–58.
Fuchs, L. S., & Fuchs, D. (1999). Monitoring student progress toward the development of reading competence: A review of three forms of classroom-based
assessment. School Psychology Review, 28, 659–671.
Fuchs, L. S., Fuchs, D., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as
an indicator of reading competence: A theoretical, empirical, and historical
analysis. Scientific Studies of Reading, 5, 241–258.
Graney, S. B., & Shinn, M. R. (2005). Effects of reading curriculum-based measurement (R-CBM) teacher feedback in general education classrooms. School
Psychology Review, 34, 184–201.
Griffiths, A., VanDerHeyden, A. M., Skokut, M., & Lilles, E. (2009). Progress Monitoring in Oral Reading Fluency within the Context of RTI. School Psychology
Quarterly, 24(1), 13–23.
Hintze, J. M., & Christ, T. J. (2004). An examination of variability as a function of
passage variance in CBM progress monitoring. School Psychology Review, 33,
204–217.
Hintze, J. M., Shapiro, E. S., & Daly, E. (1998). An Investigation of the Effects of
Passage Difficulty Level on Outcomes of Oral Reading Fluency Progress Monitoring. School Psychology Review, 27(3), 433–445.
Hosp, M. K., & Fuchs, L. S. (2005). Using CBM as an indicator of decoding, word
reading, and comprehension: Do the relations change with grade? School
Psychology Review, 34, 9–26.
Jenkins, J. R. (2009). Measuring reading growth: New findings on progress monitoring. New Times for the Division of Learning Disabilities, 27(1), 1–2.
Jenkins, J. R., Graff, J. J., & Miglioretti, D. L. (2009). Estimating Reading Growth
Using Intermittent CBM Progress Monitoring. Exceptional Children, 75(2),
151–163.
King, R. P., Deno, S. L., Mirkin, P. K., & Wesson, C. (1983). The effects of training
teachers in the use of formative evaluation in reading: An experimental-control
comparison (Report No. 111). Minneapolis: University of Minnesota, Institute
for Research on Learning Disabilities.
Marston, D. (1988). The effectiveness of special education: A time-series analysis of
reading performance in regular and special education settings. The Journal of
Special Education, 21, 13–26.
Marston, D., Mirkin, P. K., & Deno, S. L. (1984). Curriculum-based measurement: An alternative to traditional screening, referral, and identification of learning disabled students. The Journal of Special Education, 18, 109–118.
Olinghouse, N. G., Lambert, W., & Compton, D. L. (2006). Monitoring Children with
Reading Disabilities’ Response to Phonics Intervention: Are There Differences
between Intervention Aligned and General Skill Progress Monitoring Assessments? Exceptional Children, 73(1), 90.
Parker, R. (1992). Estimating Trends in Progress Monitoring Data: A Comparison of
Simple Line-Fitting Methods. School Psychology Review, 21(2), 300–312.
Silberglitt, B., & Hintze, J. M. (2007). How much growth can we expect? A conditional analysis of R-CBM growth rates by level of performance. Exceptional Children, 74, 71–84.
Skiba, R., Wesson, C., & Deno, S. L. (1982). The effects of training teachers in the
use of formative evaluation in reading: An experimental-control comparison
(Report No. 88). Minneapolis: University of Minnesota, Institute for Research
on Learning Disabilities.
Stecker, P. M. (2006). Using curriculum-based measurement to monitor reading
progress in inclusive elementary settings. Reading & Writing Quarterly: Overcoming Learning Difficulties, 22, 91–97.
Wayman, M. M., Wallace, T., Wiley, H. I., Tichá, R., & Espin, C. A. (2007). Literature
Synthesis on Curriculum-Based Measurement in Reading. The Journal of Special
Education, 41(2), 85–120.
Zumeta, R. O., Compton, D. L., & Fuchs, L. S. (2012). Using word identification
fluency to monitor first-grade reading development. Exceptional Children,
78(2), 201–220.
Progress Monitoring—Writing
Deno, S. L., Marston, D., & Mirkin, P. (1982). Valid measurement procedures for continuous evaluation of written expression. Exceptional Children, 48, 368–371.
Espin, C. A., de La Paz, S., Scierka, B. J., & Roelofs, L. (2005). The relationship
between curriculum-based measures in writing and quality and completeness
of expository writing for middle school students. Journal of Special Education,
38, 208–217.
Espin, C. A., Scierka, B. J., Skare, S., & Halverson, N. (1999). Criterion-related validity
of curriculum-based measures in writing for secondary school students.
Reading and Writing Quarterly, 15, 5–27.
Espin, C. A., Shin, J., Deno, S. L., Skare, S., Robinson, S., & Benner, B. (2000). Identifying indicators of written proficiency for middle school students. Journal of
Special Education, 34, 140–153.
Espin, C. A., Weissenburger, J. W., & Benson, B. J. (2004). Assessing the writing
performance of students in special education. Exceptionality, 12, 55–66.
Fewster, S., & MacMillan, P. D. (2002). School-based evidence for the validity of
curriculum-based measurement of reading and writing. Remedial and Special
Education, 23, 149–156.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., & Allinder, R. M. (1991). Effects of expert
system advice within curriculum-based measurement on teacher planning and
student achievement in spelling. School Psychology Review, 20, 49–66.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., & Allinder, R. M. (1991). The contribution of
skills analysis to curriculum-based measurement in spelling. Exceptional
Children, 57, 443–452.
Gansle, K. A., Noell, G. H., VanDerHeyden, A. M., Naquin, G. M., & Slider, N. J.
(2002). Moving beyond total words written: The reliability, criterion validity,
and time cost of alternative measures for curriculum-based measurement in
writing. School Psychology Review, 31, 477–497.
Gansle, K. A., Noell, G. H., Vanderheyden, A. M., Slider, N. J., Hoffpauir, L. D., &
Whitmarsh, E. L. (2004). An examination of the criterion validity and sensitivity
to brief intervention of alternate curriculum-based measures of writing skill.
Psychology in the Schools, 41, 291–300.
Jewell, J., & Malecki, C. K. (2005). The utility of CBM written language indices: An investigation of production-dependent, production-independent, and accurate-production scores. School Psychology Review, 34(1), 27–44.
Lembke, E., Deno, S. L., & Hall, K. (2003). Identifying an indicator of growth in early
writing proficiency for elementary school students. Assessment for Effective
Intervention, 28(3-4), 23–35.
Malecki, C. K., & Jewell, J. (2003). Developmental, gender, and practical considerations in scoring curriculum-based measurement writing probes. Psychology in
the Schools, 40, 379–390.
Parker, R. I., Tindal, G., & Hasbrouck, J. (1991). Countable indices of writing quality:
Their suitability for screening-eligibility decisions. Exceptionality, 2, 1–17.
Parker, R. I., Tindal, G., & Hasbrouck, J. (1991). Progress monitoring with objective
measures of writing performance for students with mild disabilities.
Exceptional Children, 58, 61–73.
Tindal, G., & Hasbrouck, J. (1991). Analyzing student writing to develop instructional strategies. Learning Disabilities Research and Practice, 6, 237–245.
Tindal, G., & Parker, R. (1989). Assessment of written expression for students in
compensatory and special education programs. Journal of Special Education,
23, 169–183.
Tindal, G., & Parker, R. (1991). Identifying measures for evaluating written expression. Learning Disabilities Research and Practice, 6, 211–218.
Watkinson, J. T., & Lee, S. W. (1992). Curriculum-based measures of written expression for learning-disabled and nondisabled students. Psychology in the Schools,
29, 184–192.
Weissenburger, J. W., & Espin, C. A. (2005). Curriculum-based measures of writing
across grade levels. Journal of School Psychology, 43, 153–169.
Progress Monitoring—English Language Learners
Baker, D. L., Cummings, K. D., Good, R. H., & Smolkowski, K. (2007). Indicadores Dinámicos del Éxito en la Lectura (IDEL®): Summary of decision rules for intensive, strategic, and benchmark instructional recommendations in kindergarten through third grade (Technical Report No. 1). Eugene, OR: Dynamic Measurement Group.
Baker, S. K., & Good, R. H. (1995). Curriculum-based measurement of English
reading with bilingual Hispanic students: a validation study with second-grade
students. The School Psychology Review, 24(4), 561–578.
Betts, J., Bolt, S., Decker, D., Muyskens, P., & Marston, D. (2009). Examining the role
of time and language type in reading development for English language learners. Journal of School Psychology, 47(3), 143–166.
Dufour-Martel, C. (2006, February). IDAPEL: Indicateurs dynamiques d’habiletés
précoces en lecture. Paper presented at the Annual DIBELS Summit, Santa Ana
Pueblo, NM.
Graves, A. W., Gersten, R., & Haager, D. (2004). Literacy instruction in multiple-language first-grade classrooms: Linking student outcomes to observed instructional practice. Learning Disabilities Research & Practice, 19(4), 262–272.
Graves, A. W., Plasencia-Peinado, J., & Deno, S. L. (2005). Formatively evaluating the reading progress of first-grade English language learners in multiple-language classrooms. Remedial and Special Education, 26(4), 215–225.
Gunn, B., Biglan, A., & Smolkowski, K. (2000). The efficacy of supplemental instruction in decoding skills for Hispanic and non-Hispanic students in early elementary school. The Journal of Special Education, 34(2), 90–103.
Gyovai, L. K., Cartledge, G., Kourea, L., Yurick, A., & Gibson, L. (2009). Early reading intervention: Responding to the learning needs of young at-risk English Language Learners. Learning Disability Quarterly, 32(3), 143–162.
Haager, D. S., & Windmueller, M. P. (2001). Early reading intervention for English
language learners at-risk for learning disabilities: Student and teacher outcomes in an urban school. Learning Disability Quarterly, 24(4), 235–250.
Laija-Rodriguez, W., Ochoa, S. H., & Parker, R. (2006). The crosslinguistic role of
cognitive academic language proficiency on reading growth in Spanish and
English. Bilingual Research Journal, 30(1), 87–106.
McIntosh, A. S., Graves, A., & Gersten, R. (2007). The effects of response to intervention on literacy development in multiple-language settings. Learning
Disability Quarterly, 30(3), 197–212.
Ramirez, R. D. d. (2007). Cross-language relationship between Spanish and English
oral reading fluency among Spanish-speaking English language learners in
bilingual education classrooms. Psychology in the Schools, 44(8), 795–806.
Ramirez, R. D. d., & Shapiro, E. S. (2006). Curriculum-based measurement and the
evaluation of reading skills of Spanish-speaking English language learners in
bilingual education classrooms. The School Psychology Review, 35(3), 356–369.
Vanderwood, M. L., Linklater, D., & Healy, K. (2008). Predictive accuracy of nonsense word fluency for English language learners. School Psychology Review,
37(1), 5–17.
Yesil-Dagli, U. (2011). Predicting ELL students’ beginning first grade English oral
reading fluency from initial kindergarten vocabulary, letter naming, and phonological awareness skills. Early Childhood Research Quarterly, 26(1), 15–29.
Appendix F: Websites With Additional Information
National Center on Response to Intervention
The National Center on Response to Intervention’s mission is to provide technical assistance to states and districts and build the capacity of states to assist districts in
implementing proven models for RTI/EIS. The Center provides online resources to
assist states, districts, and schools in implementing response to intervention (RTI).
http://www.rti4success.org
Doing What Works
Doing What Works (DWW) is a website sponsored by the U.S. Department of Education. DWW provides an online library of resources that may help teachers, schools, districts, states, and technical assistance providers implement research-based instructional practice. Much of the DWW content is based on information
from IES’ What Works Clearinghouse (WWC). Doing What Works modules provide
summaries of research-based practices, explanations of key concepts, expert
interviews, school-based interviews, sample materials, tools, templates, and ideas
for moving forward.
http://dww.ed.gov
National High School Center
The National High School Center provides information and resources about many high school improvement topics, including dropout prevention, transitions, early warning systems, and high school literacy. The National High School Center has a variety of products that might be useful when implementing RTI in high schools, for example, a suite of products on early warning systems, including an implementation guide and tool, as well as a brief on tiered interventions in high school.
http://www.betterhighschools.org
RTI Action Network
The RTI Action Network provides resources to guide educators and families in the
large-scale implementation of RTI. The RTI Action Network provides a variety of
resources for RTI including “virtual visits” to schools implementing RTI, expert
interviews, online discussions, forms, checklists, and research briefs. The RTI Action
Network is a program of the National Center for Learning Disabilities, funded by
the Cisco Foundation.
http://rtinetwork.org/professional/leadership-network
IRIS Center
The IRIS Center for Training Enhancements has free online interactive resources
that translate research about the education of students with disabilities into
practice. They provide modules, case studies, activities, and more. These modules
and videos can be used for professional development.
http://iris.peabody.vanderbilt.edu
Center on Positive Behavioral Interventions and Supports (PBIS)
The Center on Positive Behavioral Interventions and Supports is an Office of Special
Education Programs (OSEP) Technical Assistance Center that provides resources on
implementing positive behavior and supports.
http://www.pbis.org
National Center on Response to Intervention
1000 Thomas Jefferson Street, NW
Washington, DC 20007
Phone: 877–784–4255
Fax: 202–403–6844
Web: http://www.rti4success.org