Research Based
Calculation and Decision Making
Caitlin S. Flinn, EdS, NCSP
Andrew E. McCrea, MS, NCSP
Matthew Ferchalk, EdS, NCSP
ASPP Conference 2010
Today’s Objectives
Explain what RoI is, why it is important, and how to compute it.
Establish that Simple Linear Regression should be the standardized procedure for calculating RoI.
Discuss how to use RoI within a problem solving/school improvement model.
RoI Definition
Algebraic term: Slope of a line
Vertical change over the horizontal change
Rise over run
m = (y2 - y1) / (x2 - x1)
Describes the steepness of a line (Gall & Gall, 2007)
RoI Definition
Finding a student’s RoI = finding the slope of a line
Using two data points on that line
Finding the line itself
Linear regression
Ordinary Least Squares
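The two routes the deck contrasts, rise-over-run through two points versus an ordinary least squares (OLS) fit through all points, can be sketched in Python (a minimal illustration with hypothetical data, not taken from the slides):

```python
# Two ways to get a slope: rise over run (2 points) vs. OLS (all points).

def two_point_slope(x1, y1, x2, y2):
    """Rise over run: m = (y2 - y1) / (x2 - x1)."""
    return (y2 - y1) / (x2 - x1)

def ols_slope(xs, ys):
    """Ordinary least squares slope over all points (what Excel's SLOPE returns)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

weeks  = [1, 2, 3, 4]      # hypothetical monitoring weeks
scores = [10, 14, 11, 19]  # hypothetical words correct per minute
print(two_point_slope(weeks[0], scores[0], weeks[-1], scores[-1]))  # 3.0, from 2 points only
print(ols_slope(weeks, scores))                                     # 2.4, from all 4 points
```

Note how the two-point slope ignores everything between the endpoints, which is exactly the weakness the deck returns to later.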
How does Rate of Improvement Fit into the Larger Context?
School Improvement/Comprehensive School Reform
Response to Intervention
Dual Discrepancy: Level & Growth
Rate of Improvement
School Improvement/Comprehensive School Reform
Grade level content expectations (ELA, math, science, social studies, etc.).
Work toward these expectations through classroom instruction.
Understand impact of instruction through assessment.
Assessment
Formative Assessments/High Stakes Tests
Does student have command of content expectation (standard)?
Universal Screening using CBM
Does student have basic skills appropriate for age/grade?
Assessment
Q : For students who are not proficient on grade level content standards, do they have the basic reading/writing/math skills necessary?
A : Look at Universal Screening; if above criteria, intervention geared toward content standard, if below criteria, intervention geared toward basic skill.
Progress Monitoring
Frequent measurement of knowledge to inform our understanding of the impact of instruction/intervention.
Measures of basic skills (CBM) have demonstrated reliability & validity
(see table at www.rti4success.org).
[Flowchart: Classroom Instruction (Content Expectations) → Measure Impact (Test) → Proficient! or Non-Proficient → Use Diagnostic Test to Differentiate: Content Need? or Basic Skill Need? → Intervention → Progress Monitor (if CBM is an appropriate measure, Progress Monitor with CBM) → Rate of Improvement]
So…
Rate of Improvement (RoI) is how we understand student growth (learning).
RoI is reliable and valid (psychometrically speaking) for use with CBM data.
RoI is best used when we have CBM data, most often when dealing with basic skills in reading/writing/math.
RoI can be applied to other data (like behavior) with confidence too!
RoI is not yet tested on typical Tier I formative classroom data.
RoI is usually applied to…
Tier One students in the early grades at risk for academic failure (low green kids).
Tier Two & Three Intervention Groups.
Special Education Students (and IEP goals)
Students with Behavior Plans
RoI Foundations
Deno, 1985
Curriculum-based measurement
General outcome measures
Short
Standardized
Repeatable
Sensitive to change
RoI Foundations
Fuchs & Fuchs, 1998
Hallmark components of Response to Intervention
Ongoing formative assessment
Identifying non-responding students
Treatment fidelity of instruction
Dual discrepancy model
One standard deviation from typically performing peers in level and rate
RoI Foundations
Ardoin & Christ, 2008
Slope for benchmarks (3x per year)
More growth from fall to winter than winter to spring
Might be helpful to use RoI for fall to winter
And a separate RoI for winter to spring
RoI Foundations
Fuchs, Fuchs, Walz, & Germann, 1993
Typical weekly growth rates
Needed growth: 1.5 to 2.0 times typical slope to close the gap in a reasonable amount of time
RoI Foundations
Deno, Fuchs, Marston, & Shin, 2001
Slope of frequently non-responsive children approximated slope of children already identified as having a specific learning disability
RoI & Statistics
Gall & Gall, 2007
10 data points are a minimum requirement for a reliable trendline
How does that affect the frequency of administering progress monitoring probes?
Importance of Graphs
Vogel, Dickson, & Lehman, 1990
Speeches that included visuals, especially in color, improved:
Immediate recall by 8.5%
Delayed recall (3 days) by 10.1%
Importance of Graphs
“Seeing is believing.”
Useful for communicating large amounts of information quickly
“A picture is worth a thousand words.”
Transcends language barriers (Karwowski, 2006)
Responsibility for accurate graphical representations of data
Skills Typically Graphed
Reading
Oral Reading Fluency
Word Use Fluency
Reading Comprehension
MAZE
Retell Fluency
Early Literacy Skills
Initial Sound Fluency
Letter Naming Fluency
Letter Sound Fluency
Phoneme Segmentation Fluency
Nonsense Word Fluency
Math
Math Computation
Math Facts
Early Numeracy
Oral Counting
Missing Number
Number Identification
Quantity Discrimination
Spelling
Written Expression
Behavior
Importance of RoI
Visual inspection of slope
Multiple interpretations
Instructional services
Need for explicit guidelines
Ongoing Research
RoI for instructional decisions is not a perfect process
Research is currently addressing sources of error:
Christ, 2006: standard error of measurement for slope
Ardoin & Christ, 2009: passage difficulty and variability
Jenkins, Graff, & Miglioretti, 2009: frequency of progress monitoring
Future Considerations
Questions yet to be empirically answered
What parameters of RoI indicate a lack of RtI?
How does standard error of measurement play into using RoI for instructional decision making?
How does RoI vary between standard protocol interventions?
How does this apply to non-English speaking populations?
How is RoI Calculated?
Which way is best?
Multiple Methods for
Calculating Growth
Visual Inspection Approaches
“Eye Ball” Approach
Split Middle Approach
Tukey Method
Quantitative Approaches
Last point minus First point Approach
Split Middle & Tukey “plus”
Linear Regression Approach
The Visual Inspection Approaches
Eye Ball Approach
[Figure: sample weekly scores of 8, 10, 7, 17, 14, 11, 19, and 14 across weeks 1-8, with a trendline drawn by visual estimate]
Split Middle Approach
Drawing "through the two points obtained from the median data values and the median days when the data are divided into two sections" (Shinn, Good, & Stein, 1989).
[Figure: the same weekly data divided into two halves, with the median point of each half marked, X(9) and X(14), and a line drawn through the two Xs]
Tukey Method
Divide scores into 3 equal groups
Divide groups with vertical lines
In the 1st and 3rd groups, find the median data point and median week and mark with an "X"
Draw a line between the two "Xs"
(Fuchs et al., 2005. Summer Institute: Student progress monitoring for math. http://www.studentprogress.org/library/training.asp)
Tukey Method
[Figure: the same weekly data divided into thirds, with the median points of the first and third groups marked, X(8) and X(14), and a line drawn through the two Xs]
The Quantitative Approaches
Last minus First
Iris Center: last probe score minus first probe score, divided by last administration period minus first administration period.
RoI = (Y2 - Y1) / (X2 - X1)
http://iris.peabody.vanderbilt.edu/resources.html
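As a minimal sketch, the Iris Center formula in Python. Note that the deck's worked example treats the sample data (first probe 8, last probe 14) as spanning 8 elapsed weeks:

```python
# "Last minus first" RoI: uses only the first and last probe scores.

def last_minus_first(first_score, last_score, weeks_elapsed):
    return (last_score - first_score) / weeks_elapsed

# The deck's sample run goes from a first probe of 8 to a last probe of 14
# over what it counts as 8 weeks of administration:
roi = last_minus_first(8, 14, 8)
print(roi)  # 0.75, matching the slide's (14-8)/(8-0) = 0.75
```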
Last minus First
[Figure: the same weekly data with a line drawn from the first probe (8) to the last (14): (14-8)/(8-0) = 0.75]
Split Middle "Plus"
[Figure: the split-middle line with its slope computed from the two medians: (14-9)/8 = 0.63]
Tukey Method "Plus"
[Figure: the Tukey line with its slope computed from the two medians: (14-8)/8 = 0.75]
Linear Regression
[Figure: the same weekly data with an ordinary least squares trendline, y = 1.1429x + 7.3571]
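The regression line on the example graph (y = 1.1429x + 7.3571) can be reproduced with a plain OLS fit over all eight weekly scores from the sample data, a sketch of what Excel does behind the scenes:

```python
# OLS fit of the deck's eight sample weekly scores.
weeks  = [1, 2, 3, 4, 5, 6, 7, 8]
scores = [8, 10, 7, 17, 14, 11, 19, 14]

n = len(weeks)
mean_x = sum(weeks) / n
mean_y = sum(scores) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, scores)) \
        / sum((x - mean_x) ** 2 for x in weeks)
intercept = mean_y - slope * mean_x
print(round(slope, 4), round(intercept, 4))  # 1.1429 7.3571
```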
RoI Consistency?
Any Method of Visual Inspection: ???
Last minus First: 0.75
Split Middle "Plus": 0.63
Tukey "Plus": 0.75
Linear Regression: 1.14
RoI Consistency?
If we are not all using the same model to compute RoI, we continue to have the same problems as past models: under one approach a student meets SLD criteria, but under a different approach the student does not.
Hypothetically, if the RoI cut-off were 0.65 or 0.95, the different approaches above would reach different conclusions about the same student.
RoI Consistency?
Last minus First (Iris Center) and Linear Regression (Shinn, etc.) are the only quantitative methods discussed in the CBM literature.
Study of 37 at-risk 2nd graders: difference in RoI between LmF and LR methods
Whole Year: 0.26 WCPM
Fall: 0.31 WCPM
Spring: 0.24 WCPM
McCrea (2010), unpublished data
Technical Adequacy
Without a consensus on how to compute RoI, we risk falling short of having technical adequacy within our model.
So, Which RoI Method is Best?
Literature shows that Linear Regression is Best Practice
"Student's daily test scores…were entered into a computer program…The data analysis program generated slopes of improvement for each level using an Ordinary-Least Squares procedure (Hayes, 1973) and the line of best fit. This procedure has been demonstrated to represent CBM achievement data validly within individual treatment phases (Marston, 1988; Shinn, Good, & Stein, in press; Stein, 1987)."
Shinn, Gleason, & Tindal, 1989
Growth (RoI) Research using Linear Regression
Christ, T. J. (2006). Short-term estimates of growth using curriculum-based measurement of oral reading fluency: Estimating standard error of the slope to construct confidence intervals. School Psychology Review, 35, 128-133.
Deno, S. L., Fuchs, L. S., Marston, D., & Shin, J. (2001). Using curriculum-based measurement to establish growth standards for students with learning disabilities. School Psychology Review, 30, 507-524.
Good, R. H. (1990). Forecasting accuracy of slope estimates for reading curriculum-based measurement: Empirical evidence. Behavioral Assessment, 12, 179-193.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 27-48.
Growth (RoI) Research using Linear Regression
Jenkins, J. R., Graff, J. J., & Miglioretti, D. L. (2009). Estimating reading growth using intermittent CBM progress monitoring. Exceptional Children, 75, 151-163.
Shinn, M. R., Gleason, M. M., & Tindal, G. (1989). Varying the difficulty of testing materials: Implications for curriculum-based measurement. The Journal of Special Education, 23, 223-233.
Shinn, M. R., Good, R. H., & Stein, S. (1989). Summarizing trend in student achievement: A comparison of methods. School Psychology Review, 18, 356-370.
So, Why Are There So Many Other RoI Models?
Ease of application
Focus on Yes/No to goal acquisition, not degree of growth
How many of us want to calculate OLS Linear Regression formulas (or even remember how)?
Pros and Cons of Each Approach
Eye Ball - Pros: easy; understandable. Cons: subjective.
Split Middle & Tukey - Pros: no software needed; compare to aim/goal line; yes/no to goal acquisition. Cons: no statistic provided, no idea of the degree of growth.
Pros and Cons of Each Approach
Last minus First - Pros: provides a growth statistic; easy to compute. Cons: does not consider all data points, only two.
Split Middle & Tukey "Plus" - Pros: considers all data points; easy to compute. Cons: no support for the "plus" part of the methodology.
Linear Regression - Pros: all data points; Best Practice. Cons: calculating the statistic.
An Easy and
Applicable Solution
Open Microsoft Excel
I love RoI
Programming Microsoft Excel to
Graph Rate of Improvement:
Fall to Winter
Setting Up Your Spreadsheet
In cell A1, type 3rd Grade ORF
In cell A2, type First Semester
In cell A3, type School Week
In cell A4, type Benchmark
In cell A5, type the Student’s Name
(Swiper Example)
Labeling School Weeks
Starting with cell B3, type numbers 1 through 18 going across row 3 (horizontal).
Numbers 1 through 18 represent the number of the school week.
You will end with week 18 in cell S3.
Labeling Dates
Note : You may choose to enter the date of that school week across row 2 to easily identify the school week.
Entering Benchmarks
(3rd Grade ORF)
In cell B4, type 77.
This is your fall benchmark.
In cell S4, type 92.
This is your winter benchmark.
Entering Student Data (Sample)
Enter the following numbers, going across row 5, under corresponding week numbers.
Week 1 – 41
Week 8 – 62
Week 9 – 63
Week 10 – 75
Week 11 – 64
Week 12 – 80
Week 13 – 83
Week 14 – 83
Week 15 – 56
Week 17 – 104
Week 18 – 74
*CAUTION*
If a student was not assessed during a certain week, leave that cell blank.
Do not enter a score of zero (0); it will be calculated into the trendline and interpreted as the student having read zero words correct per minute during that week.
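A small sketch (hypothetical scores, not from the slides) of why this matters: a missed probe entered as 0 drags the fitted slope down, while simply skipping the week, as Excel's SLOPE does for blank cells, leaves the trend intact:

```python
# Effect of a phantom zero vs. a skipped week on the OLS slope.

def ols_slope(points):
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

scores = {1: 41, 2: 44, 3: 47, 5: 53}        # hypothetical WCPM; week 4 was missed
skipped = ols_slope(list(scores.items()))                        # blank left blank
as_zero = ols_slope([(w, scores.get(w, 0)) for w in range(1, 6)])  # week 4 entered as 0
print(skipped)  # 3.0 -- steady growth of 3 WCPM/week
print(as_zero)  # negative slope: the phantom zero wrecks the trendline
```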
Graphing the Data
Highlight cells A4 and A5 through S4 and
S5
Follow Excel 2003 or Excel 2007 directions from here
Graphing the Data
Excel 2003
Across the top of your worksheet, click on "Insert"
In that drop-down menu, click on “Chart”
Excel 2007
Click Insert
Find the icon for Line
Click the arrow below Line
Graphing the Data
Excel 2003
A Chart Wizard window will appear
Excel 2007
6 graphics appear
Graphing the Data
Excel 2003
Choose “Line”
Choose “Line with markers…”
Excel 2007
Choose “Line with markers”
Graphing the Data
Excel 2003
“Data Range” tab
“Columns”
Excel 2007
Your graph appears
Graphing the Data
Excel 2003
“Chart Title”
“School Week” X Axis
"WPM" Y Axis
Excel 2007
Change your labels by right clicking on the graph
Graphing the Data
Excel 2003
Choose where you want your graph
Excel 2007
Your graph was automatically put into your data spreadsheet
Graphing the Trendline
Excel 2003 Excel 2007
Right click on any of the student data points
Graphing the Trendline
Excel 2003
Choose “Linear”
Excel 2007
Graphing the Trendline
Excel 2003 Excel 2007
Choose "Custom" and check the box next to "Display equation on chart"
Graphing the Trendline
Clicking on the equation highlights a box around it
Clicking on the box allows you to move it to a place where you can see it better
Graphing the Trendline
You can repeat the same procedure to have a trendline for the benchmark data points
Suggestion: label the trendline Expected RoI
Move this equation under the first
[Figure: individual student graph for Swiper over weeks 1-18, with the student trendline y = 2.5138x + 42.113 and the benchmark trendline y = 0.8824x + 76.118]
Individual Student Graph
The equation indicates the slope, or rate of improvement.
The number, or coefficient, before "x" is the average improvement, which in this case is the average number of words per minute per week gained by the student.
Individual Student Graph
The rate of improvement, or trendline, is calculated using linear regression (an ordinary least squares fit).
To add additional progress monitoring/benchmark scores once you’ve already created a graph, enter additional scores in Row 5 in the corresponding school week.
Individual Student Graph
The slope can change depending on which week (where) you put the benchmark scores on your chart.
Enter benchmark scores based on when your school administers their benchmark assessments for the most accurate depiction of expected student progress.
Why Graph only 18 Weeks at a Time?
…Finding Curvilinear Growth
Non-Educational Example of Curvilinear Growth
[Figure: weight loss chart - weight drops from 200 to 178 pounds over 10 weeks; 10 Week RoI = -2.5, First 5 Weeks RoI = -3.6, Second 5 Weeks RoI = -1.5]
Academic Example of
Curvilinear Growth
[Figure: 36-week oral reading fluency graph - BOY to MOY RoI = 1.60, MOY to EOY RoI = 1.19, BOY to EOY RoI = 1.35]
McCrea, 2010
Looked at Rate of Improvement in a small 2nd grade sample
Found differences in RoI when computed for fall and spring:
Average RoI for fall: 1.47 WCPM
Average RoI for spring: 1.21 WCPM
Ardoin & Christ, 2008
Slope for benchmarks (3x per year)
More growth from fall to winter than winter to spring
Christ, Yeo, & Silberglitt, in press
Growth across benchmarks (3X per year)
More growth from fall to winter than winter to spring
Disaggregated special education population
Graney, Missall, & Martinez,
2009
Growth across benchmarks (3X per year)
More growth from winter to spring than fall to winter with R-CBM.
Fien, Park, Smith, & Baker,
2010
Investigated relationship b/w NWF gains and ORF/Comprehension
Found greater NWF gains in fall than in spring.
DIBELS (6th ed.) ORF Change in Criteria
Grade: Fall to Winter / Winter to Spring
2nd: 24 / 22
3rd: 15 / 18
4th: 13 / 11
5th: 13 / 9
6th: 11 / 5
AIMSweb Norms Based on 50th Percentile
Grade: Fall to Winter / Winter to Spring
1st: 18 / 31
2nd: 25 / 17
3rd: 22 / 16
4th: 17 / 13
5th: 15 / 13
6th: 15 / 12
Speculation as to why Differences in RoI within the Year
Relaxed instruction after high-stakes testing in March/April; a PSSA effect.
Depressed BOY benchmark scores due to summer break; a rebound effect (Clemens).
Instructional variables could explain the differences between the Graney (2009) and Ardoin & Christ (2008, in press) results (Silberglitt).
Variability within progress monitoring probes (Ardoin & Christ, 2008) (Lent).
Calculating Needed RoI
Calculating Actual (Expected) RoI –
Benchmark
Calculating Actual RoI - Student
Calculating Needed RoI
In cell T3, type Needed RoI
Click on cell T5
In the fx line (at top of sheet), type this formula: =((S4-B5)/18)
Then hit enter
Your result should read: 2.83
This formula subtracts the student's beginning of year (BOY) score in B5 from the expected middle of year (MOY) benchmark in S4, then divides by 18 for the first 18 weeks (1st semester).
Calculating Actual (Expected) RoI - Benchmark
In cell U3, type Actual RoI
Click on cell U4
In the fx line (at top of sheet), type this formula: =SLOPE(B4:S4,B3:S3)
Then hit enter
Your result should read: 0.88
This formula considers the 18 weeks of benchmark data and provides an average growth or change per week.
Calculating Actual RoI - Student
Click on cell U5
In the fx line (at top of sheet), type this formula: =SLOPE(B5:S5,B3:S3)
Then hit enter
Your result should read: 2.51
This formula considers the 18 weeks of student data and provides an average growth or change per week.
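A quick cross-check of the =SLOPE result (a sketch, not part of the template): fitting the Swiper sample scores by OLS, with the missed weeks simply absent just as blank cells are ignored by SLOPE, reproduces the trendline equation shown on the graph (y = 2.5138x + 42.113):

```python
# Replicating Excel's SLOPE on the Swiper sample data (week: WCPM).
swiper = {1: 41, 8: 62, 9: 63, 10: 75, 11: 64, 12: 80, 13: 83,
          14: 83, 15: 56, 17: 104, 18: 74}

xs, ys = list(swiper), list(swiper.values())
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(round(slope, 4))  # 2.5138 -- the slope of the graph's trendline equation
```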
Using RoI within a Problem-Solving Model
Steps
1. Gather the data
2. Ground the data & set goals
3. Interpret the data
4. Figure out how to fit Best Practice into Public Education
Universal Screening
Progress Monitoring
Common Screenings in PA
DIBELS
AIMSweb
MBSP
4Sight
PSSA
Validated Progress
Monitoring Tools
DIBELS
AIMSweb
MBSP
www.studentprogress.org
1) To what will we compare our student growth data?
2) How will we set goals?
Multiple Ways to Look at Growth
Needed Growth
Expected Growth & Percent of Expected Growth
Fuchs et al. (1993) Table of Realistic and Ambitious Growth
Growth Toward Individual Goal*
*Best Practices in Setting Progress Monitoring Goals for Academic Skill Improvement (Shapiro, 2008)
Needed Growth
Difference between the student's BOY (or MOY) score and the benchmark score at MOY (or EOY).
Example: MOY ORF = 10, EOY benchmark is 40, 18 weeks of instruction: (40-10)/18 = 1.67. The student must gain 1.67 wcpm per week to make the EOY benchmark.
Expected Growth
Difference between two benchmarks.
Example: MOY benchmark is 20, EOY benchmark is 40; expected growth is (40-20)/18 weeks of instruction = 1.11 wcpm per week.
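The two worked examples above share one formula, sketched here in Python:

```python
# Weekly growth = (target score - starting score) / weeks of instruction.

def weekly_growth(start, target, weeks):
    return (target - start) / weeks

needed   = weekly_growth(10, 40, 18)  # student's MOY score 10 -> EOY benchmark 40
expected = weekly_growth(20, 40, 18)  # MOY benchmark 20 -> EOY benchmark 40
print(round(needed, 2), round(expected, 2))  # 1.67 1.11
```

The only difference between "needed" and "expected" is whose starting score goes in: the student's actual score, or the benchmark.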
Looking at Percent of Expected Growth
% of Expected Growth: Tier I / Tier II / Tier III
Greater than 150%: - / - / -
Between 110% & 150%: - / - / Possible LD
Between 95% & 110%: - / - / Likely LD
Between 80% & 95%: May Need More / May Need More / Likely LD
Below 80%: Needs More / Needs More / Likely LD
Tigard-Tualatin School District (www.ttsd.k12.or.us)
Oral Reading Fluency Adequate Response Table
Grade: Realistic Growth / Ambitious Growth
1st: 2.0 / 3.0
2nd: 1.5 / 2.0
3rd: 1.0 / 1.5
4th: 0.9 / 1.1
5th: 0.5 / 0.8
Fuchs, Fuchs, Hamlett, Walz, & Germann (1993)
Digit Fluency Adequate Response Table
Grade: Realistic Growth / Ambitious Growth
1st: 0.3 / 0.5
2nd: 0.3 / 0.5
3rd: 0.3 / 0.5
4th: 0.75 / 1.2
5th: 0.75 / 1.2
Fuchs, Fuchs, Hamlett, Walz, & Germann (1993)
From Where Should Benchmarks/Criteria Come?
There appears to be a theoretical convergence on the use of local criteria (what scores do our students need to have a high probability of proficiency?) when possible.
Objectives
Rationale for developing Local Benchmarks
Fun with Excel!
Fun with Algebra!
Local Benchmarks in Action
Rationale for Developing Local Benchmarks
Stage & Jacobson (2001): slope in Oral Reading Fluency reliably predicted performance on the Washington Assessment of Student Learning
McGlinchey & Hixson (2004): results support the use of CBM for determining which students are at risk for reading failure and which will fail state tests
Hintze & Silberglitt (2005): Oral Reading Fluency is highly connected to state test performance and is accurate at predicting those students who are likely to not meet proficiency
Shapiro et al. (2006): results of this study show that CBM can be a valuable source for identifying which students are likely to succeed or fail state tests
Ask Jason Pedersen!
Rationale for Developing Local Benchmarks
Identify and validate problems
Creating ideas for instructional grouping, focus, or intensity
Goal setting
Determining the focus and frequency of progress monitoring
Exiting students or moving students to different levels or tiers of intervention
Systems-level resource allocation and evaluation
(Stewart & Silberglitt, 2008)
Rationale for Developing Local Benchmarks
Silberglitt (2008): districts should "refrain from simply adopting a set of national target scores, as these scores may or may not be relevant to the high-stakes outcomes for which their students must be adequately prepared." (p. 1871)
"By linking local assessments to high-stakes tests, users are able to establish target scores on these local assessments, scores that divide students between those who are likely and those who are unlikely to achieve success on the high-stakes test." (p. 1870)
Rationale for Developing Local Benchmarks
Discrepancy across states in terms of the percentile ranks on a nationally administered assessment necessary to predict successful state test performance (Kingsbury et al., 2004)
"Using cut scores based on the probability of success on an upcoming state-mandated assessment might be a useful alternative to normative data for making these decisions." (Silberglitt & Hintze, 2005)
Can be used to separate students into groups in an RtII framework (Silberglitt, 2008)
Rationale for Developing Local Benchmarks
Useful in calculating discrepancy in level (Burns, 2008)
Represent the school population where the students are getting their education (Stewart & Silberglitt, 2008)
Teachers often use comparisons between students in their classroom; this helps to objectify those decisions (Stewart & Silberglitt, 2008)
Rationale for Developing Local Benchmarks
How accurately does it predict proficiency level in Third Grade?
[Figure: correct prediction percentage by assessment - ORF-Loc-1: 83%, ORF-Loc-2: 83%, ORF-Loc-3: 84%, ORF-DIB-1: 77%, ORF-DIB-2: 78%, ORF-DIB-3: 80%]
(Ferchalk, Richardson & Cogan-Ferchalk, 2010)
Rationale for Developing Local Benchmarks
Percentage of students in Third Grade predicted to be successful on the PSSA who were actually successful
[Figure: negative predictive power by assessment - ORF-Loc-1: 81%, ORF-Loc-2: 82%, ORF-Loc-3: 94%, ORF-DIB-1: 82%, ORF-DIB-2: 89%, ORF-DIB-3: 94%]
(Ferchalk, Richardson & Cogan-Ferchalk, 2010)
Rationale for Developing Local Benchmarks
Percentage of Third Grade students predicted to be unsuccessful who actually failed to meet proficiency on the PSSA
[Figure: positive predictive power by assessment - ORF-Loc-1: 89%, ORF-Loc-2: 85%, ORF-Loc-3: 91%, ORF-DIB-1: 61%, ORF-DIB-2: 65%, ORF-DIB-3: 64%]
(Ferchalk, Richardson & Cogan-Ferchalk, 2010)
Getting Started
Collect 3 or more years of student CBM and PSSA data
Match the data for each student, e.g.:
Name / ORF - Fall / ORF - Winter / ORF - Spring / PSSA
Harry Potter / 41 / 51 / 73 / 1080
Use the data extract and data farming features offered through the PSSA / DIBELS / AIMSweb websites
Download with student ID numbers
If you have a data warehouse…then use your special magic…lucky!
Getting Started
Reliable and valid data
Linear / highly correlated data
Gather data with integrity
Do not teach to the test
All students should be included in the norm group
Be cautious of cohort effects
(Stewart & Silberglitt, 2008)
Getting Started
PSSA Cut Scores: http://www.portal.state.pa.us/portal/server.pt/community/cut_scores/7441
Use the lower end score for Proficiency
Download the data set from: http://sites.google.com/site/rateofimprovement/
Wisdom from Teachers
(especially from our reading specialists Tina and Kristin!)
Children do not equal dots!
They are not numbers or data points!
Having said that…
Fun with Excel!
Fun with Algebra!
Matt Burns – University of Minnesota
X = (Y - a) / b
Y = Proficiency Score on the PSSA
a = Intercept
b = Slope
X = Local Benchmark Score
(Burns, 2008)
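Burns's rearrangement as a short Python sketch. The slope and intercept come from the deck's grade 3 scatterplot (y = 2.561x + 1107.7); the PSSA proficiency cut of 1235 is an illustrative assumption, not a value taken from the slides:

```python
# X = (Y - a) / b: solve the regression line for the ORF score that
# predicts a PSSA score right at the proficiency cut.

def local_benchmark(proficiency_cut, intercept, slope):
    return (proficiency_cut - intercept) / slope

# slope/intercept from the deck's scatterplot; 1235 is a hypothetical cut
x = local_benchmark(1235, 1107.7, 2.561)
print(round(x, 1))  # 49.7 -> a local benchmark of roughly 50 WCPM
```

With these numbers the result lands near 50 WCPM, which matches the grade 3 fall local benchmark shown later in the deck.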
PSSA Reading and DIBELS ORF Scatterplot
[Figure: scatterplot of student PSSA reading scores (600-2000) against DIBELS Oral Reading Fluency (0-200), with the proficient-PSSA line and the regression line y = 2.561x + 1107.7]
PSSA Reading and DIBELS ORF Scatterplot
[Figure: the same scatterplot with the DIBELS local benchmark line added]
More Fun with Algebra!
Predict a student's proficiency score by resolving the equation:
X = (Y - a) / b becomes Y = (X × b) + a
Y = Predicted PSSA Score
Student: 93 wcpm in the fall
Data sample: Slope = 2.56, Intercept = 1108
Y = (93 × 2.56) + 1108
Y = 1346
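The prediction in the other direction, sketched with the same sample slope and intercept from the deck:

```python
# Y = Xb + a: predicted PSSA score from a fall ORF score.

def predicted_pssa(wcpm, slope=2.56, intercept=1108):
    return wcpm * slope + intercept

print(round(predicted_pssa(93)))  # 1346 for a student reading 93 wcpm in the fall
```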
Use with Caution!
Local Benchmark Applications
Northern Lebanon School District Local Benchmarks
ORF Benchmarks: Fall / Winter / Spring
Grade 3 - DIBELS: 77 / 92 / 110; Local Benchmarks: 50 / 65 / 80
Grade 4 - DIBELS: 93 / 105 / 118; Local Benchmarks: 73 / 96 / 103
Grade 5 - DIBELS: 104 / 115 / 124; Local Benchmarks: 118 / 130 / 136
Grade 6 - DIBELS: 109 / 120 / 125; Local Benchmarks: 113 / 115 / 113
Local Benchmark Applications
For those that like the DIBELS graphs
[Figure: 36-week oral reading fluency graph (September through May) showing local benchmarks, WCPM, and rate of improvement]
Local Benchmark Applications
[Table: 17 sample students with DIBELS benchmark assessment scores for fall, winter, and spring and, at each benchmark, whether the local cut score marks them as Likely or Unlikely to meet proficiency on the PSSA]
Diagnostic Accuracy
Sensitivity: of all the students who failed the PSSA, what percentage were accurately predicted to fail based on their ORF score?
Specificity: of all the students who passed the PSSA, what percentage were accurately predicted to pass based on their ORF score?
Negative Predictive Power: percentage of students predicted to be successful on the PSSA who were actually successful
Positive Predictive Power: percentage of students predicted to be unsuccessful who actually failed to meet proficiency on the PSSA
(Silberglitt, 2008; Silberglitt & Hintze, 2005)
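The four indices boil down to ratios of the quadrant counts in the scatterplot, sketched here with hypothetical counts (a "positive" being a prediction of failing the PSSA, i.e., scoring below the local ORF benchmark):

```python
# Diagnostic accuracy from the four quadrant counts of the scatterplot.

def diagnostic_accuracy(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),  # of actual PSSA failures, % flagged
        "specificity": tn / (tn + fp),  # of actual PSSA passes, % cleared
        "ppv": tp / (tp + fp),          # of flagged students, % who really failed
        "npv": tn / (tn + fn),          # of cleared students, % who really passed
    }

# hypothetical counts for illustration
stats = diagnostic_accuracy(tp=20, fp=5, tn=60, fn=10)
print({k: round(v, 2) for k, v in stats.items()})
```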
PSSA Reading and DIBELS ORF Scatterplot
[Figure: the scatterplot divided into quadrants by the DIBELS local benchmark line and the proficient-PSSA line, labeling the true positives, false positives, true negatives, and false negatives]
Local Benchmarks - Method 2
Fun with SPSS!
Logistic Regression & ROC Curves
More accurate
Helps to balance Sensitivity, Specificity, and Negative & Positive Predictive Power
For more information see Best Practices in Using Technology for Data-Based Decision Making (Silberglitt, 2008)
If Local Criteria are Not an Option
Use norms that accompany the measure (DIBELS, AIMSweb, etc.).
Use national norms.
Making Decisions: Best Practice
Research has yet to establish a blueprint for "grounding" student RoI data.
At this point, teams should consider multiple comparisons when planning and making decisions.
Making Decisions: Lessons From the Field
When tracking on grade level, consider an RoI at 100% of expected growth as a minimum requirement, and an RoI at or above the needed RoI as optimal.
So 100% of expected and on par with needed become the limits of the range within which a student should be achieving.
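That field rule can be sketched as a small decision function; the category labels are illustrative, not from the slides:

```python
# Floor: at least 100% of expected RoI.  Target: at or above the needed RoI.

def growth_check(actual_roi, expected_roi, needed_roi):
    pct_expected = actual_roi / expected_roi
    if actual_roi >= needed_roi:
        return "optimal"        # on pace to reach the benchmark
    if pct_expected >= 1.0:
        return "minimum met"    # growing, but not closing the gap fast enough
    return "inadequate"         # below 100% of expected growth

# e.g. actual 1.9 vs. expected 1.29 (147%) but needed 3.78:
print(growth_check(actual_roi=1.9, expected_roi=1.29, needed_roi=3.78))  # minimum met
```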
Is there an easy way to do all of this?
[Spreadsheet: class-wide ORF template for 20 students plus a benchmark row, with weekly scores from 01/15/09 through 05/14/09 (weeks 1-18) and computed Needed RoI, Actual RoI, and % of Expected RoI columns]
* Needed RoI based on the difference between the week 1 score and the benchmark score for week 18, divided by 18 weeks
** Actual RoI based on linear regression of all data points
Benchmarks based on DIBELS goals; expected RoI at benchmark level taken from the Oral Reading Fluency Adequate Response Table (Fuchs, Fuchs, Hamlett, Walz, & Germann, 1993)
[Spreadsheet: worked sample - the benchmark rises from 68 (week 1, 1/14/2011) to 90 (week 18, 5/14/2011) for an expected RoI of 1.29; the student starts at 22 and reaches 56, giving a Needed RoI of 3.78, an Actual RoI of 1.89, and 147% of expected RoI]
Access to Spreadsheet
Templates
http://sites.google.com/site/rateofimprovement/home
Click on Charts and Graphs.
Update dates and benchmarks.
Enter names and benchmark/progress monitoring data.
What about Students not on Grade Level?
Determining Instructional Level
Independent / Instructional / Frustrational
Instructional level is often between the 40th or 50th percentile and the 25th percentile.
Frustrational level is below the 25th percentile.
AIMSweb: Survey Level Assessment (SLA).
Setting Goals off of Grade Level
100% of expected growth not enough.
Needed growth only gets to instructional level benchmark, not grade level.
Risk of not being ambitious enough.
Plenty of ideas, but limited research regarding Best Practice in goal setting off of grade level.
Possible Solution (A)
Weekly probe at instructional level, compared to the expected and needed growth rates at instructional level.
Ambitious goal: 200% of expected RoI
Oral Reading Fluency
[Spreadsheet: instructional-level monitoring example - students probed weekly from 01/15/10 through 05/14/10 (weeks 1-18) at the 2nd through 5th grade levels, with Needed RoI, Actual RoI, and % of Expected RoI computed for each; actual RoIs in the sample range from -1.45 to 6.27, and percents of expected RoI from -274% to 646%]
Possible Solution (B)
Weekly probe at instructional level for sensitive indicator of growth.
Monthly probes (give 3, not just 1) at grade level to compute RoI.
Goal based on grade level growth (more than 100% of expected).
What do we do when we do not get the growth we want?
When to make a change in instruction and intervention?
When to consider SLD?
When to make a change in instruction and intervention?
Enough data points (6 to 10)?
Less than 100% of expected growth.
Not on track to make benchmark (needed growth).
Not on track to reach individual goal.
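The decision rules above can be sketched as a small helper. This is a hypothetical illustration of the slide's criteria, not a prescribed formula:

```python
def consider_change(n_points, actual_roi, expected_roi, current_score,
                    benchmark, weeks_left):
    """Hypothetical helper mirroring the criteria for changing
    instruction/intervention."""
    if n_points < 6:
        return False  # fewer than 6-10 data points: keep collecting
    below_expected = actual_roi < expected_roi          # < 100% of expected growth
    needed_roi = (benchmark - current_score) / max(weeks_left, 1)
    off_track = actual_roi < needed_roi                 # not on track for benchmark
    return below_expected or off_track
```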
When to consider SLD?
Continued inadequate response despite:
Fidelity with Tier I instruction and Tier II/III intervention.
Multiple attempts at intervention.
Individualized Problem-Solving approach.
Evidence of dual discrepancy…
[Example spreadsheet (through 05/14/09, week 18): each student's end-of-year
scores with Needed RoI*, Actual RoI**, and % of Expected RoI.]
Dual Discrepancy?
[Chart sorting each student's response: "Keep On Truckin" vs. "BIG PROBLEMS".]
Growth Criteria
>125%
85% - 125%
<85%
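As a sketch, those three bands map onto a tiny classifier. The band labels are my own shorthand for the slide's categories, not terms from the presentation:

```python
def growth_category(pct_of_expected):
    """Map % of expected RoI onto the three decision bands above."""
    if pct_of_expected > 125:
        return "exceeding"   # > 125% of expected growth
    if pct_of_expected >= 85:
        return "on track"    # 85% - 125%
    return "inadequate"      # < 85%: consider an instructional change
```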
Three Levels of Examples
Whole Class
Small Group
Individual Student
- Academic Data
- Behavior Data
Whole Class Example
Computation
[Class chart legend: individual students plotted against the 50th- and
25th-percentile growth targets.]
[Whole-class spreadsheet: weekly math computation probes (01/15/10-05/14/10,
weeks 1-18) for 19 students, with Needed RoI*, Actual RoI**, and % of
Expected RoI per student.]
* Needed RoI based on the difference between the week 1 score and the benchmark score for week 18, divided by 18 weeks
** Actual RoI based on linear regression of all data points
Percentiles based on AIMSweb Growth Tables
Expected RoI at 50th Percentile
Expected RoI at 25th Percentile
Digit Fluency Adequate Response Table

Grade        Realistic Growth    Ambitious Growth
1st Grade    0.3                 0.5
2nd Grade    0.3                 0.5
3rd Grade    0.3                 0.5
4th Grade    0.75                1.2
5th Grade    0.75                1.2

(Fuchs, Fuchs, Hamlett, Walz, & Germann, 1993)
3rd Grade Math Whole Class
Who's responding?
Effective math instruction?
Who needs more?
N = 19
4 students with > 100% of expected growth
15 students with < 100% of expected growth
9 students with negative growth
Small Group Example
Oral Reading Fluency
[Small-group spreadsheet: weekly ORF probes (09/11/09-01/15/10, weeks 1-18)
for five students plus the benchmark, with Needed RoI*, Actual RoI**, and
% of Expected RoI per student (106%, 196%, 41%, 135%, 186%).]
* Needed RoI based on the difference between the week 1 score and the benchmark score for week 18, divided by 18 weeks
** Actual RoI based on linear regression of all data points
Benchmarks based on DIBELS goals
Expected RoI at Benchmark Level
Oral Reading Fluency Adequate Response Table

Grade        Realistic Growth    Ambitious Growth
1st Grade    2.0                 3.0
2nd Grade    1.5                 2.0
3rd Grade    1.0                 1.5
4th Grade    0.9                 1.1
5th Grade    0.5                 0.8

(Fuchs, Fuchs, Hamlett, Walz, & Germann, 1993)
Intervention Group
Intervention working for how many?
Can we assume fidelity of intervention based on results?
Who needs more?
Individual Kid Example
2nd Grade Reading Progress
[Line graph of weekly ORF scores (09/12/08-05/01/09), y-axis 0-100 WCPM:
benchmark trendline y = 1.5333x + 42.8 vs. student trendline y = 0.9903x + 36.873.]
Individual Kid
Making growth?
How much? (65% of expected growth.)
Atypical growth across the year (last 3 data points).
Continue? Make a change? Need more data?
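The "65% of expected growth" figure comes straight from the two fitted lines on the chart: the student's slope divided by the benchmark slope. A one-line check using the equations shown on the graph:

```python
benchmark_slope = 1.5333   # benchmark trendline: y = 1.5333x + 42.8
student_slope = 0.9903     # student trendline:   y = 0.9903x + 36.873

pct_of_expected = student_slope / benchmark_slope * 100  # about 65%
```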
RoI and Behavior?
Percent of Time Engaged in Appropriate Behavior
[Phase chart across 18 sessions, y-axis 0-100%, with an OLS trendline fit
separately to Baseline, Condition 1, and Condition 2 (the fitted lines:
y = 3.9x + 19.8, y = 7.2143x - 1.5, and y = 2x + 22).]
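A behavior chart applies the same regression idea phase by phase: fit a separate OLS line to the sessions within each condition. A sketch with hypothetical session data (the percentages below are invented, not the chart's):

```python
def ols_line(xs, ys):
    """Return (slope, intercept) of the ordinary least squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

phases = {  # hypothetical % of time engaged, by session number
    "Baseline":    ([1, 2, 3, 4],    [22, 28, 25, 31]),
    "Condition 1": ([5, 6, 7, 8],    [34, 40, 49, 55]),
    "Condition 2": ([9, 10, 11, 12], [58, 61, 66, 68]),
}
slopes = {name: ols_line(xs, ys)[0] for name, (xs, ys) in phases.items()}
```

Comparing the phase slopes shows whether a condition changed the trend, not just the level, of the behavior.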
Things to Consider
Who is At-Risk and needs progress monitoring?
Who will collect, score, enter the data?
Who will monitor student growth, when, and how often?
What changes should be made to instruction & intervention?
What about monitoring off of grade level?
Who is At-Risk and needs progress monitoring?
Below level on universal screening
Entering 4th Grade Example

             DORF (110)   ISIP TRWM (55)   4Sight (1235)   PSSA (1235)
Student A       115            58              1255            1232
Student B        85            48              1216            1126
Student C        72            35              1056            1048
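Flagging at-risk students from a screening table like this is a straightforward comparison against each measure's cut score (the values in parentheses). A sketch using the table's numbers:

```python
# Cut scores from the table header; a student is flagged on any measure
# where the score falls below the cut.
cuts = {"DORF": 110, "ISIP TRWM": 55, "4Sight": 1235, "PSSA": 1235}
students = {
    "A": {"DORF": 115, "ISIP TRWM": 58, "4Sight": 1255, "PSSA": 1232},
    "B": {"DORF": 85,  "ISIP TRWM": 48, "4Sight": 1216, "PSSA": 1126},
    "C": {"DORF": 72,  "ISIP TRWM": 35, "4Sight": 1056, "PSSA": 1048},
}

at_risk = {name: [m for m, v in s.items() if v < cuts[m]]
           for name, s in students.items()}
```

Student A misses only the PSSA cut, while Students B and C fall below every cut score, so B and C are the clear candidates for progress monitoring.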
Who will collect, score, and enter the data?
Using MBSP for math, teachers can administer probes to whole class.
DORF probes must be administered one-on-one, and creativity pays off (train and use art, music, library, etc. specialists).
Schedule for progress monitoring math and reading every-other week.
[Sample grid: each grade (1st-5th) has its reading and math probes split
across Week 1 and Week 2 of the every-other-week schedule.]
Who will monitor student growth, when, and how often?
Best Practices in Data-Analysis Teaming (Kovaleski & Pedersen, 2008)
Chambersburg Area School District Elementary Response to Intervention Manual (McCrea et al., 2008)
Derry Township School District Response to Intervention Model
(http://www.hershey.k12.pa.us/56039310111408/lib/56039310111408/_files/Microsoft_Word_-_Response_to_Intervention_Overview_of_Hershey_Elementary_Model.pdf)
What changes should be made to instruction & intervention?
Ensure treatment fidelity!
Increase instructional time (active and engaged)
Decrease group size
Gather additional, diagnostic, information
Change the intervention
Final Exam…
Student Data: 27, 29, 26, 34, 27, 32, 39, 45, 43, 49, 51, --, --, 56, 51, 52, --, 57.
Benchmark Data: BOY = 40, MOY = 68.
What is student’s RoI?
How does RoI compare to expected and needed RoIs?
What steps would your team take next?
What if Benchmarks were 68 and 90 instead?
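One way to work the exam (a sketch, not an official answer key): skip the missed weeks, fit the OLS slope to the probes that exist, and compare it to the needed rate implied by the MOY benchmark.

```python
# Weeks 12, 13, and 17 were missed ("--" in the exam); None marks the gaps.
scores = [27, 29, 26, 34, 27, 32, 39, 45, 43, 49, 51,
          None, None, 56, 51, 52, None, 57]

pairs = [(week, s) for week, s in enumerate(scores, start=1) if s is not None]
xs = [w for w, _ in pairs]
ys = [s for _, s in pairs]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
actual_roi = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
              / sum((x - mx) ** 2 for x in xs))   # about 1.98 WCPM/week

# Needed RoI toward the MOY benchmark of 68, starting from the first score.
needed_roi = (68 - scores[0]) / 18                # about 2.28 WCPM/week
```

With benchmarks of 68 and 90 instead, only `needed_roi` changes; the fitted slope stays the same, so the student looks further off track without having grown any slower.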
Questions? & Comments!
The RoI Web Site
http://sites.google.com/site/rateofimprovement/
Download PowerPoints, handouts, Excel graphs, charts, articles, etc.
Caitlin Flinn
CaitlinFlinn@hotmail.com
Andy McCrea
andymccrea70@gmail.com
Matt Ferchalk
mferchalk@norleb.k12.pa.us
Resources
www.interventioncentral.com
www.aimsweb.com
http://dibels.uoregon.edu
www.nasponline.org
Resources
www.fcrr.org
Florida Center for Reading Research
http://ies.ed.gov/ncee/wwc//
What Works Clearinghouse
http://www.rti4success.org
National Center on RtI
References
Ardoin, S. P., & Christ, T. J. (2009). Curriculum-based measurement of oral reading: Standard errors associated with progress monitoring outcomes from DIBELS, AIMSweb, and an experimental passage set. School Psychology Review, 38(2), 266-283.
Ardoin, S. P., & Christ, T. J. (2008). Evaluating curriculum-based measurement slope estimates using triannual universal screenings. School Psychology Review, 37(1), 109-125.
Burns, M. (2008, October). Data-based problem analysis and interventions within RTI: Isn't that what school psychology is all about? Paper presented at the Association of School Psychologists of Pennsylvania Annual Conference, State College, PA.
Christ, T. J. (2006). Short-term estimates of growth using curriculum-based measurement of oral reading fluency: Estimating standard error of the slope to construct confidence intervals. School Psychology Review, 35(1), 128-133.
Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219-232.
Deno, S. L., Fuchs, L. S., Marston, D., & Shin, J. (2001). Using curriculum-based measurement to establish growth standards for students with learning disabilities. School Psychology Review, 30, 507-524.
Ferchalk, M. R., Richardson, F., & Cogan-Ferchalk, J. R. (2010, October). Using oral reading fluency data to create an accurate prediction model for PSSA performance. Poster session presented at the Association of School Psychologists of Pennsylvania Annual Conference, State College, PA.
Flinn, C. S. (2008). Graphing rate of improvement for individual students. InSight, 28(3), 10-12.
Fuchs, L. S., & Fuchs, D. (1998). Treatment validity: A unifying concept for reconceptualizing the identification of learning disabilities. Learning Disabilities Research and Practice, 13, 204-219.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 27-48.
Gall, M. D., & Gall, J. P. (2007). Educational research: An introduction (8th ed.). New York: Pearson.
Hintze, J., & Silberglitt, B. (2005). A longitudinal examination of the diagnostic accuracy and predictive validity of R-CBM and high-stakes testing. School Psychology Review, 34(3), 372-386.
Jenkins, J. R., Graff, J. J., & Miglioretti, D. L. (2009). Estimating reading growth using intermittent CBM progress monitoring. Exceptional Children, 75, 151-163.
Karwowski, W. (2006). International encyclopedia of ergonomics and human factors. Boca Raton, FL: Taylor & Francis Group, LLC.
McGlinchey, M., & Hixson, M. (2004). Using curriculum-based measurement to predict performance on state assessments in reading. School Psychology Review, 33(2), 193-203.
Shapiro, E. S. (2008). Best practices in setting progress monitoring goals for academic skill improvement. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (Vol. 2, pp. 141-157). Bethesda, MD: National Association of School Psychologists.
Shapiro, E., Keller, M., Lutz, J., Santoro, L., & Hintze, J. (2006). Curriculum-based measures and performance on state assessment and standardized tests: Reading and math performance in Pennsylvania. Journal of Psychoeducational Assessment, 24(1), 19-35.
Silberglitt, B. (2008). Best practices in using technology for data-based decision making. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V. Bethesda, MD: National Association of School Psychologists.
Silberglitt, B., Burns, M., Madyun, N., & Lail, K. (2006). Relationship of reading fluency assessment data with state accountability test scores: A longitudinal comparison of grade levels. Psychology in the Schools, 43(5), 527-535.
Stage, S., & Jacobsen, M. (2001). Predicting student success on a state-mandated performance-based assessment using oral reading fluency. School Psychology Review, 30(3), 407.
Stewart, L. H., & Silberglitt, B. (2008). Best practices in developing academic local norms. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V. Bethesda, MD: National Association of School Psychologists.
Vogel, D. R., Dickson, G. W., & Lehman, J. A. (1990). Persuasion and the role of visual presentation support: The UM/3M study. In M. Antonoff (Ed.), Presentations that persuade. Personal Computing, 14.