Learning to Read at Hillcrest: Executive
Summary of the First Year Testing Data
Executive Summary Report Prepared by John S. Rice
Department of Sociology and Criminology, UNCW
Co-Founder of the Hillcrest Reading Program
2008-2009
The pages to follow provide a summary of testing results for the children served by the
Hillcrest Reading Program Research Team, and our many student tutors, for the 2008-2009
school year. A brief overview of the research and the tutorial program will help the reader make sense of this executive summary. The research is a straightforward pre-test, post-test
experimental design. The first year of the program, September 2008-April 2009, constituted a
pilot. In 2009-2010, we will secure a control group, matched to our participants' characteristics
(age, grade in school, race, socio-economic status), in order to make statistical comparisons of
outcomes.
In September of 2008, we administered one or two of three quick, empirically validated tests of children's reading skills to determine where each child should be placed in the 100
Easy Lessons curriculum. The tests are called DIBELS (Dynamic Indicators of Basic Early
Literacy Skills). We tested the children at three time points: as noted, September 2008; progress
testing in December 2008; and end-of-year testing in April 2009. Below, I describe each test in greater detail before summarizing its results.
It should also be emphasized that the tutoring did not run uninterrupted from September
to April. Given our tutors’ schedules as university students, there was a break in tutoring from
December of ’08 until February of ’09; there were also breaks for Thanksgiving, Christmas, and
the traditional Spring Break. As such, in the fall semester, each child received, on average, between 9 and 13 hours of tutoring; in the spring, the mean tutoring time per child was just over 15 hours. All gains reported here, then, should be considered in the context of between 24 and 28 hours of tutoring.
Older children were given the Oral Reading Fluency (ORF) Test, in which children read aloud from three very short stories in succession. Each child is timed; after one minute, she or he moves on to another story, and then another. All in all, the test takes about 4 minutes. As the child reads, the tester follows along on her/his own copy, marking out words the child gets wrong or does not get within three seconds, and marking the point at which the minute elapsed.
o Each test has established benchmarks for the children; for the ORF, they are as follows:
Benchmarks for Oral Reading Fluency Test

Grade Level            Fluency Rate (Words per Minute)
Spring of 1st Grade    40 WPM
Spring of 2nd Grade    90 WPM
Spring of 3rd Grade    110 WPM
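Read as a rule, these benchmarks amount to a simple threshold check: a child is at grade level if her or his spring fluency rate meets the figure for that grade. A minimal sketch of that check, in Python (the dictionary and function names are ours, for illustration only; the benchmark figures come from the table above):

```python
# Spring ORF benchmarks from the table above, keyed by grade.
# The names here are illustrative, not part of DIBELS itself.
ORF_SPRING_BENCHMARKS_WPM = {1: 40, 2: 90, 3: 110}

def meets_spring_benchmark(grade: int, words_per_minute: int) -> bool:
    """True if a child's fluency rate meets the spring benchmark for that grade."""
    return words_per_minute >= ORF_SPRING_BENCHMARKS_WPM[grade]

print(meets_spring_benchmark(3, 90))   # 90 WPM in grade 3 is below the 110 WPM mark
print(meets_spring_benchmark(2, 159))  # 159 WPM is well above the grade-2 benchmark
```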
Chart 1: Oral Reading Fluency Rate, All Children
[Bar chart: correct words per minute at T1 (September 2008, pre-testing), T2 (December 2008, mid-year progress testing), and T3 (April 2009, end-of-year testing), with each child identified by initials and grade below the bars: C.F. (3rd), M.W. (3rd), T.P. (3rd), J.P. (2nd), A.S. (2nd), Z.W. (3rd), T.W. (2nd), and M.W. (2nd); rightmost bars show the group Total and Net Gain.]
This chart shows the progress of all of the older children (grades 2-3) that participated in the reading program at any point during the year. Because some of the children graduated from 100 Easy Lessons in the fall 2008 semester, and others either stopped coming to the program or joined us later in the year, every test is missing data for some participants at one or another of the testing periods. For example, C.F. and Z.W. (above) finished 100 Easy Lessons by December 2008. As such, I have included both the rates for all of the participants and adjusted rates for each of the testing measures (see below). The adjusted rates are called for because the all-children rates, given the variations in participation, artificially reduce the group’s rate of progress: Z.W. and C.F., for example, did not score zeros at T3; they simply were not tested.
Given the established DIBELS benchmarks, M.W., the third-grader, went from being seriously at risk in September (19 WPM at T1) to being almost at grade level by the time we tested in April (90 WPM; the benchmark is 110 WPM by spring of third grade). With the exception of J.P., the other three second-graders are no longer at risk and are, indeed, well ahead of the second-grade benchmarks. A.S. leads the pack here, not surprisingly. She was well ahead of the others when she started the program (107 WPM in September), and now reads 159 WPM, which is 49 WPM higher than she should be able to read in the spring of her third-grade year. Much the same may be said of T.W., whose T3 fluency rate is now close to the spring-of-third-grade level. J.P. and M.W., at 73 and 101 WPM, respectively, also made substantial gains. M.W. has become a book-lover, and we are confident that he will be at grade level next year. J.P. is still 17 WPM behind the expected rate for second-graders, but he made impressive gains this year as well.
Chart 2: Oral Reading Fluency Rate, Adjusted, Testing Times T1-T3
[Bar chart: correct words per minute at T1, T2, and T3 for the full-year participants M.W. (3rd), T.P. (3rd), J.P. (2nd), A.S. (2nd), T.W. (2nd), and M.W. (2nd); rightmost bars show the group Total and Net Gain.]
Chart 2 summarizes the gain in oral reading fluency for the children that participated in the Reading Program for the entire first year of the program (the data are “adjusted” insofar as this chart includes only those children that were with us all year). As the chart shows, all of the children made substantial gains in the number of words they were able to read correctly in one minute. Left to right: M.W. (3rd grade) had a net gain of 71 words per minute (she also went, according to the DIBELS benchmarks, from being seriously at risk to almost catching up to grade level by the end of the year); T.P. – who has been diagnosed with a learning disability – improved from her initial score of 8 words per minute (with 7 accompanying errors; see the ORF error chart below) to 40 words per minute, for a net gain of 32 words per minute. Despite her substantial gains, T.P. is still at risk. A.S., T.W., and M.W. (2nd grade) are all above the second-grade benchmarks.
Chart 2 also underscores the importance of providing the adjusted rates. Although, as we
saw in Chart 1, the eight children that participated for at least one semester had a net gain of 392
words per minute over the course of the program, their average gain was 44 WPM; the six
children that participated all year had a smaller net gain overall – 328 WPM – but their average gain per child was roughly 10 WPM higher: just over 54 WPM.
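The adjusted averages above follow the same simple arithmetic used throughout this report: divide a group's total net gain by the number of children in the group. A minimal sketch, using the adjusted ORF figures reported above (the function is ours, for illustration only):

```python
def average_gain(total_net_gain_wpm: float, num_children: int) -> float:
    """Per-child average gain: the group's total net gain divided by group size."""
    return total_net_gain_wpm / num_children

# Adjusted ORF group: the six children tutored all year, 328 WPM total net gain.
print(round(average_gain(328, 6), 1))  # just over 54 WPM per child
```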
Chart 3: Oral Reading Fluency Error Rate, All Children, Testing Times T1-T3
[Bar chart: ORF errors at T1, T2, and T3 for C.F. (3rd), M.W. (3rd), Z.W. (3rd), T.P. (3rd), J.P. (2nd), A.S. (2nd), T.W. (2nd), and M.W. (2nd); rightmost bars show the group Total and Net Gain.]
This chart shows the children’s progress in their oral reading fluency error rate from September of 2008 through April of 2009; i.e., the number of errors the children made on the ORF tests. As a group, the children made a total of 51 errors at T1; by T3, that total had been cut to 9 errors, a net gain of 40 fewer errors – on average, 5 fewer errors apiece. Much more striking are the gains made by the two M.W.s, whose combined gains account for 31 of the total net gain of 40 fewer errors.
Chart 4: Oral Reading Fluency Error Rate, Adjusted, Testing Times T1-T3
[Bar chart: ORF errors at T1, T2, and T3 for the full-year participants M.W. (3rd), T.P. (3rd), J.P. (2nd), A.S. (2nd), T.W. (2nd), and M.W. (2nd); rightmost bars show the group Total and Net Gain.]
This chart shows the adjusted ORF error rate. Although this chart appears to show that this group made a smaller net gain in the overall number of errors committed on the one-minute DIBELS test, their per-child rate of improvement is actually higher: whereas all of the children, as we saw in Chart 3, averaged 5 fewer errors, this group averaged just over 6 fewer errors. As noted above, the greatest gains were made by the M.W.s.
The younger children were given either the Phoneme Segmentation Fluency (PSF) Test or the Nonsense Word Fluency (NWF) Test (in some cases, both). Both are one-minute tests, and administration and scoring are the same as for the ORF (above).
o NWF is a check to see if children can sound out words they have never seen before (and likely will never see again); if they can, they are developing the phonetic decoding skills necessary to become good readers. The benchmark for this test is 50 Correct Letter Sounds by mid-1st grade.
Chart 5: Nonsense Word Fluency, All Children, Testing Times T1-T3
[Bar chart: correct responses at T1, T2, and T3 for F.S. (1st), I.M. (1st), A.P. (1st), X.M. (K), S.C. (P-K), T.W. (K), and O.N. (K); rightmost bar shows the group Total.]
Chart 5 summarizes the progress on the DIBELS Nonsense Word Fluency tests made by all children that participated in the program at any point during the year. As with previous charts, these “all children” results must be understood in relation to the partial-year participation of three of the children: I.M. and O.N. stopped participating at the end of the fall semester, and X.M. joined the program in February. We were disappointed that I.M. could not participate in the spring semester. She came a handful of times, but not often enough to make tangible progress; as such, she was not tested in the spring. Our disappointment stems from the substantial progress I.M. had made in the fall, going from being unable to sound out any of the nonsense words in September to correctly identifying 53 by December, placing her ahead of the spring 1st-grade benchmark. O.N. did not come at all in the spring; in his case, the disappointment stemmed from our concern about his poor reading skills, as he could have benefited from continued intervention. X.M. joined us in February after his mother learned about the program through community word-of-mouth. Given the rate of his progress in the spring, we are confident that he will exceed the 1st-grade benchmark next year. Overall, as a group, the children improved from 52 to 226 correct responses over the course of the year: an average gain of just under 25 correct responses per child.
Chart 6: Nonsense Word Fluency, Adjusted, T1-T3
[Bar chart: correct responses at T1, T2, and T3 for the full-year participants; rightmost bar shows the group Total.]
Chart 6 presents the adjusted rate of Nonsense Word Fluency: that is, again, the rate only for the children that participated regularly in the program for the full year. Of the children in kindergarten and first grade we served in the program, four fit that description. Whereas, as we saw in Chart 5, the average gain for all children was just under 25, these children went from an average of 13 WPM at T1 (52 WPM, divided among 4 children) to 43.5 WPM at T3. Except for S.C.’s score of 28, all of the children met or exceeded the 1st-grade benchmark; given that S.C. will start kindergarten next school year, we are confident that he will surpass that benchmark in his kindergarten year.
F.S.’s lack of gain warrants further explanation. The research team discussed and proposed a couple of explanations for his lack of progress from T2 to T3, both of which center on his having “maxed out.” First, his score of 52 exceeds the benchmark for his grade and age level; as such, we are convinced that he would rather have moved on to regular reading than continue tutoring in phonemic awareness, the alphabetic principle, and so on. (We perhaps should have tested him for Oral Reading Fluency, but that would have introduced problems of incommensurability: his baseline and progress tests were measured using the NWF tests.) The other sense in which he was maxed out is that, by the end of the year, he was often cranky and tired and did not want to participate in tutoring. This, of course, overlaps with the first problem: he was, in short, bored.
Chart 7: Phoneme Segmentation Fluency Rate, All Children, T1-T3
[Bar chart: correct responses at T1, T2, and T3 for S.C. (P-K), T.W. (K), and O.N. (K); rightmost bar shows the group Total.]
The PSF test determines whether, and how well, children recognize that words are
made up of discrete sounds. The benchmark for this test is 35-45 Correct Letter
Sounds (CLS) by spring of kindergarten or fall of 1st grade.
As Chart 7 shows, S.C.’s progress all but guarantees that he will be well ahead of grade-level skills by the time he reaches the spring of his kindergarten year. So too, T.W. exceeded the DIBELS benchmark for CLS on this measure. O.N., as noted above, did not come in the spring and was not tested at T3.
In sum, it was a very successful year for the Hillcrest Reading Program. Other than the exceptions noted in the narrative above, all of the children made substantial gains in the fundamental skills that predict whether children will go on to become successful readers. Two or three of the children are still not where they need to be, but the progress they made this year gives them a solid foundation upon which to build further reading skills and eventually catch up to where they should be. As we saw, several of the children are now as much as a year ahead of grade-level expected reading skills.