Norma Dye
PA 522 Program Evaluation
November 30, 2011
Evaluating the Efficacy of a Cross-Age Tutoring Program in the Seattle School District
Introduction/Program Overview
Team Read is a program in the Seattle public school system that helps elementary school
students achieve reading success. It works with elementary students who are reading
significantly below grade level, helping them learn to read at or above grade level through
one-on-one tutoring. The idea was a collaborative vision of philanthropists Craig and Susan
McCaw, then Seattle Schools Superintendent John Stanford, and Joan Dore, a Reading Specialist
in the school district. They created a cross-age tutoring program in which high school students
tutor elementary students in reading. The philosophy behind the program was that reading is an
essential skill for success in other life endeavors. Joan Dore created the cross-tutoring
concept, and the program was born in March 1998.
Team Read is an example of private dollars and talent working with public education to
improve literacy. The Alliance for Education, formed in the 1990s, is an agency that allows
private investors to contribute to the Seattle public school system. The Team Read program is
dedicated to increasing the reading skills of second- through fifth-grade Seattle elementary
school students through a year-long tutoring program. The program also has goals for the high
school students who do the tutoring: to provide work experience, instill a sense of
responsibility, and teach the value of community service.
An Advisory Board was formed for Team Read, consisting of Seattle School District
staff, Alliance for Education program representatives, and Craig and Susan McCaw. At the
launch of the program they hired a Program Manager to head it. To ensure the
program ran smoothly, the Program Manager hired Site Coordinators for each of the schools. The
Site Coordinators were part-time staff or teachers at the school. They were trained by Team
Read to supervise after-school tutoring sessions twice a week for an hour and a half from
October to May. The Program Manager also formed partnerships with relevant agencies in the
Seattle area that provided volunteers to support the part-time Site Coordinators and coaches. The
volunteers helped with transportation for the students and coaches and worked with the coaches
to assist in tutoring.
The Team Read tutors were recruited and received intensive training, coaching, and
guidance consistent with district reading goals, standards, and district-wide reading strategies.
The focus was on increasing vocabulary development, fluency, and comprehension
(TeamRead.org). There were incentives for the tutors to participate in the program: they could
choose to be paid hourly, earn school credit, or receive college tuition credits. In addition to
these incentives, high school students who completed a year of tutoring could win a four-year
college scholarship. To be eligible to participate in the program, high school students had to
meet the following guidelines:
- Be registered in a Seattle public school
- Have a GPA of 2.7 or higher
- Attend all training and tutoring sessions
- Follow directions from the Site Coordinator, Program Manager, and Reading Specialist
Team Read attempts to reach the students who are most in need and works with
elementary schools to identify potential participants. Working closely with Seattle Public
Schools leadership, Team Read identifies the elementary schools with the lowest reading scores
and greatest need for the program. Teachers refer about 25 students from their schools to the
program each September. Pending permission from their parents, these students go on to
participate in Team Read. The eligibility criteria are based on need as indicated by:
- Placement in the bottom quartile of district reading test scores
- Recommendation by a teacher or student intervention team
- Possible retention due to failure to meet reading standards at each grade level (Team Read Case A)
Ten Year History
1997
The McCaws meet with Superintendent John Stanford to discuss strategies for
increasing the reading skills of elementary students and donate $1 million to start Team
Read.
1998
The program is launched in four elementary schools serving 2nd to 5th grade students.
First scholarship is awarded to one of the Tutors.
1999
Six more schools are added to the list of participating schools.
By the end of the school year, Team Read has approximately 335 students and 300
coaches in ten schools.
Program Manager gets good feedback on the program from teachers.
2000
First evaluation is conducted by independent evaluator hired by Team Read Advisory
Board.
The program is expanded to 17 elementary schools and has coaches from 15 high
schools.
A second evaluation of the program is conducted by the same independent
evaluator.
2001
Team Read refocuses to serve 2nd and 3rd grade readers.
2002
Scholarship was awarded to a tutor at each site.
2004
Reading Leaders Pilot Program started at three elementary school sites.
Team Read hires Evergreen Training and Evaluation consultants to survey student
readers and coaches.
2006
Team Read limits comparison to pre- and post-program reading levels for each grade.
Team Read becomes an official 501(c)(3) non-profit organization.
2007
Team Read becomes a United Way Partner.
2008
Formal Pilot Program for Reading Leaders and Reading Mentors is implemented.
Team Read has served 12,000 students.
(From Team Read 2008 Annual Report. TeamRead.org)
The Evaluations
The Team Read Advisory Board hired an independent evaluator, Margo Jones, in 1999 to
evaluate the program. Even though the Program Manager, Tricia McKay, had received good
informal feedback throughout the year from teachers, site coordinators, tutors, readers, and
parents, there was no indication that reading scores were improving. In her first evaluation,
Jones found a strong benefit to the coaches but mixed results on the benefits to readers. She
recommended an increase in the training of tutors and site coordinators.
Jones was hired again in 2000 to evaluate the program; she evaluated the same original ten
schools in both evaluations. She formulated three questions to assess Team Read's impact:
1. Do the reading skills of the student readers improve significantly because they
participate in the Team Read program?
2. How does the program affect the reading coaches?
3. What is working well in the program and what can be improved?
Goal #1: Reading Skills
For the second evaluation, Jones adjusted her evaluation approach in an attempt to sharpen
the accuracy of the results for reading improvement. She used correlation criteria to
strengthen the analysis of the program's impact on reading skills, and chose state reading
scores as the basis for comparing the reading skills of Team Read students. She used two
methods to measure reading improvement: she looked at the mean change in scores for the Team
Read students as a whole, and she looked at the upward movement of the student readers with the
lowest pre-test scores, comparing it to the percentage of district students at the same grade
level who made the same upward movement.
To compare the readers' pre-test and post-test scores, she used state reading tests,
which students took about three-quarters of the way into each school year. She used the
current year's test as the post-test and the prior year's test, taken when the student was a
grade below, as the pre-test. The difference between the post-test and pre-test describes the
average improvement in reading scores. She then compared the averages for the Team Read
students across the schools and summarized a net increase in scores for the students. Lastly,
she compared these scores with the scores of all district students moving from below to
at-or-above grade level. The table below shows the state tests administered for each grade and
the grading metric.
Grade   Pre-test            Post-test           Post-test Metric
2       DRA Fall 1999       DRA Spring 2000     Test Level
3       DRA Spring 1999     ITBS Spring 2000    NCE
4       ITBS Spring 1999    WASL Spring 2000    Scale score
5       WASL Spring 1999    ITBS Spring 2000    NCE
(Team Read Case B pg. 5)
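Jones's two measures can be sketched in a few lines of Python. This is only an illustration of the arithmetic she describes; the score lists and the grade-level cutoff below are made up for the example, not data from the case:

```python
# Sketch of the two reading-improvement measures (illustrative data only).

def mean_change(pre_scores, post_scores):
    """Average pre-to-post change in scores across a group of readers."""
    changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(changes) / len(changes)

def pct_moved_to_grade_level(pre_scores, post_scores, grade_level_cutoff):
    """Percent of students who started below the cutoff and ended at or above it."""
    below_before = [(pre, post) for pre, post in zip(pre_scores, post_scores)
                    if pre < grade_level_cutoff]
    moved_up = [1 for pre, post in below_before if post >= grade_level_cutoff]
    return 100 * len(moved_up) / len(below_before)

# Hypothetical scores for four readers, not from the case study:
pre = [40, 45, 50, 55]
post = [48, 44, 62, 70]
print(mean_change(pre, post))                   # average score gain for the group
print(pct_moved_to_grade_level(pre, post, 60))  # percent crossing the cutoff
```

The second function mirrors the district-wide comparison: computing it once for Team Read students and once for all district students at the same grade level gives the two percentages Jones compared.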
The result for goal one was that 18.2% of all district students improved from below to at-or-above grade level, compared to only 9.6% of Team Read students (Team Read Case B).
Goal #2: Impact on Coaches
For the second goal in the evaluation, Jones analyzed a questionnaire handed out to the
coaches that asked how they felt about different elements of the program, including their
personal experience with Team Read, their relationships with students, how much support they
got from the program, and how they benefited from it. She isolated the first few questions,
about how supported the coaches felt during the tutoring sessions and how successful they felt
the sessions were in helping student readers, and found that the results varied by the school
to which the coaches were assigned. The last question on the questionnaire most clearly showed
how the coaches felt they benefited from the program: 82% of the coaches reported an increased
sense of responsibility, 80% said they had better communication skills, and 74% felt a sense
of pride and accomplishment. Other reported benefits included greater patience
(Team Read Case B pg. 8).
Goal #3: How Is the Team Read Program Going?
For the last goal, Jones assessed the overall program. She gathered information through
interviews with the major stakeholders. She interviewed the Program Manager, two of the
reading specialists, the site coordinators, and some of the volunteer assistants. She also did a site
visit to observe coaching sessions in Spring 2000. She drew some conclusions from the data
measuring the readers and coaches. Lastly she reviewed research done in the last five years on
cross-age tutoring and the last three years’ research on best practices.
The results of the research studies showed that many of the practices used at Team Read sites
were deemed effective. They included:
- Using high school students to coach elementary students
- Relying on one-on-one rather than whole-group instruction
- Starting with a strong coach training session
- Being an extended-day rather than a regular-day pull-out program
- Placing more emphasis on comprehension than phonics
- Using reading materials of one paragraph or more
- Having reading specialists visit the site and act as models for the coaches
- Fostering good tutor/student reader relationships
- Having good central program management
In addition to the research on cross-age tutoring, Jones mentions several best practices that were
present at some of the sites. They included:
- Assessing and encouraging comprehension
- Discussing what was read in small groups
- Encouraging collegial relationships among coaches
- Coordinating material read in Team Read sessions with reading material in the regular school program
- Site coordinators talking supportively with coaches about their coaching practices and giving encouragement
- Students showing pride and responsibility toward the school and respect toward peers, coaches, and adults
- Strategically utilizing volunteers
- Choosing books of appropriate difficulty level, including some content-area books
- Selecting student readers who need and can benefit from the Team Read program
- Having a variety of instructional activities to support reading
- Ensuring consistency in coach-student pairing
- Maintaining a positive learning environment
- Having snacks and time to eat them (Team Read Case B pg. 9-10)
For the second evaluation, Jones made several recommendations to improve the Team Read
program. She again recommended more training for the coaches and site coordinators. She also
recommended allocating more reading specialists to the sites to assist the site coordinators,
as well as adding staff to help site coordinators with administrative functions and volunteer
monitoring. There were issues with using different pre- and post-tests and grading metrics as
an indicator of reading improvement, and these would need to be addressed. Lastly, Jones
recommended that future evaluators make more site visits, take into consideration the
differences among the sites, and make changes accordingly.
Analysis of Case
According to Chen, "outcome evaluation is a rigorous assessment of what happened
thanks to a program." Chen also states that outcome evaluations can be time-consuming and
expensive and therefore should be carefully planned, taking into consideration
factors such as the availability of data, patterns of client enrollment, the availability of
surveillance data, and the availability of comparison groups (pg. 195). Outcome evaluations
can take the form of an efficacy evaluation or an effectiveness evaluation. Both of these
evaluations go a step beyond a merit assessment and explore the question, "Is this
program achieving its goals?" (pg. 195) Efficacy evaluation assumes ideal circumstances,
while effectiveness evaluation assumes real-world circumstances. The Team Read case study is an
efficacy evaluation because the research, methods, and data are based on a scientific
paradigm of comparing before and after scores in a controlled setting.
For the first goal of assessing the improvement of reading skills among Team Read
students, Jones used two methods. Unfortunately, there were flaws in both. With the first
method, Jones erred in assuming that the pre- and post-tests she used would measure the same
skills. The state tests were different for different grades, and they were given a full year
apart, when students were in different grades. The post-test metrics also differed by grade.
A further flaw was that she used a correlation criterion to determine the validity of this
measurement, but the interpretation of the criterion across different tests and grades did not
accurately assess the improvement of reading skills.
The second method was used to determine what proportion of Team Read students moved from
below grade level to at-or-above grade level compared to students from the whole district. The
problem with this comparison is two-fold. First, the Team Read students were selected because
they were in the bottom quartile of reading scores, so they cannot fairly be compared to the
entire district. Second, there could be too many outlying differences to account for in a
sample that large: for example, the economic situation of the students, their motivation to
learn, and how much support they received from parents. A large proportion of Team Read
students are on the free lunch program. Comparing the Team Read students to a cross-section of
the district with similar characteristics could have produced a more realistic assessment.
She also used a correlation criterion to determine the validity of this measurement: if
the correlation between pre- and post-tests was near or above .8, the pre-to-post change in
scores could be interpreted as a gain. Unfortunately, this was also flawed. Only one grade had
similar pre- and post-tests, and for the other grades, where the correlation coefficient was
significantly below .8, the analysis could not be made because the reading tests were
incompatible.
For the second goal of assessing the impact of Team Read on the coaches, Jones used a
survey questionnaire given to coaches during their last week of coaching. The survey revealed
that the coaches felt satisfied with their jobs, and on that basis the program appears to be
effective. However, while surveying the coaches might show how they were impacted by the
program and help the program manager and other internal stakeholders decide how to continue
this portion of the program, it does not help assess the impact on student readers. The survey
method also ignores input from the readers and parents. Lastly, surveys can be a useful
evaluation tool if formulated correctly, but they are limited in that they only gather
information about the questions that are asked (Chen pg. 77).
For the last goal in the evaluation, Jones used mixed methods to assess the program. She
interviewed stakeholders, made a site visit, and reviewed the research literature. According to
Chen, using mixed methods of collecting qualitative and quantitative data is a strategy used
in outcome evaluation. Additionally, Chen states that the steps involved in an outcome
evaluation are to “clarify the program theory, collect and analyze data, and characterize the
program in its entirety, then by its parts” (pgs. 257-258). While Jones did perform
stakeholder interviews in the second evaluation, it might have been more effective to meet
with the stakeholders initially and “clarify the program theory” before performing a statistical
analysis of the test scores and surveys of the coaches. It also appears that she broke the
program into parts to evaluate instead of looking at the entire program first. She could also
have involved more stakeholders, such as the students and parents, and included internal
stakeholders such as the program manager and site coordinators more at each stage of the
evaluation. Finally, it might have been more effective to make more than one site visit per
school year to get a clearer picture of the program's impact on the student readers.
As a final note to the analysis of this case, in 2003 Team Read began comparing the test
scores of its student readers to those of other student readers in the program, monitoring the
movement of individual readers' grade levels pre- and post-program instead of comparing them
to the entire district. Team Read also limited the program to 2nd and 3rd grade students,
citing that a greater impact would be made on elementary students if they were recruited into
the program prior to 4th grade. The Team Read web site shows the results for improvement in
reading from below grade level to at-or-above reading level for each grade. The results for
the 2006-2007 school year show that 63% of 2nd graders and 55% of 3rd graders tested at
near/at/above grade level after being involved in the Team Read program, up from 23% and
45% prior to being in the program.
Team Read also hired an evaluation consultant, Evergreen Training and Evaluation, to
provide an independent evaluation for Team Read. The consultants surveyed student readers
and reading coaches to demonstrate the value of Team Read. Reading coaches reported that
the program benefits the individual student readers and gave positive feedback concerning
guidance received from their site coordinators. Meanwhile, student readers reported reading
more often and said reading had become more fun since they joined the program
(TeamRead.org).
References
San, Cornelia Ng. (June 1999). Team Read (A): Improving Literacy in the Seattle School District.
The Electronic Hallway, Evans School of Public Affairs, University of Washington, 1-29.
San, Cornelia Ng. (October 2000). Team Read (B): Evaluating the Efficacy of a Cross-Age Tutoring
Program in the Seattle School District. The Electronic Hallway, Evans School of Public Affairs,
University of Washington, 1-25.
McKay, Tricia. (2008). TeamRead.org. Team Read 2008 Annual Report.
Chen, Huey-Tsyh. (2005). Practical Program Evaluation.