Implementation Fidelity Across School Contexts
Abstract
Measuring implementation fidelity is an increasingly important aspect of researching and
understanding effective educational practices and programs. The level of fidelity with which a
practice or program is implemented is crucial to understanding whether or not the practice or
program works as intended, and to what extent. By describing, monitoring, and systematically
measuring fidelity, the program developer, instructor, or other educator learns how to strengthen application or instruction, whether that increased understanding leads to modifying practices or to removing unnecessary or ineffective strategies. This symposium begins to address this growing need by considering how implementation fidelity is understood across contexts by
reviewing pre-conditions to implementation in Indonesia, measuring implementation fidelity in
Slovenia, and developing indices for reporting purposes in the United States. The symposium is
organized as a discussion seminar with “food for thought” questions interspersed among the
presentations of the research. Understanding the quality of a program or instructional strategy, its suitable delivery, and its proficient receipt is an important link in translating a conceptually sound model of instruction into good practice.
Purpose
Historically, few studies of educational programs and practices have published results of
treatment fidelity, especially with the consideration of components and processes necessary to
attain high levels of fidelity (Century, Rudnick, & Freeman, 2010; Hulleman & Cordray, 2009;
O’Donnell, 2008). Furthermore, few educational programs have traditionally used
implementation research results to inform the design and refinement of programs, infrastructures,
and practices. This has contributed to an overall lack of understanding about the implementation process in educational settings (Penuel & Means, 2011; Penuel, Singleton, & Roschelle, 2011).
The symposium proposed here suggests that understandings of implementation fidelity vary
considerably across contexts. As such, the purpose of this symposium is to open a professional
dialogue on implementation fidelity, specifically looking at:
 How is implementation fidelity defined across contexts?
 How has implementation fidelity been actualized at local, regional, and national scales?
 What lessons have been learned about implementation fidelity?
 How might our understanding of future iterations of measuring implementation fidelity be informed by those studies currently running their courses?
 What’s next? Where does implementation fidelity of effective school programs and practices appear to be going? What might this conversation look like five or ten years from now?
Looking across these questions, the symposium will focus the discussion on three studies that
address implementation fidelity in various settings, including Indonesia, Slovenia, and the United
States. The presenters will stimulate discussion on how these approaches have actualized
implementation fidelity toward establishing and measuring practices of effective schools.
Educational Importance
Measuring implementation fidelity is an increasingly important aspect of researching and
understanding effective educational practices and programs. The level of fidelity with which a
practice or program is implemented is crucial to understanding whether or not the practice or
program works as intended, and to what extent. By describing, monitoring, and systematically
measuring fidelity, the program developer, instructor, or other educator learns how to strengthen application or instruction, whether that increased understanding leads to modifying practices or to removing unnecessary or ineffective strategies.
Participant Perspectives
Presentation 1
Assessing the Pre-conditions for Implementing Differentiated Instructional Strategies in an
Inclusive School, “SMP Tumbuh Yogyakarta, Indonesia”
This study aims to assess the pre-conditions for implementing differentiated instructional
strategies in an inclusive school in Yogyakarta, Indonesia. The study uses participatory
observations and in-depth interviews to consider the extent to which students in Years 7-9 are
receiving differentiated instructional practices that meet their needs. Because the school offers
scholarship programs, the students come from a variety of ethnic, religious, and socio-economic
backgrounds. Moreover, the range of students with Special Education Needs (SEN) includes Down’s Syndrome (DS), Mental Retardation (MR) with a variety of cognitive potentials based on the Binet scale, Asperger’s Syndrome, hearing impairment, traumatic bullying experience, Attention Deficit Disorder (ADD), and autism, as well as gifted and talented students.
To address these wide-ranging needs, the school uses the National Curriculum supplemented with the Cambridge curriculum as enrichment. The learning methods include an inquiry-based learning approach, active learning, cooperative learning, differentiated teaching, and an Interdisciplinary Unit Program (IDU). Specially designed programs include Education for Sustainable Development (ESD), a museum school, a student assembly, a literacy policy, Area Pertumbuhan (a school program for facilitating the school’s vision and mission), and extracurricular activities.
Students basically receive the same learning materials, but the assignments and exams are
adjusted to the students’ ability levels. There is a Subject Teacher who teaches the class and one
Teaching Assistant (TA) to support students with SEN. Lesson hours are reduced for students who have substantial difficulty following lessons and are replaced with life-skill programs suited to each student’s potential.
Before the teacher implements differentiated instructional practices, some necessary assessments of the pre-conditions are conducted. The school’s criteria are as follows: 1) assessing the readiness of the parents and student to study in an inclusive school; 2) determining the student’s Special Education Needs (SEN); 3) observing the condition of students; 4) conducting micro teaching; and 5) holding in-depth interviews. The school then analyzes the results to determine whether students meet the criteria. Once a student is admitted, the school meets with the parents to discuss the school program. The parents must describe their child’s physical, academic, social, and emotional growth; for SEN students in particular, this description is supplemented by information from a psychologist. In addition, the school establishes a team (teacher, inclusion coordinator, teaching assistant, and school psychologist) to assess the student in order to develop the individual program (curriculum modification). For example, students with special needs such as Down’s Syndrome or Mental Retardation have individual learning time and are pulled out of the regular class. A proportionally high number of SEN students have been accepted in the last three academic years, making the delivery of appropriately individualized instructional plans challenging. The school, however, pairs these high acceptance rates with admission requirements that ensure readiness, which should result in higher fidelity to the program as planned.
Presentation 2
Measuring Implementation Fidelity in Slovenia
There are many theories of and approaches to external evaluation; their ultimate, and most challenging, aim is implementation in schools. Experience in many countries shows that such implementation is a demanding process requiring expertise that could be seen as a separate field of knowledge. The paper discusses the implementation of external evaluation, based on the paradigm of school improvement, carried out by the National Leadership School in Education, Slovenia. External evaluation in this context is part of the system of self-evaluation introduced at the national level. External evaluators are head teachers and teachers trained and licensed for external evaluation. The evaluation study aimed to explore external evaluation at nine schools and included 17 representatives of eight of the nine institutions as well as all 20 external evaluators who carried out the external evaluations. We were interested in answers to the following evaluation questions:
 How successful do the participants of external evaluation judge the preparation and
implementation of external evaluations to have been?
 How successful do they judge the design of external evaluation in the emerging system of
quality assurance and assessment to have been?
 What kind of feedback on the implementation of improvements and the self-evaluation of
institutions under evaluation can be gained from external evaluation reports?
Data addressing the evaluation questions were gathered through document analysis and group interviews that took place about one month after the external evaluation had been completed. The findings of the evaluation study will be presented from the perspective of the project and the fidelity of theories when transferred into practice, and some open questions will be discussed: 1) how to balance process-oriented external evaluation, which assesses the implementation of improvements and self-evaluation within the framework of school activities, with an emphasis on assessing goal achievement (results) from the perspective of student results; 2) how to balance the usefulness of external evaluation for a particular school’s processes against its usefulness at the school system level; and 3) how much preparation schools need for external evaluation once it becomes part of the national system of quality assurance and assessment. We will consider the results in light of the role that school evaluation plays in managing quality in education.
Presentation 3
Rolling Up Fidelity Measures in an External Evaluation of a Professional Development Program
in the United States
Researchers and program developers alike appear to have had little experience analyzing overall levels of fidelity based on systematic analysis of program components. The purpose of this presentation is to describe the steps taken in one study to transform measures of components of implementation fidelity into overall fidelity ratings by site, in this case schools. Detailing the steps of this process helps evaluators think through possible ways of establishing rules for and determining the levels of fidelity achieved. In addition, this work provides program developers with a clearer path to identifying core program components and developing instruments for their individual and collective measurement.
The eMINTS professional development (PD) program generates school-wide reform by helping
teachers master the translation of any state standards and information from assessments into
engaging classroom practices that employ technology. The program is based on four underlying
research-based components: inquiry-based learning, high-quality lesson design, community of learners, and technology integration, and it addresses issues identified as barriers to the consistent use of standards-based instruction and technology. This study used a cluster randomized design
that randomly assigned 60 high-poverty rural Missouri middle schools to one of three groups in
fall 2010. Schools assigned to Group 1 receive the eMINTS two-year PD program; Group 2
schools receive eMINTS two-year PD in Years 1 and 2 plus a third year of Intel® Teach PD; and
Group 3 schools conduct business as usual, with no exposure to the PD until the study is
complete. Baseline data was collected in 2010–11 and Year 1 data was collected in 2011–12.
Fidelity measures collected for this study include a school technology coordinator survey, a
teacher survey, records on professional development (and staff attendance), logs of eMINTS
instructional specialists’ teacher coaching visits, and observations of eMINTS professional
development sessions.
In this presentation, we describe an eight-step process that we used to develop implementation
fidelity measures and roll them up to the program level. Specifically, the process included 1)
describing the core program components; 2) identifying data sources to measure the components;
3) working with the program developer to determine relative importance of components; 4)
determining the range of values for the indexes created; 5) developing a method for assigning an
overall fidelity score; 6) confirming the plan with the developer; 7) averaging scores across
schools; and 8) calculating the weighted average for each school.
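To illustrate the arithmetic behind steps 4 through 8, the brief sketch below rolls hypothetical component-level fidelity indexes up to school- and program-level scores: each school’s weighted average is computed first (step 8), and those school scores are then averaged across schools (step 7). The component names, weights, 0–2 index range, and example scores are illustrative assumptions, not the values used in the eMINTS study.

```python
# Minimal sketch (illustrative values only): rolling component-level fidelity
# indexes up to school- and program-level scores.

# Developer-assigned relative weights for the core components (steps 3-4);
# each component is assumed to be scored on a common 0-2 index
# (0 = low, 1 = moderate, 2 = high fidelity).
COMPONENT_WEIGHTS = {
    "inquiry_based_learning": 0.3,
    "high_quality_lesson_design": 0.3,
    "community_of_learners": 0.2,
    "technology_integration": 0.2,
}

def school_fidelity(component_scores: dict) -> float:
    """Weighted average of component indexes for one school (steps 5 and 8)."""
    return sum(COMPONENT_WEIGHTS[c] * score for c, score in component_scores.items())

def program_fidelity(schools: dict) -> float:
    """Average the school-level scores to an overall program rating (step 7)."""
    school_scores = [school_fidelity(scores) for scores in schools.values()]
    return sum(school_scores) / len(school_scores)

# Example with two hypothetical schools scored on the 0-2 index.
schools = {
    "School A": {"inquiry_based_learning": 2, "high_quality_lesson_design": 1,
                 "community_of_learners": 2, "technology_integration": 1},
    "School B": {"inquiry_based_learning": 1, "high_quality_lesson_design": 1,
                 "community_of_learners": 0, "technology_integration": 2},
}
print(program_fidelity(schools))  # (1.5 + 1.0) / 2 = 1.25 on the 0-2 index
```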
Symposium Organization
The symposium will be organized as a discussion seminar with “food for thought” questions
interspersed among the presentations of the research in the following format:
 Presentation of the overall topic for the symposium and its educational importance
 Opening discussion questions:
o How do you define and measure implementation fidelity?
o What are its core components?
 Presentation on pre-condition factors in Indonesia
 Discussion questions:
o What does research indicate about how to determine schools’ readiness to implement major reforms?
o What are the next steps in extending the research agenda on readiness for reform?
 Presentation on measuring fidelity in Slovenia
 Discussion questions:
o How can researchers maintain fidelity to the evaluation design and implementation while still making adjustments that reflect the realities of evaluation?
o What implications do the study’s findings present?
 Presentation on rolling up fidelity in the United States
 Discussant
 Closing that will focus on the big “takeaways”:
o How does this shifting landscape of operationalizing implementation fidelity influence the research agenda?
o As researchers, what is our role in conducting implementation fidelity research? In advocating for it?
Connection to Conference Themes
The promise of strong implementation fidelity research is well-aligned with the theme
Redefining Education, Learning, and Teaching in the 21st Century: The Past, Present and Future
of Sustainable School Effectiveness. We recognize that understanding the quality of a program or instructional strategy, its suitable delivery, and its proficient receipt is an important link in translating a conceptually sound model of instruction into good practice. Implementation fidelity
research is an increasingly necessary way of determining the extent to which researched
practices have been translated into replicable and sustainable actions.
References
Century, J., Rudnick, M., & Freeman, C. (2010). A framework for measuring fidelity of
implementation: A foundation for shared language and accumulation of knowledge.
American Journal of Evaluation, 31(2), 199-218.
Hulleman, C., & Cordray, D. (2009). Moving from the lab to the field: The role of fidelity and
achieved relative intervention strength. Journal of Research on Educational
Effectiveness, 2, 88-110.
O’Donnell, C.L. (2008). Defining, conceptualizing, and measuring fidelity of implementation
and its relationship to outcome in K-12 curriculum intervention research. Review of
Educational Research, 78(1), 33-84.
Penuel, W.R., & Means, B. (2011). Using large-scale databases in evaluation: Advances,
opportunities, and challenges. American Journal of Evaluation, 32(1), 118-133.
Penuel, W.R., Singleton, C., & Roschelle, J. (2011). Classroom network technology as a support
for systematic mathematics reform: Examining the effects of Texas Instruments’
MathForward Program on student achievement in a large, diverse district. Journal of
Computers in Mathematics and Science Teaching, 30(2), 179-202.