
THE RIPPLE EFFECT OF PRINCIPAL BEHAVIOR:
IMPROVING TEACHER INSTRUCTIONAL PRACTICES THROUGH PRINCIPAL-TEACHER INTERACTIONS
By
Kim Banta
B.A., Indiana University, 1984
M.A., Northern Kentucky University, 1990
and
Brennon Sapp
B.A., Western Kentucky University, 1992
M.A., Western Kentucky University, 2001
A dissertation submitted to
the faculty of the College of Education and Human Development
of the University of Louisville
in Partial Fulfillment
of the Requirements for the Degree
of
Doctor of Education
College of Education and Human Development
Department of Leadership, Foundations, and Human Education
University of Louisville
Louisville, Kentucky
August 2010
THE RIPPLE EFFECT OF PRINCIPAL BEHAVIOR:
IMPROVING TEACHER INSTRUCTIONAL PRACTICES THROUGH PRINCIPAL-TEACHER INTERACTIONS
By
Kim Banta
B.A., Indiana University, 1984
M.A., Northern Kentucky University, 1990
and
Brennon Sapp
B.A., Western Kentucky University, 1992
M.A., Western Kentucky University, 2001
A Dissertation Approved on
May 25, 2010
by the following Dissertation Committee:
_________________________________
Dissertation Co-Chair
_________________________________
Dissertation Co-Chair
_________________________________
_________________________________
_________________________________
ABSTRACT
THE RIPPLE EFFECT OF PRINCIPAL BEHAVIOR: IMPROVING TEACHER
INSTRUCTIONAL PRACTICES THROUGH PRINCIPAL-TEACHER
INTERACTIONS
Kim Banta and Brennon Sapp
May 25, 2010
This study investigated the implementation and effects of a school-level
leadership model intended to institutionalize quality principal-teacher interactions into
the culture of a high school. The leadership model for the study was operationalized by
incorporating four new principal-teacher interactions. Over a two-year period, the
principals (one head principal and three assistant principals) introduced individual
conferencing (one-hour principal-teacher meetings in the summer), snapshots (frequent
short visits to teacher classrooms), data reviews (facilitating frequent teacher analysis of
classroom grade distributions and discipline reports), and teacher self-assessments using
a rubric-based document to aid teachers in self-reflection on their instructional practices.
The purposes of this study were to document the implementation of the principal-teacher
interactions; to analyze changes in instructional practices; to analyze any effects changes
in instructional practices had on student performance (operationalized as classroom grade
performance and discipline-related behavior); and to analyze the frequency and focus of
teacher conversations.
As a result of the four principal-teacher interactions introduced in this study, teacher
instructional practices improved, student performance increased, and the frequency and
focus of some teacher conversations changed. Results from the analysis of teacher
instructional practices showed that those practices improved, but to varying degrees
among different groups of teachers (high, medium, and low performing). Results from
the analysis of student performance (grades and discipline) demonstrated greater
improvement than would have been predicted had the treatment not occurred. Data analyses
further suggested that improvements in classroom grade distributions and discipline
referrals were affected both by a change in the quality of teacher instructional practices
and increased principal visibility. Survey data indicated that the frequency and focus of
some teacher conversations changed, but did not indicate that the frequency and focus of
principal-teacher conversations or teacher-student conversations changed during the
course of the study.
TABLE OF CONTENTS
PAGE
SIGNATURE PAGE.........................................................................................................ii
ABSTRACT.....................................................................................................................iii
LIST OF TABLES...........................................................................................................xii
LIST OF FIGURES..........................................................................................................xiv
CHAPTER ONE – INTRODUCTION................................................................................1
Introduction and Statement of the Research Problem..................................................1
Significance of Study....................................................................................................2
Context..........................................................................................................................4
Origin of the Study........................................................................................................7
Conceptual Framework.................................................................................................8
Research Questions......................................................................................................11
CHAPTER TWO - LITERATURE REVIEW...................................................................12
The Role of Principal...................................................................................................12
Historical Role of Principal...................................................................................13
Current Role of Principal ......................................................................................14
The Role of Principal in Principal-Teacher Interactions…..................................15
Constructive Effects of Principal-Teacher Interactions…..........................................16
Effects of Principal-Teacher Interactions..............................................................18
Principal-Teacher Interactions and Distributing Leadership.................................19
Summary of Constructive Effects of Principal-Teacher Interactions....................21
High Quality Principal-Teacher Interactions............................................................22
The Reality of Principal-Teacher Interactions.....................................................23
Collaborating with Teachers (Summer Meetings)...............................................24
Principals in Classrooms (Origin of Snapshots)..................................................25
Data Based Decisions (Data Reviews)..................................................................28
Teacher Self-Assessment......................................................................................29
Summary of High Quality Principal-Teacher Interactions...................................30
Effective Ways to Measure the Quality of Teacher Instructional Practices................31
Rubric Based Evaluation........................................................................................31
Foundation of the QIR...........................................................................................32
Principal Evaluation of Teacher Instructional Practices.......................................34
Student Performance....................................................................................................35
Grades and Instructional Practices.........................................................................35
Discipline and Instructional Practices....................................................................36
Frequency and Focus of Teacher Conversations.........................................................37
Principals-Teacher Conversations.........................................................................38
Teacher-Teacher Conversations.............................................................................39
Teacher-Student Conversations and Good Teaching............................................40
Conclusion...................................................................................................................41
CHAPTER THREE – METHODOLOGY........................................................................43
Participants..................................................................................................................43
The School and District.........................................................................................43
Teachers.................................................................................................................46
Students..................................................................................................................48
Principals................................................................................................................49
Research Design...........................................................................................................50
Research Question One (Principal & Teacher QIR)..............................................52
Procedures..................................................................................................52
Research Design.........................................................................................52
Research Question Two (Classroom Grade Distributions & Classroom
Discipline Referrals)..............................................................................................53
Procedures..................................................................................................53
Research Design.........................................................................................54
Research Question Three (Teacher & Student Surveys).......................................54
Procedures..................................................................................................54
Research Design.........................................................................................55
Measures and Instruments............................................................................................57
QIR (Quality Instruction Rubric)...........................................................................57
Procedures........................................................................................................57
Validity............................................................................................................57
Development of the QIR...........................................................................57
QIR Training.............................................................................................60
Convergent Validity (District Calibration of the QIR)..............................61
Reliability........................................................................................................62
Classroom Grade Distributions..............................................................................63
Discipline Reports.................................................................................................65
Teacher and Student Surveys................................................................................65
Validity............................................................................................................65
Initial Development of Teacher Survey....................................................66
Initial Development of Student Survey.....................................................68
Expert Review of the Surveys....................................................................69
Reliability.........................................................................................................70
Fidelity of Implementation of Snapshots..............................................................70
Treatment Specifics (Principal-Teacher Interactions)................................................71
One-on-One Summer Meetings.............................................................................71
Snapshots...............................................................................................................72
Data Reviews.........................................................................................................74
Teacher Self-Assessments (QIR)...........................................................................75
Trigger Points..............................................................................................................75
Data Analysis....................................................................................................................76
Research Question One.........................................................................................76
Quantifying the QIR.......................................................................................77
Comparing Pretest and Posttest QIRs…........................................................77
Comparing Principal and Teacher QIRs........................................................78
Research Question Two........................................................................................78
Classroom Grade Distributions.......................................................................79
Classroom Discipline Referrals.......................................................................79
Classroom Grade Distributions and Discipline Referrals of High,
Medium and Low Performing Teachers.........................................................80
Research Question Three.......................................................................................80
Combining Response Categories of Survey Questions.................................81
Data Analysis Plan for Survey Results............................................................83
CHAPTER FOUR – RESULTS........................................................................................86
Research Question One................................................................................................87
Changes in the Quality of Teacher Instructional Practices During the Year of
Full Implementation...............................................................................................87
A Comparison of the Differences of Perceptions of the Quality of Teacher
Instructional Practices between Teachers and Principals......................................89
Analyses of Systematic Differences in Teachers’ Self-ratings..............................90
Comparisons among High, Medium, and Low Performing Teachers
According to Posttest QIR Ratings Completed by the principals.........................91
Changes in the Quality of Teacher Instructional Practices During the Year
of Full Implementation for High, Medium, and Low Performing Teachers.........95
Research Question Two.............................................................................................100
Classroom Grade Distributions............................................................................100
Classroom Discipline Referrals..........................................................................104
Classroom Grade Distributions and Student Discipline Referrals for High,
Medium and Low Performing Teachers............................................................109
Research Question Three..........................................................................................110
Teacher Survey Data...........................................................................................111
Student Survey Data............................................................................................113
CHAPTER FIVE – DISCUSSION.................................................................................115
Overview...................................................................................................................115
Teacher Instructional Practices.................................................................................116
The Quality of Teacher Instructional Practices.................................................118
High Performing Teachers.....................................................................119
Medium Performing Teachers...............................................................120
Low Performing Teachers......................................................................121
Changes in Student Performance.............................................................................122
Classroom Grade Distributions............................................................................123
Discipline Referrals............................................................................................124
Grade Distributions for the Students of High, Medium, and Low Performing
Teachers.............................................................................................................126
Discipline Referrals from the Students of High, Medium, and Low
Performing Teachers..........................................................................................127
Changes in the Frequency and Focus of Teacher Conversations with Principals,
Students and Other Teachers ...................................................................................128
The Frequency and Focus of Principal-Teacher Conversations.........................128
The Frequency and Focus of Teacher-Teacher Conversations...........................131
The Frequency and Focus of Teacher-Student Conversations............................131
Implications............................................................................................................132
Principal Visits and Collaboration with Teachers……………………...............133
Rubric Based Assessment of Instructional Practices.........................................134
Working with Teachers of Differing Qualities of Instructional Practices...........134
Unintended Outcomes.............................................................................................136
Exiting Teachers................................................................................................136
Principal-Student Relationships..........................................................................136
Principal-Parent Discussions..............................................................................137
Increased Job Satisfaction for the Principals......................................................138
Recommendations for Future Research....................................................................138
Core Principal Questions Affirmatively Answered by this Research........................139
REFERENCES...............................................................................................................142
APPENDICIES
A: Teacher Survey....................................................................................................152
B: Student Survey.....................................................................................................156
C: Quality Instruction Rubric...................................................................................160
D: Dixie Heights High School Instructional Practices Grading Policy...................164
E: Enhancing Achievement Treatment Plan (Failing Students)..............................165
F: Kenton County School District Code of Acceptable Behavior and
Discipline.............................................................................................................167
G: Sample Snapshot Tracker....................................................................................179
H: Sample Data Review...........................................................................................180
I: Definitions of Terms............................................................................................187
CURRICULUM VITAE-KIM BANTA.........................................................................191
CURRICULUM VITAE-BRENNON SAPP..................................................................197
LIST OF TABLES
TABLE
PAGE
1. Ethnicity Distribution for Dixie Heights High School Students Compared to State
and National...............................................................................................................44
2.
Economically Disadvantaged and Disabled for Dixie Heights High School Students
Compared to State and National.................................................................................44
3. Household Income for Dixie Heights High School Families Compared to State and
National.......................................................................................................................45
4. Age Distribution for Dixie Heights High School Compared to State and
National........................................................................................................................45
5. Dixie Heights High School Faculty Characteristics 2009...........................................47
6. Dixie Heights High School Student Characteristics 2009...........................................48
7. Dixie Heights High School Principal Characteristics 2009.........................................50
8. Research Questions-Design-Measures for the Study...................................................51
9. Available Data for Associated Time Periods of the Study..........................................52
10. Grading Scale for Classroom Grades at Dixie Heights High School..........................64
11. Teacher Survey Questions Aligned with Constructs of the Study...............................67
12. Student Survey Questions Aligned with Constructs of the Study...............................69
13. Regrouping Response Categories on Teacher Surveys ..............................................84
14. Regrouping Response Categories on Student Surveys .............................................85
15. Comparison of QIR Pre-Post Mean Scores (Standard Deviation) for Year of Full
Implementation .........................................................................................................89
16. Comparison of Teacher-completed to Principal-completed QIR Mean Scores
(Standard Deviation) for Year of Full Implementation ............................................90
17. Comparison of Teacher-completed to Principal-completed QIR Mean Scores
(Standard Deviation) for High, Medium, and Low Performing Teachers..................94
18. Comparison of QIR Pre-Post Mean Scores (Standard Deviation) for Year of
Full Implementation for High Performing Teachers.................................................96
19. Comparison of QIR Pre-Post Mean Scores (Standard Deviation) for Year of
Full Implementation for Medium Performing Teachers.............................................98
20. Comparison of QIR Pre-Post Mean Scores (Standard Deviation) for Year of
Full Implementation for Low Performing Teachers.................................................99
21. Actual Classroom Grade Distributions for all Students.............................................101
22. Gap between Actual and Projected Classroom Grade Distributions for All
Students..................................................................................................................101
23. Discipline Referrals for School Years 2003-2004 through 2008-2009....................104
24. Differences of Discipline Referrals from Projected Frequencies.............................105
25. Comparison of Classroom Grade Distribution and Discipline Referrals Mean
Scores (Standard Deviation) for High, Medium, and Low Performing
Teachers for 2006-2007 through 2008-2009...........................................................109
26. Teacher Survey of the Frequency and Focus of Teacher Conversations...................111
27. Student Survey of the Frequency and Focus of Teacher-Student Conversations......113
28. Core Principal Questions Affirmatively Answered by this Research........................140
LIST OF FIGURES
FIGURE
PAGE
1. The Effects of Principal-Teacher Interactions on Teacher Instructional Practices
Resulting in Effects on Student Performance and Teacher Conversations.................11
2. Exploration of Impacts of Treatment on Teacher Instructional Practices...................53
3. Exploration of Impacts of Treatment on Student Performance...................................54
4. Exploration of Impacts of Treatment on the Frequency and Focus of Teacher
Conversation During the Pilot Year...........................................................................56
5. Exploration of Impacts of Treatment on Frequency and Focus of Teacher
Conversation During the Year of Full Implementation..............................................56
6. Predicted and Actual Classroom Grade Distributions for All Students……….........103
7. Total Discipline and Aggressive Discipline for all Students.....................................106
8. Total Discipline by Gender........................................................................................107
9. Total Discipline for Freshman, Sophomores, Juniors, and Seniors..........................108
CHAPTER ONE
INTRODUCTION
Introduction and Statement of the Research Problem
The role of the principal as an instructional leader is essential to improving teacher
instructional practices. In turn, improved teacher instructional practices may lead to
increased student performance as well as improved frequency and focus of teacher
conversations. In a substantial review of research about the role of a principal, Hallinger
and Heck (1996) began their report by stating, “There is relatively little disagreement in
either lay or professional circles concerning the belief that principals play a critical role in
the lives of teachers, students, and schools” (p. 723). Zepeda (2003) further noted that
“instructional leaders are about the business of making schools effective by focusing their
attention, energy, and efforts toward student learning and achievement by supporting the
work of teachers” (p. 13). If one accepts, as these researchers and many others do, that
the actions of principals can have a substantial effect on teachers and students, then
principals should carefully consider which tasks they spend time on in terms of achieving
the goals of the school.
This study investigated specific impacts of reinvigorating the role of principals as
instructional leaders. To maximize the possibility that results from this study would be
relevant to practicing principals, one important aspect of the change in principal behavior
was that the changes were made in a public school setting without additional resources,
hours, or personnel. The changes in the behavior of the principals (how they interacted
with teachers) were hypothesized to improve teacher instructional practices and thereby
improve student performance. Also, these changes were hypothesized to improve the
frequency and focus of teacher conversations with principals. The frequency of
traditional principal tasks labeled as less desirable (e.g., discipline-related tasks) was
expected to decrease, allowing more time for the principals to function as instructional
leaders, including spending substantial time in classrooms and collaborating with
teachers.
The responsibilities of providing instructional leadership are in addition to the
managerial tasks which traditionally take up a significant amount of principals’ time.
When school principals spend too much time on managerial tasks and neglect being
instructional leaders, teachers can become isolated from building leadership and may
engage in interactions and conversations which fail to enhance and may even degrade the
overall quality of education. In the absence of strong instructional leadership from principals,
teacher classroom practices vary from one classroom to the next, depending upon
individual teachers’ strengths and weaknesses. Such inconsistencies in classrooms
produce additional inconsistencies in instruction and inhibit consistent increases in
student performance.
Significance of the Study
The research base indicates that improving instructional practices is essential to
increasing student performance within a school (e.g. Cushman & Delpit, 2003; Felner,
Kasak, Mulhall, & Flowers, 1997; Fullan, 2005a; Haycock, 1998; O'Hanlon &
Mortensen, 1980). Implementing a few specific changes to principals’ behavior may
enable principals to function more effectively as instructional leaders and improve
teacher instructional practices. In this study, improved instructional practices were
hypothesized to increase student performance and to improve the frequency and focus of
teacher conversations.
This study’s basic structure for enhancing principal instructional leadership
behavior included the addition of four specific principal-teacher interactions (one-on-one
summer meetings with teachers, frequent classroom visits, data reviews, and teacher self-assessments) and can be implemented without the use of extra money, time, or personnel
(all of which are difficult to reallocate within a school). These four principal-teacher
interactions comprised the treatment for this study. Other goals of this study included
situating the treatment within the boundaries of typical practice and contributing evidence
useful to other principals in schools who may choose to implement similar principal-teacher interactions.
The principal behavior treatment in this study reflected common practices
integrated into many educational organizations. However, an important difference in this
study’s treatment was the intentionality and intensity of the principal behaviors
maintained over an extended time period (two years in this study). The four interactions
used in the study involved principals systematically conferencing with teachers,
frequently viewing instruction, reviewing data with teachers for specific purposes, and
collaborating with teachers through self-reflection of instructional practices. Each of
these interactions is part of the basic responsibilities of a principal; however, the
difference between typical practice and these interactions was the frequency and focus
of the principals’ classroom visits and the frequency and focus of curricular discussions
with teachers. This study presents empirical evidence that this set of principal-teacher
interactions had an impact on instructional practices. It also presents evidence of the
effects that changes in instructional practices may have had on student performance
(operationalized as classroom grade performance and discipline-related behavior) and
evidence of a shift in the frequency and focus of teacher conversations.
Context
Students at the high school in which these principal-teacher interactions were
introduced, Dixie Heights High School, attained average academic performance at both
national and state levels. Historically, the typical challenges of inducing change and a
number of political forces have regularly influenced the dynamics of principal-teacher
interactions within schools. One advantage of this particular school context was the
prior years’ stability of the principals’ tenure and the support of the district central office.
Schools with frequently fluctuating principal leadership or poor support from the district
central office may not have the ability to implement these particular treatments and their
potential effectiveness might be different from that found in this study.
Although at the time of this study principals in Kentucky could not fire teachers
for non-performance or resistance to change, the position of principal was nevertheless
viewed as a position of power and influence. It was in this setting that the researchers,
two of the four principals at Dixie Heights High School, set out to design, implement, and
study a specific set of principal-teacher interactions for the purpose of improving teacher
instructional practices.
Dixie Heights High School, with 1,400 students and 57 faculty members, was
geographically located between two high poverty urban areas and a number of affluent
private schools. The school had a reputation of high academic expectations and was
performing higher than average in state testing (20th percentile within the state) although
it produced only average ACT scores (average score of 21.1 in 2007, which is 50th
percentile nationally). Despite above-average performance and a reputation for high
expectations for student performance, Dixie Heights High School failed to produce
consistent incremental gains in academic performance over the decade prior to this study,
according to national and state standardized test results.
At the time of this study the culture of the school was perceived by teachers and
students as student-centered. For example, the school often gave each student a free Dixie
sweatshirt to begin each school year, and the faculty often chose to contribute money at
Christmas to provide food for needy students on the holiday weekends. Many
interventions were offered for struggling students, ranging from after-school tutoring to
school-within-a-school credit recovery to a school-funded night school, which allowed
students to work during the day and come to school at night.
At the time of this study the district had an open-enrollment policy and a tuition
policy that allowed students to attend district schools from another district or attend
different schools within the district. Open enrollment students lived within Kenton
County School District and chose to enroll at Dixie Heights High School although they
were zoned to attend a different high school. These students provided their own
transportation to and from school. Tuition students were students who did not live within
Kenton County School District but chose to enroll at Dixie Heights High School. Tuition
students paid a fee of $300 a year and provided their own transportation to and
from school.
From school years 1996-1997 to 2006-2007 the course schedule was based on
block scheduling at Dixie Heights High School. In spring 2007 (immediately prior to the
start of this study), the Dixie Heights High School School-Based Decision Making
Council (SBDM) approved a trimester schedule to allow for a larger variety of elective
classes.
At the time of this study, there were a number of political forces which influenced
and regulated the principal-teacher interactions within Dixie Heights High School.
Kenton County School District had a strong teacher union (the Kenton County School
District Education Association, KCEA) whose leaders worked closely with district
administration. Changes in how principals interacted with teachers fell under the scrutiny
of KCEA, and any principal-teacher interactions to be introduced by this study had to fall
within union-approved guidelines.
Evaluations of Dixie Heights High School teachers were governed by Kentucky
state laws requiring teacher evaluation every third year after being tenured, and every
year for non-tenured teachers (teachers with fewer than four years of experience within the
state). In each year of official evaluation, two formative assessments were required,
followed by an end of year conference in which a summative evaluation was completed
by the principal. A pre-observation conference was held prior to each classroom
observation and a post-observation conference was conducted after each observation.
This ensured some conversation between principals and teachers, even if not as frequently
as recommended by research (Black, 2007; Downey, Steffy, English, Frase, & Poston,
2004; DuFour & Marzano, 2009; Hirsch, 1999; Jacob & Lefgren, 2006; Leithwood,
1987; Marshall, 2008; Montgomery, 2002; O'Hanlon & Mortensen, 1980; Whitaker,
2003).
At the time of this study the school’s principals (one head principal and three
assistant principals) were stable in their positions, and central office administrators were
supportive of the school principals. During the pilot year of this study (school year 2007-08), the newest
principal was in his third year as a principal in this school and the most senior principal
was in her eleventh year as a principal in this school. District central office personnel also
worked closely with teacher union leaders to establish good relationships. The district
office, teacher union, and principals shared common goals of instructional improvement
and a desire to improve teacher instructional practices and increase student performance.
Origin of the Study
In the summer of 2007 the four principals of Dixie Heights High School began
consulting with district personnel to investigate possible treatments to improve
instruction. They initiated conversations with the school’s staff and brainstormed ideas
for improving school instruction. In the fall of 2007, two principal-teacher interactions,
data review of classroom grade distributions and frequent, short classroom visits by the
principals, were piloted in the school (the next chapter provides more details of these
principal-teacher interactions).
Personnel from the academic support department of Kenton County School
District’s central office assessed the execution, effect, and teacher perception of these two
principal-teacher interactions. Based on experiences from the pilot year, input from the
Kenton County School District’s central office and Dixie Heights High School teachers,
and a substantial review of the literature (as described in chapter two), a set of four
principal-teacher interactions was crafted in detail for 2008-2009 implementation. The
principals would: (a) hold individual one hour summer meetings with each teacher to
establish expectations, discuss the teacher’s individual strengths and weaknesses, and
establish the teacher’s individual growth plan for the coming year; (b) conduct snapshots
(short classroom visits) on a frequent basis during the school year during which
principals would become part of the educational process and have frequent discussions
with the teachers about their instructional practices; (c) provide and collaborate with
teachers on data reviews in the form of their classes’ grade distributions and discipline
reports at the end of each trimester (once every 12 weeks); and (d) require teachers to
complete self-assessments of their instructional practices in the classroom utilizing a
rubric based document known as the quality instruction rubric (QIR).
Conceptual Framework
Principals must identify methods for strengthening their role as instructional
leaders in order to improve teacher instructional practices. Improved instructional
practices have the potential to increase student performance as well as increase the
frequency and focus of teacher conversations. Public schools must strive to achieve these
goals without additional personnel or financial resources. Ultimately, there are limits to
what principals can directly affect in a school, but principals do have the option of
changing how they interact with teachers.
There are a number of ways to measure teacher instructional practices and student
performance as well as a variety of ways to observe teacher conversations. The desired
outcomes from this study were to document the effects of the selected principal-teacher
interactions. The interaction outcomes would provide an initial knowledge base to enable
principals to judge the potential value of this particular set of interactions and to provide
tools for principals in other schools to utilize the treatment in their own schools as
appropriate. To make the study manageable, to stay within the limits of what typically
practicing principals can incorporate in their normal ongoing work, and to be consistent
with usual educational practices, the researchers made specific decisions regarding the
type and frequency of principal-teacher interactions. The four principal-teacher
interactions were selected in part because they are within the parameters outlined above,
yet offer the promise of substantial impact (see chapter two). In addition to these four
principal-teacher interactions, principals implemented follow-up interactions with
specific teachers as needed. Thus, the four specific principal-teacher interactions served
as a treatment and as a feedback loop to initiate individual teacher assistance according to
each teacher’s specific needs.
In order to measure the impact of the principal-teacher interactions, the
researchers were required to make decisions on how and what data to monitor. Schools
produce data from many complex, interdependent systems which may yield varying
measures of school performance. After considering multiple measures, the researchers
chose teacher instructional practices, student performance, and teacher conversations as
the key constructs to measure for this study. The following indicators were chosen to
operationalize these constructs:
1. Teacher instructional practices were defined and monitored as:
a. How teachers perceived the quality of their instructional practices from a
teacher-completed instructional practices rubric, the Quality Instruction
Rubric (QIR) (see the instruments section of the methodology chapter)
b. How principals perceived teachers’ quality of instructional practices as
articulated from a principal-completed instructional practices rubric (the
QIR)
2. Student performance was defined and monitored as:
a. Classroom grade distributions
b. Student discipline referrals
3. Frequency and focus of teacher conversations were defined and monitored via a
year-end survey as:
a. Teachers’ perceptions of the frequency and focus of principal-teacher
conversations
b. Teachers’ perceptions of the frequency and focus of teacher-teacher
conversations
c. Students’ perceptions of the frequency and focus of teacher-student
conversations
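To summarize this measurement plan schematically, the hedged sketch below restates the constructs and indicators listed above as a simple data structure. The labels are paraphrases of the list for illustration only; the structure itself is not an artifact of the study.

    # Schematic restatement of the study's constructs and operational indicators.
    # Labels paraphrase the list above; the structure itself is illustrative.
    MEASUREMENT_PLAN = {
        "teacher_instructional_practices": [
            ("teacher-completed QIR", "Quality Instruction Rubric"),
            ("principal-completed QIR", "Quality Instruction Rubric"),
        ],
        "student_performance": [
            ("classroom grade distributions", "classroom grade records"),
            ("student discipline referrals", "discipline reports"),
        ],
        "teacher_conversations": [
            ("principal-teacher conversations", "year-end teacher survey"),
            ("teacher-teacher conversations", "year-end teacher survey"),
            ("teacher-student conversations", "year-end student survey"),
        ],
    }

    if __name__ == "__main__":
        for construct, indicators in MEASUREMENT_PLAN.items():
            print(construct)
            for indicator, source in indicators:
                print(f"  {indicator} <- {source}")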
Figure 1 illustrates how principal-teacher interactions, teacher instructional
practices, student performance, and teacher conversations interact with one another. Note
the directionality of the influence of principal-teacher interactions on teacher instructional
practices which in turn affects student performance, and the frequency and focus of
teacher conversations.
[Figure: a diagram titled “The Ripple Effect of Principal-Teacher Interactions.” Principal-Teacher Interactions (one-on-one summer conference, “snapshots,” periodic analysis of grade distributions and discipline referrals, and reflection on instructional practices) lead to Teacher Instructional Practices (teachers’ and principals’ perceptions of the quality of instructional practices), which in turn lead to Student Performance (grade distributions, discipline referrals) and Teacher Conversations (measured by teacher surveys and student surveys).]
Figure 1 The Effects of Principal-Teacher Interactions on Teacher Instructional Practices Resulting in
Effects on Student Performance and Teacher Conversations
Research Questions
The overall goal of this study was to determine how a specific set of principal-teacher interactions affected teacher instructional practices, student performance, and the
frequency and focus of teacher conversations. This goal was explored by examining the
following specific research questions:
1. How will the treatment of principal-teacher interactions affect teachers’
instructional practices?
2. How will changes in teachers’ instructional practices, initiated by the set
of principal-teacher interactions, affect student performance?
3. How will changes in principal-teacher interactions affect the frequency
and focus of teacher conversations with principals, students, and other
teachers?
CHAPTER TWO
LITERATURE REVIEW
Principals must be instructional leaders to improve teacher instructional practices
and increase student performance. While principals cannot directly control every aspect
of the school, they can directly affect the way they interact with teachers. This chapter
provides evidentiary support of this study’s central claim: with high quality principal-teacher interactions, principals and teachers can begin to improve instructional practices.
The review of literature further supports the claim that improving teacher instructional
practices is the key to increasing student performance as well as improving the frequency
and focus of teacher conversations. While measuring these constructs is a challenge,
useful indicators exist to support such measurements.
The Role of Principal
Historically, the role of the principal has changed from lead teacher, to manager
of the building, to the current role of instructional leader, with emphasis on principal-teacher interactions to improve instructional practices. As recently as December 2006, the
National Association of State Boards of Education reported that high stakes
accountability has caused the role of school principal to include involvement with teacher
instructional practices. Although the expectations for principal have changed to include
curriculum and instructional leadership, principals often get caught in the managerial
tasks of the building, which may diminish their capacity to serve as instructional
leaders. Many of these managerial tasks are time-sensitive and must be addressed
immediately. As a result, the principal’s attention to curriculum and teaching diminishes
because there is no commensurate sense of urgency surrounding curriculum (Halverson,
Kelley, & Kimball, 2004).
In order for principals to meet instructional leadership expectations and continue
to perform managerial tasks, a distributive leadership approach evolved in many schools
between teachers and principals. Prioritizing and sharing tasks was necessary so that the
many responsibilities could be completed in a timely and efficient manner.
According to a study by Glanz, Shulman, and Sullivan (2007), “Student achievement
levels were higher in schools with principals with higher ratings. Research also
concluded that principal quality was connected to student achievement” (p. 6). Moreover,
every principal’s goal should be increased student performance and making learning
available and relevant for each and every student (Cochran-Smith, 2008).
Historical Role of Principal
The historical role of principals was formulated based on a typical business model
within which the schools operated. According to Berry and Beach (2006), in the 1850s
and 1860s principals made sure teachers were hired, bills were paid, books were
purchased and supervision duties were performed. Principal preparation then was based
on a need to manage schools and supervise teachers in their work. Principal preparation
programs did not include enhancing administrators’ ability to collaborate with teachers in
relation to improving instructional practices.
In the late nineteenth century, education moved away from the one-room
schoolhouse with the lead teacher functioning as a principal. Academia began to see the need to
train principals for expanded duties that included skills beyond those typically required of
a lead teacher. While university training for teachers focused on curriculum,
principal preparation focused on managerial roles. At this point, no time in principal
preparation programs was spent enhancing principals’ ability to collaborate with teachers
in relation to improving instructional practices (Berry & Beach, 2006).
In the 1950s, 1960s, and 1970s, social pressures led to increased demands on
public education, and student achievement emerged as a measure of school effectiveness
(Hallinger & Heck, 1996). Principals were charged by their superintendents to lead
initiatives aimed at improving student achievement with no reduction in their managerial
duties and little training toward becoming instructional leaders. This meant that principals
became responsible for both managing the school and improving instructional practices.
In response to high stakes accountability in the 1980s and 1990s, school change
focused on government initiatives, district programs, and top-down practices. These
efforts were based more on compliance with state educational initiatives than on solid
research for strategic change (Danielson, 2007). With continuing demands from state
governments and the public to increase student achievement, the focus on school
improvement remained critical. Principals struggling to meet the demands of their new
roles as instructional leaders continually searched for ways to improve teaching and
instruction.
Current Role of Principal
With the growing profile of high stakes state assessment and accountability, the
need for principals with significant instructional leadership skills continues to grow.
Achievement data and information have become a critical part of principals’ vocabulary
and goal setting for the success of their school. Principals must also be aware of and
involved in classroom instruction because they are ultimately responsible for improving
teacher quality. According to Toch and Rothman (2008), “Public education defines
teacher quality largely in terms of credentials that teachers have earned, rather than on the
basis of the quality of the system required knowledge and work they do in their
classrooms or the results their students achieve” (p. 2). Effective principals must move
beyond the credential-based assessment of teacher quality. Currently, principals are
expected to be change agents in schools for improving test scores, to take responsibility
for high stakes accountability, to be the curriculum leader, and to promote success for all
teachers and students. To meet these rigorous demands the principal must interact with
teachers in instructionally meaningful ways and be aware of what students are learning
(Hirsch, 1999; Leithwood & Mascall, 2008; Marshall, 2006b; Reeves, 2004; Wagner &
Kegan, 2006; Whitaker, 2003; Zepeda, 2003).
The Role of Principal in Principal-Teacher Interactions
The instructional leadership role of principals requires high quality principal-teacher interactions to improve instructional practices. However, principals typically hire
a teacher with an interview and a gut instinct that the teacher will utilize high quality
instructional practices in class. Then, principal-teacher interactions often become
situationally dependent and may be purely social. For principals to understand and affect
quality instruction, principal-teacher interaction must become a regular, even systematic
component of the school’s structure. According to Frase and Hetzel (1990), “the principal
would start to understand the strengths and weaknesses of every teacher because of the
frequent classroom visits and an understanding of how all the parts are interrelated” (p.
18). Regular and meaningful principal-teacher interactions are essential to empower
principals and teachers to affect instruction and student learning.
To be effective instructional leaders, principals need to be involved in
instructional practices on a frequent (e.g. daily) and ongoing basis (Downey & Frase,
2001). Unfortunately, the opportunities for meaningful principal-teacher interactions are
limited because many teachers do not feel confident in their instruction (Halverson,
Kelley, & Kimball, 2004). In other cases, teachers believe principals lack the necessary
knowledge of the teacher’s subject area, which does not engender the conversational give-and-take required for quality feedback. Finally, teachers may feel threatened because the
principal is their evaluator (Hord, 1997). Similarly, principals may also be inhibited in
their ability to develop meaningful principal-teacher interactions because they lack
confidence in their knowledge of specific curriculum or may get caught up in managerial
duties of the school day (Marshall, 2008).
In schools where a rich tradition of principal-teacher interaction has developed
despite the impediments discussed above, principals and teachers may interact
conversationally but not intentionally toward improving instructional practices. Marshall
(2008) and Toch and Rothman (2008) agree that principals spend a large amount of time
discussing initiatives and directives, but schools rarely are impacted by these initiatives
because there is no follow-up on the initiatives and no real expectations are established.
Constructive Effects of Principal-Teacher Interactions
When frequent principal-teacher interactions occur, especially interactions that
focus on principal expectations, goals and strategies, follow-up and feedback are critical.
Many teachers are self-motivated and seek to continually improve, and these teachers
benefit from a supportive, knowledgeable principal. Some teachers need only a small
amount of encouragement to improve, while other teachers will only improve if pressured
considerably. Simply put, it is the expectations put forth and meaningfully supported by
principals that facilitate the improvement of instructional practices (Leithwood & Jantzi,
2000). Although teacher credentialing is necessary and important as one piece of
evidence to ensure quality teaching, it is not necessarily sufficient to produce good
teaching. All professionals benefit from frequent, ongoing professional guidance and
collaboration to reach their full potential.
Principals who have the expectation of and the ability to monitor instructional
practices can improve the quality of collaboration in their building. Klingner, Arguelles,
Hughes, and Vaughn (2001) wrote,
A second factor affecting the sustainability of classroom specific innovation is
school leadership. Schools at which principals devote time to the development of
an innovation are more likely to have teachers committed to its practice. Teachers
respond to principals actively trying to improve instruction. The principals’
interest in curriculum validates the teachers’ role in student learning. Further,
districts that procedurally rotate principals may have more difficulty sustaining a
classroom specific strategy than schools where principals are retained (p. 225).
It is the teachers’ and the principals’ responsibility to embrace professional development
and follow it through to instructional practice. Hallinger and Murphy (1985) found that
instructional leadership variables connected to achievement included supervision of
instruction and embedded professional development. If embedded professional
development is focused, ongoing and regularly used in the classroom, it will assist rather
than hinder the educational process and can improve student learning.
When principals demonstrate knowledge of the instructional practices occurring
in the classroom, teachers are more confident that principals are part of the learning
solution leading to student achievement. Marzano (2003) claimed, “Rather than prowling
through classrooms with checklists of ‘correct’ practices, administrators should be
looking at interim results with their teachers, identifying the most effective practices” (p.
167). Haycock (1998) found, “successful strategies differ from many professional
development programs…these strategies are ongoing, on-site, and focused on the content
that students should learn” (p. 13). Thus, regular and meaningful principal-teacher
interactions that focus on quality instruction are the key to improved collaboration
between principals and teachers and student achievement.
Effects of Principal-Teacher Interactions
Principal-teacher interactions that focus on social exchanges are an easy and
comfortable way for principals and teachers to interact, but they do not yield quality
reflection to improve classroom instructional practices. Marshall (2008) claimed that in
order to be a change agent in curricular areas in the building, a principal must know what
the teachers are teaching and be familiar with the content and delivery system in order to
suggest useful improvements to the classroom. Glanz, Shulman, and Sullivan (2007)
agreed, finding that “supervision is a non-evaluative process in which instructional
dialogue is encouraged for the purpose of engaging teachers to consider effective
strategies to promote student learning” (p. 7).
The mere leadership status of a principal does not impact teacher knowledge of
instructional practices, but the principal’s interaction with teachers in classrooms around
curriculum and instruction will. Hirsch (1999) found that administrative roles do not
inherently improve quality academic principal-teacher interactions on a professional
level. The principal must intentionally invest time in classrooms and in principal-teacher
interaction focusing on quality instruction. Alig-Mielcarek (2003) claims, “Instructional
leadership has a direct and indirect effect on student achievement” (p. 74). To be
effective, principal-teacher interaction cannot be an intermittent or low-priority
initiative; it must be a pervasive and regular process immersed in the daily work of
teachers. Teachers want feedback on the instructional practices they use in class. They
want other adults to see the good work they do and to discuss suggestions for
improvement. But, without the information gained from frequent classroom visits,
principals tend to consider teachers’ personalities instead of their classroom practices to
gauge quality instruction; they tend to control rather than enable teachers; and they tend
to interact socially with teachers rather than professionally (Marzano, Waters, & McNulty,
2005).
Principal-Teacher Interactions and Distributing Leadership
Another area of interest, when considering principal-teacher interactions, is goal
alignment and capacity building through a distributed leadership style. There are many
forms of administrative leadership; however, the research summarized below suggested
that a distributed leadership style is effective and, if used properly, can increase student
achievement. With leadership responsibilities distributed to many stakeholders sharing a
common goal, school improvement strategies can be implemented effectively.
Many schools, especially large high schools, have multiple principals and staff
who may or may not work well together. With the rigorous demands on school leadership
to affect student achievement, such inefficiency must be addressed. As Fullan (2005b)
stated, “capacity building must become a core feature of all improvement strategies, and
we need to focus explicitly on the difficult issues of sustainability” (p. 180). Principals
must activate and encourage designated leaders in any building (department heads,
curriculum leads, counselors, informal teacher leaders etc.) in order to effectively pursue
instructional goals. The degree to which a principal’s goals and methods are aligned with
the many staff engaged in distributive leadership can make a significant difference in the
success of the school.
Goleman, Boyatzis, and McKee (2003) suggested that every person in every role
at some point acts as a leader. This creates a system to build capacity that will sustain
improvement should a particular leader leave. Ritchie and Woods (2007) described a
distributed leadership model that is a “distribution of responsibility, working in teams,
and engendering collective responsibility” (p. 364). In other words, the goals and
responsibilities of leaders and stakeholders become interdependent. Moreover, the
sharing of power and decision making creates an environment that improves the quality
of decisions and builds a collaborative culture that will not disappear with changes in
leadership. In other words, distributive leadership builds real change that is sustained.
Ritchie and Woods (2007) found that a distributed leadership model falls into
different categories. Although the study reviewed self-reported data, identifying factors
for distributed leadership included staffs that (a) were challenged and motivated; (b)
regarded themselves as learners; (c) felt valued, trusted, listened to, and supported;
(d) were involved in creating, sharing, and developing a collective vision; (e) were aware
of their talents and leadership potential; (f) appreciated responsibilities and opportunities
given to them; (g) felt supported; and (h) appreciated autonomy. Faculty involvement in
leadership also yields useful ideas and concepts that could otherwise be overlooked by
the principal acting alone.
When implemented correctly, a distributed leadership style is not only effective
but is also popular among staff. A study by Leithwood (1987) described the “popularity
of distributed leadership as a desirable approach to leadership practice in schools.
Justifications for the optimistic consequences associated with this approach to leadership
invoke democratic values, shared expertise, and the commitment that arises from
participation in decision making” (p. 65). In essence, the complete engagement of all
stakeholders forms a community of learners that will positively impact instructional
practices and eventually positively impact student achievement. To succeed, there must be
a sense that everyone is moving toward the same goals; with all
stakeholders focused on a common vision and mission, positive movement will occur.
Summary of Constructive Effects of Principal-Teacher Interactions
Effective principals approach principal-teacher interactions with the objectives of
working together with all stakeholders and using every opportunity to become part of the
educational process. Typically, teachers are trying their best to be effective and want to
improve their instructional practices. Principals can facilitate these endeavors by
distributing leadership and by assisting all teachers to be more effective through reducing
isolation, discipline issues, poor climate, and other contributing factors. Since research
exists (e.g., Connors, 2000; Felner & Angela, 1988; Haycock, 1998; Lezotte, 2001; Price,
Cowen, Lorion, & Ramous-McKay, 1988; Raudenbush, Rowan, & Cheong, 1992) that
concludes that good teaching does make a difference in student performance, improving
and implementing high quality collaborative principal-teacher interactions may
reasonably be hypothesized to lead to better teaching and subsequently improved student
achievement.
According to Cushman and Delpit (2003), teachers and principals rarely interact
with each other on a professional level (e.g., curriculum, daily lessons, student
relationships, new innovations or programs). Many educators claim time constraints
significantly inhibit principal-teacher interactions (Downey & Frase, 2001). Haycock
(1998) nevertheless found that “If education leaders want to close the achievement gap,
they must focus, first and foremost, on developing qualified teachers and document the
clear relationship between low standards, low-level curriculum, under-educated teachers
and poor results” (p. 3). This process takes time, but it is worth the investment to improve
classroom instruction and the culture of the building so high quality teachers can flourish.
Haycock (1998) and Jordan, Mendro, and Weerasinghe (1997) also found that students
who have highly effective teachers for consecutive years increased their performance on
standardized tests more each year. Thus, if we want to close achievement gaps, we must
make sure students have consistently effective teachers rather than intermittently
effective instruction. Principals must make the time to interact with teachers toward
improving test scores, curriculum, daily lessons, student behavior and innovative
programs.
High Quality Principal-Teacher Interactions
Although principals put forth their best efforts to make quality hiring decisions
based on interviews and references, teachers still need ongoing encouragement and
mentoring for continuous improvement. In most schools, intentional high quality principal-teacher interactions do not occur on a regular basis. This study posits that if principals and
teachers (a) intentionally discuss the quality of instructional practices and collaboratively
make plans for improving the quality of instruction; (b) use classroom level student data on
a frequent basis to aid teachers in reflection on the success of their students; (c) periodically
collaborate in classrooms throughout the year; (d) use the same rubric based instrument to
define quality instruction, then instructional practices, student achievement, and the
frequency and focus of teacher conversations will improve.
The Reality of Principal-Teacher Interactions
Principals typically work within an established structure for observing teachers’
instructional practices for summative evaluation, but they rarely observe student learning in
order to provide frequent ongoing feedback to teachers (Marshall, 2006b). The assumption
seems to be that teachers would not benefit from collaboration regarding their instructional
practices until after the formal evaluation process is completed. Yet even excellent teachers
benefit from encouragement and constructive feedback as well as reviewing data and
reflection on their instructional practices (Downey & Frase, 2001). Principals and teachers
need to collaboratively discuss data and reflect on instruction to improve instructional
practices. Simply put, in order for principals to have an accurate understanding of
instructional practices they must visit classrooms. Many teachers are isolated in their
practice with the exception of the mandatory, scheduled observations during the school
year; and if these observations are the only curricular interaction the principal has with the
teacher, the principal will often get an atypical instructional performance measure during
pre-scheduled observations.
Glanz et al., (2007) found that many principals only see teachers teaching during
official observations, and Marshall (2008) noted that “Principals make an educated guess
about what’s happening during 99.5 percent of the year when they’re not there, saying a
prayer and relying on teachers’ professionalism” (p. 1). Rather than relying on chance,
principals and teachers should be frequently discussing instructional practices and
observing student learning to continually improve classroom instructional practices.
Collaborating with Teachers (Summer Meetings)
The principals’ and teachers’ most important long-term work is improving
instruction (Leithwood, Seashore, Anderson, & Wahlstrom, 2004), and Ginsberg (2001)
claimed that knowing exactly what is taught in the classroom will facilitate conversation
and self-reflection in teachers, adding a new level of professionalism often overlooked.
Unfortunately, many principals rarely initiate these conversations with teachers or talk only
about non-instructional topics (Downey & Frase, 2001). To address this problem principals
and teachers must intentionally take time to discuss the quality of instructional practices as
well as collaboratively plan for improving the quality of instruction. Summer meetings
between principals and teachers will facilitate this process. In these meetings principals and
teachers can more clearly focus on adjusting methodology for improving student learning
than during the school year when implementation of strategies commands the attention of
teachers and principals. With administrative involvement in teacher reflection during
summer meetings, objectives for quality instruction and student performance become mutual
goals for principals and teachers and frame appropriate principal-teacher interaction
throughout the year. Summer meetings also provide time to share feedback with teachers
from the completed school year in order to consider areas of growth for the coming year.
A study by Martinez, Firestone, Mangin, and Polovsky (2005) found that teachers
desired feedback on their work, and their statements were grounded in distributed
leadership which asserts, “Monitoring progress during change periods of reforms is
important to institutionalize change” (p. 7). Summer meetings between principals and
teachers provide an excellent opportunity to monitor progress and to guide and encourage
continuous instructional improvement. In these meetings monitoring can be formal or
informal but must be present so that teachers can be confident their goals are shared and
supported by the principal. This kind of encouragement was also a key piece of the
Martinez et al. study. They found that teachers need to be rewarded for innovation and
know that their ideas and hard work are valued. Collaboration will be more successful and
effective change is more likely to occur when teachers feel valued by their supervisors.
Principals in Classrooms (Origin of Snapshots)
In many schools the principals visit only a few teachers’ classrooms (teachers who
are in their evaluation year) on two to three announced visits for the purpose of formal
observations. This study hypothesized that for principals to have knowledge of the
instructional practices happening on a daily basis in their building, they must be in
classrooms more frequently and consistently. Another substantial change from typical
practice at the core of this study was that principals made frequent unannounced visits to all
teachers’ classrooms throughout each year and that these visits were collaborative
efforts to improve student learning, not threatening visits for the purpose of
teacher evaluation. The resulting snapshot structure (short classroom visits) required a
significant change to many traditional principal behaviors.
According to Wagner (2006), in the 1970s and 1980s most principals worked in
isolation and focused on management of the building more than instruction. Beginning at
this time principals recognized the need to be more visible throughout the building in order
to ascertain a better understanding of instructional practices and to effectively manage their
buildings. To accomplish this, Management by Walking Around (Peters & Waterman, 1982)
became a popular approach. Unfortunately, this strategy did not achieve a curricular
purpose; after principals walked around with a clipboard and/or checklist noting details about
instruction, there was little to no subsequent interaction with teachers or students to improve
instructional practice.
As technology improved and became more accessible to principals, significant
changes in principal data collection occurred, but the purpose and impact were no
different from the paper-and-pencil checklist used in Management by Walking Around. In
the 1990s, according to Downey et al. (2004), personal digital assistants called e-walks
became the new tool for administrators. The e-walk was best described as an electronic check-off
list for the learning walks, and was founded on the belief that if small amounts of data
help, perhaps more data would be better. To collect and analyze such data, districts and
schools began to seek assistance from research in gathering classroom data because
principals could not see every teacher every day. Soon after, the Pittsburgh Learning
Walks (Downey et al., 2004) were designed to track the work teachers were doing in their
classrooms while the observers recorded data and talked with students (pulling students
out of the classroom) to gain information. While e-walks and Pittsburgh Learning Walks
offered principals many new sources of classroom data, it is not the data themselves that
help change schools; it is how data are used that impacts instruction and ultimately student
achievement (Downey et al., 2004).
The common threads of classroom visit strategies in the past thirty years were
increased principal visibility and getting principals in the classrooms so they were aware
of what was going on with curriculum and instruction within the school. The critical
missing piece in these initiatives, however, has been connecting these classroom-based
principal experiences to meaningful principal-teacher interactions to improve instruction.
In schools where principals visited classrooms without interacting with teachers to improve
instruction, such initiatives showed only short-term local success within the buildings where
they were implemented, and there is a lack of long-term data to follow (Toch & Rothman,
2008).
As indicated by Connors (2000), principals constantly make decisions based on their
perception of a teacher’s performance, and often this perception relies on hearsay and
reputation. Marshall (2008) stated, “it’s important for principals to get into classrooms and
observe” (p. 1), and it would benefit school administrators to be in classrooms in order to
know what is going on rather than hoping teachers are doing their best every day in the
classroom. Unfortunately, many principals begin the year with a momentous speech,
followed by professional development sessions periodically throughout the year, but there is
no sustained procedure for principals getting intimately involved with the teaching and
learning process. Snapshots, as defined in this study, are the next logical step to
improvement.
While short classroom visits may add value to the principal’s knowledge, until now
short classroom visits have been little more than monitoring and collecting data for
administrative teams to examine. Additionally, teachers have not embraced principals
visiting their classrooms for short time periods (10-15 minutes) because of the low frequency
of visits, the lack of feedback, and the concern that these brief visits captured activities that did not
accurately demonstrate their practice or that the visitor’s comments were censored. As a result, teachers
often dislike and discount short visits as not helpful for improving instructional practices.
Conversely, the snapshot structure at the heart of this study emphasized frequent principal
interaction with teachers and students and focused specifically on providing collaborative
feedback to improve instructional practices. As DuFour and Marzano (2009) discussed, for
growth in instruction to occur, the principal needs to be in the classroom alongside the
teacher to validate learning. Snapshots put principals in classrooms alongside teachers and
students on a frequent basis, promoting collaboration and improvement of instructional
practices.
Data Based Decisions (Data Reviews)
Principals and teachers need to collaborate, discuss data, and reflect on the
instructional process so teachers are encouraged and instructional practices improve.
According to Wayman (2005):
Accountability mandates such as No Child Left Behind (NCLB) have drawn
attention to the practical use of student data for school improvement. Nevertheless,
schools may struggle with these mandates because student data are often stored in
forms that are difficult to access, manipulate, and interpret. Such access barriers
additionally preclude the use of data at the classroom level to inform and impact
instruction (p. 295).
Data are not new to education, and with the addition of advanced technology, data are now
easier for principals and teachers to access and manipulate. But data cannot be acquired and
then simply handed out with the expectation that all teachers will draw reasonable and
actionable inferences from data (Marshall, 2008; Toch & Rothman, 2008). Teachers,
department chairs, and principals must work together to effectively process and interpret the
data. When principals and teachers use data, they have the potential to report gaps, enhance
the education of students, and refine instructional practices. According to Doyle (2003)
“Only when data become genuinely useful and common-place in the classroom will
teachers and administrators welcome it. And only when it is useful will data qualities
improve” (p. 23). Guided by the principal, active introduction and use of quality data will
enhance teaching and teachers’ ability to improve instructional practices.
Teacher Self-Assessment
Principals gaining knowledge regarding classroom activities throughout the
school has great potential to improve teacher instructional practices, to impact student
performance, and to increase the frequency and focus of teacher conversations through
self-assessment (DuFour & Marzano, 2009). Many teachers do not take the time to
formally reflect on the quality of their own instructional practices. When teachers do
reflect they usually do not have the advantage of a comprehensive instrument to
accurately evaluate instruction or use a language describing instruction that is common
among educational professionals (Danielson, 2007). Conversely, when teachers evaluate
themselves using the same rubric-based instrument that principals use to define quality
instruction, they engage in rich dialogue regarding instructional practices and often gain
recognition as curriculum leaders (Danielson & McGreal, 2000). According to Hirsch
(1999), teachers are often unaware of how their classroom environment, evaluations, and
students’ performance compare to others outside of their own classroom. When teachers
learn what other teachers are doing, the experience often initiates collaboration, enhances
instructional practices, and improves student performance; this quality teacher-teacher
collaboration is the basis of a professional learning community.
Teachers and principals can benefit from formally reflecting on their practices
(self-assessment) and participating in discussions with other teachers and principals
utilizing an instrument which accurately evaluates the quality of instruction and supports
a common language among educational professionals. Principals, who recognize teachers
as equal partners in the process of change, acknowledge their professionalism and
capitalize on their knowledge and skills to become agents of change (Darling-Hammond
& McCloskey, 2008; Rowan, Chiang, & Miller, 1997).
According to Marshall (2006a):
Over the last decade, a number of districts and charter schools have experimented
with a new way of evaluating teachers – rubrics. A major source of inspiration has
been Charlotte Danielson’s 1996 book, Enhancing Professional Practice: A
Framework for Teaching (ASCD), which contains an extraordinarily thorough set
of scoring guides. Supporters of rubrics say that this approach addresses some of
the most glaring problems of conventional teacher evaluation. First, rubrics are
more ‘judgmental,’ giving teachers clearer feedback on where they stand, usually
on a 4-3-2-1 scale. Second, rubrics explicitly lay out the characteristics of each
level, giving mediocre and unsatisfactory teachers a road map for improving their
performance. And third, rubrics are much less time-consuming for principals to
complete, since lengthy narratives and lesson descriptions are not required. (p.2)
Although Marshall’s (2006a) description implied that the rubric score is sufficient to
provide effective feedback to teachers, the principal-teacher discussions that take place
around a rubric evaluation provide more valuable assessment of instruction than numbers
corresponding to teacher behaviors. Teacher self-assessments combined with principal-teacher verbal interaction around that assessment provide a forum for effective
discussions about a teacher’s own practice. With this approach, the teacher may also
provide artifacts that will enhance the principal’s understanding of the teacher’s
performance and permit a strong evidence-base to support the verbal conversation.
Summary of High Quality Principal-Teacher Interactions
Principals typically establish a pattern for observing teacher instructional practices,
but rarely collaborate with teachers. In order to improve instructional practices principals
and teachers must regularly be in the classroom together to review data and to reflect on
instructional practices. The use of an instructional practices rubric provides an effective
structure to accomplish these goals. Through this process, the practices that impact
instruction include (a) summer one-on-one discussions focused on quality teaching that
prepare the teachers for future high quality principal-teacher interactions; (b) snapshots that
place principals in classrooms where instruction is taking place and provide a launching
point to enhance discussions of the teachers’ instructional practices as well as opportunities
to model high quality instruction; (c) data reviews that provide both the teacher and the
principal indicators of student performance to aid in recommendations for improving
instructional practices; (d) discussion of the common language in the quality instruction
rubric that provides the participants common concepts and focus necessary to improve
instructional practices.
Effective Ways to Measure the Quality of Teacher Instructional Practices
The Quality Instruction Rubric (QIR) used in this study was developed in the
Kenton County School District (the school system for Dixie Heights High School, where
this study took place) from the work of Charlotte Danielson (1996). Co-facilitated by the
teachers union and central office, the QIR articulates a shared understanding of the
indicators for quality instructional practices. As an informal, formative assessment tool,
the QIR uses a research-supported rubric to assess the quality of
instructional practices and establish the correlation between good teaching, classroom
grades, and student discipline (Danielson, 1996; Halverson, Kelley & Kimball, 2004;
Danielson & McGreal, 2000). The development and validation process of the QIR is
described in more detail in the instrumentation section of chapter three.
Rubric Based Evaluation
As discussed earlier in this chapter, rubric evaluation can enhance discussions of
instructional practices between a principal and teacher. The principal can also be provided
with artifacts of teaching that can further help the principal understand the nuanced details
of the teacher’s instructional practices. Assigning ambiguous numbers to teacher behavior,
a common practice in most teacher evaluations, may or may not reflect the teacher’s true
performance or support a determination of the teacher as either effective or requiring support.
Often teachers do not value this type of evaluation and dismiss accompanying feedback
(Hirsch, 1999; Marshall 2008; Whitaker, 2003).
According to Mathers, Oliva, and Laine (2008), “An evaluation is considered
reliable if two or more evaluators use the same evaluation instrument and come to the same
conclusion” (p. 8). The rubric used in this study was pilot tested in the district by principals
and central office personnel. To ensure reliability and validity of any evaluation instrument,
Muijs (2006) recommends training for teachers and principals. The central office staff
trained the principals intensively on the use of this instrument. See the measures and
instruments section of chapter three for more details on reliability and validity of the rubric-based tool
(QIR) used to measure teacher instructional practices during this study.
Foundation of the QIR
The Quality Instruction Rubric (QIR) was developed by a committee of
stakeholders from the Kenton County School District based on the work of Charlotte
Danielson (1996). See development of the quality instruction rubric section of chapter
three, for more details on the development process for the QIR. Danielson’s (1996) work
established standards to assess and promote teacher development across career stages,
school levels, subject matter fields, and performance levels. The Kenton County Schools
QIR embedded Danielson’s standards within a framework that included the four domains
of planning and preparation, the classroom environment, instruction, and professional
responsibilities. These domains included 22 components:
The teacher demonstrates knowledge of course content, core content and depth of
knowledge and considers prerequisite knowledge necessary for successful student
learning environment, instructional goals that represent high expectations, reflect
relevant learning and demonstrate conceptual understanding of curriculum,
standards and frameworks, coherent lesson and unit plans, instructional grouping
and learning activities, integrates resources for rigor and relevance, creates a
respectful environment, develops a rich learning, recognition of, acceptance of, and
sensitivity towards diverse opinions and cultures, effectively manages student
behavior, demonstrates clear content-related practices, incorporates higher order
questioning and manages effective instructional activities, assignments, and
student grouping, effectively differentiates instruction, demonstrates a clear
knowledge of developmental characteristics, assessment criteria and standards,
reflects upon own teaching and uses self-assessment to improve future teaching,
integrates new knowledge from professional development, contributes to school
and district through active involvement in school initiatives, professionalism and
positive relationships, communicates with families, manages accurate records,
manages non-instructional records, demonstrates professionalism in demeanor,
dress, language and punctuality. (Danielson, 1996, p. 4)
For each component the rubric specifies a 4-category range for assessing
appropriate teacher behaviors that can be referenced and modeled. Good teaching can
become a quantifiable practice once teachers are familiar with the rubric. The QIR
framework offers multiple applications for teaching and learning. It provides guidance for
experienced professionals regarding the significance of principal-teacher conversations to
validate quality instruction and to discuss effective teaching techniques. For a mentor or
coach, the framework provides a guide to assist inexperienced teachers through their first
few years (and beyond). The framework also fosters teachers’ development by specifying
techniques for assessing each aspect of practice, establishing a program for evaluator
training, and using the framework for formative as well as summative evaluation. For the
community, the framework communicates the attributes of high quality instruction in its
schools (Danielson & McGreal, 2000).
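As an illustration only (a hypothetical sketch, not the district's actual QIR instrument or any software used in this study), the snippet below shows one way ratings on a four-domain, 22-component, 4-category rubric such as the QIR could be recorded and summarized for a single teacher; the component names, groupings, and values are invented placeholders.

```python
# Hypothetical sketch: recording and summarizing rubric ratings for one teacher.
# Assumes each component is rated on the rubric's 4-category scale (1-4) and
# grouped under the four domains; component names below are abbreviated placeholders.

from statistics import mean

qir_ratings = {
    "Planning and Preparation": {"content knowledge": 3, "instructional goals": 4, "lesson design": 3},
    "Classroom Environment": {"respectful environment": 4, "managing behavior": 3},
    "Instruction": {"higher-order questioning": 2, "differentiation": 3, "clear practices": 3},
    "Professional Responsibilities": {"self-reflection": 4, "communication with families": 3},
}

def summarize_qir(ratings):
    """Return per-domain means and an overall mean across all rated components."""
    domain_means = {domain: mean(components.values()) for domain, components in ratings.items()}
    overall = mean(score for components in ratings.values() for score in components.values())
    return domain_means, overall

domain_means, overall = summarize_qir(qir_ratings)
print(domain_means)          # per-domain averages on the 1-4 scale
print(round(overall, 2))     # overall average across components
```

Summaries of this kind are only a starting point; as discussed in this chapter, the value of the rubric lies in the principal-teacher conversation around the ratings rather than in the numbers themselves.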
Another important feature of the QIR framework has been that it was publicly
derived, comprehensive, and generative: a living document that changed with the culture
of the school and did not endorse a specific teaching style. It was not a checklist of
specific behaviors, but rather a launching pad for discussion to enrich instruction.
Because the QIR was grounded in Danielson’s work, had a common language and
communicated a shared understanding of quality teaching practices, it facilitated effective
professional conversations between principals and teachers.
Principal Evaluation of Teacher Instructional Practices
Teacher evaluations are most effective when principals and teachers agree about
what instructional practices should be evaluated and when the principal has been trained
in evaluation protocol. According to Jacobs and Lefgren (2006), “When trained in
quality evaluation techniques and how to recognize quality teaching, most principals can
become effective evaluators of teachers” (p. 67). When principals are further trained to
recognize effective instructional practices and to replicate these practices, the principal’s
role as instructional leader becomes indispensable for school improvement.
As a study by Jacobs and Lefgren (2006) found, principals can evaluate teachers
effectively, and teachers were comfortable with principals evaluating them, though not for
merit pay. Although there is little research on principals being the primary evaluators of
teachers, the principal is the teacher’s supervisor and therefore the most logical evaluator.
As this study demonstrated, in an effective evaluation process the principal must be
trained and the teacher must agree that the evaluation instrument is an accurate reflection
of what good teaching looks like. According to Epstein (1985), “Principals and
curriculum supervisors have long been recognized as appropriate evaluators of teachers,
despite the problems of partiality in ratings or infrequent and incomplete observations”
(p. 4).
Student Performance
Student performance is defined in a number of ways by different researchers and
educators, but student grades and discipline are clearly connected to student performance
regardless of definition. Grades are universally used as stand-alone indicators of student
performance in most educational institutions and research has clearly established the link
between discipline and student achievement (Jimerson, 2001). Although neither grades nor
discipline is an ideal indicator of student performance, they are common, easily accessible,
and valued by stakeholders.
Grades and Instructional Practices
Although not one of the most reliable indicators of student achievement, classroom
grades provide some measure of how well students are performing in class, according to
the teacher. Grades also allow principals to mark changes across the year that reflect the
teacher’s perspective of student performance. According to Guskey (2006), grades indicate
how a particular student is learning at a given time in a subject area. While not essential to
learning, grades provide important information regarding the progress of a student at a
given time, but according to O’Connor (2009), “It is essential to be clear about the primary
purpose of grades, which is to communicate students’ achievement of learning goals” (p.2).
Teachers and students may disagree on how a grade should be assigned. For example,
students believe effort should be a major factor in grades, while teachers do not believe
effort should be a factor in the students’ grades (Adams, 2002). O’Connor (2009) reported
in a recent review of the literature concerning grading practices in secondary schools that
“most teachers have combined achievement with behavior to varying extents in determining
grades because they believe it demonstrates what they value and will motivate students to
exhibit those behaviors” (p. 3). According to McMillan (2001), “findings from other studies
show that this practice (intertwining grades and behavior) is still pervasive” (p. 30).
Gathercoal (2004) noted that “due to the excessive entanglement between achievement and
behavior, achievement grades are often misinterpreted” (p. 153). Myers, Milne, Baker, and
Ginsberg (1987) claimed that research clearly indicates that grades are a combination of
student achievement and effort in varying proportions. Thus, grades may be viewed
as an indicator of student performance that reflects instructional practices and student effort
and allow researchers to track changes in student performance from the teacher’s
perspective (effort and achievement).
Discipline and Instructional Practices
Students who are not prepared for class, who are bored, or who have
underdeveloped social skills will act out, and their grades will be lower than those of students
who have the aforementioned skills and preparation (Danielson, 1996). Hence discipline may be a
good indicator of quality instructional practices because if a teacher’s instructional practices
are high quality, the number and frequency of discipline infractions, especially aggressive
infractions, tend to be reduced.
Students with behavior problems earn lower grades and suffer a higher failure rate.
Teachers are more likely to find success with these students when they have the support of
the principal and both work together to help the student. According to McIntosh, Flannery,
Sugai, Braun, and Cochrane (2008) “Problem behavior presents another distinct barrier to
high school graduation because of school disruption and increased use of exclusionary
discipline, such as suspensions and expulsions” (p. 244). Frequent principal-teacher
collaboration better equips the teacher in the classroom to reduce such disruptions and
supports the students’ learning. In classrooms where teachers worked with students and
believed that students could achieve, students earned better grades and demonstrated fewer
behavior problems than in classrooms where teachers felt that students were in complete control of their
environment (Jimerson, 2001). According to Cotton (1996), for quality instruction to take
place, teachers need a supportive principal, standards for student behavior, high
expectations for students, student input into discipline policies, consistent application of
rules, and the authority to discipline students. With these structures in place, discipline problems will
be reduced, allowing the teacher to improve and maintain quality instructional practices.
Frequency and Focus of Teacher Conversations
The frequency and focus of teacher conversations impact the success of the
school, and it is the principal’s responsibility to establish procedures that result in a
collaborative environment among instructional leadership, teachers, and students.
According to some research, collaborative school environments yield improved
instructional practices and increased student performance (Leithwood, Seashore,
Anderson, & Wahlstrom, 2004). Thus, the principal must nurture a culture of
collaboration among teachers because if teachers feel valued, they will value the students.
To create this climate of collaboration, teachers must speak frequently with each other,
with the principals, and with the students. Establishing high quality conversations
between teachers and students is imperative for students to know that teachers care about
them and are interested in their learning (Gurr, Drysdale, & Mulford, 2006; Halawah,
2005; Hirsch, 2008; Leithwood et al., 2004; Wagner, 2006).
Principal-Teacher Conversations
This study institutionalized quality principal-teacher interactions into the culture
of the school; but, as mentioned earlier, these interactions would be ineffective without a
collaborative component. Collaboration between principals and teachers is essential; in
its absence, many common interventions like walkthroughs provide only basic
data and fail to focus on improving instructional practices.
Principals must establish a culture of collaboration with teachers to support their
teaching. According to Barth (1990), collegiality should be the norm and principals are
the catalyst to initiate collegiality for teacher conversations. Principals must then set the
expectation that the collegiality continues in teacher-to-teacher conversation. Routman
(2002) goes so far as to suggest that principals should be viewed as a “learner and
equal group member” when working with teachers (p. 35). This type of intense
involvement of the principal establishes great credibility when working with teachers on
specific curriculum and discipline issues.
According to Ginsberg (2001), “Frequent, brief, unscheduled walkthroughs can
foster a school culture of collaborative learning and dialogue” (p. 44). Such unscheduled
classroom visits outside the context of the evaluation can foster collaborative
conversation between the principal and teacher. Additionally, Downey et al. (2004)
concluded, “the frequent sampling of a teacher’s actions give greater validity to what you
observe and often lower teacher apprehension over time, making formal observations
more effective” (p. 6). By providing follow-up feedback to teachers, principals
demonstrate knowledge of teacher instructional practices, and these quality follow-up
conversations with teachers improve teacher instructional practices and lead to increased
student performance.
Teacher-Teacher Conversations
Teaching has changed from a lonely profession with little or no feedback from
principals, to a profession that includes principals who frequently visit classrooms,
participate in team learning, and guide school improvement. Because teacher-teacher
conversations are beneficial for each teacher’s pedagogy, principals must also foster such
collaboration. DuFour and Marzano (2009) state that “by promoting teacher learning
collaborative teams, a principal is far more likely to improve student achievement than by
focusing on formal teacher evaluation” (p. 63). Teachers interacting with one another also
promotes good teaching practices and problem-solving conversations, and it eliminates the
lonely-teacher effect by combining work effort in the building. As Danielson (1996)
stated, “Educators have learned the value of a common vocabulary to describe teaching”
(p. 5). This vocabulary gives educators common ground to discuss improvement,
difficulties, and goals for their classrooms. Teacher-teacher conversations rich in this
common language for instructional purposes also establish common ground for those new
to the profession to learn and establish baselines for their own classrooms.
Clearly teacher-teacher conversations are instrumental to improving instructional
practices and increasing student performance. This strong statement is relevant to this
study because, although the interactions did not provide time or a formal means for
teachers to discuss their work with each other, we hypothesized that the enriched
principal-teacher interactions would generate substantial teacher-teacher collaborations.
As Goddard and Heron (2001) argued, teacher-teacher collaboration improves teaching
and learning, and “teachers must be central to any meaningful change in schools” (p. 44).
According to Goddard, Goddard, and Tschannen-Moran (2007), “there is a positive link
between student achievement and teacher collaboration” (p. 6), and some research further
indicates that teacher isolation hinders increases in student achievement. For example,
Smylie, Lazarus, and Brownlee-Conyers (1996) found that teacher autonomy often
negatively affected student achievement; conversely student achievement improved when
teachers discussed curriculum, classroom management, and other aspects of the
profession.
Teacher-Student Conversations and Good Teaching
According to Prusak, Vincent, and Pangrazi (2005), “whether giving instructions,
offering compliments, or delivering discipline, how teachers talk can make the difference
between success and failure” (p. 21). Students appreciate teachers being involved in their
educational lives, and within a school culture of strong collaboration, teachers interact
with students to improve learning. In response to quality teacher-student conversation and
collaboration, students learn more, are more motivated, and are more inspired by good
teaching.
As Felner, Kasak, Mulhall, and Flowers (1997) stated, “The assumption of a
certain number of acceptable educational casualties is no longer viable” (p. 524).
Educators cannot discount any student regardless of ability, and good teaching can
significantly impact achievement. For example, studies by Haycock (1998) and Rivkin,
Hanushek, and Kain (2001) found that effective teachers show higher gains with low
achieving students than less effective teachers show with average students. Student
achievement also improved when teachers and principals worked together to improve
instructional practices.
According to Cushman and Delpit (2003), “Just teaching ‘by the book’ bores
anybody, not only teenagers” (p. 105); and in quality instruction, teachers “make learning
a social thing, make sure students understand, respond with interest when students show
interest, care about students and their progress, take pride in student work and provide
role models” (p. 161). When this good teaching occurs, students value their learning.
They are empowered when a teacher asks their opinion and gives them a voice in their
own instruction; and teachers who seek student feedback gain a powerful tool for
improving student learning and their own instructional practices.
Conclusion
Historically, principals have managed buildings without appreciating the value of
collaborating with teachers to improve instructional practices. This study examined the
effect of specific principal interactions intended to improve teacher instructional
practices, increase student achievement, and improve the frequency and focus of teacher
conversations. These principal-teacher interactions were further designed within
parameters to make them feasible to a wide range of principals interested in improving
their leadership.
When principals understand the strengths and weaknesses of all teachers under
their supervision, improvement can be attained. Principals and teachers can collaborate to
improve instructional practices based on real data with actual teachers and students and
even the best teacher can benefit from feedback and reflection. At the very least, with the
treatment investigated in this study, principal opinion will be based on observed
instruction in the classroom rather than guessing or hearsay. This study further suggests
that principals must accept and adopt a collaborative approach to improving instructional
practices by being in classrooms and taking part in the educational process. The day-to-day grind of school management must not displace the priority of ongoing school
improvement. A typical day should include principals visiting classrooms and discussing
teaching and learning with teachers in some fashion. Research clearly suggests that the
more involved and knowledgeable about instructional practices the principals become,
the more improvement will happen in schools. As Cubberley (1923) said, “as goes the
principal so goes the school” (p. 351).
This study’s goal was to explore how a specific set of principal-teacher
interactions affected teacher instructional practices, and to analyze any effects changes in
teacher instructional practices had on student performance and the frequency and focus of
teacher conversations. This goal was explored by examining the following specific
research questions:
1. How will the treatment of principal-teacher interactions affect teachers’
instructional practices?
2. How will changes in teachers’ instructional practices, initiated by the set
of principal-teacher interactions, affect student performance?
3. How will changes in principal-teacher interactions affect the frequency
and focus of teacher conversations with principals, students, and other
teachers?
CHAPTER THREE
METHODOLOGY
Participants
There were three groups of participants involved in this study: teachers, students,
and principals. All three groups were nested in one high school, Dixie Heights High
School, in the Kenton County School District in Northern Kentucky.
The School and District
According to the Kentucky Performance Reports (2008), Dixie Heights High
School was ranked 48th out of 236 high schools in the state. This placed Dixie Heights
High School in the top 20% of high schools in Kentucky. Kentucky’s national education
ranking in 2005 was 34th by one measure (Watts, 2007). This ranking was consistent with
Education Week’s Quality Counts 2007 rank of 34th and the Morgan Quinto Press 2006-2007
rank of 31st (Morgan & Morgan, 2007). Expenditure per pupil in the Kenton County
School District was $7,350/student in 2006-2007, relatively similar to Kentucky as a
whole ($7,668/student) and the nation ($8,345/student) (School Data Direct, 2007).
At the time of this study, Dixie Heights High School had a student population
only slightly different from Kentucky’s average in ethnicity. However, these same
demographics varied considerably from the national average (see Table 1 for
state and national comparisons).
Table 1
Ethnicity Distribution for Dixie Heights High School Students Compared to State and National

Racial/Ethnic Groups                        Dixie Heights1    Kentucky2    United States2
White (%)                                   95.0%             90.2%        55.0%
Black (%)                                   1.4%              7.5%         16.6%
Hispanic (%)                                1.2%              2.0%         21.1%
Asian/Pacific Islander (%)                  0.9%              1.0%         4.6%
American Indian/Alaska Native (%)           0.1%              0.2%         1.2%

1 Dixie Heights High School, 2009
2 School Matters, 2010
In 2009, the Dixie Heights High School student body contained considerably fewer
economically disadvantaged students (25%), compared to Kentucky (48.5%) and the
nation (41.8%). The percentage of students with disabilities was similar to the state and
nation as referenced in Table 2.
Table 2
Economically Disadvantaged and Disabled for Dixie Heights High School Students Compared to State and National

Students with Special Needs                 Dixie Heights1    Kentucky2    United States2
Economically Disadvantaged                  25.0%             48.5%        41.8%
English Language Learners                   0.9%              1.6%         5.0%
Students with Disabilities                  13.5%             16.0%        12.7%

1 Dixie Heights High School, 2009
2 School Matters, 2010
Within the community associated with Dixie Heights High School, adult
education levels were slightly higher than in Kentucky and the nation in 2009. For the
Dixie Heights High School community, 93.7% of the population of adults held a high
school diploma or higher compared to 81.1% for the state and 85.1% for the nation.
Adults in the local community with a bachelor’s degree represented 35.3% of the
population, while the state average was only 20.9%, and the national average was 27.8%.
Only 8.8% of the households in the Dixie Heights High School community were single
parent homes, compared to the state (11.1%) and the nation (11.4%)
(SchoolDataDirect.org, 2007). Dixie Heights High School household income
distributions were slightly higher than the national average, and the community was slightly
younger than the state and national averages (see Tables 3 and 4 for details). Overall, the age of
the population in the Dixie Heights High School community was similar to the state and
national averages.
Table 3
Household Income for Dixie Heights High School Families Compared to State and National

Household Income Distribution               Dixie Heights1    Kentucky2    United States2
Less than $15,000                           10.5%             17.1%        11.7%
$15,000 - $29,999                           10.5%             18.9%        5.3%
$30,000 - $49,999                           17.6%             21.5%        20.3%
$50,000 - $74,999                           23.4%             18.3%        19.3%
$75,000 - $99,999                           16.7%             10.5%        12.4%
$100,000 - $149,999                         16.6%             8.8%         12.5%
$150,000 or more                            9.6%              5.0%         8.5%

1 Dixie Heights High School, 2009
2 School Matters, 2010
Table 4
Age Distribution for Dixie Heights High School Compared to State and National

Population Distribution by Age              Dixie Heights1    Kentucky2    United States2
4 Years or Younger                          7.6%              6.5%         6.8%
5-19 Years                                  20.1%             18.3%        18.9%
20-44 Years                                 34.7%             35.5%        35.8%
45-65 Years                                 28.0%             26.5%        25.7%
65 Years or Older                           9.7%              13.3%        12.9%

1 Dixie Heights High School, 2009
2 School Matters, 2010
Teachers
The population distribution below describes the 55 high school teachers who
taught at Dixie Heights High School during the 2008-2009 school year. These descriptive
characteristics for 2008-2009 were representative for both years of the study (school
years 2007-2008 and 2008-2009). At the time of this study, the diverse staff had varied
backgrounds, experience, and expertise. Table 5 provides details on some of these
characteristics.
Table 5
Dixie Heights High School Faculty Characteristics 2009

Gender                          M: 21 (38.2%); F: 34 (61.8%)
Race                            Caucasian: 53 (96.4%); Black: 1 (1.8%); Hispanic: 1 (1.8%)
Departments                     English: 9 (16.4%); Arts & Hum: 6 (10.9%); Science: 7 (12.7%);
                                Business: 4 (7.3%); Social Studies: 7 (12.7%); Math: 11 (20.0%);
                                Foreign Language: 4 (7.3%); Physical Education: 3 (5.5%);
                                CAD: 1 (1.8%); Technology: 1 (1.8%); Family Science: 2 (3.6%)
National Board Certified        Yes: 7 (12.7%); No: 48 (87.3%)
Post Graduate Education         60+ hours: 13 (23.6%); 30-59 hours: 35 (63.6%); 0-29 hours: 7 (12.7%)
Years of Teaching Experience    Average: 12.9; 0 years: 3 (5.7%); 1-2 years: 5 (9.4%);
                                3-5 years: 10 (18.9%); 6-10 years: 9 (17.0%); 11-20 years: 12 (22.6%);
                                21-30 years: 12 (22.6%); 30+ years: 2 (3.8%)

Dixie Heights High School, 2009
The teacher population for the study included the entire faculty that was available
to the treatment throughout the 2007-2008 and 2008-2009 school years. Teachers who
were absent or inactive for a significant part of the two year study (more than 12 weeks)
or who were not in charge of a classroom on a daily basis were excluded from data
collection. However, since all of the interactions in this study were integrated into the
expected principal-teacher interactions, teachers not officially in the sample were also
exposed to the treatment.
Students
Student participants for this study were the 1322 high school students who
attended Dixie Heights High School during the 2007-2008 school year and the 1340 high
school students who attended Dixie Heights High School during the 2008-2009 school
year. These diverse students had varied backgrounds, experience, and expertise. Table 6
provides details on some of the student characteristics for 2009.
Table 6
Dixie Heights High School Student Characteristics 2009

Gender                      Male: 694 (51.0%); Female: 611 (49.0%)
Race                        White: 1263 (93.4%); Black: 23 (1.7%); Hispanic: 26 (1.9%);
                            Asian: 14 (1.0%); Other: 14 (1.0%)
Socioeconomic Status        Free Lunch: 255 (19.0%); Reduced Lunch: 83 (6.0%); Paid Lunch: 1014 (75.0%)
Other                       English Language Learners: 5 (0.4%); Migrant: 0 (0.0%);
                            Students with Disabilities: 176 (13.5%); Single Parent: 403 (31.0%);
                            11th Grade Math Proficiency: 57.0%; 10th Grade Reading Proficiency: 70.0%;
                            Attendance Rate: 94.6%; Dropout Rate: 2.1%; Retention Rate: 7.4%;
                            Successful Transition to Adult Life: 99.6%

Dixie Heights High School, 2009
Principals
The school’s principals (one head principal and three assistant principals) had
worked at Dixie Heights High School as an administrative team for two years prior to the
implementation of the interactions for this study in the fall of 2007. Three of the four
principals worked as an administrative team at Dixie Heights High School for an
additional three previous years. The head principal and one assistant principal had been
on the same administrative team all nine years of their administrative tenure. During the
initial year of this study, the least experienced principal had served three years at this
school and the most senior principal had served eleven years at this school. These two
principals were also the researchers for this study. See Table 7 for more detail on the four
principals responsible for the principal-teacher interactions used in this study.
Table 7
Dixie Heights High School Principal Characteristics 2009

                               Head Principal*         Assistant Principal 1    Assistant Principal 2    Assistant Principal 3*
Age at Beginning of Study      45                      44                       40                       39
Years of Administrative
Experience                     9                       9                        6                        4
Major Administrative           Seniors, Arts &         Curriculum and           Sophomores and           Freshman Academy,
Duties                         Humanities, Dropouts,   Professional             Juniors, Head            Technology,
                               Personnel, Facilities   Development,             Athletics, Football      Assistant Athletics
                                                       Counseling               Coach
Years Teaching Experience      13                      6                        8                        13
Teaching Background            Social Studies and      Business                 Business                 Science and Math
                               Psychology
Highest Degree of Education    Rank I in Education     Rank I in Education      Rank I in Education      Rank I in Education

Dixie Heights High School, 2009
* Researchers for this study
Research Design
The research design used in this study was quasi-experimental with multiple
quantitative analytic techniques, including Single Group Pretest Posttest Design
(Campbell & Stanley, 1963) and Single Cross-Sectional Interrupted Time Series (Glass,
Willson, & Gottman, 1975). The research design section is organized by research
question and research design. That is, each research question is discussed utilizing one of
the research designs referenced earlier. See Table 8 for details.
Table 8
Research Questions-Design-Measures for the Study

     Research Question                                    Design                       Measures
RQ1  How will the set of principal-teacher                Single Group Pretest         Principal and Teacher QIR
     interactions affect teachers’ instructional          Posttest Design
     practices?
RQ2  How will the changes in teachers’                    Single Cross-Sectional       Longitudinal Classroom Grade
     instructional practices, initiated by the set        Interrupted Time Series      Distributions and Student
     of principal-teacher interactions, affect                                         Discipline Referrals
     student performance?
RQ3  How will the changes in principal-teacher            Single Group Pretest-        Student and Teacher Surveys
     interactions affect the frequency and focus          Midtest-Posttest Design
     of teacher conversations?
This study was conducted with three specific time frames in relation to measures
and principal-teacher interactions:
1. Prior to the pilot year (prior to fall 2007)
2. Pilot year (2007-2008 school year)
3. Year of full implementation (2008-2009 school year)
Two principal-teacher interactions (snapshots and data reviews) were implemented
during the pilot year and the full set of four principal-teacher interactions (one-on-one
summer meetings, snapshots, data reviews, and teacher self-assessment) were
implemented in the year of full implementation. Student grade and discipline data were
available and collected from each of these three time frames. Teacher and student survey
data were only available in the two time frames of the pilot year and the year of full
implementation. Principal and teacher QIR data were only available in the year of full
implementation. See Table 9 for details on available data for each time period.
Table 9
Available Data for Associated Time Periods of the Study

                                     Principal and Teacher QIR    Classroom Grade Distributions    Student and Teacher Surveys
                                     (Measures for RQ1)           and Discipline Referrals         (Measures for RQ3)
                                     Single Group Pretest         (Measures for RQ2)               Single Group Pretest
                                     Posttest Design              Single Cross-Sectional           Posttest Design
                                                                  Interrupted Time Series
Prior to Pilot Year
(2003-2004 through 2006-2007)                                     X
Pilot Year
(2007-2008 School Year)                                           X                                X
Year of Full Implementation
(2008-2009 School Year)              X                            X                                X
Research Question One (Principal & Teacher QIR)
Procedures. Before and after the year of full implementation, principal and
teacher QIRs were completed for all teachers. Specifically, in August 2008 and again in
May 2009, each teacher completed a QIR as an assessment of their own instructional
practices. At the same time each of the four principals independently completed a QIR
for each teacher as an assessment of instructional practices as referenced in the treatment
and measures section of this chapter.
Research design. The principal-teacher interactions and QIR data were used in a
single group pretest posttest research design as modeled in Figure 2.
Figure 2. Exploration of Impacts of Treatment on Teacher Instructional Practices. The design
is O1 X O2: O1 (August 2008) and O2 (May 2009) are measures of teacher instructional
practices via the teacher QIR and principal QIR, and X is the set of principal-teacher
interactions (snapshots, data reviews, summer meetings, and teacher QIR self-assessment).
Research Question Two (Classroom Grade Distributions & Classroom Discipline
Referrals)
Procedures. The researchers collected four years of student performance data
(classroom grade distributions and classroom discipline referrals) prior to the pilot year in
order to obtain a baseline of performance. Establishing a baseline of student performance
data provided support for a connection between changes in student performance and the
principal-teacher interactions.
Classroom grade distributions and discipline referrals were collected and analyzed
for the six school years of 2003-2004 through 2008-2009, prior to the pilot year, during
the pilot year, and during the year of full implementation. Two of the principal-teacher
interactions associated with treatment investigated in this study (snapshots and data
reviews) were implemented during the pilot year (2007-2008) and continued through the
year of full implementation (2008-2009). The second two principal-teacher interactions
associated with the treatment investigated in this study (summer meetings and teacher
self-assessment) were implemented during the summer before and during the year of full
implementation (2008-2009). For the purpose of this single cross-sectional interrupted
time series design, student performance measures, classroom grade distributions and
discipline referrals from school years 2003-2004, 2004-2005, 2005-2006, and 2006-2007
were used as pretreatment data. Student performance measures from school years 2007-2008 and 2008-2009 were used as post-treatment data.
Research design. In order to detect changes in student performance, a single cross-sectional interrupted time series design, as modeled in Figure 3, was used to analyze data
collected from classroom grade distributions and classroom discipline referrals.
O: Grades & Discipline observed for each school year from 2003-2004 through 2008-2009. Principal-Teacher Interactions: Snapshots and Data Reviews during the 2007-2008 and 2008-2009 school years; Summer Meetings and Teacher QIR added during the 2008-2009 school year.
Figure 3. Exploration of Impacts of Treatment on Student Performance
Research Question Three (Teacher & Student Surveys)
Procedures. During each of the three specific time frames for this research, teacher
surveys were sent electronically to all teachers and student surveys were given to random
samples of 200 students in order to utilize a single group pretest posttest research
design as modeled in Figures 4 and 5. The same teacher and student surveys were
administered electronically in May 2007 as the pre-test and then again in May 2008 at the
end of the pilot year (post-test for pilot year, pretest for the year of full implementation).
These same surveys were given to the same teachers and a different random sample of
students again in May 2009 (post-test for the year of full implementation).
Research design. The assumption associated with the use of single group pretest
posttest design was that the pretest scores were representative of the group in years prior
to the treatment being implemented. As discussed in chapter two, evidence supports that
without a change agent teacher behavior reflects little change from year to year
(Marshall, 2003; Toch & Rothman, 2008).
As modeled in Figures 4 and 5, student and teacher surveys from before and after
the pilot year as well as before and after the year of full implementation were used in the
single group pretest posttest design as measures of the frequency and focus of teacher
conversations. Data from student and teacher surveys was used specifically to analyze the
changes in the frequency and focus of principal-teacher conversations, teacher-teacher
conversations, and teacher-student conversations.
Pilot Year: O1: Teacher Conversations (Teacher Survey, Student Survey) → X1: Principal-Teacher Interactions (Snapshots, Data Reviews) → O2: Teacher Conversations (Teacher Survey, Student Survey)
Figure 4. Exploration of Impacts of Treatment on the Frequency and Focus of Teacher Conversations During the Pilot Year

Year of Full Implementation: O2: Teacher Conversations (Teacher Survey, Student Survey) → X2: Principal-Teacher Interactions (Snapshots, Data Reviews, Summer Meetings, Teacher QIR) → O3: Teacher Conversations (Teacher Survey, Student Survey)
Figure 5. Exploration of Impacts of Treatment on the Frequency and Focus of Teacher Conversations During the Year of Full Implementation
The year by year analysis from Figure 4 permitted exploration of changes in the
frequency and focus of teacher conversations that occurred over the course of the pilot
year. The year by year analysis from Figure 5 permitted exploration of changes in the
frequency and focus of teacher conversations that occurred over the course of the year of
full implementation.
Measures and Instruments
There were several instruments used to take a number of measures in this study.
Principal-completed and teacher-completed QIRs were used as measures of instructional
practices. Classroom grades and discipline referrals were used as measures of student
performance. Teacher and student surveys were used as measures for the frequency and
focus of teacher conversations.
QIR (Quality Instruction Rubric)
The quality instruction rubric (QIR) was used as a measure of teachers'
instructional practices and as part of the treatment, structuring teacher self-assessments
and guiding instructional conversations between individual teachers and principals, as
referenced in Appendix C (Quality Instruction Rubric).
Procedures. The QIR was completed individually by each teacher and by the
team of principals in August and May of the year of full implementation (2008-2009). As
a measure of teacher instructional practices (independent variable) and as a part of the
treatment, each teacher completed a self-evaluation of their instructional practices using
the QIR document in the fall of 2008 (pre-test) and then again at the conclusion of the
school year in May 2009 (post-test).
Validity. Development of the QIR. The QIR, based on the work of Charlotte
Danielson (1996), as discussed in chapter two, was developed and adapted by teachers,
principals, central office personnel, and teacher union personnel in Kenton County
School District. The process began with a committee of four principals (two elementary,
one middle, one high), four central office personnel (superintendent, deputy
superintendent, assistant superintendent, special education coordinator), and eight teachers
(all members of the teacher union including the president and assistant president) who
indicated a need for a better evaluation tool to use with teachers. The high school
principal on this committee was one of the two researchers who conducted this study.
The committee began to investigate work on teacher quality and agreed that the work of
Danielson (1996) and Halverson et al. (2004) were most helpful to improve teaching in
the Kenton County School District.
The committee cited general dissatisfaction among many employees with the current
evaluation system, due to its seemingly vague descriptors and ambiguous ranking of
instructional practices. This made Danielson's work particularly interesting to the
committee because of the core beliefs associated with this specific body of research.
First, the committee felt a common language of instructional practices between
administrators and teachers would provide a much more valid evaluation of teacher
instructional practices. Second, the committee agreed that a common language of
instructional practices would provide the opportunity for coaching teachers to
proficiency. Finally, the committee believed that a continuum of quality of instructional
practices would provide teachers specific feedback on how instructional practices can be
improved year to year.
Next, the committee laid out a plan for developing an instructional rubric which
could be used to evaluate teachers. The first drafts were developed by large groups of
teachers and building level administrators and led by members of the original committee
working together in groups of four to five to define good teaching. The rubric was then
reviewed by a group of teachers throughout the district chosen by their principals for this
committee because they were perceived as good teachers.
Next, the rubric was field tested by groups consisting of principals, central office
staff, and teachers in several hundred classrooms throughout the school district in an
attempt to establish how the instrument would perform in practice. Field tests were
conducted at each school by a team of four observers observing three to five classrooms
once each month throughout the 2007-2008 school year. Included in the field tests were
three high schools, four middle schools and eleven elementary schools for an estimated
700 field tests. Each observer team using the rubric during a classroom observation
included two building principals and two central office personnel. Occasionally (less than
10% of the time), a teacher would accompany this team as a fifth observer. Each
observation lasted approximately ten minutes. After a team of observers left a room, they
would debrief for four to five minutes to discuss what was witnessed during the
observation (based on the QIR) and to discuss good coaching tips for the teacher. While
principals only participated in field tests at their own schools, the central office staff of
five people involved in these field tests of the QIR were consistently observing in
multiple buildings. The cross-building perspective of the five central office staff
contributed to norming the use of the rubric across various school contexts in the district.
When teachers participated in the field tests, they did so in a school different from their
own.
From these field tests, many strands of the QIR were found to be redundant and
thus combined or eliminated. With hundreds of field tests completed, the original
committee (four principals, four central office personnel, and eight teachers) began
making adjustments to drafts of the quality instruction rubric (QIR). Other slight changes
were made to improve word consistency. For example, some elements used the word
“consistently” under the proficient indicator while others used the words “most of the
time.” It was agreed by the committee that using a common language across indicators
would be more beneficial for teachers and observers and changes were made accordingly.
QIR training. To increase the validity of data from the teacher self-assessment on
the QIR, training was provided to the faculty of Dixie Heights High School. Before
conducting the QIR as a self-assessment, the faculty at Dixie Heights High School
received whole group preliminary training on the intent and meaning (a self-reflection
tool) of this instrument. While this may have been the first time some teachers actually
examined the QIR, all teachers had received emails and drafts of the tool from the district
office during its development during the previous year. Additionally, three Dixie teachers
had served on committees that developed the document. During the training teachers
were also informed that the principals would complete a separate evaluation of each
teacher using the same instrument. Finally, the data was compiled to aid the principals in
analyzing how the perceptions of the principals and teachers differed as well as to
provide input for professional development and individual principal-teacher interactions.
Because the principals of Dixie Heights High School were involved in the development
of the QIR, multiple snapshot walks, and periodic calibration meetings, no additional
training with the QIR was conducted with them.
The QIR used in this study was more complex than traditional evaluation
instruments used for teacher evaluation. Each indicator included five descriptors and the
evaluator had to identify each indicator of performance as Unsatisfactory, Beginning,
Developing, Proficient or Exemplary. The complexities and newness of the QIR brought
into question the validity of the teachers’ original assessment of their instructional
practices (on the pre-test). While all principals conducted multiple group snapshot visits
and engaged in discussions about the QIR ratings different principals assigned based on
common observations, only a few teachers were afforded this same exposure to the QIR.
However, after using the document and experiencing a number of principal-teacher
interactions during the school year, the validity of teacher data gathered from this
instrument on the posttest increased.
Convergent validity (district calibration of the QIR). Kenton County School
District central office personnel used the QIR in multiple schools within the county.
District personnel accompanied the principals on classroom snapshot visits (treatment
providers) on a monthly basis to aid in the calibration of the QIR. Although measures of
teachers of other schools in the district were not within the scope of this research, the
cross-district calibration of QIR ratings enhanced confidence in the generalizability of
this particular instrument beyond one particular school.
Each month district personnel (usually an assistant superintendent and a district
level curriculum coach) conducted snapshot visits with at least two of the building
principals for at least three different teachers. After each snapshot visit, the group
discussed what each observer noted while on the classroom visit regarding instructional
practices demonstrated by the teacher. The group also discussed coaching tips with the
principals for each teacher in order to improve principal-teacher interactions. A final
version of coaching notes was then sent to each teacher visited by the building principal.
Each observer in the snapshot visits kept individual notes of what they learned while
observing teachers for reference in future committee meetings. Through these multiple
calibration observations, staff members developed an operational definition of the ratings
of the various QIR components.
Reliability. As an interrater reliability check of the principals' completion of the
QIR, in August of 2009 the QIR instrument was completed separately by the four
principals at Dixie Heights High School for fifty-two individual teachers. A review of the
results yielded 92% overall agreement on the individual components across all four
principals. One hundred percent agreement was observed on twenty of the twenty-four
components for all fifty-two teachers, and only one of the twenty-four components showed
more than one principal's rating differing from the group. None of the differently rated
components was rated more than one level higher or lower than the group rating. This level
of reliability on such a complex instrument is difficult to achieve; it was reached through
periodic calibration meetings and calibration observations with district personnel.
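As a rough illustration of how such an agreement figure can be computed, the sketch below treats the ratings as a principals-by-teachers-by-components array and counts the cells on which all four principals gave identical ratings. The array layout and random data are hypothetical, and the study's exact counting rules may have differed.

```python
# Hypothetical sketch: percent agreement among four principals rating 52 teachers
# on 24 QIR components. Ratings are coded 1-5; "agreement" here means all four
# principals assigned the identical rating to a teacher on a component.
import numpy as np

rng = np.random.default_rng(0)
# shape: (principals, teachers, components) -- illustrative random data only
ratings = rng.integers(3, 5, size=(4, 52, 24))

# A (teacher, component) cell counts as agreement if every principal gave the same rating.
all_agree = (ratings == ratings[0]).all(axis=0)   # boolean array, shape (52, 24)
overall_agreement = all_agree.mean()              # proportion of cells with full agreement

# Components on which all four principals agreed for every one of the 52 teachers.
components_fully_agreed = all_agree.all(axis=0).sum()

print(f"Overall agreement: {overall_agreement:.0%}")
print(f"Components with 100% agreement across all teachers: {components_fully_agreed}")
```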
Cronbach's alpha was calculated within each domain (Planning & Preparation,
Learning Environment, Instruction, Assessment) of the QIR. The QIR items required an
ordinal judgment of teacher instructional practices for each item. Because of the ordinal
nature of the instrument, a Likert scale was imposed upon the categories of
Unsatisfactory, Beginning, Developing, Proficient, and Distinguished (as defined in the
QIR rubric itself) by assigning the values of 1, 2, 3, 4, and 5, respectively. It is important
to note that these numbers were assigned to the domains of the QIR for data analysis
related to this research only. Assigning numbers to a level of performance was not part of
the principal-teacher interactions used in this study. Danielson (1996) advised against
assigning numbers to a teacher's performance level, describing such practices as
detrimental to the evaluation process and to the growth of the teacher as a professional.
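A minimal sketch of this domain-level computation, assuming the items within a domain have already been coded 1-5 and arranged as a teachers-by-items array (the data and names below are illustrative, not the study's):

```python
# Hypothetical sketch: Cronbach's alpha for one QIR domain after imposing the
# 1-5 Likert coding. Rows are teachers (respondents), columns are domain items.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: 2-D array, shape (respondents, items), Likert-coded 1-5."""
    k = item_scores.shape[1]                         # number of items in the domain
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: 52 teachers rated on 6 items of the Planning & Preparation domain.
rng = np.random.default_rng(1)
planning_prep = rng.integers(1, 6, size=(52, 6))
print(f"Cronbach's alpha (Planning & Preparation): {cronbach_alpha(planning_prep):.2f}")
```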
Classroom Grade Distributions
As a measure of student performance, classroom grade distributions were
collected every twelve weeks and monitored for trends. As evidenced in the literature
review, improved teacher instructional practices lead to improved student performance
(Connors, 2000; Felner & Angela, 1988; Haycock, 1998; Lezotte, 2001; Price et al.,
1988; Raudenbush et al., 1992). Classroom grade distributions were analyzed in reference
to past distributions of the same classes by the same teacher, to school, department, and
grade-level distributions, and through statistical analysis of distribution shape (normal
versus flat). Classroom grade distributions were collected for six
school years—four years prior to the pilot year, the pilot year and the year of full
implementation.
Dixie Heights High School, the setting for this research, had a limited number of
practices and policies which could have affected the validity and
reliability of classroom grade distributions. The grading scale, which had been stable for
the ten years prior to this study, was established by district policy as referenced in Table 10.
Table 10
Grading Scale for Classroom Grades at Dixie Heights High School

Letter Grade   Numeric Grade   Quality Points (AP/Honors)   Quality Points (Regular Classes)
A+             99-100          5.3                          4.3
A              95-98           5.0                          4.0
A-             93-94           4.7                          3.7
B+             91-92           4.3                          3.3
B              87-90           4.0                          3.0
B-             85-86           3.7                          2.7
C+             83-84           3.3                          2.3
C              78-82           3.0                          2.0
C-             76-77           2.7                          1.7
D+             75              2.3                          1.3
D              71-74           2.0                          1.0
D-             70              1.7                          0.7
F              69 & below      0                            0
(Dixie Heights High School, 2009)
Although not an official policy, final exams traditionally counted for no more than twenty
percent of the overall grade.
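For reference, the scale in Table 10 amounts to a lookup from a percentage grade to a letter grade and quality points. The function below is only an illustration of that published scale; it is not an instrument used in the study.

```python
# Illustrative lookup for the Dixie Heights grading scale in Table 10:
# percentage -> (letter grade, AP/Honors quality points, regular quality points).
GRADE_SCALE = [
    (99, "A+", 5.3, 4.3), (95, "A", 5.0, 4.0), (93, "A-", 4.7, 3.7),
    (91, "B+", 4.3, 3.3), (87, "B", 4.0, 3.0), (85, "B-", 3.7, 2.7),
    (83, "C+", 3.3, 2.3), (78, "C", 3.0, 2.0), (76, "C-", 2.7, 1.7),
    (75, "D+", 2.3, 1.3), (71, "D", 2.0, 1.0), (70, "D-", 1.7, 0.7),
]

def grade_lookup(percentage: float, honors: bool = False):
    """Return (letter, quality points) for a numeric grade on the district scale."""
    for cutoff, letter, ap_qp, reg_qp in GRADE_SCALE:
        if percentage >= cutoff:
            return letter, (ap_qp if honors else reg_qp)
    return "F", 0.0   # 69 and below earns no quality points in either track

print(grade_lookup(88))               # ('B', 3.0)
print(grade_lookup(88, honors=True))  # ('B', 4.0)
```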
Policies which may have affected the validity and reliability of classroom grade
distributions included a policy requirement for grades to be updated every two weeks into
an electronic online grading program accessible to parents (see Appendix D for the Dixie
Heights High School Instructional Practices Grading Policy) and a policy which
established procedures for placing failing students on academic probation (see Appendix
E, Enhancing Achievement Treatment Plan).
In general, Dixie Heights High School grading practices and policies were not
dissimilar from many public high schools. Additionally, none of the grading practices or
policies changed significantly during the years investigated in this study. Thus, data
collected and analyzed in this study were not expected to be affected by a change in
grading practices or policies.
Discipline Reports
Discipline reports were collected as a measure of student performance. As
evidenced in the literature review, improved instructional practices lead to improved
student behavior (Cushman & Delpit, 2003; Felner, Seitsinger, Brand, Burns, & Bolton,
2007; Rowan et al., 1997). Discipline referrals were collected for six school years—four
years prior to the pilot year, the pilot year, and the year of full implementation.
In order to enhance the validity of interpretations drawn from analyses of
discipline referrals, all teachers received training at the beginning of each school year on
the appropriate procedures and behaviors to include on discipline referrals. The principals
also received training at the beginning of each year, and periodically conferenced on the
proper handling and recording of student discipline referrals. The Kenton County School
District defined different types of behaviors as well as acceptable consequences for each
action (see Appendix D for a copy of the Kenton County School District Code of
Acceptable Behavior and Discipline).
Teacher and Student Surveys
Teacher and student surveys were used as measures of the frequency and focus of
teacher conversations. The same teacher and student surveys were administered
electronically and anonymously to teachers and students in May of 2007, 2008, and 2009.
(See Appendices A and B for copies of the student and teacher surveys).
Validity. The teacher and student surveys were developed by an administrative
team (four principals and three counselors) at the participating high school, a team which
included the principals who served as treatment providers. Topics for survey
questions were decided by the administrative team based on their perception of the school's
instructional needs. Question format was modeled after professional
surveys which members of this team had recently used within the educational
setting (e.g., We Teach and We Learn Surveys published by the International Center for
Leadership in Education, 2006; My Voice Survey published by NCS Pearson Inc, 2006).
Initial development of teacher survey. Survey questions four through six were
written to measure teacher perception of the frequency and focus of principal-teacher
conversations. Survey questions one through three of the teacher survey were written to
measure teacher perception of the frequency and focus of teacher-teacher conversations.
Questions seven and eight of the teacher survey were written to measure teachers’
perceptions of the frequency and length of principal classroom visits. Questions nine
through fourteen on the teacher survey were written to obtain information
related to district initiatives not related directly to this research; results from those
questions were not analyzed as part of this study. For specific details regarding how each
question from the teacher survey aligns with the constructs of this study, see Table 11.
Table 11
Teacher Survey Questions Aligned with Constructs of the Study

1. How many times per day do you speak to another teacher?
   Used as a measure of: RQ3-Teacher Conversations: teachers' perceptions of the frequency of teacher-teacher interactions
2. How often do you discuss curriculum issues with other teachers?
   Used as a measure of: RQ3-Teacher Conversations: teachers' perceptions of the focus of teacher-teacher interactions
3. How often do you discuss discipline issues with other teachers?
   Used as a measure of: RQ3-Teacher Conversations: teachers' perceptions of the focus of teacher-teacher interactions
4. How often do you discuss curriculum issues with a principal?
   Used as a measure of: RQ3-Teacher Conversations: teachers' perceptions of the focus of principal-teacher interactions
5. How often do you discuss discipline issues with a principal?
   Used as a measure of: RQ3-Teacher Conversations: teachers' perceptions of the focus of principal-teacher interactions
6. How often do you discuss teaching strategies with a principal?
   Used as a measure of: RQ3-Teacher Conversations: teachers' perceptions of the focus of principal-teacher interactions
7. How often did an administrator visit your classroom last year?
   Used as a measure of: RQ3-Teacher Classroom Visits: teachers' perceptions of the frequency of principal-teacher interactions
8. What was the average length of the principal's visit to your classroom (not counting official observations)?
   Used as a measure of: RQ3-Teacher Classroom Visits: teachers' perceptions of the length of principal-teacher interactions
9. How many times were you officially observed last year?
   Not used in this study
10. If you were officially observed, what was the length of the observation?
   Not used in this study
11. Other than CATS scores, have you reviewed any data related to your classroom (i.e., discipline, failures . . .)?
   Not used in this study
12. Have you reviewed any data related to the community in which you teach (i.e., socioeconomic level, parental involvement, education level)?
   Not used in this study
13. Have you implemented in your classroom any of the initiatives discussed by your principals?
   Not used in this study
14. Globalization is international integration. It can be described as a process by which the people of the world are unified into a single society. This process is a combination of economic, technological, sociocultural and political forces. Have you adjusted your instruction/curriculum to meet the changing needs of globalization?
   Not used in this study
Initial development of student survey. Questions one, two, three, six, seven, and
eleven of the student survey were written to measure student perception of the frequency
and focus of teacher-student conversations. It was expected that by increasing the number
of quality principal-teacher interactions teachers would engage in more individual
instructional conversations with students. Other questions on the student survey were not
directly connected to this study and thus were not analyzed. For specific detail regarding
how each question on the student survey aligns with the constructs of this study, see
Table 12.
Table 12
Student Survey Questions Aligned with Constructs of the Study

1. How many times per day do you speak to a teacher?
   Used as a measure of: RQ3-Teacher Conversations: students' perceptions of the frequency and focus of teacher-student conversations
2. How often do you discuss personal issues with teachers?
   Used as a measure of: RQ3-Teacher Conversations: students' perceptions of the frequency and focus of teacher-student conversations
3. How often do you discuss discipline issues with your teachers?
   Used as a measure of: RQ3-Teacher Conversations: students' perceptions of the frequency and focus of teacher-student conversations
4. How often do you speak with a principal?
   Not used in this study
5. How often do you discuss discipline issues with a principal?
   Not used in this study
6. How often do you discuss learning strategies with a teacher (how to study, test taking strategies, learning styles)?
   Used as a measure of: RQ3-Teacher Conversations: students' perceptions of the frequency and focus of teacher-student conversations
7. How often does a teacher in this building motivate and inspire you?
   Used as a measure of: RQ3-Teacher Conversations: students' perceptions of the frequency and focus of teacher-student conversations
8. How often did a principal visit your classroom last year?
   Not used in this study
9. When a principal visits your classroom, does teacher instruction change?
   Not used in this study
10. When a principal visits your classroom, does student behavior change?
   Not used in this study
11. Do your teachers discuss class performance with you/the class (i.e., class average, test averages, etc.)?
   Used as a measure of: RQ3-Teacher Conversations: students' perceptions of the frequency and focus of teacher-student conversations
12. How often do your teachers relate instruction to your "School of Study"?
   Not used in this study
13. When a teacher uses technology in class, does it benefit your learning?
   Not used in this study
14. Do you feel that when you leave school you will be prepared for your next step in life, whether it be college, trade school, armed services, or work?
   Not used in this study
15. Which of the following statements describes most of your classes?
   Not used in this study
Expert review of the surveys. To further enhance the construct validity of teacher
and student surveys, the survey questions were reviewed and wording was adjusted by
district level personnel with experience in writing surveys and working in schools. The
three central office members who reviewed these surveys were:
1. A deputy superintendent with a doctorate in educational leadership, more
than twenty years of experience in public education, and a background in
school law.
2. An assistant superintendent with more than twenty years of experience in public
education and a background in English and writing.
3. A content curriculum specialist with more than twenty years of experience in
public education and a background in counseling and social studies.
This group was given the surveys and a description of the original intent of specific
survey questions as decided by the school administrative team. They were asked to give
feedback in any form to improve the surveys.
After expert review of the surveys, small adjustments were made to the wording
of questions to make the language audience-friendly and consistent. See Appendices A
and B for copies of the final teacher and student surveys.
Reliability. As described earlier in this section, the teacher and student surveys
were developed by an administrative team at the participating high school and were not
tested for reliability prior to administration. Utilizing Cronbach's alpha, each set of
questions related to the frequency and focus of principal-teacher conversations,
teacher-teacher conversations, and teacher-student conversations was analyzed for
internal reliability. Results of these analyses did not support combining sets of questions
into a single measure. As a result, each question from the teacher and student surveys was
analyzed separately.
Fidelity of Implementation of Snapshots
The teacher, date, observer, and number of snapshots per classroom were
recorded by the principals on an Excel spreadsheet as they took place (see Appendix E for an
example of the snapshot tracker). This measure was a count of unambiguous data with
instantaneous calculations of the number of visits per teacher, the number of visits per
principal, the average number of visits and standard deviation for each teacher, and the
average number of visits and standard deviation for each day. These data were self-reported by the treatment
providers. Visitations were also discussed at periodic calibration meetings, and questions
related to this principal-teacher interaction were present on both the teacher and student
surveys.
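A sketch of the kind of instantaneous summaries the tracker provided, assuming the snapshots are stored as simple (teacher, date, observer) records; the record layout and data are hypothetical.

```python
# Hypothetical sketch of the snapshot-tracker summaries: visits per teacher,
# visits per principal, and mean/standard deviation of visits per teacher.
from collections import Counter
from statistics import mean, stdev

# Each record: (teacher, date, observing principal) -- illustrative data only.
snapshots = [
    ("Teacher A", "2008-09-03", "Principal 1"),
    ("Teacher A", "2008-09-17", "Principal 2"),
    ("Teacher B", "2008-09-05", "Principal 1"),
    ("Teacher C", "2008-09-10", "Principal 3"),
    ("Teacher C", "2008-09-24", "Principal 3"),
    ("Teacher C", "2008-10-01", "Principal 4"),
]

visits_per_teacher = Counter(teacher for teacher, _, _ in snapshots)
visits_per_principal = Counter(principal for _, _, principal in snapshots)

counts = list(visits_per_teacher.values())
print("Visits per teacher:", dict(visits_per_teacher))
print("Visits per principal:", dict(visits_per_principal))
print(f"Mean visits per teacher: {mean(counts):.2f}, SD: {stdev(counts):.2f}")
```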
Treatment Specifics
(Principal-Teacher Interactions)
One-on-One Summer Meetings
During the summer of 2008 (between the pilot year and the year of full
implementation) each teacher individually attended a one hour summer meeting with at
least one principal. Each of these summer meetings was preceded and followed by a
principal team discussion to prepare for the meeting and then debrief. Each
meeting followed the same basic structure, but details of the discussions varied depending
upon the needs and reactions of the teachers. Each meeting was planned to
last about one hour, based on the number of teachers and the available time of the principals
during the date range.
Each summer meeting addressed past performance and goals the teacher had for
the coming year; a review/discussion of the teacher’s instructional practices as related to
grade distributions/failure rate, discipline records, state test scores, previous formal
evaluations, the teacher’s former and future individual professional growth plan, the
teacher’s use of technology, and the future focus for the principals regarding each
teacher’s needs over the next year. These were detailed in the QIR (see Appendix C for a
copy of the Quality Instruction Rubric). The topics of discussion for the summer
meetings were decided by the principals following a review of data collected during the
previous year, grade distributions and discipline referrals, and synthesis across prior
snapshots from the teacher’s classroom.
Snapshots
Snapshots were formatted to draw from observer behaviors as associated with, but
surpassing, the walkthrough processes such as Management by Walking Around (Peters
& Waterman, 1982) and the Three-Minute Walkthrough (Downey et al., 2004).
Specifically, principals visited several classrooms weekly to assess each teacher’s
progress as it pertained to items on the QIR. Each snapshot visit lasted approximately
five to fifteen minutes and the principal became part of the class when possible by taking
part in the educational process to aid and model proficient instructional practices. For
example, if a class was involved in a discussion the principal contributed or asked
questions as appropriate to improve the educational process. If students were working
independently, the principal walked around the room and interacted with the students to
observe and improve the quality of the independent practice. If students were taking a
test, the principal walked around and aided in supervision while observing the type and
quality of the assessment. If students were involved in group work, the principal
circulated while observing and interacting in a way which advanced the students’
learning. Regardless of the educational practice being used in the room where a snapshot
was taking place, the principal attempted to become part of the teaching process while
modeling proficient instructional practices. For detailed descriptions of proficient
instruction practices, see the QIR in Appendix C.
After a snapshot or series of snapshots, not to exceed three visits in one week, the
principal provided feedback to the teacher based on the QIR. Feedback took the form of
email, handwritten or typed notes, or verbal exchange, based upon the teacher's preference
as determined in advance between the principal and the teacher. For example, some
teachers requested oral communication so principals communicated with those teachers
orally. Other teachers requested communications by email, so those teachers received
email communications.
Once every two weeks the principals met to calibrate snapshots taken during the
previous weeks as well as to make adjustments to patterns and behaviors for future
snapshots. Special attention was given to snapshots and interactions two or more
principals had with the same teachers. This was used as an indicator of how well each
principal was calibrated. Notes, quality of instruction, frequency, and number of QIR
indicators were discussed. At least once every two months, district-level personnel
accompanied principals on at least four snapshot visits in order to fine-tune the
authenticity of what principals were noting during snapshots.
The principals did not keep notes or other qualitative data during snapshots for the
following reasons:
• Maintaining the collaborative nature of principal-teacher interactions
• Assuring teachers of the intentional separation of the set of principal-teacher interactions and official evaluation
• Avoiding policy issues associated with official records and official evaluation
• Avoiding issues with the teacher union
Qualitative data is an essential component of the snapshot interaction as well as an
important component of the treatment as a whole. As described earlier in the section Treatment
Specifics, the follow up to snapshots included feedback from principals to teachers which
was intended to promote collaboration and improve instructional practices. Recording
qualitative data of snapshot observations may have inhibited the productivity of these
principal-teacher interactions, so the principals did not take notes. Additionally,
according to policy, officially recorded observations of teacher behavior become part of
a teacher's official records. Such official observations could have invoked a number of
rules and policies that would have had a detrimental effect on the collaborative intention of
the snapshots. Thus, the decision was made intentionally not to retain any written
documentation of the snapshot observations or feedback in order to avoid undermining
their core purpose. Officially recording observations would have been cumbersome and
may have been perceived as negative by the teachers.
Data Reviews
Each teacher received several sets of data throughout the year related to their
students’ performance. These data included details of how many students were assigned
an A, B, C, D or F for a final grade in all the classes which were just completed (see
Appendix F for examples). The numbers were compiled into one graph. At the end of
each trimester each teacher received numeric and graphic representations of their
classroom grade distributions as well as data indicating how these results related to the
other teachers’ data within the school and the school as a whole. At the end of each
trimester each teacher received numeric and graphic representations of discipline
infractions they reported to the principals, as well as how these results compared
anonymously to the other teachers within the school and the school as a whole.
Teachers were instructed by email to examine their data and to reflect upon their
growth plans in light of those data. Teachers who exhibited unusually high failure
rates, abnormal grade distributions, or an unusually high number of discipline referrals
were called to a one-on-one meeting with a principal to brainstorm strategies for
improving teacher instructional practices.
In September, each teacher received numeric and graphic representations of their
students’ performance on the previous year’s state assessment as well as results for each
department and the school as a whole.
Teacher Self-Assessments (QIR)
At the beginning (August 2008) and then again at the end of the year (May 2009),
teachers completed a self-evaluation using the Quality Instruction Rubric to evaluate their
perception of their own instructional practices in four domains of the QIR. The four
principals independently completed assessments of each faculty member’s instructional
practices using the same assessment tool at the same times, as described earlier. Each set
of results (teacher and principal, beginning and end of year) was analyzed and compared using t-tests; see the data analysis section for more detail.
Trigger Points
For this study trigger points were defined as indicators of concern from summer
meetings, data reviews, snapshots, or formal observations that prompted an increase in
the number and frequency of principal-teacher interactions with specific teachers. Often
these trigger points were discussed during the calibration meetings principals held once
every two weeks and included items such as: resistance to change exhibited by a teacher
during summer meetings or planning period meetings, student concerns about a teacher's
instructional practices expressed to a principal, continued poor performance
on the same QIR indicators after multiple snapshots and feedback from principals,
a teacher request, or a poor official classroom evaluation.
Teachers with poor performance as defined by the QIR or data anomalies in the
grade distributions or discipline referrals were called to the principal’s office for more
intense principal-teacher interactions. The focus of these meetings was to troubleshoot and
design strategies to improve teacher instructional practices and increase student
performance.
Data Analysis
Research Question One: How will the treatment of principal-teacher interactions
affect teachers' instructional practices?
The principal-completed and teacher-completed QIRs were used as measures of
teacher instructional practices in combination with synthesis of snapshot observations. In
order to screen the data shape of results from the principal-completed and teacher-completed QIR, scores from the pretest and posttest for principals and teachers were
analyzed for normality by running skewness and kurtosis tests while examining the mean
and standard deviation of each. According to University of Surrey Psychology
Department (2007), a distribution with skew and kurtosis values in the range of +2 to -2 is
near enough to be considered normally distributed for most purposes. Distributions were
assessed for normality using these two statistics in order to establish if the data met
underlying assumptions of ANOVA analyses.
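A minimal sketch of this screening step, assuming the QIR scores for one scale are available as a numeric array (the data below are random placeholders):

```python
# Hypothetical sketch: screening a QIR score distribution for approximate normality
# by inspecting skewness and kurtosis (|value| < 2 treated as acceptably normal).
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(2)
# Placeholder pretest overall QIR scores for 52 teachers (1-5 scale).
pretest_overall = rng.normal(loc=3.5, scale=0.5, size=52).clip(1, 5)

skewness = skew(pretest_overall)
kurt = kurtosis(pretest_overall)   # excess kurtosis (a normal distribution yields 0)

print(f"mean={pretest_overall.mean():.2f}, sd={pretest_overall.std(ddof=1):.2f}")
print(f"skewness={skewness:.2f}, kurtosis={kurt:.2f}")
print("Approximately normal" if abs(skewness) < 2 and abs(kurt) < 2 else "Check distribution")
```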
Related sets of data were compared using t-tests in each of the four domains and
overall, before and after the year of full implementation, to utilize a single group pretest
posttest research design as modeled in Figure 2. Teacher-completed pretest data were
compared to teacher-completed posttest data to discover teachers’ perceptions of changes
in the quality of instructional practices. Principal-completed pretest data were compared
to principal-completed posttest data to discover principals' perceptions of changes in the
quality of teacher instructional practices. Teacher-completed data were compared to
principal-completed data for both the pretest and posttest, to discover differences in
teacher and principal perceptions of instructional practices. Next, the teachers were
parsed into three groups (high performing, medium performing, and low performing)
based on principal-completed posttest results. The same analysis described above was
then performed on each group of high, medium, and low performing teachers.
Quantifying the QIR. In order to quantify the data from the QIR, a five point
Likert scale was imposed on the five categories of Unsatisfactory, Beginning,
Developing, Proficient, and Exemplary. The responses within each domain of the QIR,
Planning & Preparation, Learning Environment, Instruction, and Assessment, were
averaged to achieve a mean score for each teacher in each of the domains. The ratings
from each response across all four domains were averaged in order to obtain an overall
mean for each teacher.
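A sketch of this coding and averaging step, assuming each teacher's QIR responses are stored as category labels grouped by domain; the structure and names are illustrative.

```python
# Hypothetical sketch: impose a 1-5 Likert coding on QIR categories and average
# responses within each domain, plus an overall mean across all responses.
LIKERT = {"Unsatisfactory": 1, "Beginning": 2, "Developing": 3,
          "Proficient": 4, "Exemplary": 5}

# Illustrative QIR responses for one teacher, grouped by domain.
teacher_qir = {
    "Planning & Preparation": ["Proficient", "Developing", "Proficient"],
    "Learning Environment":   ["Proficient", "Proficient", "Exemplary"],
    "Instruction":            ["Developing", "Proficient", "Developing"],
    "Assessment":             ["Developing", "Developing", "Proficient"],
}

domain_means = {domain: sum(LIKERT[r] for r in responses) / len(responses)
                for domain, responses in teacher_qir.items()}
all_scores = [LIKERT[r] for responses in teacher_qir.values() for r in responses]
overall_mean = sum(all_scores) / len(all_scores)

print(domain_means)
print(f"Overall: {overall_mean:.2f}")
```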
Comparing pretest and posttest QIRs. The process described above yielded
four sets (teacher rated pretest, teacher rated posttest, principal rated pretest, and principal
rated posttest) of five datum points (Planning & Preparation, Learning Environment,
Instruction, Assessment, and Overall) for each teacher in the study. The teacher-completed QIR data from pretest to posttest were analyzed in each domain and overall using
t-tests, in order to discover any changes which occurred during the year of full
implementation. However, as noted earlier, many teachers on the pre-QIR may have had
an unrealistic sense of their teaching as characterized by the QIR because they had not yet
had the opportunity to either use it themselves or to receive feedback based on it. If that
were the case, the pre-post comparison of teacher-completed QIR data may not have yielded
valid and interpretable results. The principal-completed QIR data from pretest to posttest
were analyzed in each domain and overall using t-tests to discover any changes which may
have occurred during the year of full implementation. Because all the principals had been
deeply involved in developing and calibrating the QIR across many teachers, their pretest
ratings were likely to be much more aligned, by comparison to the teachers, with the
intent of the QIR.
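A sketch of the pre-post comparison, assuming paired arrays of pretest and posttest domain means for the same teachers; the data are placeholders, and the Cohen's d variant shown (mean difference over the pooled standard deviation) is one common choice rather than necessarily the one used in the study.

```python
# Hypothetical sketch: paired t-test on pre/post QIR domain means for the same
# teachers, with Cohen's d as an effect-size estimate for the change.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
pre = rng.normal(3.5, 0.5, size=52).clip(1, 5)             # placeholder pretest domain means
post = (pre + rng.normal(0.15, 0.3, size=52)).clip(1, 5)   # placeholder posttest domain means

t_stat, p_value = ttest_rel(pre, post)

# One common Cohen's d for pre/post designs: mean difference over the pooled SD.
pooled_sd = np.sqrt((pre.std(ddof=1) ** 2 + post.std(ddof=1) ** 2) / 2)
cohens_d = (post.mean() - pre.mean()) / pooled_sd

print(f"t={t_stat:.2f}, p={p_value:.3f}, d={cohens_d:.2f}")
```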
Comparing principal and teacher QIRs. Although not directly related to
research question two, compelling data was discovered when comparing teachers’ ratings
of their own instructional practices to the principals’ ratings of teachers’ instructional
practices on the pretest and posttests. Teacher and principal results from the QIR on the
pretest and posttests were compared using t-tests in order to discover any significant
differences which may have existed between teacher and principal perceptions of teacher
instructional practices.
Research Question Two: How will changes in teachers’ instructional practices,
initiated by the set of principal-teacher interactions, affect student performance?
For this study student performance was defined as classroom grade distributions
and discipline referrals. In order to use a single cross-sectional interrupted time series
design as modeled in Figure 3, classroom grade distributions and discipline reports were
collected for the six school years of 2003-2004 through 2008-2009, prior to the pilot year,
during the pilot year, and during the year of full implementation. Linear regression
modeling was used to analyze any changes which may have taken place after the initial
set of principal-teacher interactions in this study was introduced. For each set of data,
linear regression was used to predict expected values for the pilot year and year of full
implementation based on the pretreatment data. Expected values for each datum point
were then compared to the observed values.
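A sketch of this prediction step for a single data series (for example, the percentage of As by year), assuming the four pretreatment years are used to fit the trend; all values are placeholders.

```python
# Hypothetical sketch: fit a linear trend to the four pretreatment years and
# compare predicted values for the treatment years to the observed values.
import numpy as np

years = np.array([2004, 2005, 2006, 2007, 2008, 2009])  # year each school year ended
pct_a = np.array([24.0, 24.5, 23.8, 24.2, 26.1, 27.0])  # placeholder % of A grades

pre_years, pre_vals = years[:4], pct_a[:4]
slope, intercept = np.polyfit(pre_years, pre_vals, deg=1)   # ordinary least squares line

for year, observed in zip(years[4:], pct_a[4:]):
    expected = slope * year + intercept
    print(f"{year}: expected {expected:.1f}%, observed {observed:.1f}%, "
          f"difference {observed - expected:+.1f}")
```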
Classroom Grade Distributions. Final classroom grade distributions for each
school year were collected for all grades assigned to students within the school, for
grades 9-12 and categorized by the traditional high school grading scale: A, B, C, D, and
F. For each year, the percentage of grades assigned at each letter grade was used as the
datum point; the same calculation each year established the five datum points:
the percentages of As, Bs, Cs, Ds, and Fs.
Classroom discipline referrals. Classroom discipline referrals for each school
year were collected and analyzed for changes using linear regression. Classroom
discipline referrals were categorized by the school district as aggressive to school
employee, defiant, failure to comply with discipline, fights, harassment, profanity, tardies
and skipping, tobacco, disorderly conduct, or repeated violations (see Appendix D for
detailed descriptions of these offenses). For this study, the referrals in these categories
were summed to produce a number identified as the total discipline infractions. A
second composite (aggressive discipline) was also created by summing the categories
of aggressive to school employee, defiant, failure to comply with discipline, fights,
harassment, profanity, disorderly conduct, and repeated violations, and was analyzed as
well. Discipline referral data for the nested groups of males, females, freshmen,
sophomores, juniors, and seniors were each analyzed in the same fashion.
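A sketch of how the referral categories could be rolled up into the two composites described above; the category counts are placeholders.

```python
# Hypothetical sketch: roll district discipline categories up into the two
# composites used in the analysis (total infractions and aggressive discipline).
referrals = {   # placeholder yearly counts by district category
    "aggressive to school employee": 4, "defiant": 55,
    "failure to comply with discipline": 120, "fights": 18, "harassment": 9,
    "profanity": 30, "tardies and skipping": 410, "tobacco": 12,
    "disorderly conduct": 40, "repeated violations": 25,
}

AGGRESSIVE = {"aggressive to school employee", "defiant",
              "failure to comply with discipline", "fights", "harassment",
              "profanity", "disorderly conduct", "repeated violations"}

total_infractions = sum(referrals.values())
aggressive_discipline = sum(count for cat, count in referrals.items() if cat in AGGRESSIVE)

print(f"Total discipline infractions: {total_infractions}")
print(f"Aggressive discipline: {aggressive_discipline}")
```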
Classroom grade distributions and discipline referrals of high, medium, and
low performing teachers. Classroom grade distributions (percent As, Bs, Cs, Ds, and Fs)
and total discipline referrals for the three groups of high, medium, and low performing
teachers were averaged for each of the school years 2006-2007, 2007-2008, and
2008-2009. Grades and total discipline were then compared across the three groups
utilizing t-tests to investigate any significant differences between the groups during those
three school years.
Research Question Three: How will changes in teachers’ instructional practices,
initiated by the set of principal-teacher interactions, affect the frequency and focus
of teacher conversations?
Specifically, how did the changes in teachers’ instructional practices change the
frequency and focus of principal-teacher conversations, teacher-teacher conversations,
and teacher-student conversations? As noted in Tables 11 and 12, questions four through
eight of the teacher survey were used as measures of the frequency and focus of
principal-teacher conversations, questions one through three of the teacher survey were
used as measures of the frequency and focus of teacher-teacher interactions, and
questions one, two, three, six, seven, and eleven of the student survey were used as
measures of frequency and focus of teacher-student conversations. Pretest and posttest
distributions of responses on the teacher and student surveys were analyzed using chi-square
tests to detect changes in response patterns which may have occurred. In particular, chi-square
tests compared the survey results from before and after the pilot year and before and
after the year of full implementation to discover if there were any significant differences
in the distributions.
Combining response categories of survey questions. All survey questions used
as measures of frequency and focus of teacher conversations consisted of a Likert scale of
five possible responses each. The purpose of using a pretest posttest research design was
to discover any significant changes in the frequency and focus of teacher conversations
which occurred from the administration of the pretest to the administration of the posttest.
In retrospect, after the survey data had been collected, the distinctions between adjacent
points on the Likert scale seemed too small to be meaningful. In those
cases, statistically significant changes across fewer, grouped response options would have
been a better indication of real change. Therefore, each survey question and its corresponding
responses were examined by two independent teams to determine if any response
categories could or should be combined.
One of the examining teams was made up of the four principals, two of whom
were the primary researchers of this study and were responsible for providing the set of
principal-teacher interactions. A second team was made up of the three
people who conducted the original expert review of the surveys (see Expert Review of the
Surveys under the Measures and Instruments section of this chapter for a description of
this group).
Each group was given copies of the survey questions of interest, questions one
through eight of the teacher survey, and questions one, two, three, six, seven, and eleven
of the student survey, and the following directions:
Please examine the following question responses and see if it makes sense
to group any of the responses. That is, there are five responses for each
survey question. Are there response choices from teachers which are
basically no different? For example, if one of the options for a response
was ‘never’ and another option was ‘almost never’, would these two
responses likely indicate the same frequency/focus of teacher
conversations? You may combine none or as many responses as make
sense. Thank you for your time.
Each group worked independently as a team to come to a consensus regarding which, if
any, combinations of responses were logical.
At the conclusion of the two teams' work, the resultant combinations were
remarkably similar. Each set of five responses for every question was combined into three
groups, except the responses to question seven of the teacher survey, which were combined
into four categories. See Tables 13 and 14 for the resultant groupings of the question
responses in the teacher and student surveys. The two teams disagreed on the groupings
for only one question:
question seven of the student survey. The first team (the set of four principals responsible
for providing the set of principal-teacher interactions for this study) grouped the original
responses of “Daily”, “Weekly”, “Monthly”, “Yearly”, “Never” into the three groups
“Daily”, “Weekly or Monthly”, “Yearly or Never.” The second team, made up of the
original three people who aided in the original review of the surveys, grouped the original
responses of “Daily”, “Weekly”, “Monthly”, “Yearly”, “Never” into the three groups
“Daily or Weekly”, “Monthly”, “Yearly or Never.” The two groups discussed together
"Daily or Weekly", "Monthly", "Yearly or Never." The two groups discussed
the reasons for the groupings they chose and came to a consensus that the grouping
"Daily", "Weekly or Monthly", "Yearly or Never" made more sense for this question. In
the end it was agreed that a teacher who is perceived as motivating and inspiring a
student daily (question seven) would periodically engage in different teacher conversations
than a teacher who is perceived as doing these things only weekly or monthly. Although
leaving students with the perception that they are occurring daily is exceptional.
Data analysis plan for survey results. Questions from the teacher and student
surveys yielded a measure of the frequency and focus of teacher conversations. Response
frequencies for each option in the combined response groupings in Tables 13 and 14 for
each question were analyzed for the pilot year and the year of full implementation using
chi-square tests.
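A sketch of this test for one survey question, assuming the regrouped response counts are arranged as a pretest row and a posttest row; the counts are placeholders.

```python
# Hypothetical sketch: chi-square test comparing pretest and posttest response
# distributions for one survey question after collapsing to three response groups.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: pretest, posttest. Columns: the three combined response groups.
observed = np.array([
    [12, 30, 18],   # placeholder pretest counts
    [25, 28,  7],   # placeholder posttest counts
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square={chi2:.2f}, df={dof}, p={p_value:.3f}")
```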
Table 13
Regrouping Response Categories on Teacher Survey

Teacher-Teacher Conversations (each set of five original response options was combined into three groups for analysis):
1. How many times per day do you speak to another teacher?
   Original response options: 8 or more times; 5-7; 2-4; One; None
2. How often do you discuss curriculum issues with other teachers?
   Original response options: Daily; Weekly; Monthly; Annually; Never
3. How often do you discuss discipline issues with other teachers?
   Original response options: Daily; Weekly; Monthly; Annually; Never

Principal-Teacher Conversations (each set of five original response options was combined into three groups for analysis):
4. How often do you discuss curriculum issues with a principal?
   Original response options: Daily; Weekly; Monthly; Annually; Never
5. How often do you discuss discipline issues with a principal?
   Original response options: Daily; Weekly; Monthly; Annually; Never
6. How often do you discuss teaching strategies with a principal?
   Original response options: Daily; Weekly; Monthly; Annually; Never

Frequency and Length of Classroom Visits:
7. How often did an administrator visit your classroom last year? (combined into four groups for analysis)
   Original response options: 8 or more; 5 to 7; 2 to 4; Once; None
8. What was the average length of the principal's visit to your classroom (not counting official observations)? (combined into three groups for analysis)
   Original response options: 60 minutes or more; 30 to 60 minutes; 10 to 30 minutes; Less than 10 minutes; I was not visited
Table 14
Regrouping Response Categories on Student Survey

Teacher-Student Conversations (each set of five original response options was combined into three groups for analysis):
1. How many times per day do you speak to a teacher?
   Original response options: 8 or more times; 5-7; 2-4; One; None
2. How often do you discuss personal issues with teachers?
   Original response options: Daily; Weekly; Monthly; Annually; Never
3. How often do you discuss discipline issues with your teachers?
   Original response options: Daily; Weekly; Monthly; Annually; Never
6. How often do you discuss learning strategies with a teacher?
   Original response options: Daily; Weekly; Monthly; Annually; Never
7. How often does a teacher in this building motivate and inspire you?
   Original response options: Daily; Weekly; Monthly; Yearly; Never
11. Do your teachers discuss class performance with you/the class?
   Original response options: Daily; Weekly; Monthly; Annually; Never
CHAPTER FOUR
RESULTS
Results in this chapter are organized around the research questions of this study.
1. How will the treatment of principal-teacher interactions affect teachers’
instructional practices?
2. How will changes in teachers’ instructional practices, initiated by the set of
principal-teacher interactions, affect student performance?
3. How will changes in principal-teacher interactions affect the frequency and
focus of teacher conversations with principals, students, and other teachers?
The research design used in this study was quasi-experimental with multiple
quantitative analytic techniques. There were three specific time frames associated with
this research in relation to the measures and principal-teacher interactions:
1. Prior to the pilot year (prior to fall 2007)
2. Pilot year (2007-2008 school year)
3. Year of full implementation (2008-2009 school year)
Two principal-teacher interactions (snapshots and data reviews) were implemented during
the pilot year, and the full set of four principal-teacher interactions (one-on-one summer
meetings, snapshots, data reviews, and teacher self-assessment) was implemented in the
year of full implementation. Classroom grade distributions and student discipline referral
data were collected from each of these three time frames. Teacher and student survey
data were collected from two of these time frames, the pilot year and the year of full
implementation. Principal-completed and teacher-completed QIR data were collected
only in the year of full implementation.
Research Question One: How will the treatment of principal-teacher interactions
affect teachers’ instructional practices?
QIR data were used in a single group pretest posttest research design in order to
explore any effect the introduction of a set of principal-teacher interactions had on the
quality of teacher instructional practices as defined by the Quality Instruction Rubric
(QIR) during the year of full implementation, the 2008-2009 school year. Data from the
QIRs completed by the principals and the QIRs completed by teachers from both the
pretest and posttest were analyzed using paired sample t-tests.
One assumption of t-tests is that the data follow a normal distribution. The
normality of the QIR score distributions was assessed by computing kurtosis and
skewness for each of the four subscales and the overall QIR score for both the pretest and
posttest data. The ten kurtosis values ranged from -0.54 to 1.69. The ten skewness values
ranged from -0.01 to 1.03. A common interpretation is that distributions with absolute
values of kurtosis and skewness less than 2 are close enough to normal for most statistical
assumption purposes (Minium, King, & Bear, 1993). These results support that the
normality assumption was upheld for these data.
Changes in the Quality of Teacher Instructional Practices During the Year of Full
Implementation
The results of a comparison of data from pretest and posttest QIR are presented in
Table 15. According to analysis results of QIR ratings completed by teachers, the quality
of teacher instructional practices improved in the two domains of Planning &
Preparation and Learning Environment at a significance level of p<0.01, indicating a
small effect size in each of these two domains. Analyses results of QIR ratings completed
by the principals did not detect a change in the quality of teacher instructional practices in
the same two domains of Planning & Preparation and Learning Environment.
Analyses results of QIR ratings completed by teachers did not indicate a change
in the quality of teacher instructional practices in the two domains of Instruction and
Assessment. According to analyses results of QIR ratings completed by the principals, the
quality of teacher instructional practices improved in the same two domains of
Instruction and Assessment, at a significance level of p<0.001, indicating a small effect
size in each domain.
Overall, according to the analyses results of QIR ratings completed by the
principals, the quality of teacher instructional practices did improve, at a significance
level of p<0.05, producing a small effect size. Analyses results of QIR ratings completed
by teachers also indicated the quality of teacher instructional practices improved overall
at a significance level of p<0.05, producing a small effect size. However, as noted in
Table 15, the specific domains in which significant changes occurred according to
teachers’ ratings were exactly the opposite of those indicated by principals’ ratings.
Table 15
Comparison of QIR Pre-Post Mean Scores (Standard Deviation) for Year of Full Implementation

                          Pre (SD)       Post (SD)      t       p-value   Effect size (Cohen's d)
TEACHER-COMPLETED
Planning & Preparation    3.56 (0.48)    3.74 (0.38)    2.75    0.008     0.42*
Learning Environment      3.69 (0.46)    3.85 (0.36)    2.75    0.008     0.39*
Instruction               3.51 (0.50)    3.58 (0.48)    0.90    0.374     –
Assessment                3.30 (0.62)    3.39 (0.48)    1.17    0.249     –
Overall                   3.52 (0.51)    3.64 (0.42)    2.23    0.031     0.26*
PRINCIPAL-COMPLETED
Planning & Preparation    3.16 (0.78)    3.20 (0.77)    1.32    0.194     –
Learning Environment      3.26 (0.72)    3.29 (0.70)    0.858   0.395     –
Instruction               2.84 (0.64)    3.09 (0.60)    4.99    < 0.001   0.40*
Assessment                2.69 (0.76)    2.98 (0.60)    4.29    < 0.001   0.42*
Overall                   2.98 (0.73)    3.14 (0.67)    3.75    < 0.001   0.23*

* Indicates a small effect size (0.2 < d < 0.5); ** indicates a medium effect size (0.5 < d < 0.8); *** indicates a large effect size (d > 0.8). (Cohen, 1988)
A Comparison of the Differences of Perceptions of the Quality of Teacher
Instructional Practices between Teachers and Principals
The mean scores of QIR ratings completed by teachers and QIR ratings completed
by the principals, presented in Table 15, appear to differ systematically. Results of a
comparison of pretest and posttest data from QIR ratings completed by the principals to
QIR ratings completed by teachers are presented in Table 16. Teachers rated the quality
of their instructional practices higher than did principals in each domain and overall. The
differences between the QIR ratings completed by the principals and those completed by
teachers were significant at levels of p ≤ 0.001 and indicated large differences in each
domain and overall.
Table 16
Comparison of Teacher-completed to Principal-completed QIR Mean Scores (Standard Deviation) for Year of Full Implementation

                          Teacher (SD)   Principal (SD)   t      p-value   Effect size (Cohen's d)
PRETEST
Planning & Preparation    3.56 (0.48)    3.16 (0.78)      3.39   0.001     0.62***
Learning Environment      3.69 (0.46)    3.26 (0.72)      3.95   < 0.001   0.71***
Instruction               3.51 (0.50)    2.84 (0.64)      6.54   < 0.001   1.17***
Assessment                3.30 (0.62)    2.69 (0.76)      4.78   < 0.001   0.88***
Overall                   3.52 (0.51)    2.98 (0.73)      5.03   < 0.001   0.86***
POSTTEST
Planning & Preparation    3.74 (0.38)    3.20 (0.77)      4.54   < 0.001   0.89***
Learning Environment      3.85 (0.36)    3.29 (0.70)      5.42   < 0.001   1.01***
Instruction               3.58 (0.48)    3.09 (0.60)      4.65   < 0.001   0.90***
Assessment                3.39 (0.48)    2.98 (0.60)      3.95   < 0.001   0.75***
Overall                   3.64 (0.42)    3.14 (0.67)      5.18   < 0.001   0.89***

* Indicates a small effect size (0.2 < d < 0.5); ** indicates a medium effect size (0.5 < d < 0.8); *** indicates a large effect size (d > 0.8). (Cohen, 1988)
Analyses of Systematic Differences in Teachers’ Self-Ratings
Teachers with differing depths of quality of instructional practices may have
differed systematically in their self-ratings. The prior analysis of all teachers in one group
may mask any possible systematic differences. There were a number of different
grouping methods which seemed logical in order to search for systematic differences in
these data.
Teachers’ QIR self-ratings were separately analyzed by high, medium and low
performing groups based on the overall posttest QIR ratings completed by the principals.
Other options for generating teacher groups could have been overall pretest QIR ratings
completed by the principals, overall posttest QIR ratings completed by teachers, or
pretest QIR ratings completed by teachers.
Consideration was given to grouping teachers based on the overall pretest QIR
ratings completed by the principals. In anticipation of this question, a correlation
coefficient was calculated between the overall pretest and overall posttest QIR ratings
completed by the principals and found to be 0.873. Such a high correlation between the
pretest and posttest indicates that using either set of data for grouping purposes would
result in similar groupings and similar results.
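A minimal sketch of that correlation check, again with hypothetical placeholder ratings rather than the study's data, could use scipy's pearsonr:

    # Sketch only: hypothetical overall principal-completed ratings for the same teachers.
    from scipy.stats import pearsonr

    principal_pretest = [2.4, 2.9, 3.1, 3.6, 2.7, 3.8, 3.3, 2.5]
    principal_posttest = [2.5, 3.0, 3.2, 3.8, 2.8, 3.9, 3.5, 2.6]

    r, p = pearsonr(principal_pretest, principal_posttest)
    print(f"r = {r:.3f}")  # the study reports r = 0.873 for its own data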
Consideration was given to grouping teachers based on the overall QIR ratings
completed by teachers. However, as established in chapter two, principal ratings of
instructional practices are likely to be more valid than teacher ratings of instructional
practices. As discussed in chapter three, we implemented several procedures during the
course of this study, such as field tests, norming, and calibration, to increase the validity
and reliability of the QIR ratings completed by the principals. Thus, for discussion
purposes it seemed more logical to group teachers according to the overall QIR ratings
completed by the principals.
Comparisons Among High, Medium, and Low Performing Teachers According to
Posttest QIR Ratings Completed by the Principals.
Teachers’ QIR self-ratings were analyzed separately for groups defined by the depth
of quality of their instructional practices, as determined by their placement on the QIR.
The total sample (N=50) was split into three nearly equal-sized groups based on the
overall posttest QIR ratings completed by the principals.
Group One-High Performing Teachers (n= 16)
Group Two-Medium Performing Teachers (n=17)
Group Three-Low Performing Teachers (n=17)
The purpose of splitting the teachers into groups was to obtain as much
discrimination between groups as possible. More than three groups would have been
preferable; however, for this comparison we planned to compute means for each group in
order to make potentially generalizable claims, and separating the original sample of 50
teachers into more than three groups would likely have produced sample sizes too small
for this purpose.
The results of an ANOVA of the overall posttest QIR ratings completed by the
principals across these three groups indicated that the ratings for each group were
statistically different at a significance level of p<0.0001. The results of an ANOVA of the
overall pretest QIR ratings completed by teachers indicated that high, medium, and low
performing teachers’ self-ratings were equivalent; an ANOVA of the overall posttest QIR
ratings completed by teachers likewise indicated that the three groups’ self-ratings were
equivalent.
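A minimal sketch of this grouping-and-ANOVA step is shown below. The 50 ratings are randomly generated placeholders rather than the study's data, and scipy's one-way ANOVA stands in for whatever ANOVA routine the study's statistical package applied.

    # Sketch only: hypothetical ratings for 50 teachers, not the study's data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    principal_post = rng.normal(3.1, 0.7, size=50)   # hypothetical principal-completed overall posttest ratings
    teacher_post = rng.normal(3.6, 0.4, size=50)     # hypothetical teacher self-ratings for the same teachers

    # Rank teachers by the principal-completed overall posttest rating and split the
    # ranked indices into three nearly equal groups (low, medium, high performing).
    order = np.argsort(principal_post)
    low, medium, high = np.array_split(order, 3)

    # A one-way ANOVA on the principal ratings separates the groups by construction;
    # the same test on teacher self-ratings asks whether the groups rate themselves differently.
    f_p, p_p = stats.f_oneway(principal_post[low], principal_post[medium], principal_post[high])
    f_t, p_t = stats.f_oneway(teacher_post[low], teacher_post[medium], teacher_post[high])
    print(f"principal ratings: F = {f_p:.1f}, p = {p_p:.4f}")
    print(f"teacher self-ratings: F = {f_t:.1f}, p = {p_t:.4f}")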
Table 17 reports results of a comparison of QIR ratings completed by the
principals and QIR ratings completed by teachers for high, middle, and low performing
teachers. Table 17 shows that high performing teachers’ ratings of the quality of their
instructional practices were statistically equivalent to the principals’ ratings. By contrast,
medium performing teachers’ ratings of their instructional practices were significantly
higher, with medium to large effect sizes, than the principals’ ratings in each domain and
overall. Likewise, low performing teachers’ ratings of their instructional practices were
significantly higher than the principals’ ratings in each domain and overall. The effect
sizes between the low performing teachers’ and principals’ ratings were consistently
larger than those between medium performing teachers and principals.
Table 17
Comparison of Teacher-completed to Principal-completed QIR Mean Scores (Standard Deviation) for High, Medium, and Low Performing Teachers

                          Teacher (SD)   Principal (SD)   p-value   Effect size (Cohen's d)
PRETEST - HIGH PERFORMING TEACHERS
Planning & Preparation    3.60 (0.56)    3.83 (0.61)      0.333     –
Learning Environment      3.68 (0.46)    3.79 (0.64)      0.583     –
Instruction               3.60 (0.53)    3.34 (0.55)      0.168     –
Assessment                3.34 (0.71)    3.28 (0.66)      0.775     –
Overall                   3.56 (0.50)    3.56 (0.57)      0.987     –
PRETEST - MEDIUM PERFORMING TEACHERS
Planning & Preparation    3.66 (0.51)    3.28 (0.47)      0.005     0.77**
Learning Environment      3.77 (0.44)    3.47 (0.35)      0.023     0.75**
Instruction               3.53 (0.55)    2.99 (0.23)      0.001     1.28***
Assessment                3.40 (0.61)    2.85 (0.48)      0.013     1.00***
Overall                   3.59 (0.47)    3.15 (0.31)      0.002     1.11***
PRETEST - LOW PERFORMING TEACHERS
Planning & Preparation    3.43 (0.35)    2.41 (0.45)      < 0.001   2.52***
Learning Environment      3.64 (0.50)    2.59 (0.49)      < 0.001   2.13***
Instruction               3.42 (0.42)    2.23 (0.45)      < 0.001   2.75***
Assessment                3.15 (0.54)    2.02 (0.44)      < 0.001   2.3***
Overall                   3.41 (0.38)    2.31 (0.41)      < 0.001   2.80***
POSTTEST - HIGH PERFORMING TEACHERS
Planning & Preparation    3.85 (0.40)    4.09 (0.38)      0.072     –
Learning Environment      3.93 (0.37)    4.03 (0.33)      0.384     –
Instruction               3.77 (0.51)    3.71 (0.35)      0.677     –
Assessment                3.48 (0.53)    3.56 (0.50)      0.642     –
Overall                   3.76 (0.39)    3.85 (0.31)      0.423     –
POSTTEST - MEDIUM PERFORMING TEACHERS
Planning & Preparation    3.68 (0.33)    3.26 (0.33)      0.001     1.27***
Learning Environment      3.86 (0.31)    3.38 (0.21)      < 0.001   1.81***
Instruction               3.38 (0.35)    3.17 (0.23)      0.039     0.71**
Assessment                3.32 (0.38)    3.01 (0.19)      0.013     1.03***
Overall                   3.56 (0.25)    3.21 (0.14)      < 0.001   1.73***
POSTTEST - LOW PERFORMING TEACHERS
Planning & Preparation    3.70 (0.39)    2.45 (0.40)      < 0.001   3.15***
Learning Environment      3.78 (0.42)    2.62 (0.56)      < 0.001   2.36***
Instruction               3.60 (0.51)    2.52 (0.47)      < 0.001   2.22***
Assessment                3.37 (0.52)    2.41 (0.40)      < 0.001   2.07***
Overall                   3.61 (0.40)    2.50 (0.37)      < 0.001   2.87***

* Indicates a small effect size (0.2 < d < 0.5); ** indicates a medium effect size (0.5 < d < 0.8); *** indicates a large effect size (d > 0.8). (Cohen, 1988)
Changes in the Quality of Teacher Instructional Practices During the Year of Full
Implementation for High, Medium, and Low Performing Teachers.
Results of a comparison of data from pretest and posttest QIR ratings for high
performing teachers are presented in Table 18. According to analyses results of pretest
and posttest QIR ratings completed by teachers, the quality of teacher instructional
practices of high performing teachers improved in the domain of Learning Environment
at a significance level of p<0.001, indicating a medium effect size. Analyses results of
QIR ratings completed by teachers did not indicate a change in the quality of teacher
instructional practices of high performing teachers in the three domains of Planning &
Preparation, Instruction, or Assessment. Analyses results of QIR ratings completed by
teachers indicated the quality of teacher instructional practices of high performing
teachers increased overall at a significance level of p<0.05, producing a small effect size.
Table 18
Comparison of QIR Pre-Post Mean Scores (Standard Deviation) for Year of Full Implementation for High Performing Teachers

                          Pretest (SD)   Posttest (SD)   t      p-value   Effect size (Cohen's d)
TEACHER-COMPLETED
Planning & Preparation    3.60 (0.56)    3.85 (0.40)     1.82   0.088     –
Learning Environment      3.68 (0.46)    3.93 (0.37)     4.05   0.001     0.60**
Instruction               3.60 (0.53)    3.77 (0.51)     1.63   0.124     –
Assessment                3.34 (0.71)    3.48 (0.53)     0.96   0.351     –
Overall                   3.56 (0.50)    3.76 (0.39)     2.18   0.046     0.45*
PRINCIPAL-COMPLETED
Planning & Preparation    3.83 (0.61)    4.09 (0.38)     2.04   0.060     –
Learning Environment      3.79 (0.64)    4.03 (0.33)     2.35   0.033     0.47*
Instruction               3.34 (0.55)    3.71 (0.35)     3.38   0.004     0.80***
Assessment                3.28 (0.66)    3.56 (0.50)     2.42   0.029     0.48*
Overall                   3.56 (0.57)    3.85 (0.31)     3.18   0.006     0.63**

* Indicates a small effect size (0.2 < d < 0.5); ** indicates a medium effect size (0.5 < d < 0.8); *** indicates a large effect size (d > 0.8). (Cohen, 1988)
Results of a comparison of data from pretest and posttest QIR ratings for medium
performing teachers are presented in Table 19. According to analyses results of the
pretest and posttest QIR ratings completed by teachers, the quality of instructional
practices of medium performing teachers did not change during the year of full
implementation. According to analyses results of the pretest and posttest QIR ratings
completed by the principals, the quality of instructional practices of medium performing
teachers improved in the domain of Instruction at a significance level of p<0.05,
indicating a medium effect size. According to analyses results of the pretest and posttest
QIR ratings completed by the principals, the quality of instructional practices of medium
performing teachers did not change in any other domain or overall. Given that only one of
the ten possible indicators of a change in the quality of instructional practices for medium
performing teachers (Instruction; principal-completed) indicated a change, it is likely that
the instructional practices of medium performing teachers were affected considerably less
by the set of principal-teacher interactions during the year of full implementation than
were those of other teachers.
Analyses results of QIR ratings completed by the principals did not detect a
change in the quality of teacher instructional practices of high performing teachers in the
domain of Planning & Preparation. According to analyses results of QIR ratings
completed by the principals, instructional practices of high performing teachers improved
in the three domains of Learning Environment, Instruction, and Assessment, at a
significance level of p<0.05, indicating a small effect size in the two domains of Learning
Environment and Assessment, and a large effect size in the domain of Instruction.
Overall, according to analyses results of QIR ratings completed by the principals,
instructional practices for high performing teachers improved at a significance level of
p<0.01, producing a medium effect size.
Table 19
Comparison of QIR Pre-Post Mean Scores (Standard Deviation) for Year of Full Implementation for Medium Performing Teachers

                          Pretest (SD)   Posttest (SD)   t      p-value   Effect size (Cohen's d)
TEACHER-COMPLETED
Planning & Preparation    3.66 (0.51)    3.68 (0.33)     0.19   0.852     –
Learning Environment      3.77 (0.44)    3.86 (0.31)     0.75   0.462     –
Instruction               3.53 (0.55)    3.38 (0.35)     1.08   0.296     –
Assessment                3.40 (0.61)    3.32 (0.38)     0.50   0.626     –
Overall                   3.59 (0.47)    3.56 (0.25)     0.26   0.795     –
PRINCIPAL-COMPLETED
Planning & Preparation    3.28 (0.47)    3.26 (0.33)     0.18   0.862     –
Learning Environment      3.47 (0.35)    3.38 (0.21)     0.87   0.396     –
Instruction               2.99 (0.23)    3.17 (0.23)     2.29   0.036     0.78**
Assessment                2.85 (0.48)    3.01 (0.19)     1.36   0.194     –
Overall                   3.15 (0.31)    3.21 (0.14)     0.69   0.500     –

* Indicates a small effect size (0.2 < d < 0.5); ** indicates a medium effect size (0.5 < d < 0.8); *** indicates a large effect size (d > 0.8). (Cohen, 1988)
Results of a comparison of data from pretest and posttest QIR ratings for low
performing teachers are presented in Table 20. According to analyses results of pretest
and posttest QIR ratings completed by teachers, the quality of instructional practices of
low performing teachers improved in the domain of Planning & Preparation at a
significance level of p<0.05, indicating a medium effect size. Analyses results of pretest
and posttest QIR ratings completed by teachers did not indicate a change in the quality of
instructional practices of low performing teachers in the three domains of Learning
Environment, Instruction, or Assessment. Analyses results of pretest and posttest QIR
ratings completed by teachers indicated that the quality of instructional practices of low
performing teachers increased overall at a significance level of p<0.05, producing a
medium effect size.
Table 20
Comparison of QIR Pre-Post Mean Scores (Standard Deviation) for Year of Full Implementation for Low Performing Teachers

                          Pretest (SD)   Posttest (SD)   t      p-value   Effect size (Cohen's d)
TEACHER-COMPLETED
Planning & Preparation    3.43 (0.35)    3.70 (0.39)     2.83   0.012     0.72**
Learning Environment      3.64 (0.50)    3.78 (0.42)     1.34   0.200     –
Instruction               3.42 (0.42)    3.60 (0.51)     1.50   0.152     –
Assessment                3.15 (0.54)    3.37 (0.52)     1.74   0.101     –
Overall                   3.41 (0.38)    3.61 (0.40)     2.35   0.032     0.51**
PRINCIPAL-COMPLETED
Planning & Preparation    2.41 (0.45)    2.45 (0.40)     0.35   0.728     –
Learning Environment      2.59 (0.49)    2.62 (0.56)     0.26   0.797     –
Instruction               2.23 (0.45)    2.52 (0.47)     2.88   0.011     0.63**
Assessment                2.02 (0.44)    2.41 (0.40)     3.86   0.001     0.94***
Overall                   2.31 (0.41)    2.50 (0.37)     2.84   0.012     0.49*

* Indicates a small effect size (0.2 < d < 0.5); ** indicates a medium effect size (0.5 < d < 0.8); *** indicates a large effect size (d > 0.8). (Cohen, 1988)
Analyses results of pretest and posttest QIR ratings completed by the principals
did not detect a change in the quality of instructional practices of low performing teachers
in the two domains of Planning & Preparation and Learning Environment. According to
analyses results of pretest and posttest QIR ratings completed by the principals, the
quality of instructional practices of low performing teachers improved in the two domains
of Instruction and Assessment, at a significance level of p<0.01 and p<0.001 respectively,
indicating a medium and large effect size. Overall, according to analyses results of pretest
and posttest QIR ratings completed by the principals, the quality of instructional practices
of low performing teachers increased at a significance level of p<0.05, producing a small
effect size.
Research Question Two: How will changes in teachers’ instructional practices,
initiated by the set of principal-teacher interactions, affect student performance?
Classroom grade distributions and student discipline referrals were used in a
single, cross-sectional group interrupted time series research design in order to explore
any effect changes in teacher instructional practices, initiated by the set of principal-teacher
interactions, may have had on student performance during the pilot year and the
year of full implementation. Data from classroom grade distributions and student
discipline referrals from the four years prior to the pilot year were analyzed using linear
regression in order to predict expected levels of student performance during the pilot year
(2007-2008) and the year of full implementation (2008-2009). Actual levels of student
performance, operationalized as grade distributions and discipline referrals, from the pilot
year and year of full implementation were then compared to the levels of student
performance predicted from the regression analysis.
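The projection step can be sketched as follows using the percentage of Fs from Table 21. numpy's polyfit stands in for whatever regression routine was actually used, so the gaps it prints are only approximate illustrations of the Δ values reported later in Table 22.

    # Sketch of the projection approach, using the percentage of Fs from Table 21.
    import numpy as np

    years = np.array([0, 1, 2, 3])                   # 2003-04 through 2006-07 (pre-treatment)
    percent_fs = np.array([8.05, 9.80, 8.55, 8.35])  # actual % of Fs in those years (Table 21)
    actual_later = {4: 6.93, 5: 5.57}                # pilot year and year of full implementation

    slope, intercept = np.polyfit(years, percent_fs, 1)  # line of best fit on pre-treatment data only

    for year_index, actual in actual_later.items():
        expected = slope * year_index + intercept
        print(f"year index {year_index}: expected {expected:.2f}%, actual {actual:.2f}%, "
              f"gap {actual - expected:+.2f} percentage points")
    # The printed gaps land near the -1.65 and -2.97 values reported for Fs in Table 22.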
Classroom Grade Distributions
Classroom grade distributions are presented in Table 21 for four years previous to
the pilot year (pre-treatment), the pilot year, and the year of full implementation.
Table 21
Actual Classroom Grade Distributions for all Students (n=approximately 1400)

School Year     % of As   % of Bs   % of Cs   % of Ds   % of Fs
2003-2004       31.35     27.85     21.65     10.90     8.05
2004-2005       32.85     26.80     21.00     9.20      9.80
2005-2006       32.85     28.10     20.50     9.50      8.55
2006-2007       35.40     28.50     18.80     8.30      8.35
2007-2008 (1)   36.30     28.37     18.80     9.63      6.93
2008-2009 (2)   41.05     27.73     17.04     8.28      5.57

(1) Two of this study's principal-teacher interactions (snapshots and data reviews) were in place for this school year.
(2) This study's treatment (the set of four principal-teacher interactions) was in place for this school year.
Using grade distribution data from the four years previous to the pilot year, expected levels
of grade distributions were calculated using linear regression. Figure 6 depicts the actual
grade distributions of As, Bs, Cs, Ds, and Fs for school years 2003-2004 through 2008-2009.
A line of best fit for each grade distribution has been placed on the graph based on
data collected in the years prior to the pilot year; dashed lines on Figure 6 indicate
expected levels of grade distributions according to pretreatment data. The Greek letter
delta (Δ) is used to indicate the difference between expected and actual values for each
grade distribution in the pilot year and the year of full implementation. The differences
between expected and actual values for each are indicated within parentheses on Figure 6.
Table 22 reports these differences.
Table 22
Gap between Actual and Projected Classroom Grade Distributions for all Students (n=approximately 1400)

                                          As       Bs       Cs       Ds      Fs
Gap (Δ1) in Pilot Year                    0.17%    -0.28%   0.60%    2.03%   -1.65%
Gap (Δ2) in Year of Full Implementation   3.71%    -1.25%   -0.25%   1.43%   -2.97%

Percentages reported are the differences from the projected values based on linear regression of pretreatment data (school years 2003-2004 through 2006-2007).
The percentages of As and Fs showed the largest-magnitude differences from
expected values. The higher than expected percentage of Ds may have been due to a
portion of the Fs becoming Ds, and the higher than expected percentage of As may have
been due to a portion of the Bs becoming As.
Figure 6. Predicted and Actual Classroom Grade Distributions for all Students, 2003-2004 through 2008-2009. Dashed lines represent predicted values based on pre-treatment data from years 2003-2004 through 2006-2007. Differences between expected and actual values are represented within parentheses.
Classroom Discipline Referrals
The number of reported classroom discipline referrals, aggressive discipline
referrals (aggressive to school employee, defiance, failure to comply with discipline,
fights, harassment, profanity, disorderly conduct, and repeated violations), and discipline
referrals for several disaggregated groups are presented in Table 23 for four years
previous to the pilot year (pre-treatment), the pilot year, and the year of full
implementation.
Table 23
Discipline Referrals for School Years 2003-2004 through 2008-2009 (n=approximately 1400)

School Year     Total Discipline   Aggressive Discipline   Male   Female   Fresh   Soph   Jr    Sr
2003-2004       1792               666                     1148   644      513     475    449   355
2004-2005       1756               709                     1042   714      522     510    374   350
2005-2006       1708               740                     1116   591      473     458    390   389
2006-2007       1997               1039                    1433   564      727     513    366   391
2007-2008 (1)   1712               720                     1158   554      552     455    313   392
2008-2009 (2)   1255               481                     735    520      493     369    229   164

(1) Two of this study's principal-teacher interactions (snapshots and data reviews) were in place for this school year.
(2) This study's treatment (the set of four principal-teacher interactions) was in place for this school year.
Using discipline referral data from four years previous to the pilot year, expected
levels of discipline referrals were calculated using linear regression. The differences
between expected levels of discipline referrals and actual levels of discipline referrals, in
each category, from the pilot year and year of full implementation are presented in Table
24. The differences between expected and actual values for each are indicated within
parentheses on Figure 7.
Table 24
Differences of Discipline Referrals from Projected Frequencies (n=approximately 1400)

                                          Total        Aggressive
                                          Discipline   Discipline   Male   Female   Fresh   Soph
                                          (TD)         (AD)         (M)    (Fe)     (Fr)    (So)    Jr    Sr
Gap (Δ1) in Pilot Year                    -243         -387         -259   17       -155    -50     -23   -16
Gap (Δ2) in Year of Full Implementation   -757         -759         -775   19       -273    -142    -84   -259

Numbers reported are the differences in frequencies from the projected values based on linear regression of pretreatment data (school years 2003-2004 through 2006-2007).
Figures 7, 8, and 9 depict the actual discipline referral frequencies for school
years 2003-2004 through 2008-2009. A line of best fit for each discipline category has
been placed on the graph based on data collected in the years prior to the pilot year, and
dashed lines on each figure indicate the expected frequency of discipline referrals
according to pretreatment data. Δs have been indicated on each graph to show the
difference between expected and actual values for each discipline category in the pilot
year and the year of full implementation. The differences between expected and actual
values are indicated within parentheses on each figure.
As indicated on Figure 7, the actual frequency of total discipline referrals was
12% lower than expected in the pilot year and 38% lower than expected in the year of full
implementation. Additionally, the actual frequency of aggressive discipline referrals was
35% lower than expected in the pilot year and 61% lower than expected in the year of full
implementation. This pattern seems to indicate that essentially all of the difference
between actual and expected discipline referrals is due to aggressive discipline referrals
being much lower than expected.
Figure 7. Total Discipline and Aggressive Discipline for all Students, 2003-2004 through 2008-2009. Dashed lines represent predicted values based on pre-treatment data from years 2003-2004 through 2006-2007. Differences between expected and actual values are represented within parentheses.
As indicated on Figure 8, the actual frequency of male discipline referrals was
18% lower than expected in the pilot year and 51% lower than expected in the year of full
implementation. However, the actual frequency of female discipline referrals for both the
pilot year and the year of full implementation was essentially equivalent to the expected
value, 3% and 4% higher respectively.
Figure 8. Total Discipline by Gender, 2003-2004 through 2008-2009. Dashed lines represent predicted values based on pre-treatment data from years 2003-2004 through 2006-2007. Differences between expected and actual values are represented within parentheses.
Figure 9 indicates discipline referrals for individual grade levels during the school
years 2003-2004 through 2008-2009. The actual frequency of freshman discipline
referrals was 22% lower than expected in the pilot year and 36% lower than expected in
the year of full implementation. The actual frequency of sophomore discipline referrals
was 10% lower than expected in the pilot year and 28% lower than expected in the year
of full implementation. The actual frequency of junior discipline referrals was only 7%
lower than expected in the pilot year, but 27% lower than expected in the year of full
implementation. The actual frequency of senior discipline referrals was essentially the
same as expected in the pilot year (4% lower than expected) but was 61% lower than
expected in the year of full implementation.
Figure 9. Total Discipline for Freshmen, Sophomores, Juniors, and Seniors, 2003-2004 through 2008-2009. Dashed lines represent predicted values based on pre-treatment data from years 2003-2004 through 2006-2007. Differences between expected and actual values are represented within parentheses.
Classroom Grade Distributions and Student Discipline Referrals for High, Medium,
and Low Performing Teachers
Analyses of QIR results indicated that ratings of teacher instructional practices
completed by teachers and by principals diverged for high, medium, and low performing
teachers. Thus it was of interest to investigate whether there were differences in student
outcomes across these three teacher groups. Mean classroom grade distributions and
student discipline referrals disaggregated by high, medium, and low performing teachers
for 2006-2007 through 2008-2009 are reported in Table 25. A comparison indicated no
statistically significant differences in classroom grade distributions or student discipline
referrals among high, medium, and low performing teachers from 2006-2007 through
2008-2009.
Table 25
Comparison of Classroom Grade Distributions and Discipline Referral Mean Scores (Standard Deviation) for High, Medium, and Low Performing Teachers for 2006-2007 through 2008-2009 (n=approximately 1400)

                        High Performing Teachers          Medium Performing Teachers        Low Performing Teachers
                        06-07      07-08      08-09       06-07      07-08      08-09       06-07      07-08      08-09
% of A (SD)             25% (12%)  30% (11%)  35% (12%)   36% (17%)  41% (17%)  44% (19%)   33% (18%)  30% (9%)   37% (15%)
% of B (SD)             33% (8%)   31% (7%)   31% (7%)    26% (11%)  24% (8%)   25% (8%)    26% (10%)  30% (6%)   28% (6%)
% of C (SD)             22% (4%)   22% (6%)   19% (6%)    18% (8%)   18% (7%)   17% (6%)    19% (9%)   20% (5%)   18% (6%)
% of D (SD)             9% (5%)    11% (4%)   9% (5%)     7% (4%)    9% (4%)    8% (5%)     9% (6%)    11% (6%)   9% (6%)
% of F (SD)             11% (8%)   6% (3%)    6% (3%)     12% (9%)   8% (5%)    5% (4%)     12% (9%)   9% (5%)    8% (5%)
Discipline
Infractions (SD)        12 (12)    17 (27)    9 (9)       15 (11)    9 (9)      11 (16)     15 (13)    10 (13)    12 (9)
Research Question Three: How will changes in principal-teacher interactions affect
the frequency and focus of teacher conversations with principals, students, and
other teachers?
Teacher and student survey data were used in a single-group pretest-midtest-posttest
design in order to explore any effect that changes in principal-teacher interactions,
coupled with changes in instructional practices, had on the frequency and focus of teacher
conversations with principals, students, and other teachers during the pilot year and the
year of full implementation. Data from teacher and student surveys were compared using
chi square from the spring of 2007 (prior to the introduction of the set of principal-teacher
interactions) to the spring of 2008 (end of the pilot year), and from the spring of 2008
(before the year of full implementation) to the spring of 2009 (after the year of full
implementation).
Some of the questions on the teacher and student surveys are conceptually related
to research question three, the frequency and focus of teacher conversations. However,
according to the Cronbach's alpha analysis reported in chapter three, there was a lack of
internal consistency among the responses to similar questions on the teacher and student
surveys. Therefore, each question on the teacher and student surveys was analyzed
individually.
Although chi square is an acceptable analysis tool for comparing the distributions
of the survey data in this study, two assumptions were occasionally violated during
analysis. One assumption of a chi square analysis, violated by some survey data, is that no
cell contains a zero frequency count. The second assumption, also violated by some
survey data, is that no more than 20% of cells have a frequency count below five.
However, these are guidelines rather than strict rules, and researchers support analyses
that use chi square when these assumptions are violated (Levin, 1999).
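As an illustration of this kind of comparison (not a reproduction of the study's computation), the pilot-year shift in how often teachers discussed curriculum issues with other teachers (Table 26) could be tested with scipy's chi-square routine. The exact statistic depends on how response categories were collapsed, so the value printed here need not match the one reported in Table 26.

    # Sketch only: a 2 (survey administrations) x 3 (response categories) table built from
    # the spring 2007 and spring 2008 teacher counts shown in Table 26 for the question
    # "How often do you discuss curriculum issues with other teachers?"
    from scipy.stats import chi2_contingency

    counts = [
        [32, 25, 14],   # spring 2007: Daily/Weekly, Monthly, Never/Annually
        [65,  4,  4],   # spring 2008
    ]

    chi2, p, dof, expected = chi2_contingency(counts)
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
    # Inspecting `expected` for cells below 5 flags the small-frequency caveat noted above.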
Teacher Survey Data
Results of teacher surveys are presented in Table 26 for spring 2007 (pretest),
spring 2008 (posttest for pilot year/pretest for year of full implementation), and spring
2009 (posttest for year of full implementation).
Table 26
Teacher Survey of the Frequency and Focus of Teacher Conversations
(Entries are frequency counts of teacher responses for Spring 07, Spring 08, and Spring 09. Chi-square values (df=2) compare Spring 07 to Spring 08 and Spring 08 to Spring 09; * = p<0.05, ** = p<0.01, *** = p<0.001.)

Teacher-Teacher Conversations
How many times per day do you speak to another teacher? (χ2 07/08 = 37.95***; χ2 08/09 = 0.03)
    8 or more times: 5, 35, 33
    2-4 or 5-7: 53, 38, 38
    None or One: 13, 0, 0
How often do you discuss curriculum issues with other teachers? (χ2 07/08 = 31.97***; χ2 08/09 = 6.69*)
    Daily or Weekly: 32, 65, 54
    Monthly: 25, 4, 14
    Never or Annually: 14, 4, 3
How often do you discuss discipline issues with other teachers? (χ2 07/08 = 31.70***; χ2 08/09 = 2.76)
    Daily or Weekly: 30, 55, 61
    Monthly: 8, 14, 7
    Never or Annually: 33, 4, 3

Principal-Teacher Conversations
How often do you discuss curriculum issues with a principal? (χ2 07/08 = 0.44; χ2 08/09 = 3.14)
    Weekly or Daily: 14, 16, 8
    Monthly: 21, 24, 24
    Never or Annually: 36, 33, 39
How often do you discuss discipline issues with a principal? (χ2 07/08 = 6.04*; χ2 08/09 = 5.35)
    Weekly or Daily: 25, 28, 22
    Monthly: 19, 30, 22
    Never or Annually: 27, 15, 27
How often do you discuss teaching strategies with a principal? (χ2 07/08 = 0.23; χ2 08/09 = 4.83)
    Weekly or Daily: 8, 10, 3
    Monthly: 19, 20, 27
    Never or Annually: 44, 43, 41

Frequency & Length of Classroom Visits
How often did a principal visit your classroom last year? (χ2 07/08 = 100.53***; χ2 08/09 = 6.49)
    8 or more: 0, 30, 42
    5 to 7: 0, 30, 19
    2 to 4: 51, 11, 10
    None or Once: 20, 2, 0
What was the average length of the principals' visits to your classroom (not counting official observations)? (χ2 07/08 = 35.91***; χ2 08/09 = 5.77)
    30 to 60 minutes or 60 minutes or more: 1, 0, 2
    10 to 30 minutes: 0, 29, 17
    I was not visited or less than 10 minutes: 70, 44, 52
The first three teacher survey questions in Table 26 relate specifically to the
frequency and focus of teacher-teacher interactions. Data analyses indicated a significant
difference in the distribution of teachers’ responses to questions about their conversations
with other teachers before and after the pilot year. There were few significant differences
in the distribution of teachers’ responses to the same questions from the pilot year to the
year of full implementation; the one exception was teachers’ responses to how often they
discussed curriculum issues with other teachers. The results of data analyses indicated
that teachers did perceive an increase in the frequency of teacher-teacher conversations
related to curriculum and discipline, as well as overall, during the pilot year, and that this
level of teacher-teacher conversation was sustained during the year of full
implementation.
The next three teacher survey questions in Table 26 relate specifically to the
frequency and focus of principal-teacher conversations. With the exception of teachers’
responses to how often they discussed discipline issues with a principal, data analyses
indicated a lack of significant differences in the distributions of teachers’ responses to
questions concerning the frequency and focus of principal-teacher conversations during
either the pilot year or the year of full implementation. Teachers’ responses to how often
they discussed discipline issues with a principal were significantly different from before
to after the pilot year. Overall, the results of data analyses indicated that teachers largely
did not perceive a change in the frequency of principal-teacher conversations related to
curriculum, discipline, or teaching strategies during the pilot year or the year of full
implementation.
The last two teacher survey questions in Table 26 relate to the length and
frequency of principal classroom snapshots. Data analyses indicated a significant
difference in the distribution of teachers’ responses to questions about the length and
frequency of principal classroom visits before and after the pilot year. There were no
significant differences in the distribution of teachers’ responses to the same questions
from the pilot year to the year of full implementation. The results of data analyses
indicated that teachers did perceive an increase in the frequency and duration of principal
classroom visits during the pilot year and that this change was sustained during the year
of full implementation.
Student Survey Data
Results of student surveys are presented in Table 27 for spring 2007 (pretest),
spring 2008 (posttest for pilot year/pretest for year of full implementation), and spring
2009 (posttest for year of full implementation).
Table 27
Student Survey of the Frequency and Focus of Teacher-Student Conversations
(Entries are frequency counts of student responses for Spring 07, Spring 08, and Spring 09. Chi-square values (df=2) compare Spring 07 to Spring 08 and Spring 08 to Spring 09; * p<0.05, ** p<0.01, *** p<0.001.)

Frequency of Teacher-Student Conversations
How many times per day do you speak to a teacher? (χ2 07/08 = 0.60; χ2 08/09 = 7.36*)
    8 or more times: 124, 128, 100
    2-4 or 5-7: 82, 81, 108
    None & One: 10, 7, 8

Focus of Teacher-Student Conversations
How often do you discuss personal issues with teachers? (χ2 07/08 = 1.71; χ2 08/09 = 4.84)
    Daily or Weekly: 47, 54, 48
    Monthly: 27, 33, 20
    Annually or Never: 142, 129, 148
How often do you discuss discipline issues with your teachers? (χ2 07/08 = 4.74; χ2 08/09 = 2.54)
    Daily or Weekly: 34, 36, 30
    Monthly: 9, 20, 13
    Annually or Never: 173, 160, 173
How often do you discuss learning strategies with a teacher (how to study, test taking strategies, learning styles)? (χ2 07/08 = 0.24; χ2 08/09 = 6.40*)
    Daily or Weekly: 57, 60, 49
    Monthly: 55, 57, 42
    Annually or Never: 104, 99, 125
How often does a teacher in this building motivate and inspire you? (χ2 07/08 = 2.85; χ2 08/09 = 0.65)
    Daily: 49, 58, 63
    Weekly or Monthly: 89, 96, 98
    Annually or Never: 78, 62, 55
Do your teachers discuss class performance with you/the class (i.e. class average, test averages, etc)? (χ2 07/08 = 2.99; χ2 08/09 = 5.66)
    Daily or Weekly: 120, 131, 112
    Monthly: 71, 70, 76
    Yearly or Never: 25, 15, 28
The first student survey question in Table 27 relates specifically to the daily
frequency of teacher-student conversations. Data analyses indicated that students did not
perceive a change in the frequency of teacher-student conversations during the pilot year.
Analyses of the student responses indicated that students did perceive a decrease in the
daily frequency of teacher-student conversations, at a p<0.05 significance level, during
the year of full implementation.
The next five student survey questions in Table 27 relate to the frequency and
focus of teacher-student conversations. Data analysis indicated that, according to students,
there were essentially no perceived differences in the frequency and focus of teacher-student
conversations related to personal issues, discipline issues, learning strategies,
motivation, or class performance during the pilot year or the year of full implementation.
The one exception was a significant difference, at a level of p<0.05, in the distribution of
student responses to how often they discuss learning strategies with their teachers during
the year of full implementation, indicating a slight decrease.
A slight change in the frequency of teacher-student conversations was indicated in
the year of full implementation: fewer students reported talking to a teacher eight or more
times a day, while more students reported talking to a teacher two to seven times per day.
This difference was likely trivial, especially since responses to the other survey questions
failed to indicate significant shifts in the focus of teacher-student conversations. It would
be logical to expect that, because the data clearly indicate an improvement in the quality
of teacher instructional practices, the frequency and focus of teacher-student
conversations would also have improved; however, no such improvement was indicated.
CHAPTER FIVE
DISCUSSION
This discussion chapter begins with an overview of the findings presented in
chapter four, and is followed by an in-depth discussion of each research question: quality
of teacher instructional practices; changes in student performance; and frequency and
focus of teacher conversations. After the discussion of each research question,
implications of the study’s unintended outcomes and recommendations for future
research are discussed. The chapter concludes with a table of core questions principals
may have which are affirmatively answered by this research; the audience for these
questions/answers is principals interested in this work.
Overview
As a result of the four principal-teacher interactions introduced in this study,
teacher instructional practices improved (according to analyses of QIR data), student
performance increased (according to analysis of student grade distributions and
discipline), and the frequency and focus of some teacher conversations changed
(according to analysis of teacher and student surveys). While teacher instructional
practices improved, the degree of improvement varied among different groups of
teachers. Additionally, teachers and principals did not always agree on the quality of
instructional practices exhibited in the classroom, particularly for lower performing
teachers.
According to data analyses of grade distributions and discipline referrals, student
performance improved more than would have been predicted had the treatment not occurred.
Improvement in classroom grade distributions and discipline referrals may have been
impacted both by changes in teacher instructional practices and by increased principal
visibility. This study did not investigate the direct or mediated impacts of teacher
instructional practices or principal visibility on student performance.
Results indicated that the frequency and focus of teacher-teacher conversations changed
during the course of this study; however, there were essentially no indications that the
frequency and focus of principal-teacher conversations or teacher-student conversations
changed during the course of the study.
Teacher Instructional Practices
Teacher instructional practices formed the key construct for research question
one. Because the treatment introduced was school-wide, the entire teaching staff of the
school was the initial focus of data analysis for research question one. As a group,
teachers in this school indicated the quality of their instructional practices improved in
the two domains of Planning & Preparation and Learning Environment (see Table 15).
Principals indicated the quality of teacher instructional practices improved in the two
domains of Instruction and Assessment. Teachers perceived self-improvement in
instructional practices within specific domains that differed from the domains principals
rated as showing improvement. Several factors potentially explain this divergence in
perceptions of improvement across domains.
Teacher planning and preparation for lessons is generally a private endeavor that
is not easily observable by principals. Rather than observing the process and intellectual
effort required for quality preparation, during instruction principals observe the result of
the planning and preparation (Hirsch, 1999; Yinger, 1980). Likewise, teachers would
have a much deeper knowledge of their classroom learning environment than would
principals who are only visitors from time to time. Thus, teachers’ self-rated
improvements in the two domains of Planning & Preparation and Learning Environment
suggested that their thorough self-knowledge of these two critical instructional
components may not be directly observable by principals.
Principals traditionally receive extensive training in analyzing, recognizing, and
characterizing a wide range of quality of assessment and instructional techniques
employed by teachers. As a result, they tend to develop perspectives that are attuned to
these differences in quality. Thus, principals may often characterize changes they observe
in their teachers as improvements in instruction and assessment, whereas the teachers
they observe may only recognize that their assessments and instructional practices have
changed but not necessarily characterize that change as improvement (Fullan, 2005b).
As a group, teachers rated their instructional practices as demonstrating greater
quality than did the ratings of principals (see Table 16). Because many in the teaching
profession are hard working, conscientious individuals that put forth a great deal of effort
in their core responsibility of planning and delivering instruction, teachers may rate the
quality of their instructional practices artificially high based on that effort (Ross, 1995;
Schacter & Thum, 2004). Essentially they equate effort with quality. Due to observation
cycles and evaluation timelines, principals evaluate the quality of instructional practices
within specific time intervals with specific standards in mind. They have not been
personally immersed in the planning and delivering of the instruction they are evaluating,
so they tend to bring an outside perspective to this observation task. Because principals
are only generally aware of the time and effort teachers spend preparing outside the
classroom, this awareness does not typically influence their interpretation of what they
observe (Torff & Sessions, 2005). In many cases, teacher self-ratings may be the result of
confusing planning and delivery effort with the quality of instructional practices. As
discussed in chapter three this supports the stronger validity of principal ratings over
teacher ratings.
The Quality of Teacher Instructional Practices
Groups of teachers identified as having different quality of instructional practices
may have also differed in their self-ratings. In order to obtain as much discrimination in
groups as possible, teachers were split into three groups according to the overall posttest
QIR ratings completed by the principals as discussed in chapter four. Statistically, there
was no difference in the QIR self-ratings completed by teachers who were identified as
high, medium, or low performing teachers based on the principals’ ratings as evidenced
in Table 17. According to analysis of QIR ratings completed by the principals, high
performing teachers improved the most, and low performing teachers improved more
than medium performing teachers who essentially did not improve (see Tables 18, 19, &
20).
High performing teachers rated their instructional practices equivalent to the
principal ratings. Medium performing teachers rated their instructional practices slightly
higher than principals (about 0.3 of a performance level). Low performing teachers rated
their instructional practices considerably higher than principals, almost a full
performance level above in every domain throughout the QIR. As a result, high
performing teachers accounted for little to none of the variance between principal and
teacher ratings of instructional practices. Medium performing teachers accounted for only
a small amount of the variance. Low performing teachers accounted for most of the
variance between principal and teacher ratings of instructional practices.
High Performing Teachers
High performing teachers primarily perceived their only improvement to
instructional practices to be in the domain of Learning Environment as evidenced in
Table 18. This result may be associated with teacher efficacy. Teachers with a strong
sense of efficacy are open to new ideas and more willing to experiment with new
methods to better meet the needs of their students (Guskey & Passaro, 1994; Stein &
Wang, 1988). According to Tschannen-Moran, Woolfolk, and Hoy (1998), “teacher
efficacy influences teachers’ persistence when things do not go smoothly and their
resilience in the face of setbacks” (p. 222).
The belief that there is always room for improvement seems to apply to how high
performing individuals in many professions view their own performance. High
performing individuals in professions outside of education tend to be critical of their
performance and continually search for ways to improve (Dunning, Johnson, Ehrlinger,
& Kruger, 2003; Young, 2009). High performing teachers may also tend to be critical of
their own instructional practices, constantly looking for ways to improve their teaching.
Thus, high performing teachers may have underrated their instructional practices
compared to their actual performance in this study.
Principals perceived improvement in all domains of instructional practices for
high performing teachers with the exception of the domain of Planning & Preparation.
This result supports the conclusion that high performing teachers actually improved their
instructional practices during the course of this study. Though teachers themselves may
not have perceived as much improvement, the reason that principals did not observe
improvement in high performing teachers’ planning and preparation was likely due to the
private nature of activities related to planning and preparation. As previously discussed,
teachers’ planning and preparation often takes place in isolation away from the school
and is more difficult for a principal to observe (Hirsch, 1999; Yinger, 1980).
Medium Performing Teachers
Improvement in instructional practices was demonstrated in only one of eight
possible domains for medium performing teachers, the teachers whom principals rated in
the middle group in terms of instructional practices, as evidenced in Table 19. According
to principals’ ratings, the quality of medium performing teachers’ instructional practices
remained essentially unchanged during the year of full implementation. Like the
principals, the medium performing teachers themselves perceived no improvement in
their own instructional practices. The only domain in which principal-rated improvement
was indicated in the QIR for the medium performing teachers was in the domain of
instruction. That change, while statistically significant, was only 0.2 of a QIR
performance level. These results seem to indicate that the instructional practices of
medium performing teachers did not improve during the course of this study.
The lack of change to instructional practices by medium performing teachers
implies that in order to improve instructional practices for medium performing teachers in
this school, additional strategies might be necessary, or perhaps the treatment employed
needed more repetition to be effective.
These findings for medium performing teachers contradicted some
recommendations regarding the breadth of teacher improvement. According to Marzano
(2003) and Downey et al. (2004) specific strategies for improving instructional practices
of teachers should not be differentiated among teachers performing at different levels.
Low Performing Teachers
Low performing teachers indicated that they improved in the domain of Planning
& Preparation (see Table 20). As was the case for all teachers, a possible cause for this
result was that teachers are more familiar with their own planning and preparation than
principals (Fullan, 2005b). This may have indicated that low performing teachers were
putting forth increased effort in planning and preparation, which they perceived as
improvement (Ross, 1995).
Low performing teachers in this study were rated by the principals as Developing
within five levels of performance on the QIR (Unsatisfactory, Beginning, Developing,
Proficient, Exemplary) on both the pretest and posttest. The low performing teachers,
when rating themselves, selected the proficient range overall. Through the lens of social
cognitive theory, Bandura (1986) proposed that what people perceive about themselves
affects the outcome of their endeavor. Thus, if a low-performing teacher believes he or
she is doing proficient work, but does not have the skill set to improve or the desire to
improve, then perhaps no improvement in teaching will occur.
Although not supported by data presented in this study, principals determined
through discussions and calibration meetings that the low performing teachers did not
believe there was any room to improve their instructional practices. Instead, they blamed
poor student effort, rather than their own instructional practices, for poor student performance.
Across the three levels of teacher performance (high, medium, and low
performing) principals and teachers perceived improvement differently, but overall
improvement of instructional practices was indicated. The data also demonstrated that
low performing teachers consistently overrated their instructional practices as compared
to principal ratings. QIR ratings completed by the principals and teachers showed that,
while the set of four principal-teacher interactions did improve the instructional practices
of the low performing teachers, the principal-teacher interactions did not improve the
accuracy of the low performing teachers’ perceptions of their own instructional practices.
(This assumes principal ratings of instructional practices were more valid, as discussed
earlier in chapters two and four). According to the findings in this study, low performing
teachers either perceived their instructional practices to be of equal high quality as other
teachers or low performing teachers were unwilling to admit their instructional practices
were of lower quality than their peers. Both of these interpretations reflect current
research in self-perceptions of low performing teachers (Dunning et al., 2003; Kruger &
Dunning, 1999; Yariv, 2009).
Changes in Student Performance
For the purpose of this study, student performance was defined at the school level
as school-wide student grade distributions and school-wide student discipline referrals.
As established in chapter two, student grades and discipline are often approximate
indicators of the overall quality of teacher instructional practices. Research question two
asked if the change in teacher instructional practices would have an effect on these two
measures of student performance. Analyses of data in response to question one
documented changes in instructional practices for the year of full implementation; thus,
a change in student performance was hypothesized.
There were some strong indicators that student performance (grades and
discipline) improved more than trends would have predicted in both the pilot year and the
year of full implementation. Since the changes in instructional practices clearly varied for
different groups of teachers (high, medium, and low performing), the researchers
expected to see differences in the student performance indicators of the students of high,
medium, and low performing teachers. However, there were no significant differences in
the student performance indicators for students of high, medium, or low performing
teachers.
Student Grade Distributions
Although historically improvements in student grade distributions have been
difficult to achieve, overall student grade distributions improved slightly (see Table 21).
A ceiling effect may have influenced the grade distributions of As. The data analyses of
student grade distributions suggested that some Bs moved to As and some Fs moved to
Ds as shown in Figure 6. Analyses also indicated more improvement to grade
distributions during the year of full implementation than during the pilot year.
As indicated earlier in a discussion of the quality of teacher instructional
practices, principal ratings suggested improvements to teacher instructional practices
were made in the two domains of Instruction and Assessment (see Table 15).
Improvements in instruction and assessment may account for some of the improvement in
student grade distributions (Haycock, 1998; Jordan, Mendro, & Weerasinghe, 1997;
Rivkin et al., 2001). A behavior known as the Kohler effect, where subjects will change
behavior to reflect what they perceive an observer wants, may also account for some of
the improvement in student grade distributions (Lount, Park, Kerr, Messe, & Dong-Heon,
2008). That is, the nature of the data reviews, which included reviews of each teacher’s
grade distributions, may have prompted teachers to alter classroom grading practices to
avoid exhibiting higher failure rates than their colleagues.
It is important to note that any altering of grading policies to avoid being an
outlier was a self-imposed process, since all data presented to teachers was anonymous.
No one but the individual teacher and the principal knew the results of the individual
classroom data collection. Although not supported by data presented in this study,
principals perceived through discussions and calibration meetings that some teachers
made changes in grading practices to allow failing students to improve their grades.
These same changes then made it easier for other students to see an increase in their
grades. According to principal discussions with teachers, many teachers extended deadlines for assignments, allowed students to retake assessments, or let students demonstrate mastery of content in an alternate format. Such accommodations were often provided for all students
within a class, not just those who were failing.
Another strategy that may have influenced some of the improvements in grade
distributions was the use of trigger points as mentioned in chapter two. A trigger point
initiated more detailed principal follow-up with a teacher who seemed to need additional
assistance in order to improve grade distributions. For example, principals met with teachers whose classroom grade distributions exhibited substantially higher percentages of Fs than those of their anonymous peers (typically a percentage of Fs at least 5% higher than that of peers teaching similar classes at the same grade level and in the same subject area). Teachers who exhibited a
high percentage of Fs were required to consult with the principals for help and guidance
on how to improve their instructional practices so that students would be more successful
academically. These teachers then began implementing various strategies, such as
requiring students to stay after school to make up work and retake tests. This resulted in
Fs becoming less frequent.
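To make the trigger-point logic concrete, the following sketch shows one way such a rule could be expressed in code. It is a hypothetical illustration only: the function names, the data structure, the use of a peer median, and the reading of the 5% threshold as percentage points are assumptions made for the example, not a description of the procedure the principals actually used.

# Illustrative sketch of a hypothetical trigger-point check modeled on the rule
# described above; names, data, and the percentage-point reading of the 5%
# threshold are assumptions, not the study's actual procedure.

from statistics import median

def f_rate(grade_counts):
    """Percentage of Fs in one classroom's grade distribution."""
    total = sum(grade_counts.values())
    return 100.0 * grade_counts.get("F", 0) / total if total else 0.0

def flag_trigger_points(distributions, threshold=5.0):
    """Return teachers whose F rate exceeds the peer median by `threshold` points."""
    rates = {teacher: f_rate(counts) for teacher, counts in distributions.items()}
    peer_median = median(rates.values())
    return [t for t, r in rates.items() if r - peer_median >= threshold]

# Example with anonymized teacher codes and hypothetical grade counts.
classes = {
    "T01": {"A": 10, "B": 8, "C": 5, "D": 2, "F": 1},
    "T02": {"A": 6, "B": 7, "C": 6, "D": 3, "F": 4},
    "T03": {"A": 9, "B": 9, "C": 4, "D": 2, "F": 9},
}
print(flag_trigger_points(classes))  # e.g. ['T03']

In practice, a comparison of this kind would be restricted to peers teaching similar classes at the same grade level and in the same subject area, as described above, and any flag would simply prompt a follow-up conversation rather than an automatic consequence.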
Discipline Referrals
Overall, discipline referrals decreased dramatically during the treatment years of
this study as seen in Table 23 and Table 24. The data analyses indicated that most of the
reduction in discipline infractions was due to a decrease in aggressive discipline referrals
as seen in Figure 7. The data analyses also indicated that essentially all of the reduction in
discipline infractions was due to a decrease in male discipline referrals. For the pilot year,
aggressive discipline referrals actually decreased more than overall discipline. During the
year of full implementation, the drops in aggressive discipline referrals were equivalent
to the drop in overall discipline referrals. Before the introduction of principal-teacher
interactions, males were responsible for about two thirds of the discipline referrals;
however, by the end of the year of full implementation, male discipline referrals accounted for little more than half of the overall discipline referrals.
The drop in male discipline referrals over the course of this study may be
accounted for in two ways. First, as boys mature, they learn to tame their aggressive
tendencies in part through learned appropriate interactions with adults. Boys learn
nurturing through conversation and example and are more aware of the presence of
authority. They respond more to authority figures than do females (Gurian, 2001). Second, given that the quality of instructional practices did not change dramatically over the course of this study (see the small effect size in Table 15), while aggressive and male discipline referrals were considerably lower than expected, it is most likely that the increased visibility of principals (as
a result of snapshots) substantially contributed to the reduction of aggressive and male
discipline issues. Consistent with the theory of management by walking around (Peters &
Waterman, 1982), students interacting with the principals on a daily and ongoing basis
may have been more likely to act appropriately. Of the four different grade levels,
freshman discipline referrals were most impacted (see Figure 9). Although not supported
by data presented in this study, through discussions and calibration meetings, principals
concluded that freshmen were more closely monitored and perhaps had more interactions with numerous adults in support of their success as they transitioned to the ninth grade. It is likely, then, that increased involvement with freshmen by both principals and teachers affected their behavior. Sophomore, junior, and senior discipline referrals also decreased, with the largest decrease occurring during the year of full implementation.
Principal visibility increased due to the nature of snapshots, which initiated a kind of positive feedback loop between interaction and discipline. That is, as principals
performed snapshots, they interacted more with the students and discipline referrals
decreased as a result. As discipline referrals decreased, more time was available for
principals to perform more snapshots which increased the interactions between students
and principals even more. This resulted in fewer discipline referrals and more time for
snapshots during the second year of the study. According to Sarason (1990), eventually
behaviors such as snapshots will likely become part of the school culture.
Grade Distributions for the Students of High, Medium, and Low Performing
Teachers
Analyses of grade distributions indicated that there was no statistically significant difference among the student grade distributions of high, medium, and low performing teachers, as evidenced in Table 25. Student performance data from each of these groups exhibited
large variances for such a small sample size, which suggested that the variance of grade
distributions within teacher groups may have been greater than variance across the three
groups. Thus, one interpretation suggests that the impact of varying quality of instruction
on grades may not be strong enough for this research design to measure.
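To make this reasoning explicit, the standard one-way analysis of variance F ratio is sketched below. This is a generic illustration of the logic, not a reanalysis of the study's data, and the symbols are introduced here only for the example:

F = \frac{MS_{\text{between}}}{MS_{\text{within}}} = \frac{\sum_{g=1}^{k} n_g\,(\bar{x}_g - \bar{x})^2 / (k-1)}{\sum_{g=1}^{k} \sum_{i=1}^{n_g} (x_{gi} - \bar{x}_g)^2 / (N-k)}

where x_{gi} is a performance indicator for classroom i of teacher group g, \bar{x}_g is the group mean, \bar{x} is the grand mean, k is the number of groups (three here), n_g is the number of observations in group g, and N is the total number of observations. When the within-group variability in the denominator is large relative to the between-group variability in the numerator, F remains small and no statistically significant difference is detected, which is consistent with the pattern described above.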
Another interpretation of the grade distribution results might conclude that improvement in classroom grade distributions was not related at all to the quality of teacher instructional practices. Although grade distributions and discipline data were reported anonymously, the data may have been affected by teachers' resistance to being outliers (Lount et al., 2008). That is, because the individual data were anonymous, teachers may have discussed, and attempted to adjust, their performance indicators so that they fit statistically with their peers as a whole. Once again, this may be a function of the conscientious, success-driven traits of teachers (Martinez et al., 2005).
Discipline Referrals from the Students of High, Medium, and Low Performing
Teachers
As discussed previously, teacher instructional practices improved for high
performing and low performing teachers. Improvements in these instructional practices
may have accounted for some of the decrease in student discipline referrals. However, if the number of discipline referrals were negatively correlated with quality of instruction scores, then one would expect discipline referrals to be related both to each teacher's score on the QIR instructional practices scale and to any improvement that teacher demonstrated. Contrary to this expectation, there were no significant differences in the
number of discipline referrals from high, medium, or low performing teachers as
evidenced in Table 25. As with grade distributions, if the quality of teacher instructional
practices were the main effect for decreasing student discipline, then students of teachers
exhibiting the best instructional practices would have had fewer discipline referrals (Hall
& Hord, 2000; Myers et al., 1987).
This lack of any significant difference in the discipline referrals among high, medium, and low performing teachers may indicate that most of the reduction in discipline was due to the nature of snapshots, which increased principal visibility in the classrooms (Downey et al., 2004; Frase & Hetzel, 1990). Students actually asked the principals, “Are you following me?” and “How do you know everything that is going on?” These examples illustrate student awareness of the presence of principals and the possible subsequent impact on behavior. In the future, it would be beneficial to study
what effect principal visibility has on student discipline as an isolated treatment. This
study used four specific principal-teacher interactions as one treatment. Principal
visibility was not an intended outcome of the interactions, but it may have been
responsible for much of the reduction in discipline infractions.
Changes in the Frequency and Focus of Teacher Conversations with Principals,
Students and Other Teachers
Unlike the clear results for student discipline referrals, results from student and teacher surveys regarding the frequency and focus of teacher conversations were mixed. According to teacher perceptions, there were changes in the frequency and
focus of conversations between teachers and other teachers, but not between principals
and teachers. Additionally, students did not perceive changes in the focus of teacher
conversations with students.
The Frequency and Focus of Principal-Teacher Conversations
The four principal-teacher interactions implemented in this study were
specifically designed to alter the frequency and focus of principal-teacher conversations,
but teachers perceived little to no change in the focus of these conversations. Opportunity
for principal-teacher conversations clearly increased, as indicated in the teacher survey; teachers perceived a dramatic increase in the frequency of principal visits to the classroom, with at least 80% of teachers indicating more than five principal visits to the classroom during the pilot year and again during the year of full implementation. This was not surprising, since a large number of snapshots were documented for those years. Teachers also indicated on the survey that the average length of a classroom visit, discounting official observations, was approximately ten minutes. This result was not surprising either since, by definition, snapshots were to be about ten minutes in length. What
these results do show is that teachers were aware of the frequency and the duration of the
many principal visits (more than 1600 during the pilot year and more than 2400 during
the year of full implementation). Teacher perceptions of what principals and teachers
discussed, though, remained essentially unchanged during the course of this study as
evidenced in Table 26.
Teachers did not perceive a shift in the focus of principal-teacher conversations
toward curriculum, discipline, or teaching strategies. Because these topics were originally
intended to be a core component of the principal-teacher interactions, this result was
unexpected by the researchers. After reflecting on and examining the treatment in the study, the four principals who were the treatment providers were confident that all four principal-teacher interactions yielded more principal-teacher conversations focused on curriculum, discipline, and teaching strategies. However, teachers did not perceive
these same changes.
There are several possible reasons for this disconnect between the perceptions of
the principals and those of the teachers. First, at this school, principals and teachers already had frequent conversations prior to the pilot year. It may be that the collaborative nature of the treatment used in this study was perceived as friendly and benign, akin to discussions about weekend plans or family. Teachers seem much less sensitive
to principal-teacher interactions of a collaborative nature, not perceiving them as
threatening (Peters & March, 1999). Second, it may be that the survey instrument itself
was not sensitive enough to detect the actual changes in principal-teacher conversations
which occurred during this study (see validity and reliability of teacher and student
surveys in chapter three).
The collaborative nature of the treatment does not help explain why teachers did
not perceive a shift in emphasis in principal-teacher conversation related to curriculum
and instructional strategies. Each of the principal-teacher interactions implemented in this
study had both curriculum and teaching strategies embedded in the treatment. According
to Schunck, Aubusson, and Buchanan (2008),
“We argue that the contribution of this role [principal] is not merely in what the
critical friend offers to the observed teacher, but rather, lies in the opportunity for
discussion that probes the assumptions of all concerned, challenges views of what
good teaching looks like, and enables analysis of the practices of all concerned.
An understanding of the complexity of the factors that made this process work for
us will inevitably be partial, yet important factors would seem to include:
willingness to take risks; respect for one another’s expertise in teaching; and
ability to reflect collaboratively on our teaching and learning. The strength of the
process is difficult to quantify and lies in our acknowledgment of its complexity;
it cannot be reduced to a checklist of critical factors. We are reminded by the
process of how intensely personal is professional” (p. 225).
Considering the decrease in discipline that occurred over the course of this study (see Table 23), it is logical to think that such a dramatic drop would provide fewer occasions for principals and teachers to discuss discipline issues. Therefore, even if principals were initiating more discussions with teachers about discipline, fewer actual discipline referrals could counteract the increased discussion and result in an overall drop in principal-teacher discipline conversations.
The Frequency and Focus of Teacher-Teacher Conversations
The set of four principal-teacher interactions implemented in this study was not designed specifically to effect changes in the frequency and focus of teacher-teacher conversations; however, according to teacher surveys, changes in the frequency and focus of teacher-teacher conversations appeared in the pilot year and were sustained through the year of full implementation, as evidenced in Table 26. Beginning with the pilot year and
continuing through the year of full implementation, teachers reported speaking with each
other much more on a daily basis. This increase in teacher-teacher conversations may
have been a result of teachers discussing changes in principals’ behavior or a response to
the added level of attention to classroom level data, grades and discipline, emphasized
during the two years of this study (Lount et al., 2008).
Results from teacher surveys also indicated that teachers more frequently
discussed curriculum and discipline issues with each other in response to the treatment in
this study. Each of the principal-teacher interactions was intended to increase teachers' reflection on instructional practices. Thus, it is logical to conclude that if teachers talked to principals about curriculum and discipline, they would talk to each other about similar concepts. According to Schacter and Thum (2004),
although there are vast differences between disciplines, there are common pedagogical
issues that teachers can discuss that will improve instruction. The survey results indicate
that these types of teacher-teacher conversations increased as a result of the treatment
from this study.
The Frequency and Focus of Teacher-Student Conversations
By definition, the quality of teacher instructional practices includes the frequency
and focus of teacher-student conversations as discussed in chapter two. Yet, according to
student surveys, essentially no change in the frequency of teacher-student conversations
occurred during the course of this study as evidenced in Table 27. This may have been a
result of resistance to change in culture as noted by Sarason (1990). The results from
student surveys may indicate that students were not sensitive to changes in the frequency
and focus of teacher-student conversations. Students often focus on classroom
assignments and social issues in their school; and although students claim to want direct
feedback from teachers, they do not care for overly personal conversations or interactions
with teachers (Cushman & Delpit, 2003). Students are also accustomed to changes in classroom interaction dynamics because they have different teachers for different classes. So many periodic changes may make change so commonplace that students fail to recognize it. Hence, responses to a survey about changes in conversation patterns with teachers may not reflect changes from previous patterns. It is also possible that the survey instrument provided to students was not sensitive enough to detect changes in the frequency and focus of teacher-student conversations (see the discussion of reliability and validity of teacher and student surveys in chapter three).
Implications
Research in the area of specific treatments for teachers, especially lower performing teachers, adds to the body of research already available on teacher treatments from Danielson (2007), Marzano (2003), Fullan (2005b), and Whitaker (2002). The amount of
time required to improve instructional practices for underperforming teachers presents a
significant challenge. We were disappointed by the insignificant changes noted in the
quality of instructional practices for medium performing and low performing teachers
after the introduction of the four principal-teacher interactions. Consistent with prior
research (Dunning et al., 2003; Kruger & Dunning, 1999), low performing teachers often
do not have the ability or the knowledge to accurately self-assess their own practices and
may require more intense treatment to improve. Given our experience, the same treatment that demonstrated an impact on high performing teachers may not induce improvements in the quality of instructional practices for medium performing and low performing teachers.
Principal Visits and Collaboration with Teachers
Teachers are a diverse group of professionals who are passionate about their
students and their content (Schacter & Thum, 2004). At the high school level, the many
diverse content areas and specialties make it a challenge for principals to establish
themselves as instructional leaders in all areas (Ginsberg, 2001). Yet, good teaching
demonstrates many of the same characteristics whether in a physics class or in a history
class. If the principal establishes a clear understanding of high quality instructional
practices, teachers will recognize that principals can be helpful in improving instruction
in any classroom (Danielson, 2007; Marzano, 2003). Further, it is essential that teachers
know that the principal is familiar with their classrooms and instructional practices in
order to establish mutual trust and true collaboration. Such relationships develop from
rich dialog and frequent classroom visits that create collaboration between teacher and
principal (Ginsberg, 2001; Marshall, 2008).
The principal-teacher interactions used in this study were implemented as one
treatment. As such it was not possible to quantify which specific interaction may have
had more influence than another; however, the principals at this school suggested that
snapshots had a more significant effect than other interactions. The most important
component of snapshots was the structure they provided for collaborating with teachers side by side in the classroom and being involved directly with instructional practices (Hall & Hord, 2000). Although no empirical evidence is presented in this study to support it, the researchers believe that daily classroom visits and principal involvement in everyday teacher instructional practices had more impact than the other principal-teacher
interactions. It takes time and energy, but when snapshots are in place, the results are
rewarding for principals and teachers (Downey & Frase, 2001).
Rubric Based Assessment of Instructional Practices
It is far easier to assess teacher improvement or growth when a principal is
frequently in the classroom. The anecdotal records, conversations, and collaboration from
classroom visits establish an environment for personal growth that cannot easily be
documented on a more traditional checklist of teacher skills. In contrast, rubric-based evaluations, as defined by Danielson (2007), provide teacher and principal with an effective tool for improving instructional practices. The rich dialog that comes from discussing observations and teaching artifacts with the teacher deepens the conversation and makes the resultant collaboration invaluable (Cordingley, Bell, Rundell, & Evans, 2003; Danielson, 1996; Downey & Frase, 2001; DuFour & Marzano, 2009). Traditional evaluations that rate teachers on a scale from 1 to 4 without a clear description of what each rating means are ambiguous and ineffective (Danielson, 2007; Halverson et al., 2004). Thus, for quality evaluations to take place, evaluator and evaluatee must dialog around a rubric-based instrument and develop a mutual understanding of the learning taking place in the classroom (Danielson, 2007; Downey et al., 2004; DuFour, DuFour, Eaker, & Karhanek, 2004; Halverson et al., 2004).
Working with Teachers of Differing Qualities of Instructional Practices
The various levels at which teachers perform present a challenge for principals
(Villegas-Reimers, 2003). The principal must identify the strengths and weaknesses of
the teachers and shift strategies when working with each individual. High performing
teachers need feedback to validate their quality work. They also tend to be overly critical of their own skills, and this behavior, while contributing to improvement, often increases the likelihood of job burnout (Dunning et al., 2003; Kruger & Dunning, 1999). According to Pratt (1977), positive feedback is a powerful tool and motivator for high performing teachers.
Medium performing teachers need positive feedback as well as discussion to
improve their instructional practices. These teachers may fall through the cracks and not change their instructional practices because they go about their day-to-day routine with little feedback yet never do anything to garner criticism. Research (e.g., Danielson, 2007; Marzano, 2003; Fullan, 2005b; Whitaker, 2002) has demonstrated that improvement occurs when principals intentionally collaborate with all teachers, not only in evaluation years but every year. Principals must collaborate with medium performing teachers because, with coaching, some of these teachers can become higher performing, while others require attention to keep them from falling into the lower range of performance.
There seems to be no practical way to improve instructional practices for teachers
who are unable or unwilling to accurately assess the quality of their instructional
practices. Despite the treatment used in this study, low performing teachers as a whole
stayed low performing while perceiving themselves as high performing teachers.
According to Goldsmith and Reiter (2009), coaching is most successful when applied to people with potential who want to improve, not to people who have no interest in changing; this holds whether one is acting as a professional coach, a manager, a family member, or a friend. As discussed earlier, low performing teachers
may not have the ability to self-evaluate accurately or significantly improve instructional
practices even with the help of other more capable evaluators (Dunning et al., 2003;
Kruger & Dunning, 1999).
Unintended Outcomes
When designing the treatment for this study, the researchers did not consider the
possibility of unintended outcomes. We did not anticipate an initial increase in the
number of weak teachers leaving the school. We did not anticipate the development of
close relationships between principals and students, or that snapshots would supply
valuable information to improve the effectiveness of our conversations with parents.
Finally, we did not anticipate the increased level of job satisfaction for principals.
Exiting Teachers
Twelve Dixie Heights teachers resigned from the start to the finish of the study;
seven at the end of the pilot year and five after the year of full implementation. This level
of turnover is not typical for Dixie Heights, which lost on average fewer than three
teachers annually in each of the previous nine years and no more than five in a single
year. Exiting teachers reported various reasons for the exodus. At exit interviews, nine of the 12 teachers expressed concern and dismay that principals were in their classrooms so often as a result of the snapshot interaction; 10 of the 12 cited data being distributed to the entire faculty for review; and a third factor, reported by eight of the 12, was the monitoring and discussion of discipline in data reviews. All of the exiting
teachers further reported that if “left alone” they could continue to teach, but could not
teach under constant supervision.
Principal-Student Relationships
Another unintended outcome of the study was the strengthening of relationships between principals and students. By the end of the study, many students
indicated an increased level of comfort when approaching principals with potential
problems or concerns, which actually prevented more serious issues before they
developed. Students often made recommendations to principals to prevent specific student altercations, and when students were asked why they chose to cooperate more readily with the principals to reduce disciplinary issues, a typical student response was, "I see you [the principals] everywhere, so I thought you could help." As a result of the increased principal-student communication, student fights were prevented, squabbles were mediated, and potentially tardy students were sent to class in a timely fashion. Principals
being visible throughout the school and in the classrooms also contributed to the dramatic
decrease in student discipline referrals. When asked in conversation about the principals' classroom visits, students indicated they believed the principals were more engaged with the school than in the past. Often principals were able to prevent issues simply by being present to talk with students and redirect behavior (Downey et al., 2004; Frase & Hetzel, 1990), and these efforts helped increase the efficiency of the school and improve teacher instruction.
Principal-Parent Discussions
In addition to the unintended positive increase in principal-student discussions,
the quality of principal-parent discussions also improved. In many instances, when
parents came to school to discuss teachers’ instructional practices, principals were able to
dispel rumors or misconceptions much more effectively than in past years. For example,
a parent of a high achieving student came into a meeting with other parents and reported
that a specific teacher “just sat and read books” and told the students to “figure it out for
themselves.” The principal was able to access a tracking document of snapshots and
provide documentation of 67 classroom visits reporting only high quality instruction.
Increased Job Satisfaction for the Principals
Perhaps the most surprising and unintended result of this treatment was the
discovery that implementing the principal-teacher interactions at the core of this study
may reduce problems that often lead to principal burnout. Chief among these problems is
the lack of substantive principal connections to the students and teachers. Whitaker
(2003) noted that in addition to this isolation, principal burnout results from job ambiguity, loss of autonomy, and the increasing demands of the position. According to
Duke (1988), principals wanted to be a part of the faculty and involved with students in
order to increase job satisfaction, and the execution of the treatment used in this study
added much enjoyment to the principals’ work by increasing the frequency of positive
interactions with students and teachers. Two of the principals reported, “This is why I got
into administration, to be with students, and help teachers.” Another principal said that
she “would never leave the building if the principals could keep this program going in the
same direction.” One principal, who was in charge of the majority of discipline for the
school said, “I did not believe that being in classrooms would help keep discipline to a
minimum, but the time in classrooms made my job easier.”
Recommendations for Future Research
Further research on the particular treatments needed for teachers at various levels of performance would add to the body of knowledge currently being used to help teachers improve their instructional practices. The treatment used in this study must be applied to other schools to increase the generalizability of this study. Since this study was conducted in one school, we cannot conclude it would work the same way in other schools.
Further research is also recommended to determine how principal interactions in
the classroom could strengthen and support the walk-through model currently used by
many schools and districts. This study demonstrated that a district could strengthen its walk-through model with more frequent, informal visits that familiarize the principal with the teaching strengths and weaknesses of teachers. These principal-teacher interactions
would also strengthen conversations between the teachers and principals and provide
solid examples of quality teaching.
Additional research will be required to isolate the individual effects of each of the four principal-teacher interactions used in this study. For
example, data reviews may have directly affected grade distributions in this study, but
would data reviews as an isolated treatment improve grade distributions in a school?
Principals being present in classrooms on a frequent basis through snapshots seemed also
to directly affect discipline, but could an increase in principal visibility in classrooms as a
single treatment improve discipline in a school? Given that these effects were unintended
consequences for this study, it seems appropriate to investigate them further.
Core Principal Questions Affirmatively Answered by this Research
Ultimately, there are limits to what principals can directly affect in a school, but principals do have the opportunity to change how they interact with teachers. The overall goals of this study were to determine how a specific set of principal-teacher interactions affected instructional practices and to provide practical strategies that principals can use to improve a school without requiring additional personnel or financial resources. The following table of questions and answers (Table 28) does not present any new information beyond the discussion presented earlier in this study. These questions and answers do, however, address questions a principal might ask that can be answered from the results of this study.
Table 28
Core Principal Questions Affirmatively Answered by this Research
How can I find time to get into classrooms?
According to the findings in this study, if the principals of a school would commit to going into
classrooms on a frequent basis (1600 to 2400 classes a year in this study) and working collegially with
the teachers, discipline referrals would decrease. A number of things inhibit principals from getting into classrooms on a frequent basis, and many of them cannot be mitigated by the principals' actions. However, assuming that handling discipline referrals regularly takes up a portion of principals' time, once principals commit to getting into classrooms, they would make up some of the lost time through decreased discipline referrals.
How do I engage teachers in job related conversations about instructional practices?
According to the findings in this study, principals can engage teachers in meaningful job-related conversations by (1) conducting frequent classroom visits in which the principals become a part of the educational process and collaboratively discuss instructional practices and student performance, and (2) setting aside specific time during the summer to hold one-on-one conversations about each teacher's performance, professional growth, and expectations.
How do I get teachers to look at performance data of their students?
According to the findings in this study, principals can get teachers to consider the performance data of their students. This can be accomplished by gathering and analyzing the data for the teachers and then providing each teacher not only with their own data but also with the opportunity to see how the performance data of their students compare with those of other teachers within the school.
How can I increase principal job satisfaction?
According to the findings in this study, principal job satisfaction can be increased by spending more time
in classrooms, interacting with students and collaborating with teachers on instructional strategies. These
activities greatly increased the principals’ sense of connection to both students and teachers – a
connection which, when missing, can lead to job dissatisfaction. There is also strong indication in this
study that spending more time in classrooms will result in decreased discipline referrals, which can
increase principal job satisfaction by decreasing a duty which is often viewed as unpleasant.
How can I reduce discipline referrals?
According to the findings in this study, if the principals of a school would commit to going into
classrooms on a frequent basis (1600 to 2400 classes a year in this study), discipline referrals would
decrease. This is most likely a result of increased principal visibility but may also result from improved instructional practices.
How can I decrease failure rates (improve student grades) while increasing the quality of instructional
practices?
According to the findings in this study, principals can decrease failure rates while increasing the quality
of instructional practices. This can be accomplished by providing each teacher not only with their own classroom grade distributions but also with the opportunity to see how they compare with other teachers within the school. Additional improvements in student grade distributions can be made by working collaboratively with teachers whose students have higher failure rates than those of their peers to implement additional teaching strategies aimed specifically at improving student performance.
How can I know the actual quality of instructional practices?
According to this study, not all teachers have the ability to accurately assess the quality of their own
instructional practices. Thus, in order to know the actual quality of instructional practices in a teacher’s
classroom, principals must make frequent visits to the classroom and assess the quality of the
instructional practices with a valid and reliable instrument capable of accurately describing the quality of
the instruction occurring. Additionally, principals must be involved in frequent discussions with other
trained classroom observers to maintain reliability in the use of such an instrument.
In this study, implementing a few specific changes to principals' behavior enabled the principals to function more effectively as instructional leaders. A principal cannot control every aspect of a school, but principals can control the way they interact with teachers. Teachers are conscientious professionals who desire and benefit from quality collaboration aimed at improving the quality of instructional practices. Improving instructional practices is essential to increasing student performance within a school. This study concluded that introducing a few collaborative principal-teacher interactions into the school day improved instructional practices, student performance, and student behavior while enhancing principal job satisfaction, all without the use of additional personnel or financial resources.