2013-14 ACHIEVENJ IMPLEMENTATION:
KEY FINDINGS
Peter Shulman
Assistant Commissioner and Chief Talent Officer
July 8, 2015
Agenda
Major Findings from Year 1 Implementation Report
Teacher Evaluation Ratings for Selected Subgroup Populations
Next Steps
Major Findings from Year One
1. On June 1, the Department released results from the first year of the new educator evaluation system.
2. Overall, the first year of AchieveNJ implementation represents a significant step forward; our educators are no longer subject to a binary system that fails to provide meaningful feedback and promote growth for all.
3. As expected, most of our educators are high quality; however, differentiation among measures allows more nuanced conversations about the elements of practice that need improvement.
4. The Department continues to collaborate with educators to better understand the successes and challenges at the district level and to support innovation and improvement.
Statewide Final Evaluation Ratings
[Charts: Distribution of final summative ratings for teachers and school leaders statewide.]
Teachers:       Ineffective 0.2%, Partially Effective 2.5%, Effective 73.9%, Highly Effective 23.4%
School Leaders: Ineffective 0.2%, Partially Effective 2.4%, Effective 62.2%, Highly Effective 35.2%
Key Data:
• As we knew, the majority of our educators are high quality – but we now know much more about their work with students.
• Approximately 2,900 struggling teachers were identified last year; under the binary system the year before, districts reported fewer than 0.8% as “not acceptable.”
• These 2,900 teachers touch 13% of all NJ students – 180,000 of them; now they will be better supported to improve their impact on student learning.
While one year of this new data is insufficient for identifying sustained trends or making
sweeping judgments about the state’s teaching staff, we are proud of this significant
improvement and the personalized feedback and support all educators are now receiving.
Educator-Set Goals
[Charts: Teacher SGO Score Distribution (percent of teachers by SGO score) and Principal/Administrator Goal Score Distribution (percent of administrators by goal score). Scores cluster at the top of the 1.0-4.0 scale, with the largest single bars covering 62.0% of teachers and 53.5% of administrators.]
Key Data:
• Over 75% of teachers scored a 3.5 or better on SGOs.
• Over 93% of principals scored 3.0 or better on Administrator Goals.
• Understanding that these goals represented a big shift, the Department emphasized setting “ambitious but achievable” growth targets; districts emphasized “achievable.”
Districts must think carefully about whether their SGO and Administrator Goal scores provide
an accurate picture of the student learning taking place. The Department continues to
provide tailored support to improve the goal-setting process for educators.
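As a purely illustrative aid (not part of the Department's report), the check described above can be made concrete by tabulating what share of a district's SGO scores fall at or above a threshold such as the 3.5 cited in the Key Data; the score list below is invented for the example.

```python
# Illustrative sketch only: share of SGO scores at or above a threshold.
# The score list is invented; real SGO scores fall on a 1.0-4.0 scale.
def share_at_or_above(scores, threshold=3.5):
    """Return the fraction of scores that meet or exceed the threshold."""
    return sum(score >= threshold for score in scores) / len(scores)

district_sgo_scores = [4.0, 3.5, 3.5, 3.0, 4.0, 2.5, 3.5, 4.0]
print(f"{share_at_or_above(district_sgo_scores):.0%} of teachers scored 3.5 or better")
```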
Student Growth Percentiles (SGPs)
[Charts: Teacher mSGP Score Distribution, Principal/AP/VP mSGP Score Distribution, and a Final Score Comparison for mSGP and Non-mSGP Teachers across the summative rating categories.]
Districts can now examine SGP data
for trends along with other measures
of student success to inform
decisions ranging from resource
allocation and professional
development to lesson planning and
instructional strategies.
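As one illustration of that kind of trend analysis (a sketch under assumed data, not the Department's official calculation), a district could summarize student growth percentiles by teacher, since a teacher's mSGP is the median of his or her students' SGPs:

```python
# Illustrative sketch (assumed data layout, not the Department's official process):
# summarize student growth percentiles (SGPs) by teacher to look for trends.
from collections import defaultdict
from statistics import median

def median_sgp_by_teacher(records):
    """records: iterable of (teacher_id, student_sgp) pairs, SGPs from 1-99.
    Returns each teacher's median SGP (an 'mSGP'-style summary)."""
    by_teacher = defaultdict(list)
    for teacher_id, sgp in records:
        by_teacher[teacher_id].append(sgp)
    return {teacher: median(sgps) for teacher, sgps in by_teacher.items()}

# Made-up example data:
sample = [("T1", 62), ("T1", 48), ("T1", 71), ("T2", 35), ("T2", 40)]
print(median_sgp_by_teacher(sample))  # {'T1': 62, 'T2': 37.5}
```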
Key Data:
• The mSGP score has identified more than 74% of teachers and school leaders as achieving growth with their students in the effective range.
• Results show no disadvantage for 4th-8th-grade Language Arts or Math teachers; the vast majority of those receiving mSGPs were Effective or Highly Effective.
• Inspection of potential gaps between mSGPs and other component scores helps increase the accuracy and value of the whole evaluation.
Differences Across Districts
[Charts: Final summative rating distributions for teachers in two example districts (District 1 and District 2), showing how greatly distributions can vary; for example, 31.7% of teachers were rated Highly Effective in one district versus 16.7% in the other.]
Key Data:
• As shown, the distribution of final evaluation ratings can vary greatly at the local district level.
• In some cases, teachers may not be getting the individualized feedback they deserve.
• Local implementation will determine the value of AchieveNJ for a district’s educators.
Each district must examine the distribution of ratings at both the summative and component
level to identify trends that can inform improvements in supporting educators – and the state
is supporting this work.
Agenda
Major Findings from Year 1 Implementation Report
Teacher Evaluation Ratings for Selected Subgroup Populations
Next Steps
Data Made Available for Download
• On July 15, the following data will be available for download:
– Counts of teachers in each final summative rating category (Highly
Effective, Effective, Partially Effective, Ineffective) at the school,
district, and state level
– Counts of principals and assistant/vice principals in each final
summative rating category (Highly Effective, Effective, Partially
Effective, Ineffective) at the district and state level
Confidentiality Note: Evaluation data of a particular employee is confidential in accordance with the TEACHNJ Act and N.J.S.A. 18A:6-120, is not subject to the Open Public Records Act, and will not be released to the public. Thus, the data made available will not include anything personally identifiable to a teacher or principal/AP/VP.
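As a hypothetical illustration of how the downloaded counts could be used (the file name and column names below are assumptions, not the Department's actual download format), an analyst could convert the rating counts into percentage distributions like those shown throughout this report:

```python
# Hypothetical sketch: convert downloaded rating counts into percentage
# distributions by district. File name and column names are assumed,
# not the Department's actual download format.
import csv
from collections import defaultdict

CATEGORIES = ["Ineffective", "Partially Effective", "Effective", "Highly Effective"]

def rating_percentages(path):
    """Return {district: {rating category: percent of teachers}} from a counts CSV."""
    counts = defaultdict(lambda: defaultdict(int))
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["district"]][row["rating"]] += int(row["count"])
    return {
        district: {c: round(100 * by_rating.get(c, 0) / sum(by_rating.values()), 1)
                   for c in CATEGORIES}
        for district, by_rating in counts.items()
    }

# Example usage with a hypothetical file name:
# print(rating_percentages("teacher_rating_counts_2013_14.csv"))
```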
Important Notes for Subgroup Findings
• The Year 1 Report published on 6/1 did not include subgroup findings. Since publication, we have conducted additional analysis, received requests for more data, and are sharing this information now.
• While the slides that follow point to some challenges and opportunities, we must resist the urge to make sweeping judgments – especially since the analysis represents only one year of data.
• The Department recently partnered with Harvard’s Strategic Data Project to conduct a deeper study of the first two years of AchieveNJ results; we anticipate sharing more detailed information next year.
Final Teacher Evaluation Ratings by FRPL
and Priority School Status
[Charts: Final Ratings by Free & Reduced Price Lunch (FRPL) Status and Final Ratings by Priority School Status.]
• In districts with high concentrations of students with economic disadvantage, 1 in 12 teachers was rated less than Effective.
• 1 in 6 teachers in Priority schools was rated less than Effective.
• In these scores, SGP was not the driving factor; observations alone follow the same trend.
Final Teacher Evaluation Ratings by Novice
Teacher Status
[Chart: Final Ratings by Novice Teacher Status.]
• Experienced teachers were more than twice as likely to earn Highly Effective ratings as novice teachers.
• Novice teachers were more than twice as likely to earn Partially Effective ratings as experienced teachers.
• This early data supports national research and our recent efforts to improve preparation, induction, and support for novices.
Novice is defined as having 0-2 years of experience in the district; Experienced is defined as 3+ years in the district.
Final Teacher Evaluation Ratings by ELL
and SPED Teacher Status
[Chart: Final Ratings by ESL/Bilingual and Special Education Teacher Status.]
• Overall, we do not see major differences in the percentage of teachers in each rating category based on whether they teach ELL or SPED populations.
• As with the FRPL and Priority school findings, scores are consistent across measures – SGP is not the driving factor.
• We will continue to study subgroup trends to ensure the system does not unfairly disadvantage teachers of certain populations.
Agenda
Major Findings from Year 1 Implementation Report
Teacher Evaluation Ratings for Selected Subgroup Populations
Next Steps
Revisiting “The Widget Effect”
• TNTP’s 2009 report, “The Widget Effect,” raised national awareness about
the need for evaluation reform. Consider our progress against the 5 main
points of this report, along with related next steps:
1. Widget Effect Point One: All teachers are rated good or great
Progress:
• Increase in differentiation through multiple measures
• More nuanced feedback
Next Steps:
• Improve guidance on educator-set goals and full use of rubrics

2. Widget Effect Point Two: Excellence goes unrecognized
Progress:
• Achievement Coaches (AC), other recognition programs
• Ability to identify leaders across multiple measures
Next Steps:
• Continue work to promote teacher leadership in districts and by the state
Revisiting “The Widget Effect” cont’d.
3. Widget Effect Point Three: Inadequate professional development
Progress:
• Observation conferences foster individualized feedback
• Shift in professional conversations
Next Steps:
• Continue workshops and guidance to districts (e.g., SGO 2.1, AC professional development)

4. Widget Effect Point Four: No special attention to novices
Progress:
• Longer path to tenure with more support for new teachers
• Extra observation requirement
Next Steps:
• Complementary work on improving teacher preparation and certification

5. Widget Effect Point Five: Poor performance goes unaddressed
Progress:
• Extra support required for educators rated below Effective
• Expedited arbitration for those unable to improve after 2 years
Next Steps:
• Continue supporting use of Corrective Action Plans and due process
Continuous Improvement
• We are still in the early stages of this work.
• Efforts toward improving AchieveNJ in 2015-16:
– Focus group listening tour
– SGO 2.1 and assessment literacy modules
– Achievement Coaches professional development sessions
– Innovation and flexibility initiative
– Greater focus on principal evaluation
– Deeper study of findings with Strategic Data Project
Appendix: Observation Scores Reflect
Similar Trends to Summative Ratings
Summative Rating
                     Ineffective   Partially Effective   Effective   Highly Effective
70%+ FRPL               0.6%              6.7%            81.4%          11.2%
All other schools       0.1%              1.3%            71.8%          26.9%
Priority schools        1.9%             16.5%            76.7%           5.0%
All other schools       0.2%              2.2%            73.9%          23.8%
Special education       0.3%              3.1%            77.2%          19.3%
General education       0.2%              2.5%            73.4%          24.0%

Teacher Practice Rating
                     Ineffective   Partially Effective   Effective   Highly Effective
70%+ FRPL               0.7%              7.5%            81.6%          10.2%
All other schools       0.1%              1.6%            73.8%          24.4%
Priority schools        2.0%             16.8%            76.5%           4.7%
All other schools       0.2%              2.5%            75.6%          21.6%
Special education       0.3%              3.4%            78.2%          18.0%
General education       0.2%              2.8%            75.1%          21.8%