Facilitation Notes

FEBRUARY 2013
OVERVIEW
This session will deepen participant understanding of Danielson competencies 1e (Designing Coherent
Instruction), 3b (Using Questioning and Discussion Techniques), and 3d (Using Assessment in
Instruction) to support calibration. To address common misunderstandings around each
competency, participants will work backwards, identifying evidence that supports a specific rating
(rather than using evidence to arrive at a rating) and exploring disconfirming evidence to better
understand how to identify the preponderance of evidence. Participants will also explore the impact of
bias on classroom observations. Participants will then complete an independent calibration exercise by
rating a video on 1e, 3b, and 3d.
WELCOME AND INTRODUCTION TO A DEEPER DIVE ON
CALIBRATION (15 MINUTES)
OUTCOME
Participants will understand the importance of accuracy and consistency across multiple raters, the
challenges observers must address to ensure it, and the role of bias in undermining it.
FACILITATION NOTES
1. Explore Issues in Calibration. (15 minutes). OTE will discuss common calibration questions and
trends, including a description of calibration match rates and rationale trends from the January
Calibration Session for competency 3d, Using Assessment in Instruction.
ACTIVITY 1: COMPETENCY STUDY—JANUARY CALIBRATION
EXERCISE REVIEW (100 MINUTES)
OUTCOME
Participants will resolve common rating misconceptions and uncover key calibration decision-making
choices.
GUIDING QUESTIONS
 How do we identify key evidence to sustain a rating?
 What do expert raters focus on when determining where practice falls on the Framework continuum?
PART 1: DESIGNING COHERENT INSTRUCTION (45 MINUTES)
MATERIALS
 Overview and Rubric for Danielson Competency 1e
 9th Grade English lesson plan, Roseanne Cheng, with accompanying text
 In yellow facilitator’s folder: expert rating and rationale for Danielson Competency 1e
 Rationale Graphic Organizer (top half of the sheet for Part 1 of this activity)
FACILITATION NOTES
1. Introduction. (2 minutes). Table facilitators explain that for this activity, participants will review a
lesson plan and identify evidence for Danielson Competency 1e (Designing Coherent Instruction).
Participants will align the evidence they collect to the rubric for 1e.
2. Review Danielson 1e. (3 minutes). Facilitators direct participants to read the overview for
Danielson 1e to ensure they have the big picture of the competency. Participants then review the
rubric itself, starting with the Effective column. Participants should read the description of the
competency before they review the critical attributes.
3. Read lesson plan/poem and identify evidence. (5 minutes). Participants review the lesson plan
and accompanying poem. Participants annotate the lesson plan, taking note of the evidence in the
plan that aligns to the rubric for 1e (including learning activities, instructional materials and
resources, grouping of students, lesson structure, etc.).
4. Confirm expert rating and identify the evidence for it. (10 minutes). After participants have
identified evidence for 1e in the lesson plan and poem, facilitators confirm the expert rating for this
lesson on 1e using the Expert Rating Rationale sheet provided in the yellow facilitator’s folder.
Note: Facilitators only confirm the rating at this time. Later in the activity, facilitators will distribute
the accompanying rationale for participants to discuss.
Participants then review their coding of the lesson to identify specific evidence that supports the
expert rating. Each participant should identify at least three pieces of evidence to support the
identified rating, and write them on sticky notes.
Note: Even if participants disagree with the overall rating, facilitators direct them to find at least three
pieces of evidence to support the expert rating. Part of the goal of this activity is to build a shared
understanding of effective practice in order to drive improvement of instruction, not simply to assign a
rating.
5. Share evidence and chart. (10 minutes). In a whole-table discussion accompanied by a chart,
facilitators ask participants to share one piece of low-inference evidence they annotated that
supports the expert rating, explain how the evidence supports the rating, and then place their
sticky note on the chart.
Facilitators chart participants’ explanations of the evidence they post on the chart.
Specifically: How does this evidence support the expert rating?
Sample chart (two columns):
 Supporting Evidence: participants’ sticky notes
 Explanation of Evidence: facilitator charts participants’ explanations
6. Identify the most compelling supporting evidence. (5 minutes). Facilitators ask participants to
use the content of the chart to discuss the following questions:
 What evidence is particularly compelling?
 How does the evidence that supports the rating outweigh any disconfirming evidence we found?
7. Write the rationale. (5 minutes). Facilitators direct participants to work with a partner, using
the top half of the blank Rationale Graphic Organizer in their folders, to write a paragraph
explaining why the expert rating landed where it did. Participants refer to the weightiest evidence they
found; if they located evidence that disconfirms the rating, they explain why they considered it but still
understand the expert rating.
8. Share and discuss expert rationale. (5 minutes). Facilitators distribute copies of the Expert Rating
Rationale sheet for 1e provided in the yellow facilitator’s folder. Participants examine the expert
rating and rationale alongside the supporting evidence they identified and paragraphs they wrote.
In a whole-table discussion, facilitators ask participants to comment on the following:
 How did your rationales echo the experts’?
 What evidence did the expert raters consider that we may not have?
 What are our key takeaways about determining ratings for 1e?
PART 2: USING QUESTIONING AND DISCUSSION TECHNIQUES AND
USING ASSESSMENT IN INSTRUCTION (55 MINUTES)
MATERIALS
 Low-inference Evidence Collection Form
 Classroom video
 Overview and Rubric for Danielson competencies 3b and 3d
 In yellow facilitator’s folder: expert rating and rationale for Danielson competencies 3b and 3d
 Rationale Graphic Organizer (bottom half for Part 2 of this activity)
FACILITATION NOTES
1. Introduction. (2 minutes). The room facilitator explains that participants will do the following
during Part 2 of this activity:
 View a video of a lesson and record low-inference evidence using the handout in their folder
 Align their recorded evidence to Danielson Competency 3b, Using Questioning and Discussion
Techniques
 Identify evidence to both support and disconfirm the expert rating
 Leverage key understandings to write a rationale to support the expert rating
 Confirm the expert rating for Danielson Competency 3d, Using Assessment in Instruction
2. View video and record low-inference observations. (10 minutes). Participants watch the video
and record their low-inference observations on the Evidence Collection Form in their folders.
3. Review Danielson Competency 3b. (3 minutes). Facilitators direct participants to read the
overview for Danielson 3b to ensure they have the big picture of the competency. Participants then
review the rubric itself, starting with the Effective column. Participants should read the description of
the competency before they review the critical attributes.
4. Align low-inference evidence to 3b. (5 minutes). Using their low-inference observations,
participants work independently to identify evidence that aligns to 3b.
5. Confirm expert rating and identify the evidence for it. (5 minutes). After participants have
identified evidence for 3b, facilitators confirm the expert rating for this lesson on 3b using the Expert
Rating Rationale sheet provided in the yellow facilitator’s folder.
Note: Facilitators only confirm the rating at this time. Later in the activity, facilitators will distribute
the accompanying rationale for participants to discuss.
Participants then identify specific evidence that supports the expert rating. Each participant should
identify at least three pieces of evidence to support the identified rating, and at least one piece of
evidence that supports a different rating, and write them on sticky notes.
6. Share evidence and chart. (10 minutes). In a whole-table discussion accompanied by a chart,
facilitators ask participants to share one piece of low-inference evidence they collected that
supports the expert rating, explain how the evidence supports the rating, and then place their
sticky note on the chart.
Once each participant has placed a sticky note of supporting evidence on the chart, they may
share the disconfirming evidence they collected as well and add it to the chart.
Sample chart (two columns):
 Supporting Evidence
 Disconfirming Evidence
7. Identify the most compelling supporting evidence. (5 minutes). Facilitators ask participants to
use the content of the chart to discuss the following questions:
 What evidence is particularly compelling?
 How does the evidence that supports the rating outweigh any disconfirming evidence we found?
8. Write the rationale. (5 minutes). Facilitators direct participants to work with a partner, using
the bottom half of the blank Rationale Graphic Organizer in their folders, to write a paragraph
explaining why the expert rating landed where it did. Participants refer to the weightiest evidence they
found and explain why they considered disconfirming evidence but still arrived at the expert rating.
9. Share and discuss expert rationale. (5 minutes). Facilitators distribute copies of the Expert Rating
Rationale sheet for 3b provided in the yellow facilitator’s folder. Participants examine the expert
rating and rationale alongside the supporting evidence they identified and paragraphs they wrote.
In a whole-table discussion, facilitators ask participants to comment on the following:
 How did your rationales echo the experts’?
 What evidence did the expert raters consider that we may not have?
 What are our key takeaways about determining ratings for 3b?
10. Review evidence and confirm expert rating of Developing for Danielson Competency 3d. (5
minutes). In a whole-table discussion, facilitators explain that the January Achievement Institute
calibration data showed a very high rate of agreement between participants’ ratings and the expert
ratings for Danielson 3d.
As a result, to close Activity 1, participants need only align their low-inference evidence to the rubric
for 3d, and confirm a rating of Developing. Facilitators ask participants to share a rationale for the
rating of Developing for 3d. Facilitators distribute copies of the Expert Rating Rationale sheet for 3d
provided in the yellow facilitator’s folder. Participants examine the expert rating and rationale
alongside the supporting evidence they identified.
ACTIVITY 2: IDENTIFYING BIAS (55 MINUTES)
OUTCOME
Participants will increase awareness of how bias impacts evaluation and discuss strategies to counter
observer bias.
GUIDING QUESTIONS
 What are my unspoken assumptions about teaching?
 How do I recognize my own preferences and biases so that I can still render accurate ratings?
MATERIALS
 Instructional Practices Inventory handout
 Addressing Common Bias Errors in Observation handout
FACILITATION NOTES
1. Rank instructional practices. (3 minutes). Facilitators direct participants to the Instructional
Practices Inventory handout in their folders. Participants review the instructional practices and
quickly rank each item from 1 to 3 (1 = most likely to increase student learning, 2 = somewhat likely
to increase student learning, and 3 = least likely to increase student learning).
Note: Participants only have three minutes to rank, so they should go with their gut, rank the
strategies quickly, and move on. They should skip strategies they are not familiar with, but must rank
most of the items. They will have the opportunity to discuss these afterwards.
2. Triads share rankings. (10 minutes). Facilitators will group participants in triads to share their
lists, identify areas where their rankings do not agree with their partners’, and discuss the rationales
for their rankings.
3. Table Discussion. (10 minutes). Facilitators bring the group together and ask participants:
 What informed your rankings? (Elicit or clarify elements that informed their rankings, including:
personal experience; educational philosophy; knowledge of research on a particular strategy;
familiarity with the teacher being observed; the needs of a specific population; etc.)
 How might these values inform the observation process?
 This activity is intended to surface biases about instruction. Based on this activity, how would you
define bias in observation?
Note: A working definition of bias in observation is: a tendency to respond to practice in a way that
impedes objectivity. If participants are not ready to develop a definition, facilitators should offer this
one and ask participants to agree or amend. Facilitators should make sure the definition they elicit
gets at the key idea, stating this definition if necessary.
4. View video. (7 minutes). The room facilitator introduces the video and indicates that participants
will view a short clip of classroom practice. As they view the video, participants note the bias triggers
they observe. Facilitators remind participants that a bias can be a positive or a negative reaction
based on one’s own experience and knowledge.
5. T-chart on potential biases. (10 minutes). Facilitators create a chart titled: “Potential Bias in
Classroom Observation” (see sample below).
Facilitators introduce this charting activity by asserting that we all have values and
experiences that shape how we observe, what we notice, and what we think matters. In a fair
observation system, we must be aware of these biases so we can control them. Facilitators ask
participants to consider the kinds of biases that might inform teacher observation.
Specifically: in the video, were there things we saw in this classroom that might trigger bias?
Facilitators add participants’ responses to the chart as either positive or negative biases (positive bias
favors the teacher; negative bias works against the teacher).
Note: Participants may bring in other biases surfaced, such as those in the instructional practices
ranking activity or in their work. These can be included in the list as well.
Sample: Potential Bias in Classroom Observation
Positive biases (examples of possible participant responses):
 Students were in groups
 Teacher appeared prepared: cards were organized on the white board, and students had the
materials they needed
 Teacher had students figure out the key objective, rather than modeling it for them
Negative biases (examples of possible participant responses):
 Teacher appeared “wimpy,” seeming a bit afraid of the class
 Classroom seemed chaotic: books and papers in piles around the room, computers in odd places, etc.
 Teacher gave out worksheets with multiple-choice questions
6. Common bias errors. (10 minutes). Facilitators direct participants to the “Addressing Common Bias
Errors in Observation” handout in their folders. Participants review the categories of common bias
errors and the list of strategies to address them. Participants reflect individually on which of these
biases are most personally challenging, or emerge most frequently in their work.
With a partner, participants select and discuss the strategy that will be most useful in their work.
7. Reflection on bias in our own practice. (5 minutes). Facilitators ask participants to consider the
following questions:
 How does bias potentially inform our observations?
 How do we help school leaders recognize their own biases to increase their effectiveness?
ACTIVITY 3: INDEPENDENT CALIBRATION ASSESSMENT (70
MINUTES)
OUTCOME
Participants will demonstrate their level of calibration for Danielson competencies 1e, 3b, and 3d, and
support their rationale using strong evidence.
GUIDING QUESTION
 How do we accurately align evidence of teacher practice to the language of the Danielson rubric?
MATERIALS
 In yellow facilitator’s folder: small cards with examples of strong, medium, and weak evidence
 Video with accompanying lesson plan
 Low-inference Evidence Collection Form
 Overview and Rubric for Danielson competencies 1e, 3b, and 3d
 Participant Rating Rationale handout for all three competencies
 On tables: HEDI cards
 In yellow facilitator’s folder: highlighted rubrics for all three competencies
 In yellow facilitator’s folder: expert ratings and rationales for all three competencies
FACILITATION NOTES
1. Introduction and “warm-up.” (2 minutes). Facilitators explain the purpose of this activity: to
leverage today’s and prior learning to produce independent ratings for Danielson competencies 1e, 3b,
and 3d. Participants will turn in their independent ratings and then discuss them as a group.
Facilitators indicate that participants will warm up for the independent calibration assessment by
sorting examples of strong, medium, and weak evidence/rationales with a partner.
2. Strong/medium/weak evidence and rationale sort. (8 minutes). Facilitators pass out the
samples of evidence and rationales to pairs. Participants sort the samples into strong, medium, and
weak piles and identify why each sample fits into its category. To clarify, facilitators explain that
strong examples provide evidence and rationales aligned to the rubric, while weak examples tend to
include opinions and lack evidence.
Note: Samples A, D, and E are examples of strong evidence/rationales (Sample D can also be
categorized as medium); Samples B, C, and F are examples of weak evidence/rationales.
(The item indicated as medium was flagged by some reviewers as lacking strong alignment
between its evidence and rationale, though it does include low-inference evidence and a rubric-aligned
rationale.)
3. View video and record low-inference observations. (10 minutes). Participants watch the video
and record their low-inference observations on their Evidence Collection Form.
4. Independent calibration assessment: highlight rubric rows and complete rating rationale
sheets. (20 minutes). Participants highlight rubric rows across competencies 1e, 3b, and 3d that are
supported by their evidence, and record this evidence on their Participant Rating Rationale sheet for
all three competencies. They then complete their rationales and rate the observed practice.
5. Collect Participant Rating Rationale handouts. (1 minute). Facilitators collect participants’
completed handouts. Facilitators remind participants to make a note of the ratings they chose to
inform the upcoming discussion.
6. Share ratings and discuss evidence to support participants’ ratings for 1e. (5 minutes). In a
whole-table format, facilitators ask participants to hold up HEDI cards to indicate their ratings for 1e.
Participants share and discuss evidence that supported their rating for 1e. They should consider the
following question: What evidence and language in the rubric drove their rating? Facilitators should
elicit participants’ thinking behind their ratings.
7. Repeat for 3b and 3d. (10 minutes). Participants share their ratings using HEDI cards and discuss
evidence that supported their ratings for 3b and 3d.
8. Share and discuss highlighted rubrics and expert ratings/rationales for 1e, 3b, and 3d. (10
minutes). Facilitators distribute the highlighted rubrics and expert ratings/rationales sheets for 1e,
3b, and 3d from the yellow facilitator’s folder.
In a whole-table discussion, participants consider the following questions as they review the
documents:
 What evidence is highlighted on the rubric?
 How does this evidence compare to the evidence that you highlighted?
9. Reflection. (4 minutes). Table groups reflect on their learning by discussing the following questions:
 What are some observations that you have made about the process of calibration?
 How can you use this information to support school leaders with the teacher effectiveness work?