ENG 600 Project
Kay M. Hedrick

Research:
With the inauguration of the Common Core Standards, the United States found a new focus in
education. The majority of US states have now adopted these standards in an effort to revive learning
and bring career and college readiness to the forefront of our attention. As with any new initiative, new
buzzwords flood the arena. My project centered on one of these “buzzwords” and what it means in
terms of education within my classroom.
This buzzword is standards-based assessment. The term refers to “grading that references student
achievement to specific topics within each subject area,” according to Robert J. Marzano in his book
Formative Assessment and Standards-Based Grading (17). This approach differs from the typical mode
in use when I began teaching, just nine short years ago. Oddly, Marzano notes, the term was
popularized in 1993, long before it became a topic of interest in today’s pedagogy (17). He also
states, “While this system seems like good practice, without giving teachers guidance and support
on how to collect and interpret the assessment data with which scores like advanced, proficient, basic
and below basic are assigned, standards-based reporting can be highly inaccurate” (18).
This leads to the issues I experienced during the tenure of this project. The Classroom
Assessment for Student Learning method, hereafter referred to as CASL, was adapted by a
Somerset science teacher named Ken Mattingly and used effectively in his classroom. At the beginning
of the 2011-2012 school year, faculty in the Rowan County School District were required to participate
in professional development under Ken Mattingly’s tutelage in an effort to encourage the use of this type
of grading. In addition to standards-based grading, this system also scores for mastery, relying heavily
upon formative and summative assessment. Here the two techniques diverged: CASL did not follow the
same grading protocol, but it did prescribe the rubrics necessary to apply the Mattingly method.
According to Mattingly, students were to be graded on a 3-2-1
scale – 3 being Mastery, 2 being Developing Knowledge, and 1 being Novice. This is how grading was
completed during the initial stages of the project. There were several issues that
undermined the use of this method. Our audience, both students and parents, is accustomed to
grades determined on a percentage basis. In this method, a score of 3, or Mastery, was
awarded to any student scoring 86% or higher. The resulting confusion and frustration were
only aggravated by the fact that a score of 2, or Developing, was awarded to any student
scoring between a 66 and an 86. When entered into our online grade book, which has not been
modified to accommodate this system, a student scoring a 2 on any given target showed a 66% in the
grade book – a D on our current grading scale. Additionally, each and every target had
to be figured individually – very difficult to do when assessing multiple targets on a cumulative
assessment – then the percentage calculated and a number score assigned. Logistically, this made
assessing anything of value a nightmare.
The good that came from the system lies in the development of very specific rubrics for each
learning target or standard. According to Classroom Assessment for Student Learning, “the bottom
line” in developing a strong rubric is as follows: “Always include everything of importance on a
rubric, even if it is difficult to define. The things most difficult to define are actually those that most
need definitions” (207). CASL is right: these difficult things are the most important, which is why the
rubrics went through multiple revisions as student work was assessed and new specifics were identified.
Again, this is where the Mattingly method seemed logistically overwhelming in an English classroom.
So much of what constitutes the teaching of English – Writing, Literary Analysis, and Reading – is
subjective and difficult to define. And to pin down such a definition only to turn around and
apply it to a 3-2-1 scale was simply not feasible.
Therefore, at the conclusion of part one of ENG III, after discussing the issue with the
principal, the DAC, and the school’s curriculum coordinator, we decided to modify the system
so that a 10-point scale would more closely mirror the percentage scale that Infinite
Campus generated. Thus the second assessment policy was developed, and students and parents were
educated as to the changes. Since RCSHS operates on a trimester schedule, the implementation of this
form of assessment is still in its infancy. Revisions will be made as necessary. Debbie Howes, the
principal, has also attended recent training that will further educate Rowan County faculty as we
continue to work out this process. Professional development for all faculty on this new information
will occur in January. Because of this project, she and I have already discussed the new material, and I
am happy to see that my work is on the right track, but can still improve.
Finally, the one thing that remained consistent throughout the entire process was the set of strategies
taken from Seven Strategies of Assessment for Learning by Jan Chappuis. Regardless of the grading
system in use, these strategies have worked and have been found to improve student learning in
my classroom. I am certain that when the 120 students I taught first trimester truly mastered the unit
on the Glossary of Usage (4 failures total), it was because of these strategies:
Formative and Summative Assessments:
According to Chappuis, “The achievement gains realized by students whose teachers rely on formative
assessment can range from 15 to 25 percentile points, or two to four grade equivalents, on commonly
used standardized achievement test scale scores.” Additionally, “formative assessment practices greatly
increased the achievement of low-performing students” (3).
Clear Targets:
Chappuis adds, “If students don’t have a clear vision of their destination, feedback does not hold much
meaning for them” (17). Thus learning target logs, which clearly define what the student is expected to
learn, set students up for a better understanding of where they are and where they need to go.
Effective Feedback:
“When students’ efforts don’t produce success, use feedback that offers ‘direction correction’ to
maximize the chances that further effort will produce satisfactory results,” Chappuis advises (64).
Self-Assessment and the Setting of Goals:
Chappuis goes on to say, “By offering descriptive feedback we model for students the kind of thinking
we want them to engage in when looking at their own work. The next logical step is to hand over that
skill and responsibility. When students self-assess and set goals they develop an internal sense of control
over the conditions of their success and greater ownership of the responsibility for improving” (95). This
was accomplished as students used rubrics, feedback, and learning target logs, which allowed them to
track their progress.
Strategies 5, 6, and 7, which follow, are still in the process of being incorporated into instruction.
Lessons focus on one learning target or aspect of quality at a time. (This is difficult to do when the
school mandates the incorporation of various Intentional Reviews in preparation for ACT, EOC, and
On-Demand assessments.)
Students are taught focused revision.