Popham Phony Formative Assessments

Developing Strategies for Critical Reading:
Some First Steps
Task (Write It Here)
1. Turn the title into a question, or write a question (or questions) about the title.
2. What is the topic of this piece? How do you know?
3. Is this piece written for readers like you? How do you know?
4. Who is the author? What is his or her background? Is he or she likely to be knowledgeable on this topic?
5. Where was this article published? What is the readership of the publication? Can we trust this source?
6. Is this article current? How do you know?
7. Read the opening of the article. Draw a line where you think the introduction ends. What does the author's main idea (thesis?) seem to be?
8. Read the last paragraph. Summarize the author's conclusion. Does it link to his or her main idea?
9. Read the article that begins on the next page. As you reach each "Says and Does" point, summarize what the preceding passage says. Then try to label what it does rhetorically.
© Kathleen Dudden Rowlands
krowlands@csun.edu
Things texts can do:
• Introduce the topic
• Interest readers
• Make a claim
• Present evidence and/or examples
• Explain and/or define terms
• Analyze and interpret data
• Present counterarguments
• Draw conclusions
All About Accountability / Phony Formative Assessments: Buyer Beware!
W. James Popham
The term formative assessment is rapidly moving to the head of this year's education fad
parade. The reason is all too clear. Several years ago, Paul Black and Dylan Wiliam (1998) of
King's College London presented a persuasive review of empirical studies dealing with the
payoffs of well-conceived classroom assessments. The two British researchers concluded that
when schools used the results of classroom assessments to adjust ongoing instruction, students
not only mastered content better, but also improved their performance on external achievement
tests.
Given the pressure on educators to boost their students' scores on external accountability tests,
the notion that classroom assessments could contribute to higher test scores was alluring to
many education leaders. As news of Black and Wiliam's conclusions gradually spread into
faculty lounges, test publishers suddenly began to relabel many of their tests as “formative.”
This name-switching sales ploy was spurred on by the growing perception among educators
that formative assessments could improve their students' test scores and help their schools
dodge the many accountability bullets being aimed their way.
Does:
Says:
More than one test company official has confided to me that companies affixed the “formative”
label to just about any tests in their inventory. The companies sensed that the term would sell
tests and appeal to many pressured educators, who would, in desperation, grasp at any score-improvement straws they could find.
Assessment expert Lorrie Shepard believes that this approach, which is based solely on
marketing motives, is corrupting the meaning of the term formative assessment, thereby
diminishing the potentially positive effect of such assessments on student learning. During the
2006 National Large-Scale Assessment Conference, Shepard observed,
The research-based concept of formative assessment, closely grounded in classroom
instructional processes, has been taken over—hijacked—by commercial test
publishers and is used instead to refer to formal testing systems called “benchmark” or
“interim assessment systems.”
Does:
Says:
What, then, is formative assessment, and why is it so important for educators to understand
what's involved? For an assessment to be formative, teachers (and ideally students as well)
need to have the results in sufficient time to adjust—that is, form—ongoing instruction and
learning. According to Wiliam, the biggest instructional payoffs occur when teachers use "short-cycle" assessments, in which test results are available quickly enough to enable teachers to
adjust how they're teaching and students to alter how they're trying to learn.
Educators need to realize that the research rationale for formative assessment is based on
short-cycle assessments. Such rapid-turnaround assessments yield results during a class
period or in the midst of a multiweek instructional unit. If the results don't get back in time for
teachers to adjust instruction for the students being assessed, then it's not formative
assessment.
Does:
Says:
Profit-motivated testing firms (as well as dollar-driven consultants) may allege that
districtwide or even statewide assessments, referred to variously as “benchmark” or
“interim” tests, are, in fact, formative. But almost all these tests fail to get results
back in time for meaningful instructional adjustments to take place for the tested
students. Some take more than a month, especially those that require hand-scoring
of student responses. Because the results come back for topic X when the teacher
has already moved on to topic Z, such tests cannot be regarded as formative
assessments. At the very least, test companies have no right to proclaim the
effectiveness of their tests by riding on the research coattails of short-cycle
classroom assessments used formatively.
Moreover, for district-dispensed interim tests to spur timely and beneficial
adjustments in teachers' instruction, the administration of those tests would have
to mesh remarkably well with the curricular aims that teachers were addressing in
the district's classrooms at that specific time. Although this curricular concurrence is
possible, I've rarely witnessed it.
Does:
Says:
Just because these large-scale tests don't qualify as formative doesn't mean that classroom
assessments automatically can claim that advantage. Most classroom assessments will not
supply information to help teachers adjust their instruction unless teachers deliberately design
them to do so. Thus, the assessments can't be considered formative. Even if a teacher intends
to create a test whose results will permit instructional adjustments, not all those well-intentioned
tests will be as helpful as the teacher hoped. Many classroom tests are consummately cruddy.
Does:
Says:
Properly formulated formative classroom assessments (or even sufficiently short-cycled district
assessments) can help students learn better and can improve those students' scores on
external accountability tests. Persuasive empirical evidence shows that these tests work;
clearly, teachers should use them to improve both teaching and learning.
I am not suggesting that longer-cycle tests, such as the so-called benchmark or interim tests
that we often run into these days, are without merit. They quite possibly may enable teachers to
make useful longer-term changes in instruction and curriculum. But if you encounter such a test
that is glowingly labeled as "formative" and is swathed in research results associated with short-cycle classroom assessments, don't be hoodwinked by the sales pitch. In the future, evidence
may show that benchmark or interim tests are instructionally beneficial in the short term. But
research currently does not support that claim.
Does:
Says:
References
Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom
assessment. Phi Delta Kappan, 80(2), 139–148.
Shepard, L. (2006, June 26). Panelist presentation delivered at the National Large-Scale
Assessment Conference, sponsored by the Council of Chief State School Officers, San
Francisco, CA.
W. James Popham is Emeritus Professor in the UCLA Graduate School of Education and Information Studies;
wpopham@ucla.edu.
Educational Leadership, November 2006, Volume 64, Number 3, pages 86–87.