Collaborative Portfolio Assessment in the English Secondary School System

David R. Russell
Preprint. Please check published version before citing.
"Collaborative Portfolio Assessment in the English Secondary School System." Clearing House 68
(March/April 1995): 244-47.
England has a mass assessment system radically different from that of the US. It depends not on machine-scored multiple-choice tests but on the experience and professional judgment of teachers working
collaboratively to assess both curriculum effectiveness and individual student performance. Students
are assessed on what they have been taught in specific courses, on their performance in a curriculum,
in other words, rather than on their academic ability or general knowledge. In order to measure
performance, assessment in England has been moving for over two decades toward a system based
on grading portfolios of student work prepared during their course work (course work folders, as
they are called). In this way, assessment is tied directly to the curriculum, so that students, teachers,
and parents are able to share identified goals and work toward them together.1
In the last decade, several groups in the US have also been working toward performance
assessment that is tied to the curriculum and assessed collaboratively by teachers: the New
Standards Project, the College Board Pacesetter Project, and several state assessment projects. This
paper describes the English system not as a model to be imitated—there are profound differences in
the two societies and their education systems—but as a point of reference, a means of seeing the US
system and the recent reform efforts in comparative perspective.
The Curriculum and the Portfolios
Secondary school in England is divided into two stages, the first more general (roughly ages 14 to 16), and the second more specialized (roughly ages 16 to 18). Since schooling to age 16 is compulsory in England, virtually all students complete the first stage, consisting of seven or more
courses that are specifically tied to the assessment system. Students ordinarily take English,
mathematics, history/geography, science, foreign language(s), fine arts, and practical arts. Unlike US
courses, most of these courses last two years, and often the same teacher stays with the students both
years. Students are assessed on their performance in each course, on their mastery of the content
and skills taught in each, not on their general knowledge or ability.
Those who pass receive a General Certificate of Secondary Education (GCSE) and qualify to
go on to the second two-year stage, called "A-levels," where they take fewer and more specialized
courses. Those who pass the assessment at age 18 qualify to go on to a university. Students who fail
either the GCSE or the A-levels have a range of other options for training and can take alternative
routes to university education. But both GCSE and A-levels are "high-stakes" examinations with
clear and important consequences.
During most of these two-year courses, students complete projects that go into their
individual portfolios. They also take some more conventional timed essay or practical examinations.
The portfolios contain materials of various kinds. For example, in geography, students submit four
projects based on field trips and other research, each containing more than a dozen pages of text, plus
photographs, drawings, maps, diagrams, graphs, and tables. In English language and literature,
students submit several genres of writing (creative, informational, analytic, etc.) and videotapes of
oral presentations. In science, students conduct experiments in three-person research teams,
monitored by teachers, and prepare lab reports on them.
Students typically choose their own topics for their portfolio projects, with the advice and
approval of their teachers. Teachers help students select projects that will challenge them to move
toward the goals of the course and meet the assessment criteria, while at the same time developing
their own interests. This both allows for diversity and increases individual motivation. Most
importantly, it moves pedagogy toward a small-group and individual project-oriented approach and
away from lecture and textbook-dependent memorization.
Tying Assessment to Curriculum: Examining Boards
As with the ACT and SAT in the US, mass assessment is conducted by independent, nonprofit organizations, called Examining Boards, that market their assessment services to schools
nation-wide. The Boards compete with each other, which creates some differences among them (one Board may require fewer but longer papers in an English portfolio, for example). But they all
operate on the same general principles and must follow the strict code of practice of the national
Schools Examination and Assessment Authority. And all five Boards' assessments are treated as
equivalent by employers and by higher education. A grade of “A” from one Board is given the same weight as an “A” from any other Board for purposes of hiring and admissions.
What is assessed? The English system assesses what is taught in each course—the
curriculum. The curriculum of a specific course in US secondary schools is primarily guided by the
choice of a textbook. In England, the curriculum is guided by the choice of a syllabus. Syllabuses
are produced by each of five Examining Boards and marketed to schools as part of their assessment
service. To choose an Examining Board, then, is to choose a syllabus. A syllabus, however, is not a detailed course outline and is not meant to be one. First, it is a statement of broad aims. For
example, the Examining Board syllabuses for English Language and Literature for the GCSE (age 14
to 16) each contain about four pages of Aims and Assessment Objectives, covering all of language,
literature, and composition.
Second, the syllabus is a description of the assessment process. It gives a range of
requirements for what goes into portfolios, broad descriptions of the kinds of things students should
be able to show they can do. It does not prescribe specific assignments or readings. The specific
assignments are developed by the teachers in each individual department, who construct their courses
based on their own strengths and their students' needs. Finally, the syllabus gives detailed
Assessment Criteria for each part of the portfolio.
Since 1989, the syllabuses from all the Examining Boards have been guided by the National
Curriculum. That is not a detailed outline of work either, but rather general goals for instruction in
each content area. And there are differences in the ways the five Boards interpret the broad National
Curriculum standards.
The choice of Examining Board is usually up to each content area department within a
school, and there is seldom pressure from the school district to choose one Examining Board (and
thus one syllabus) over the others. Most people feel the competition among Examining Boards
encourages innovation.
Conducting Assessment: Teachers and "Moderation"
How then are the portfolios of student performance assessed? How are hundreds of thousands
of individual portfolios judged in such a way that students, parents, institutions of higher
education—and the society at large—accept those judgments as objective assessments of the
performance of schools and of individual students? The key to the English assessment system is that
it is collaborative. Just as the performances of figure skaters, gymnasts, divers, and artists are
assessed by several raters to avoid the subjectivity of individual judgment, so each student
performance is assessed by several teachers.
But unlike athletic or artistic performances, the judgments of individual teachers or groups of
teachers are systematically checked at several further levels against the judgments of their peers in a
process called "moderation" (Gipps and Stobart, 1993) Though the process differs somewhat among
Boards, it basically consists of five phases.
1. Collaboration to set and disseminate standards
Each Examining Board has a group of very experienced and respected teachers who
supervise the assessment. They are regular classroom teachers or retired teachers who work on a part-time or consultant basis. For each content area, the group consists of one or two Chief Examiners, a
number of Team Leaders, and, for each team leader, a group of six to 10 Moderators who are each
responsible for 10 to 12 schools (see Figure 1). The first step is to develop the syllabus, which is revised every five years or so. The Board puts together a group of expert teachers (who may or may not
also serve as examiners) to create or revise the syllabus. They also collect some ten sample student
papers, of varying quality, responding to the syllabus requirements (usually papers/videotapes/etc.
from last year's portfolios). These are called the Trial Marking Materials. The Chief Examiner,
Team Leaders, and Moderators meet to discuss these Trial Marking Materials and arrive at a
consensus on grades for them. The Trial Marking Materials serve as “benchmarks" of performance
against which student work will eventually be measured. The Board then publishes the syllabus and
the Trial Marking Materials (without their grades) and distributes them to teachers nationally, who
then choose among the five Boards’ syllabuses.
Figure 1: Organizing Collaborative Portfolio Assessment
Examining Board:
    Chief Examiner
    Team Leaders
    Moderators
School:
    Teacher Examiner
    Teachers teaching a (2-year) course
In each school, the teachers of each two-year course choose one Examining Board's syllabus
and select a Teacher Examiner to supervise the school's assessment and communicate with the
Board. Teachers individually read and grade the Trial Marking Materials from the Board they have chosen. They then meet in a trial marking session to discuss their grades and arrive at a consensus
school grade on each of the Trial Marking Materials, which they send to the Board.
Next, the Teacher Examiner from each school (and any other teachers who wish to attend) meets with the Moderator to analyze the Trial Marking Materials and discuss standards together. In this way, the
standards are shared among teachers. There is general agreement on what students should be able to
do after finishing a course and on what it means to successfully demonstrate that they can do it. This
consensus guides teaching and learning.
2. In-school collaborative assessment
At the end of the two-year course, after the students have completed their portfolios, each
teacher grades the portfolios from her own classes. She uses the Assessment Criteria in the syllabus
as interpreted by her and her colleagues collaboratively at their school's Trial Marking session and at
their meeting with the Moderator and teachers from the other schools in the area that use the same
Board's syllabus.
When all the teachers in a school who teach the course have finished grading their own
students' portfolios, they meet again in a school standardizing meeting. The aim of the meeting is to
arrive at a consensus on grades by putting all the portfolios done by the students in rank order. One
method is to sort all the portfolios of all the students into boxes by the grades the students received. Two
teachers together look at one box (the B's for instance) and put the portfolios in that box into rank
order. Then the teachers in that group discuss the borderline cases with the teachers looking at the
boxes of portfolios with grades just above and just below (the A's and C's). They collaboratively
decide whether the top ones in each box should go up to the next grade or the lower ones should go
down to the next grade. When disagreements arise, all of the teachers discuss the portfolio and
arrive at a consensus. The process takes from two to four days, depending on the number of portfolios
and the experience of the teachers. (Substitute teachers are hired to provide the necessary time.)
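To make the mechanics of such a meeting easier to picture, the following short Python sketch models the sorting-and-ranking step in rough computational terms. It is purely an illustration added here, not a procedure used by the schools or the Examining Boards; the grade labels, the numerical scores, and the portfolio identifiers are hypothetical stand-ins for judgments that are in practice qualitative and negotiated.

    # Illustrative sketch only: group portfolios into grade "boxes," rank-order
    # within each box, and flag the borderline cases that neighbouring pairs of
    # teachers would discuss. Grade labels, scores, and IDs are hypothetical.
    from collections import defaultdict

    GRADE_SCALE = ["A", "B", "C", "D"]  # hypothetical scale, best to worst

    def standardize(portfolios):
        """portfolios: list of (portfolio_id, teacher_grade, provisional_score)."""
        boxes = defaultdict(list)
        for pid, grade, score in portfolios:
            boxes[grade].append((pid, score))
        for grade in boxes:
            # rank order within the box, best first
            boxes[grade].sort(key=lambda item: item[1], reverse=True)
        # the top and bottom of each box are the cases discussed with the
        # boxes just above and just below
        borderline = {g: (boxes[g][0], boxes[g][-1]) for g in GRADE_SCALE if boxes[g]}
        return boxes, borderline

    sample = [("p1", "B", 62), ("p2", "B", 68), ("p3", "A", 75), ("p4", "C", 55)]
    boxes, borderline = standardize(sample)
    print(borderline["B"])  # candidates to weigh against the A and C boxes

Whether a borderline portfolio actually moves up or down is, of course, settled by the teachers' discussion rather than by any score.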
The standardizing meeting, say many teachers, is the most valuable aspect of the assessment
system, for teachers share their values and their practice with each other. They must compare their
practice and standards to those of their colleagues and discuss them openly and frankly. They learn in
a very direct way which assignments work well and which do not. And this shared knowledge
shapes teachers' curricular planning for next year. One teacher described it as "the best in-service
training one can have, because we're giving it to each other."
3. Going beyond the school to collaboratively set final grades
The Teacher Examiner from each school sends a grade sheet with all the grades on all the portfolios in the course to the school's Moderator. The Moderator looks at the grade sheet and the Trial
Marking grades from the teachers in each of her 10 to 12 schools. She then requests a sample of
portfolios to read, mainly those on grade boundaries.
The Moderator then begins reading the samples. Though she has already attended a trial
marking session with Board colleagues and a meeting with representatives from her schools, she
needs further consultation to ensure her grading is consistent with that of other Moderators around the
nation. During the first week of her reading, she grades and comments on 10 portfolios and sends
those to her Team Leader, who checks her assessments against those of the other Moderators. If a
Moderator's assessments differ greatly from those of the other Moderators, the Team Leader
can request another sample and have further discussion to ensure the grading is consistent.
The moderator then reads all the sample portfolios from her schools. If necessary, the
Moderator can move all a school's grades up or down in grade categories where she judges a school
is grading too high or too low, though she cannot change the rank order of the portfolios determined
by the teachers in the school. That is, if she judges that the school is giving C-pluses to portfolios
that other schools and the Board would give B-minuses to, she can raise the grades on a school's C-
plus portfolios to B-minuses.2 In these instances, the Moderator discusses boundary cases with the
school, the Team Leader and Chief Examiner. The Moderator is generally from the same geographic
area as her schools and she develops a strong and supportive relationship with those schools.
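The constraint at work here, that a Moderator may shift an entire grade category for a school but may not disturb the school's own rank order, can be pictured with a brief sketch. The Python fragment below is only a reader's illustration, not anything the Examining Boards use; the grade ladder and the example portfolios are hypothetical.

    # Illustrative sketch only: a Moderator's adjustment raises or lowers every
    # portfolio the school placed in one grade category, leaving the school's
    # own rank order intact. The ladder and example data are hypothetical.
    GRADE_LADDER = ["A", "B+", "B", "B-", "C+", "C", "D"]  # best to worst

    def shift_category(school_grades, category, steps):
        """school_grades: dict of portfolio_id -> grade, in the school's rank order.
        steps < 0 raises the category on the ladder; steps > 0 lowers it."""
        adjusted = {}
        for pid, grade in school_grades.items():
            if grade == category:
                i = GRADE_LADDER.index(grade) + steps
                i = max(0, min(i, len(GRADE_LADDER) - 1))  # stay on the ladder
                adjusted[pid] = GRADE_LADDER[i]
            else:
                adjusted[pid] = grade  # other categories are untouched
        return adjusted

    # The example from the text: C-plus portfolios judged to merit B-minuses.
    school = {"p1": "B", "p2": "C+", "p3": "C+", "p4": "C"}
    print(shift_category(school, "C+", steps=-1))
    # {'p1': 'B', 'p2': 'B-', 'p3': 'B-', 'p4': 'C'}

Because every portfolio in the category moves together, the school's relative ranking of its students is preserved.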
The moderation system takes diversity into account without compromising standards. The
cultural backgrounds of students in a school may influence the type of courses the teachers design
and the kinds of projects the students do for their portfolios. Moderators must understand this when
assessing the portfolios. Students from one school may read and write about Caribbean literature, for
example, or do geography projects that involve the inner city experience of Asian immigrants. Yet
these projects can meet the standards as well as those of students from the majority culture.
After the Moderator’s work is complete, she sends a report on each school and two samples of ten portfolios from her schools to her Team Leader, who checks her assessments.
The next step is an Examining Board standardizing meeting. The Team Leaders discuss the work the
Moderators have done and compare the portfolios which the Moderators have sent them as samples.
If a Moderator is thought to be a bit out of line, it may be necessary for the Chief Examiner to look very carefully at the adjustments that the Moderator has made to specific schools and to call in more portfolios from that Moderator. Sometimes, if a Moderator is consistently out of line, all the grades
she has moderated can be moved up or down. But this seldom happens, as differences have almost
always been resolved at an earlier stage.
The final step is a meeting of the Chief Examiner and one or two outside consultants. This
meeting is also sometimes attended by officials from the Ministry of Education and the school
accrediting organization. They first look at examples of portfolios at each of the grade cut-off points
(A-B, B-C, etc.). Then they look at the statistics on the percentages of students achieving each
grade, the performance of different schools, the assessments of different moderators, and information
on last year's results, including some portfolios from the previous year. At this final stage, they may
adjust the cut-off points, putting fewer or more students into a certain grade category, though there is
only occasionally any need to do this. Although it is possible at this stage to adjust the number of
students in a category to keep the same pass rates from year to year, this is frowned upon since it is a
form of norm-referencing that violates the principles of criterion-referencing on which the system is
based. Once final grades are established, schools are sent the results for each student, the
Moderator's report, and sample portfolios with the Moderator's comments.3 Schools then send individual
grade reports to each student. The entire process, from the first grading by the course instructor to the
release of final grades, takes two to three months.
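The distinction drawn here between criterion-referencing and norm-referencing can be made concrete with one last small sketch. The Python fragment below is an illustration added for readers, not part of the assessment system itself; the scores, the cut-off, and the pass rate are hypothetical.

    # Illustrative sketch only: under criterion-referencing the standard is fixed
    # and the pass rate varies; under norm-referencing the pass rate is fixed and
    # the standard varies. Scores, cut-off, and pass rate are hypothetical.
    def criterion_referenced(scores, cut_off):
        """Pass everyone who meets the fixed standard, whatever the pass rate."""
        return [s >= cut_off for s in scores]

    def norm_referenced(scores, pass_rate):
        """Pass a fixed proportion of the cohort, whatever the standard."""
        ranked = sorted(scores, reverse=True)
        n_pass = round(len(scores) * pass_rate)
        threshold = ranked[n_pass - 1] if n_pass > 0 else float("inf")
        return [s >= threshold for s in scores]

    cohort = [48, 55, 61, 67, 72, 80]
    print(criterion_referenced(cohort, cut_off=60))  # the pass rate follows performance
    print(norm_referenced(cohort, pass_rate=0.5))    # the standard follows the cohort

Keeping last year's pass rates by adjusting this year's cut-offs would amount to the second pattern, which is why the practice is discouraged.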
Conclusion
Because of the many checks and balances, the many negotiations, collaborative portfolio
assessment is a complex process. But over the years, as schools and moderators have become more
accustomed to collaboratively assessing student work, a tradition has developed that makes the
whole process a familiar and valued part of teachers' work and the education system's functioning in
society (School Examinations Council, 1986).
The assessment is relatively expensive. It costs about $30 per student for each course
assessed, a cost borne by the local education authority (LEA), the English equivalent of a US school district. But it is a price the English education
system has been willing to pay for an assessment system that is integral to the curriculum of each
content area. Multiple choice tests have never been seriously considered as an alternative.
As with any assessment system, there are controversies, and these controversies have become
particularly heated as the recently introduced National Curriculum standards are negotiated (Gipps and
Stobart, 1993). Groups with various commitments and interests argue that there should be greater or
lesser emphasis on projects versus sit-down, timed essay examinations, that the results should or
should not be linked to teacher and school assessment, or that the content of particular courses
should be changed. Indeed, tying assessment directly to curriculum makes these debates meaningful
in a way that they cannot be in the US, where the influence of assessment on curriculum and
pedagogy is largely hidden—but no less real.
Research in the US over the last 20 years has shown that schools and teachers "teach to the
test," as long as the test has important consequences. That is why it is so important to have "a test
that is worth teaching to." Multiple-choice assessment increases curricular attention to factual
information and to the particular forms that information takes on multiple-choice tests (Resnick and
Resnick, 1992). Collaborative portfolio assessment increases curricular attention to student projects
in a range of performance types—written, oral, and practical—that all require students to synthesize
facts and communicate information for a complex task. Many believe it is these higher-order skills
that students must learn and assessments must encourage for the US to be competitive in a global
economy.
Reformers in the US who have been working toward performance assessment through teacher
collaboration face a host of challenges in developing valid, reliable, and socially credible assessments.
But the English experience shows that even in a decentralized mass education system, it is possible
to have large-scale collaborative portfolio assessment. It requires teachers and administrators willing
to assume responsibility for connecting a curriculum that teaches high-level skills to an assessment
system that depends on teachers' collective professional judgment.
Notes
1. My thanks to the Institute of Education, University of London, for their support during
my time there as a Visiting Fellow, particularly to Tony Burgess, Nancy Martin, and Jane Miller.
My thanks also to Anne Barnes, Terry Furlong, and others of the National Association of
Teachers of English, and to the students, teachers, and administrators—too numerous to
mention—of Islington Green Secondary School, London, and Preston Manor School, Wembley,
who were so generous with their time in allowing me to observe their work and interview them.
Any errors in this account are of course my own. This project was funded in part by the British
Council.
2. If a particular portfolio seems out of place in the rank order, the Moderator will discuss
it with the school and may suggest (but not require) a change in order.
3. Individual students cannot appeal a grade, but schools can appeal and have all their
portfolios regraded. This expensive and time-consuming process is rare since the judgments of
the graders have been checked at so many levels and are therefore highly credible.
References
Gipps, C., and G. Stobart. 1993. Assessment: A teacher's guide to the issues. 2nd ed. London: Hodder.
Resnick, L. B., and D. P. Resnick. 1992. Assessing the thinking curriculum: New tools for educational reform. In Changing assessments: Alternative views of aptitude, achievement and instruction, edited by B. R. Gifford and M. C. O'Connor, 37-75. Boston: Kluwer.
School Examinations Council. 1986. Policy and practice in school-based assessment. London: School Examinations Council.