A Brief Overview of the PACT Assessment System
In fall 1998, SB 2042 was enacted in California to maintain multiple pathways to a teaching credential while ensuring that, regardless of the pathway (e.g., student teaching, district internships, university internships), candidates meet a uniform set of standards.
Among other provisions, it established a requirement for all California candidates for a
preliminary teaching credential to pass a state-approved teaching performance assessment
with demonstrated validity and reliability to supplement training, course assignments and
supervisor evaluations. The California Commission on Teacher Credentialing contracted
with the Educational Testing Service to develop such an assessment. SB 2042 explicitly
allowed for the development of alternative assessments that were as rigorous as the state-developed assessment. Over the next four years, a job analysis of teaching was
conducted and a set of standards for prospective teachers, called Teaching Performance
Expectations or TPEs, was developed. Assessment Design Standards for the Teaching
Performance Assessments or TPAs were also developed.
Twelve institutions (eight University of California institutions, San Diego State, San Jose State, Stanford, and Mills) formed the PACT consortium in 2001 to develop an alternative assessment. The design was to consist of a common standardized assessment plus a set of other assessments that varied between programs. (See the diagram of the PACT assessment system.) The group committed to developing a common assessment while leaving room for unique program values and emphases; the varying assessments give programs a chance to represent those values and emphases in the PACT assessment system.
Teaching Event
The design of the common assessment, called the Teaching Event, was modeled after the
portfolio assessments of the Connecticut State Department of Education, INTASC (the
Interstate New Teacher Assessment and Support Consortium) and the National Board.
The common assessment was to draw from artifacts created while teaching, accompanied
by commentaries that provide context and rationales needed to understand and interpret
the artifacts. The common assessment was also to place student learning at the center,
with special attention to subject-specific pedagogy and the teaching of English Learners.
The assessment design chosen was that of a portfolio assessment, with Context, Planning,
Instruction, Assessment, and Reflection tasks documenting a brief segment of learning.
An integrated task design was chosen to prompt candidates to make connections across these tasks and to provide evidence for understanding a candidate's teaching of the learning segment in some depth, through the distinct lens that each task provides.
A central design team was established at Stanford University through foundation support.
This team worked in collaboration with subject-specific development teams composed of
faculty/supervisors from PACT member institutions to develop Teaching Events for
piloting in the 2002-03 academic year. Feedback and suggestions for improvement were
solicited from faculty/supervisors, trainers, and scorers the first year and used to direct
revisions for the 2003-04 academic year. This established the practice of viewing the
Teaching Event as a dynamic assessment that responded to demonstrated needs for
improvement identified by the field. The combination of the assessment expertise of the
design team, the expertise of faculty/supervisors in subject-specific pedagogy, and
reflection on candidate performance has resulted in constant improvement of the
Teaching Events.
Subject-specific pedagogy is represented in the Teaching Event through the foci for the
learning segment documented by the Teaching Event, the foci for the video clips, the use
of scorers with subject-specific expertise, and subject-specific benchmarks and training.
The foci for the learning segment and the clips were selected to represent critical teaching
and learning tasks in the credential area that occur with great frequency (to allow student
teachers options in selecting a learning segment to document). For example, the focus for Elementary Literacy is teaching students to comprehend and/or compose
text. For Single Subject mathematics, the foci are conceptual understanding, procedural
fluency, and mathematical reasoning. The prompts for the commentary and directions for
constructing the Teaching Event were revised over the years to more clearly direct the
candidates to plan, teach, and reflect on their teaching in light of what they know or learn
about their students and their learning. The questions were selected not only to gather data but also to represent enduring questions that candidates would be expected to continue asking themselves throughout their teaching careers, with increasingly complex responses.
The emphasis on English Learners also shifted over the pilot years. Initially, there was a
rubric in each category (Planning, Instruction, Assessment, Reflection, known as PIAR)
that focused on English Learners. However, the evidence generated was insufficient to support that number of rubrics, and student teachers' knowledge base was also judged to lack sufficient complexity. In addition, feedback from some candidates and their supervisors suggested that formal, academic language was an issue for more students than just English Learners, e.g., students who spoke African-American Vernacular English, students whose parents had low levels of education, and secondary students who were not accustomed to using disciplinary terminology or to writing complex texts. Focusing
the discussion of language issues narrowly on English learners was frustrating for these
candidates, especially when the number of English learners relative to other students
struggling with academic language was small. The concept of English Language
Development was broadened to that of Academic Language, and an expectation for
language development as well as content development for each learning segment was
established. We are continuing to explore how to represent developing the academic
language of English Learners along with that of other students, while avoiding a loss of
representation of English Language Development for English Learners.
Academic Language was added as a separate scoring category drawing from evidence
across tasks, resulting in a Planning, Instruction, Assessment, Reflection, Academic
Language or PIARL scoring structure. The constructs measured in the Teaching Event
and the scoring levels are described in an Implementation Resource called Thinking
Behind the Rubrics, which appears in the section on Training of Trainers and Scorers.
Embedded Signature Assessments (ESAs)
Development of the Teaching Event has outpaced development of the campus-specific
tasks, which have come to be known as Embedded Signature Assessments (ESAs). Data
for the Teaching Event alone is sufficient to meet the standards for high-stakes
assessments, including the Commission on Teacher Credentialing’s Assessment Quality
Standards. Therefore the Teaching Event is being submitted as the sole component of
PACT’s Teaching Performance Assessment at this time. However, development work on
the ESAs continues. Many programs have identified or developed assignments intended
to be ESAs, and have also developed more formalized scoring criteria or rubrics to enable
standardized scoring by more than one person. They are now piloting these ESAs,
collecting data, and analyzing the results in a process similar to that of the development
of the Teaching Event.
Use of scores for program improvement
Many programs have used scoring data as well as focused conversations stemming from
scoring similar student work across the entire program to identify program strengths and
targets for program improvement. The eleven rubrics of the Teaching Event, two to three for each of the five scoring categories (PIARL), provide feedback both for individual students and for the program as a whole. Taken together, the two to three rubrics within a scoring category target different constructs, yielding a multi-dimensional picture of performance in each category and a nuanced picture of candidate strengths and areas for growth.
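To make that scoring structure concrete, here is a minimal sketch in Python (purely illustrative: the five category names follow the PIARL structure described above, but the rubric labels, the score values, and the averaging logic are assumptions, not PACT's actual scoring procedure) of how rubric scores grouped by category could yield both candidate-level and program-level feedback:

from statistics import mean

# Five PIARL scoring categories, each with 2-3 rubrics (eleven in all).
# The rubric labels below are hypothetical placeholders.
PIARL = {
    "Planning": ["P1", "P2", "P3"],
    "Instruction": ["I1", "I2"],
    "Assessment": ["A1", "A2"],
    "Reflection": ["R1", "R2"],
    "Academic Language": ["AL1", "AL2"],
}

def candidate_profile(scores):
    """Average one candidate's rubric scores within each PIARL category."""
    return {cat: mean(scores[r] for r in rubrics)
            for cat, rubrics in PIARL.items()}

def program_profile(all_scores):
    """Average candidate profiles across a program to surface
    program-wide strengths and targets for improvement."""
    profiles = [candidate_profile(s) for s in all_scores]
    return {cat: mean(p[cat] for p in profiles) for cat in PIARL}

A low program-wide average in one category, say Academic Language, would then point to a target for program improvement, mirroring the use of scoring data described above.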
Teacher certification programs have used a variety of strategies to strengthen their PACT programs, many of which will be shared later in this Handbook. As a result, candidate performance on the Teaching Event has strengthened across the years. At the same time, the standardized Teaching Event does not appear to have unduly narrowed candidate performance: a variety of program values and emphases continue to be found throughout the PACT institutions, along with variation in effective approaches to instruction.
PACT today
Despite the suspension of the Teaching Performance Assessment (TPA) requirement in 2003, the PACT consortium not only renewed its commitment to continued development but actually grew in membership. With the reinstatement of the TPA requirement by SB 1209 in fall 2006, PACT has grown to 31 members. The Teaching Event was formally submitted to the California Commission on Teacher Credentialing for approval as a TPA in March 2007 and was approved. The consortium plans to continue
to work on the ESAs to complete the PACT assessment design, and to resubmit the
Teaching Event/ESA package when the ESAs meet the required standards for technical
quality.