CLASSROOM ASSESSMENT AS A BASIS OF TEACHER CHANGE

Jan de Lange
Freudenthal Institute
Background and Conceptual Framework
During the past decade, but particularly during the last three years, our research on classroom
assessment practices has documented that formative assessment can play a significant role in
pedagogy that promotes learning with understanding. To teach for understanding, teachers must
(a) assess students’ prior knowledge, skill, and understanding; (b) use this information as a point
of departure for instruction; and (c) monitor their students’ progress during the teaching–learning
process. There remains a significant need on the part of researchers and teachers to develop a
deeper understanding of the processes and purposes of formative classroom assessment, to
identify features of exemplary formative assessment practices, and to study how to enhance the
frequency and quality of feedback in a wide range of classroom settings (Bransford, Brown, &
Cocking, 1999).
To date, the evidence we collected in Providence (via the Research in Assessment Practices
project) and in the Verona Schools (via the Middle School Design Collaborative), and summaries
of other work (by the Assessment and Achievement Study Group) support the contention that
formative assessment by teachers, along with appropriate feedback to students, helps students
learn with understanding. Although data from our current research are still being summarized,
two findings are clear. First, most mathematics teachers have limited understanding of formative
assessment practices and, thus, provide their students with incomplete information about their
progress (Romberg, 1999); and second, teachers can learn to use such practices in their
classrooms as a consequence of appropriate professional development, and, in turn, their
students’ achievement improves (Black & Wiliam, 1998).
Classrooms at our research sites provided examples of exceptional student achievement on
internally developed problem-solving assessments and externally administered standardized
achievement tests. Patterns of teachers’ assessment practice evident across these classrooms
included teacher attention to student thinking, explicit expectations for all students, and ongoing
opportunities to provide feedback. Over time, teachers developed a more comprehensive view of
assessment and began regularly to use a wider range of assessment strategies. The increased
attention given to student learning (via assessment) facilitated a greater orientation to the study
of relations between content, instruction, and the evolution of student understanding as students
progressed from informal to formal reasoning in mathematical domains. It is our contention that
the study of classroom assessment can serve as a basis for reorienting teacher practice so that it is
flexible and more sensitive to students’ understanding of mathematics.
In line with the research reported by Black and Wiliam (1998) and working from Simon’s
(1995) model of the mathematics teaching cycle and hypothetical learning trajectories (HLTs),
we agree that mathematics instruction that promotes understanding is highly dependent on
appropriate formative assessment practices and tools that can be used to monitor and provide
feedback to students as they progress through a particular unit of instruction. As such, formative
assessment is central to the monitoring of student learning and achievement. Current
investigations of HLTs often refer to a broad view of learning for understanding that accounts for
connections and relationships to prior knowledge, informal conceptions, and the new
connections gained from classroom activities (Carpenter & Lehrer, 1999). Similarly,
hypothetical assessment trajectories (HATs) require an availability of student responses that can
be observed and heard so that teachers (and students) can reflect, infer, and act on interpretations
and solution strategies to a problem situation.1 Using learning and assessment trajectories as a
developmental framework requires teachers to conceptualize student learning as a network of
desired developmental paths, accessible and achievable through appropriate sequencing of
learning opportunities. Thus, a perspective that actualizes developmental trajectories assumes
that a collection of engaging classroom activities is insufficient to achieve desired learning with
understanding. Rather, teacher conceptions of mathematics and organization of activities need to
be reexamined according to students’ current conceptions of mathematics and how mathematics
is learned. Although teachers are generally receptive to experimenting with curriculum materials
that have the potential to promote learning with understanding, they are unaware of the need to
reconsider their current assessment practices in light of the rich evidence of student learning
generated through thought-revealing activities (Lesh, Hoover, Hole, Kelly, & Post, 1999). In
fact, considerable instructional conflict is generated when teachers define achievement of student
learning according to narrow-bandwidth evidence constrained by a limited range of assessment
practices.

1 We view hypothetical assessment trajectories (HATs) as a loosely sequenced subset of benchmark evidence for
student learning with understanding, bounded by practical means by which teachers can reasonably assess individual
and collective learning within a classroom setting through formal, informal, and instructional assessment practices.
In previous research of teachers’ classroom assessment practices, we found that teachers will
engage students in instructional activities even when they have a limited understanding of the
mathematical purpose of the task or how a specific activity is connected with the ongoing
development of a related cluster of mathematical concepts (Feijs, de Lange, van Reeuwijk,
Webb, & Wijers, 2000). The teachers’ immediate need to “make the lesson work” overrides
deeper consideration of long-term goals or mathematical purpose (Richardson, 1994). Therefore,
to support purposeful decision making, teachers must be given an opportunity to examine and
construct an initial set of HLTs and a corresponding set of HATs to monitor student progress. To
support teachers’ construction of HATs, an appropriate starting point is to examine teachers’
current (or real) assessment trajectories through investigation of curriculum sequences,
instructional decisions, and benchmark assessment tasks.
To broaden teachers’ views of and responsiveness to student learning through reciprocal
enhancement of classroom assessment practices, attention is given to the development of HATs
and the investigation of formative assessment practices (e.g., feedback, instructional decisions).
The development of formative assessment practices to promote student achievement is supported
by earlier Center studies that have situated teacher learning and professionalization within the
context of studying student thinking (Franke, Carpenter, Levi, & Fennema, in press-a, in press-b;
Lehrer & Schauble, 1998a). From these case studies of teacher practice, we see a need for
teachers to have a more comprehensive view of assessment supported by techniques and
practices that facilitate student-centered instruction.
The goal of the proposed research is to use the knowledge we have gained about teachers’
assessment practices to develop and test a program of professional development that seeks to
bring about fundamental changes in teachers’ instruction by helping them change their formative
assessment practices. Our research leads us to assert that formative assessment is quite complex.
Recent discourse in classroom assessment has largely ignored critical processes of instructional
decision making and feedback. Our intent is to bring these processes to the fore, advancing
formative assessment as fundamental to the improvement of mathematics instruction that
promotes student learning with understanding. Twenty years of developmental research suggest
that it is unrealistic to expect teachers to become instant assessment designers and experts (de
Lange, 1987). Therefore, informed choices need to be made as to the aspects of formative
assessment that can be generalized across districts and can be propagated in other environments.
Lacking in most efforts to improve assessment practice is the study of processes of informal
assessment and instructional assessment. We define informal assessment here as observation and
documentation of student work that is not “formalized” by traditional testing protocol or a
rigorous grading system (e.g., class work, homework). We know that teachers listen to students
as they work and observe their actions during instruction, but quite often do not view such
information as assessment. Instructional assessment includes related practices such as
instructional decision making, interpretation of students’ written and verbal responses, and
strategies for eliciting or responding to student ideas during the course of instruction. These two
nonformal elements of classroom assessment have been under-studied, and we see a need to
continue our work in studying how this area of teacher practice might be initiated and supported
in various school contexts.
The Research in Assessment Practices project has given us insight into the problems and practices
surrounding formative classroom assessment. The results of this work can be found in four
publications. Insight Stories: Assessing Middle School Mathematics (Feijs, de Lange, van
Reeuwijk, Webb, & Wijers, 2000) will be a book with case studies on problems and practices in
formative classroom assessment in mathematics. A Framework for Classroom Assessment (de
Lange, 1999) provides a theoretical framework for designing assessment in mathematics, and
The Great Assessment Picture Book (de Lange, 2000b) consists of many practical examples of
classroom assessment in mathematics. The final publication will give an example of a
hypothetical learning trajectory and describe the hypothetical assessment trajectory that might fit
to it (de Lange, 2000a). Another important resource that has been developed is the electronic
assessment tool AssessMath! (Cappo, de Lange, & Romberg, 1999), a software package developed by
Learning in Motion that currently contains over 1,000 mathematics tasks organized according to
principles outlined in the Framework for Classroom Assessment. These four publications,
together with AssessMath!, give us a solid theoretical and practical base to extend our research
into new territory. The focus of the developmental research we are proposing for Years 6 and 7 is
to understand how to use what we have learned and the materials we have produced to help
teachers engage in productive formative evaluation. In extending our work in this direction, we
also will study the ways teachers make decisions about what instruments to use, when they use
them, and the reasons that motivate their choices.
Research and Development Questions
The results of our previous research suggest that a good approach is (a) to have teachers identify
and reflect on hypothetical learning trajectories in specific mathematical domains, (b) to identify
potential crucial or critical moments in such trajectories, (c) to clarify the possible lines of
conceptual development, (d) to support teachers as they design assessments and make
predictions about outcomes, and (e) to document jointly with them the resulting student
outcomes. In Years 6 and 7, we propose to address the following research questions in relation to
this work:
1. What professional development materials will be required to disseminate principles for
improving formative assessment across a wide range of schools?
2. What support do school personnel and teachers in various school contexts, who are
adapting these principles to local conditions, need in order to ensure that changes in
formative assessment are sustained?
3. How do teachers make decisions about what assessment instruments to use, when to use
them, and what reasons motivate their choices?
4. How do teachers’ assessment practices change as a result of their participation in this
professional development program?
5. How are changes in teachers’ assessment practices reflected in their students’
achievement?
In our earlier work, our study of classroom practice was broad and inclusive in order to
examine the relationship between assessment, instruction, and learning. In the proposed
extension of this work, we will focus on the development, study, and dissemination of
professional development materials that we have found to be logical focal points for
expanding teachers’ views of classroom assessment. The investigation of the research
questions requires the development and testing of professional development materials
that can deliver core principles of formative assessment while remaining sensitive to
material, human, and social resources of school contexts.
Approach and Methods
Our general plan is to train two professional development cadres who will conduct professional
development programs on formative evaluation for teachers in their districts. In March 2001, we
will conduct a professional development institute in the Netherlands for a group of staff
developers and lead teachers from Providence, RI; Philadelphia, PA; and New Berlin, WI. The
5-day institute will familiarize the participants with the assessment materials we have developed
and will involve them in adapting and constructing formative assessment instruments consistent
with the principles described in the Framework for Classroom Assessment. During the rest of the
year, the lead teachers will experiment with formative assessment in their classes. The teachers
will receive on-site support from the staff developers, lead teachers, and researchers and will be
able to communicate regularly with the researchers through the website described below. We will
not only work with the professional development staff and lead teachers to modify their own
assessment practices, but also give considerable attention to ways of conducting professional
development for other teachers according to their specific assessment needs.
In summer 2001, the professional development teams, with our support, will offer a week-long
institute for teachers in their districts on formative classroom assessment. The teachers in this
group will be the subjects of our study. During the following academic year, the professional
development teams will hold monthly after-school meetings for the teachers and will provide
ongoing support for them. Additional support will be available for the teachers through the
website.
In summer 2002, the teachers will participate in a second institute to reflect and build on what
they learned throughout the year. The professional development teams will continue to support
the teachers throughout the fall of 2002.
Subjects
At Providence, RI, we have identified a group of four middle-school lead teachers who will
contribute to the professional development team in the present project.
At the Philadelphia, PA, site, we have identified a group of approximately 15 professional
development leaders and lead teachers who will contribute to the professional development team
in this project. These professional development teams, which are involved in the Philadelphia
Urban Systemic Initiative, work in or service four inner-city middle schools in Philadelphia.
Starting in summer 2001, they will work with approximately 40 mathematics teachers from these
schools.
The New Berlin, WI, site includes 5 lead teachers, who will conduct the professional development
program, and approximately 15 teachers in two middle schools. The three sites offer a contrast:
In two cases, the schools are located in an inner city; the other is in a rural/suburban setting.
Although we plan to study the sample of teachers described above, we anticipate that the
Philadelphia professional development teams will actually work with several hundred teachers in
the district. Thus, the formative evaluation professional development program will be
disseminated to a much larger group of teachers than we will actually study.
The Professional Development Program
The professional development experience will start out with teachers considering assessment
instruments in the resources we have developed (The Great Assessment Picture Book,
AssessMath!). This will serve two purposes. First, it will provide the participants access to a
number of existing assessment instruments that they can adapt to their use, and they will learn
how to judge the quality of existing instruments and select instruments appropriate for their
instructional goals. Second, it will provide a context to reflect on the goals and nature of
formative assessment. In considering these instruments, participants will examine the role and
function of assessment instruments vis-à-vis the desired learning outcomes, the link between the
instruments and the possibility for positive feedback, and the “scoring” of the tasks (e.g., partial
credits, strategy scoring). Participants also will consider how a series of items can be constructed
to reflect a hypothetical assessment trajectory.
The goal of the professional development is not, however, just to help teachers understand how
to use and adapt existing assessment procedures; they need to learn and understand assessment
processes for their own needs. An important feature of the professional development will be to
work closely with the participants to design and adapt assessment activities. This will have two
effects: First, the participants will construct assessment instruments that they can use in their
own practice, and, second, there is no better way to help teachers understand formative
assessment than by having them participate in the design of assessment instruments themselves.
They not only will learn how to construct good items for different purposes, but also will learn to
construct formative assessment procedures by reflecting on hypothetical learning trajectories and
the resulting hypothetical assessment trajectories. We cannot possibly provide a complete set of
assessment instruments to meet a teacher's formative assessment needs. However, we will be
able to convincingly illustrate the process of formative assessment, and there will be a large pool
of assessment instruments in the Great Assessment Picture Book and AssessMath! that teachers
can adapt to their use.
Throughout the professional development program, considerable attention will be given to how
to design a hypothetical assessment trajectory. We also will address finding a balance between
formal and informal assessment and the relation between classroom assessment and external
assessment. Other key issues concern the representations used to present problems to students
(e.g., language, pictures, formulas), the choice of contexts, and equity issues. Other topics
include rubric-based scoring and how to keep track of the different strategies students use. As
our goal for mathematics education is learning for understanding, we will emphasize the
possibilities that different assessment instruments offer in this respect.
The mathematics classroom assessment web site. Another goal of the professional
development program is to prepare teachers to use electronic tools like our projected assessment
web site and the AssessMath! tool. As part of the needed support system and as an information
and dissemination tool, we will develop and maintain a substantial web site. This site will
incorporate a variety of assessment instruments, examples of students’ work, scoring rubrics, and
the like. It will be interactive and constructive in the sense that any teacher can contribute to the
discussion about existing assessment instruments and contribute new instruments.
Data Collection
Teacher change. The staff will interview and observe classroom practice of each of the teachers
in the sample two to three times during the 2001–2002 school year and twice during the 2002–
2003 school year. The interviews and observations will take place in the early fall, in the middle
of the year, and near the end of the year. One interview/observation will be carried out in spring 2001
prior to the time that the teachers participate in the professional development program in order to
provide a benchmark for assessing changes in teachers’ assessment practices. Observation and
interview questions will be developed based on the Research in Assessment Practices (RAP) work. The changes in teachers’
assessment practices we will look for include the changes in use and choice of assessment
instruments, the level of addressing learning for understanding, the continuity of the assessment
process, the role of feedback, the change in their instructional practices, the method and role of
scoring assessments and assigning grades, and how the teachers see their students change.
Student achievement. Although most external assessments do not provide good measures of
student learning for understanding, they are widely used throughout the United States.
Philadelphia uses the Stanford Achievement Test (SAT9), PSSA, and a district-wide test. New
Berlin uses the Terra Nova and the WSAS. Providence uses the New Standards Test. These tests
provide one measure that we can use to assess change in student achievement, albeit a limited
one. We will provide a measure of student learning that is more closely related to forms of
assessment that constitute the basis of our formative assessment instruments by adapting items
from our pool of formative assessment tasks to use as a summative measure of student
achievement. We will also document student achievement in a more qualitative way: showing what
students actually achieve through their work (rather than their scores) and relating this work to
existing standardized tests. Variables we will analyze include the competency levels at which
students are operating, revealing differences in mathematical reproduction, in mathematical
processing such as mathematization and reasoning, and in the variety of student strategies.
Timeline
Spring 2001. In spring, we will conduct a professional development institute in The Netherlands
for the professional development teams and will gather benchmark measures of teacher
assessment practices and student achievement.
Summer 2001. In summer, we will conduct professional development workshops for the
teachers in Philadelphia and New Berlin.
Academic year 2001–2002. During the school year, we will provide continued teacher support
for their formative assessment activities through monthly meetings, in-school contact, and
through the website. Additionally, we will observe and interview teachers two to three times and
assess student achievement at the end of the year.
Summer 2002. In summer, we will conduct professional development workshops for the
teachers in Philadelphia and New Berlin.
Academic year 2002–2003. During the fall, we will provide continued teacher support for their
formative assessment activities through monthly meetings, in-school contact, and through the
web site. We will also observe and interview teachers at the beginning of the year and in
December 2002. Support and data gathering will end in December, but student achievement will
be assessed at the end of the academic year.
Response to OERI Concerns:
Classroom Assessment as a Basis of Teacher Change (CATCH)
What Will Be Done With Teachers?
During our workshops in Years 6 and 7, teachers will initially be asked to consider assessment
instruments that they can adapt to their use. They will learn how to judge the quality of existing
instruments and to select instruments appropriate for their instructional goals as they begin to
reflect on the goals and nature of formative assessment in light of the desired learning outcomes.
They will work in detail on the “scoring” of tasks (e.g., reviewing, interpreting strategies, partial
credit) and will examine how assessment task items are constructed to reflect hypothetical
learning and assessment trajectories. During these professional development workshops, teachers
will construct a multidimensional assessment plan (that integrates the use of informal assessment
with formal assessment such as quizzes, tests, and projects) and instruments that they can use in
their own practice. During the academic years of the proposal, teachers will also have on-site,
classroom- and teacher-specific support from the staff developers, lead teachers, and researchers.
In working with teachers, we will address finding a balance between formal and informal
assessment and the relation between classroom assessment and external assessment. At a
microlevel, we will work with teachers on the choice of representation (e.g., language, pictures,
formulas), contexts, and equity issues in constructing and adapting assessment tasks. We will
also be working with teachers on using electronic tools and on working with the assessment
database. As part of the needed support system and as an information and dissemination tool, we
will develop and maintain a web site which will incorporate a variety of assessment instruments,
examples of students’ work, scoring rubrics, and the like. The site will be interactive and
constructive in format: teachers will be able to contribute to ongoing discussions about existing
assessment instruments, post or upload their own new or adapted instruments, and download
those created or adapted by other teachers.
One final note: We anticipate that the professional development teams we interact with in
Philadelphia will actually work with several hundred teachers in the district, and that the
formative evaluation professional development program will be disseminated to a much larger
group of teachers than we will actually study.
What Is the Evidence for Extending Current Work?
To date, the evidence we collected in Providence, RI, (via the Research in Assessment Practices
project) and in the Verona, WI, schools (via the Middle School Design Collaborative), and
summaries of other work (by the Assessment and Achievement Study Group) have shown that
formative assessment by teachers has a strong impact on students’ learning of critical
mathematics concepts with understanding. Preliminary analyses of this work (to be discussed in detail in
the Year 5 synthesis by the Center) have found that study teachers had limited understanding of
reform assessment practices but that, when teachers used such practices in their classrooms, as a
consequence of appropriate professional development, their students’ achievement improved on
internally developed problem-solving assessments and externally administered standardized
achievement tests. This increased attention to student learning, in terms of greater teacher
attention to student thinking, explicit expectations of student performance, and stronger
individual and group feedback, facilitated students’ progress from informal to formal reasoning in
mathematical domains. Teachers’ overall instructional practice became more flexible, both with
the class as a whole and with individual students, in addition to being more sensitive to and
focused on students’ understanding of mathematics.
What Do You Expect Classroom Assessment To Look Like?
Using formative evaluation tools can provide an early read on student misconceptions, allowing
teachers additional time for reflection and adjustment of instructional plans. Initial assessment
activities are often best accomplished in a format that permits teacher–student interaction and
probing of responses, enabling the teacher to assess prior informal or formal knowledge of
individual students as well as to reiterate important topics. Although basic knowledge and skills
can be evaluated in an initial, more formal assessment, this format permits teachers to check
higher-level student competencies such as nonroutine problem solving and mathematical
communication in a relatively fluid manner. The formative evaluation tool we discuss here
serves as an example of a first activity in a hypothetical assessment trajectory (HAT) intended to
assess (and develop) student understanding of area.
In this activity, students are asked to decide which of two leaves will need more chocolate
(which leaf has the greater surface area). Teachers first draw out student interest in the context
by sharing a related story about making chocolate and model how one side of each leaf will be
frosted with chocolate (e.g., pouring melted chocolate, frosting one side with a spoon or brush).
After the introduction, students are asked to develop a response to the task in pairs or small
groups in order to facilitate sharing of strategies and are supplied with a blank transparency,
transparency pens, scissors, two sizes of grid paper, tape, and string. After students have had
time to work on the problem and to develop explanations for their strategies, they reconvene for
a whole-class discussion.
The task is not complex. Its imaginable context and informal treatment of the concept of area
make it accessible to a wide range of students and encourage them to focus on the concept
of area without using the word explicitly, thus implicitly discouraging any reference to formulas.
Students generally approach this task in a variety of ways. Some place one leaf on top of the
other and look for the overlapping sections; some cut and paste one leaf onto the other to see
whether or not one can “cover” the other; still others, even though the use of square measuring
units as a mathematical convention is not introduced explicitly, use grid paper (or draw a grid on
the leaf) and count the number of complete and partial squares that “cover” each leaf. (If
students using this last strategy choose grids of different sizes, and thus get different
“areas,” discussion can extend to why their total counts of grid squares vary for the
same leaf.)
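The grid-counting strategy lends itself to a brief computational sketch of why raw counts differ while area estimates agree. The following is an illustrative aside rather than part of the classroom materials: the leaf is modeled, purely hypothetically, as an ellipse, and the area is estimated by counting grid cells whose centers fall inside it.

```python
# Illustrative sketch (not from the classroom activity): why two grids of
# different cell sizes give different square counts but similar areas.
# The "leaf" here is a hypothetical stand-in: an ellipse with semi-axes
# 4 cm and 2 cm (true area = pi * 4 * 2, roughly 25.1 cm^2).

def inside_leaf(x, y):
    """Return True if the point (x, y) lies on the hypothetical leaf."""
    return (x / 4.0) ** 2 + (y / 2.0) ** 2 <= 1.0

def grid_area_estimate(cell, extent=5.0):
    """Count grid cells whose centers fall inside the leaf.

    The area estimate is (number of cells) * (cell area), the same
    arithmetic a student performs when counting grid squares.
    """
    n = int(2 * extent / cell)  # cells per side of the square region
    count = 0
    for i in range(n):
        for j in range(n):
            x = -extent + (i + 0.5) * cell  # center of cell (i, j)
            y = -extent + (j + 0.5) * cell
            if inside_leaf(x, y):
                count += 1
    return count, count * cell * cell

for cell in (1.0, 0.5):
    count, area = grid_area_estimate(cell)
    print(f"{cell} cm grid: {count} squares, area estimate {area:.1f} cm^2")
```

With these hypothetical dimensions, the 1 cm grid yields 28 squares (estimate 28 cm²) while the 0.5 cm grid yields 100 squares (estimate 25 cm²); both approximate the true area of about 25.1 cm², mirroring the classroom discussion of why total counts vary for the same leaf.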
As students compare the shapes of the leaves and share their strategies, different strategies for
finding area, with varying degrees of complexity and accuracy, emerge. Additionally, in
problems based on realistic contexts, such as this one, student reasoning often takes interesting
turns. For example, a student who assumes the chocolate is soft or fudge-like might propose that
in order to truly compare area, the chocolate frosting on each leaf could be, like clay, rolled up
into balls, flattened out into circles of the same thickness, and compared. The strategies students
use to compare the areas of shapes are important for later development of their understanding of
area. These strategies also lay a foundation that will help students better understand how area
formulas are derived and often provide connections for further instruction.
As part of the small group discussion, students sometimes bring up the term area to describe the
amount of chocolate covering one side of a leaf. In our research, some teachers have used this as an
opportunity to ask students to describe or define area, thus facilitating class deliberation over
such issues as whether area is a 2-dimensional or 3-dimensional construct, whether “cover” has
the same meaning as “fill,” and so on.
What Evidence Do You Expect To Find?
We know that teachers listen to students as they work and observe their actions during
instruction, but quite often do not view as assessment nonformal elements of instruction such as
instructional decision making, interpretation of students’ written and verbal responses, and
strategies for eliciting or responding to student ideas during the course of instruction. We also
recognize that it is unrealistic to expect teachers to become instant assessment designers and
experts. As a result of the professional development, we anticipate changes in teachers’ use and
choice of assessment instruments, the degree to which they address learning for understanding,
the continuity of the assessment process, feedback, instructional practices, methods of scoring
assessments and assigning grades, and perspectives on the ways their students learn.
Although most external assessments do not provide good measures of student learning for
understanding, they provide a measure, albeit a limited one, that we can use to assess change in
student achievement. As part of our work, we will also provide a measure of student learning
more closely related to forms of assessment that constitute the basis of our formative assessment
instruments by adapting items from our pool of formative assessment tasks to use as a
summative measure of student achievement. We anticipate that students will show strong
achievement in mathematical reproduction, in mathematical processing like mathematization and
reasoning, and in the variety and strength of student strategies.
Responses to Specific Questions
Explain the use of comparison groups. Although a tradition in educational research,
comparison groups have limited usefulness in this project. As we explained in our proposal,
classroom formative assessment is almost nonexistent in U.S. classrooms. Standardized tests
(achievement on which is usually the basis of group comparisons) measure end-result student
achievement; they do not measure formative gains or strength of reasoning. They are valid only in
a limited way, generally measuring rote retention of formulas or superficial understanding of
number, algebra, and geometry. As such, they do not measure depth of
reasoning or potential achievement. Neither can they effectively be used to inform day-to-day
instruction or to assess the immediate or short-term needs of the class as a whole or of the
individual student. The classroom assessment we advocate and on which we provide professional
development in this project effectively supports day-to-day instruction through instructional
decision making, interpretation of students’ written and verbal responses, and strategies for
eliciting or responding to student ideas during the course of instruction.
What is the rationale for conducting a professional development workshop in The
Netherlands? (Such work should more appropriately be conducted near project sites,
rather than out of the country.) The Freudenthal Institute has worked in the United States for
more than 10 years with great success on microdidactical classroom research, technology,
assessment, teacher training, implementation, the design of new curricula, and multimedia
professional development, among other areas. We know what is best done near schools, what is
best done at schools, and what is best done far from schools. We continuously evaluate our
activities and are continuously evaluated. We even think we know, more or less, what we are doing.
The Freudenthal Institute is (by far) the largest research and development institute on math
education in the world, employing more than 70 full-time researchers (with no teaching
obligations). The Institute has hosted a successful NSF-funded workshop for more than 50 U.S.
teachers, in addition to workshops for Icelandic teachers, Danish teachers, Indonesian teachers,
and Japanese researchers. In summer 2001, the Institute will also be hosting a conference for
researchers from all over the world.
In addition, continuous streams of foreigners visit the Institute for shorter or longer periods.
Professors from all over the world have held visiting positions at the Institute, including
(recently) Magdalene Lampert and (currently) Cathy Fosnot and a Japanese cognitive
psychologist. The Institute itself is held in high regard in the field of assessment, and de Lange
currently chairs the Mathematics Functional Expert Group of the OECD/PISA project. The
Netherlands itself has a very strong tradition of effective classroom assessment.
Would such work be done more appropriately near project sites? We suggest that the Institute
offers state-of-the-art work in mathematics education assessment research. The principal
investigators of the CATCH project thought it would build strong ownership among the U.S.
teachers if they could “go to the source,” that is, go where they can actually see and judge for
themselves the ideas behind CATCH. OERI money will not be used for this
purpose. We are willing to invest our own money because we have seen the effect of the Institute
workshop.
Explain how the implementation and results of this study will inform other proposed
projects. There appears to have been some confusion about the nature of the CATCH project.
CATCH, unlike the Organizational Capacity project, is not a cross-cutting project. CATCH is
engaged as a distinct program of professional development with its own sample of teachers and
its own data collection. The difference between this project and the other projects is that it is
organized on the basis of formative evaluation rather than on the basis of specific content. In
fact, all of the Center projects deal with formative evaluation. A major goal of every project is to
provide teachers a basis for assessing and building on students’ knowledge, and the analysis of
student learning within the specific content domains studied in each project provides teachers a
framework for such evaluation.
Researchers from all of the groups will communicate regularly to consider common
themes emerging from our work as described in the Cross-Cutting Themes section of this
proposal. We anticipate that CATCH researchers will both inform and be informed by these
interactions. CATCH research potentially provides researchers a perspective on the ways
teachers may generalize ideas about formative evaluation learned by studying specific content to
new topics. By the same token, the subject-matter projects provide a context for examining the
role that explicit knowledge of the learning trajectories for specific mathematics and science
content plays in teachers’ ability to engage in formative evaluation that supports student learning.
Thus, the CATCH project provides an interesting contrast in ways to frame professional
development that addresses common problems.
We will produce four fundamental publications, develop a number of classroom assessment
materials, develop and maintain an assessment web site with free access, and develop and
articulate an assessment framework strongly connected to the external assessment framework of
the OECD/PISA study.