Journal of Assessment and Accountability in Educator Preparation
Volume 1, Number 1, June 2010, pp. 16-28
Evaluation in Teacher Training Colleges in Israel: Do
Teachers Stand a Chance?
Miri Levin-Rozalis
Orit Lapidot
Ben-Gurion University of the Negev
In 2004 the National Task Force for the Advancement of Education in Israel recommended that an evaluation
coordinator be incorporated into every school in Israel. The study reported here looks at teacher training in academic colleges. It raises serious doubts regarding the likelihood that teachers could meet the challenge posed by
the current situation in Israel. It also casts doubt on the chances of this situation changing in the near future, and
claims that the implications are far too grave—for teachers, students and the entire education system.
In 2004 an “earthquake” dramatically changed the
sleepy evaluation field in Israel: the National Task
Force for the Advancement of Education (the Dovrat
Commission) recommended that an evaluation coordinator be incorporated into every school in Israel
(Committee for Integration of Internal Evaluation in
Schools, 2004). Another recommendation was to set
up an independent evaluation body, and indeed, one of
the ordinances ratified by the Ministry of Education
was to establish the National Authority for Measurement and Evaluation (NAME), a statutory body responsible for both measurement and evaluation in the education system and for the certification of evaluators for
that purpose. The sudden need to train so many evaluators created a free-for-all of training programs and
position papers, some commendable, the majority not.
Perhaps the most significant action taken by NAME in this context was to set standards for the professional knowledge of evaluators in the field of education (Hartaf, Ganor, Rom, & Shilton, 2007). A second was the decision that the coordinator of school evaluations would have to hold a Master’s degree in evaluation.
This entire process put the subject of evaluation, in
general (and educational evaluation, in particular), on
the table. The ongoing public debate on evaluation in
education is very broad, and among the many
bodies involved in it are such entities as the Israeli National Academy of Sciences (Committee for Measurement and Evaluation in Education, 2005) and the
Council for Higher Education. Apart from position papers, there has been a flood of committees, study-days
and workshops. Prominent figures in the field worldwide were invited to participate in these activities
(Levin-Rozalis & Lapidot, 2008).
___________________________________________________________________________________________________
Correspondence:
Miri Levin-Rozalis, Department of Education, Ben-Gurion University of the Negev, POB 653, Beer-Sheva,
Israel 84105. Email: Rozalis@zahav.net.il.
Author Note:
This study was conducted under the aegis of the Mofet Institute Intercollegiate Research Authority, which
initiated and funded it.
The data reported in this article have been reported previously in Levin-Rozalis, Lapidot, and Dover (2006)
and Levin-Rozalis and Lapidot (2008). However, these are very low circulation research reports available
only in Hebrew.
The reason for all this to-do is, in our opinion, the
fact that evaluation, despite being an extremely professional field (Levin-Rozalis, 2003), is, in itself, a
powerful act that has considerable influence on two
interconnected spheres: political power and the consequential validity of the evaluation process. Evaluation derives from and expresses policy (Stake, 2007)
over and above the politics involved in the very act of
evaluation (Weiss, 1991, 1999; Wholey, 1983, 1994).
The debate in the United States surrounding the national program “No Child Left Behind” (NCLB) is a
good example of the effect of evaluation on policy and
education (McDonnell, 2005). In addition to the professional and ethical debate, which we shall not go into
here, there is another important point that cannot be
ignored. The methodology prescribed for the testing of
this national program (randomized controlled trials
[RCT]) dictates the character of educational intervention instead of the reverse (see also American Evaluation Association, 2003). Patton (1997, pp. 341–369)
defines this by way of negation: Evaluation is not political when nobody cares about the evaluated program,
nobody knows about the evaluated program, and when
power and authority are not involved.
To reinforce this argument, it is sufficient to look at
what is happening today in the field of evaluation all
over the world. Strong financial bodies and others with
global influence are currently involved in evaluation in
a significant way: the World Bank, the US Federal
Government (which dictates standards for evaluation
and research), the European Union (which, for example, places evaluation commissioners in countries joining the Union), the governments of most European
countries that operate evaluation units, the UN and its
various agencies (which not only conduct evaluation
but also train evaluators), the Organisation for Economic Co-operation and Development (OECD) (which
is a partner in strong international evaluation organizations) to mention just a few. Their aims, which are
sometimes partially declared and sometimes completely
undeclared, are supervisory. They involve the assimilation of standards suitable for or important to the intervening bodies; the establishment of a governmental
culture with which they can live; influence over the
running of bodies, organizations, systems and even
governments; supervision of cash flow and expenditure; and in quite a few cases, cultural colonialism.
In addition to the political side of evaluation, we
cannot ignore the issue of consequential validity—the
environmental implications created by the very fact that
something or someone has been evaluated. We often
speak of unexpected effects. At the individual level,
this can be unexpected effects on a student’s sense of
competence, self-image and motivation as a consequence of various types of test, as well as on the student’s image in the eyes of teachers and parents (Cuban, 2007; Doran, 2003; Greenwald, Nosek, & Sriram,
2006; Howe, 2005; Jennings & Rentner, 2006; Jones &
Olkin, 2004; Koretz & Barton, 2004; Messick, 1989;
Russon & Patel, 2002; Sarason, 1998). But the effects of consequential validity often go far beyond the teacher, the
student, or even the school, as Jones and Olkin (2004,
p. 557) state:
Tests may have the effect of defining the
legitimate boundaries of educational concern in the eyes of Congress, the public,
and even educators…. It is clear that the
group constructing the test would in many
respects be setting educational standards.… Inevitably, there will be a tendency on the part of teachers to teach for
the test….
With these two issues (politics and consequential
validity) in mind, we want to present the situation in
Israel. Thirty years ago, there were only a handful of evaluators in the country, and the first university courses in evaluation were offered in 1979. Recent surveys of
Israeli evaluators have found that now, evaluation in
Israel is done in numerous fields, such as welfare, a
variety of service fields, immigration, health, and municipal activities (IAPE, 2002; Shochot-Reich, 2006).
The background to the growth of evaluation in Israel is different from that in other western countries
such as the United States, Canada, and large sections of
Europe. Whereas in these countries, the field of
evaluation grew in response to pressure from above and
to political changes and the needs of policy (Chianca,
2002; Cousins & Aubry, 2006; Guba, 1978; Nevo,
1989, 2001; Renzulli, 1975), evaluation in Israel grew
in response to needs on the ground (Kfir & Golan-Kook, 2000; Levin-Rozalis & Shochot-Reich, 2008).
The growth in Israeli evaluation occurred among academics to a certain extent, but it was primarily within
private bodies, almost totally following the requirements of private (mainly foreign) foundations (Committee for Integration of Internal Evaluation in Schools,
2004; Schwartz, 1998). This situation created an experimental greenhouse for the slow and somewhat soft
development of the field. Being detached from government institutions, evaluation in Israel was closely
connected to the field and not so much to decision-making agencies. It was very sensitive towards the
evaluee and served mainly as a way for the programs
and projects themselves to learn, using many kinds of
formative, participatory methods.
The number of people receiving formal professional training in evaluation in Israel is still relatively
low. Surveys conducted in 2002 and 2006 found that
the prior training of evaluators in Israel is varied but generally low; only one-fifth of the respondents studied evaluation for any kind of degree, and the
majority learned how to evaluate from their work on
the ground (IAPE, 2002; Shochot-Reich, 2006). As
mentioned above, all that changed significantly in
2004. NAME is very active and its activity is highly visible. Conducting nationwide tests and assessments, framing the body of knowledge needed for educational evaluators, supervising in-service training in evaluation and negotiating the content of studies for evaluation coordinators, NAME exerts widespread influence. Evaluation and measurement have become an issue
everywhere—both within the education system and
outside of it.
Each of the influential bodies participating in the
debate on evaluation has its own perception of evaluation and its own agenda, and each pulls in a different
direction—a direction that arises from the diverse and
even contradictory roles of evaluation in education:
• evaluation for the purpose of supervision vs.
evaluation for the purpose of learning;
• standardization vs. acknowledgement of diversity;
• a systemic overview vs. a diagnosis of a
school, class, teacher, student;
• evaluation of a process (alternative evaluation
of learning and teaching processes) vs. product evaluation (grades);
• internal evaluation vs. self-evaluation vs. external evaluation;
• evaluation of knowledge vs. evaluation of
skills; and
• evaluation for the purpose of classification vs.
evaluation for the purpose of development and
advancement.
This is not an academic debate over one definition
as opposed to another; it has practical implications: any
choice made regarding the modus and character of
evaluation in the education system will have effects
(that cannot be overstated) on the character of education and, in particular, on patterns of teaching and
learning. As Patton (2008, p. 22) notes, “What gets
measured, gets done.” Who, then, decides what gets
measured? In what ways, to what purpose and with
what consequences? These choices can exert a tremendous influence on the education system and thus carry both great promise and great risk. Yet, we
wish to argue that at present both the education system
and evaluators in Israel have no answers to address the
need to fulfill the potential and protect themselves from
the risk (Levin-Rozalis, Lapidot & Dover, 2006; Levin-Rozalis & Lapidot, 2008; Levin-Rozalis & Shochot-Reich, 2008). One of the ways of avoiding, even partially, the undesirable effects that massive evaluation
activities can cause in a system—and turning this activity into a lever for improvement—would be to raise
the professional standard of evaluation professionals,
on the one hand, and the evaluation and measurement
literacy of teachers and educators, on the other. This is
partly because evaluation is part of the teachers’ profession, but also because it is they who will be evaluated, whether directly or indirectly. And, these evaluation processes will exert considerable influence both on
their work and on the entire system.
The study reported below (Levin-Rozalis, Lapidot
& Dover, 2006; Levin-Rozalis & Lapidot, 2008) looks
at teacher training in academic colleges. It raises grave
doubts regarding the likelihood that teachers could
meet the challenge posed by the current situation in
Israel. It also casts doubt on the chances of this situation changing in the near future. In-school evaluators
will commence their activity in the coming year, and
the need to define their role and the conditions of their
training is stronger than ever.
Method
Research Questions
The research questions focused on a central issue:
How is training for educational evaluation conducted in
the teacher training colleges? What tools are acquired
by the trainees and in-service teachers for conducting
evaluation, on the one hand, and for understanding the
findings of evaluation and applying them in an informed way, on the other? To this end we addressed the following questions:
(1) Who are the evaluation trainers?
(2) What training exists and what is its content?
Research Population
Of the 28 teacher-training colleges in Israel, we examined 15. We chose colleges that have research and
evaluation units or centers.
Research Tools
Interviews
Twenty-two office holders were interviewed, as
were 11 teachers of evaluation. Of these, 9 were
evaluation unit heads/coordinators and 9 were pedagogic coordinators. There is some overlap between positions; that is, one person can hold more than one position. The interviews began as open, in-depth interviews around the question, “Tell me about your job
as…” and continued as informative interviews in which
we asked concrete questions about evaluation training,
professional history, course and program content, and
so forth.
The aim of the interviews was to examine the perception of evaluation in all its aspects, the interviewee’s
knowledge of evaluation, and what is being done in the
colleges with regard to training teachers for evaluation.
Analysis of Study Programs and Syllabi
Seventy-one syllabi from evaluation and measurement courses along a historical continuum between
1996 and 2004 were examined, along with 61 syllabi
from 2004 through 2006—following the Dovrat Commission report, and reflecting the current situation. The
aim of analyzing the syllabi was to find out what was
actually being done (with the reservation that a syllabus
does not provide an accurate course description), and
the changes that had taken place over the years.
We also examined course scope, objectives and
content; the types of assignments the students were
given; items from the required bibliography; and so
forth.
Validity and Reliability
We took two steps to increase the reliability of the
analysis (i.e., to ascertain that the interpretation actually
reflects what was said and to lessen the researchers’
bias). First, we provide extensive quotes for a thick description of the interviews (Geertz, 1973) in order to
demonstrate and validate the study’s claims. Second,
the analysis was conducted independently by two researchers. Subjects on which there was disagreement were clarified and revalidated with the help of the raw interview data and/or the syllabi until agreement was reached.
Findings
Who Are the Evaluation Trainers?
Those who teach evaluation to teachers come to evaluation from a variety of backgrounds, and they come to it by chance. Of the 11 evaluation teachers that we interviewed, only 2 had had specific university training in evaluation. The remainder acquired their education in the sciences, sociology, educational counseling, geography, history, psychology, and education. For example, a college lecturer in evaluation states,
I define myself as a researcher, not an evaluator. I was
never engaged in evaluation until I came to the unit
here. When I got here, I said I wasn’t an evaluator, but
was told ‘It doesn’t matter. . . . the college allocates us
funds so that we can develop a research culture in the
college.’
“We’ve got a course called ‘Teachers Research
Their Work’,” says a pedagogic coordinator. “There’s a
lecturer teaching there from the field of chemistry who
thought that evaluation is important, so she made sure
that these subjects appeared there.”
A lecturer teaching evaluation at one of the colleges says,
If you’re not an expert in statistics, you can’t be an
evaluator. There’s no definition for an evaluator.
There’s no degree for an evaluator. Who defines what
an evaluator is? I’ve never passed a formal evaluation
course but I’m an evaluator.
And another lecturer told us,
One of our problems, a really weak link, is that the
lecturers themselves haven’t learned to conduct proper
evaluation. They’re not professionals in the subject.
One of the things we talk about is getting all the lecturers to undergo an extension course on evaluation . .
.[with] different evaluation models.
There was an almost total consensus among the interviewees regarding the necessity for and importance
of evaluation in the system, college or school—as well
as its potential for advancing the system. But there are
differences in the perception of evaluation as serving
learning or as a means of control; mainly, there is a lack of distinction between the two. In the perception of our
interviewees, the distinctions between the different
types of evaluation, or between formative, process-related and summative evaluation, are not significant, and
they feel that they all lead to a single objective: efficiency, improvement, and making knowledge-based
decisions. This is demonstrated by the following quotations: “We evaluate study programs. We evaluate
faculty members. Evaluation is a process that accompanies the decision makers. In the end, what’s important is end targets, measuring tools that enable the taking of wise decisions. Evaluation is purpose-dependent” (from an evaluation lecturer who is also part of the
college evaluation unit).
It’s . . . designed to examine whether and to what extent you’re meeting your objectives and what can be
done so you meet them more. There’s no difference
between evaluation and reflection, no difference between summative and formative evaluation, it’s all the
same: feedback processes that are constant give you
the tools to say which points you should reinforce and
improve (an evaluation lecturer).
The fact that these perceptions find almost no expression in the training itself is another matter. The
majority of the interviewees shared the view that
evaluation should be taught in courses in the discipline,
and that statistics, mathematics and quantitative research methods are the principal part of what should be
included in evaluation training. However, pedagogical
instructors teaching evaluation believe that evaluation
must be learned within pedagogical subjects; they perceive it as a vital component that is structured naturally
into the teaching process, mainly touching upon teachers as their own evaluators, and upon the evaluation of
students.
The lecturers teaching evaluation extension courses
express dissatisfaction with the way evaluation is being
taught and note the numerous difficulties they encounter. They focus on three topics: the wide scope of the
material vis-à-vis the insufficient teaching hours allocated in general, including hours for teaching evaluation; the lack of the field’s popularity in colleges and
among students; and the insufficiently professional
human resources engaged in evaluation training. As
one head of an evaluation unit says,
Everything’s important. Today it’s important to teach
study planning, evaluation, psychology, leadership.
And the blanket’s too short. However you stretch that
blanket, something’s going to fall out. And why
should evaluation fall out? Because it’s a field without
power; it doesn’t have the single-minded coordinator,
the one who’ll yell and scream and get the subject into
the system! If you don’t have that, it’s very hard to get
it in. And because it’s an important subject, we do
what little we can.
“There’s a problem of budgets and hours,” says one
evaluation teacher. “Evaluation isn’t that popular at the
college. People don’t really understand what it is, that
it’s important. There’s no awareness.”
In addition, a pedagogical coordinator states,
I think another important issue is, of course, the Dovrat Commission report. They reached the conclusion
that we must engage in evaluation. When you approach the Ministry of Education with demands and
requests for projects, they ask to see what you’ve done
in the past, whether you’ve succeeded or not. That
seems to me to be the main reason for introducing
evaluation into the system.
Evaluation Training
Evaluation training in the colleges addresses a variety of target audiences: students, student teachers,
graduate students, extension courses for principals, in-service teachers and extension courses for evaluation
coordinators. These can be divided into three groups,
based on the aims of the course: (a) student teachers,
(b) evaluation coordinators engaged in evaluation at
various levels, and (c) principals and other consumers
of evaluation. The framework of each group varies
from the standpoint of training structure, content and
the expectations of the graduates.
Student Teachers
Based on the interviews with lecturers teaching
evaluation, the student teachers completing their
evaluation training are mainly interested in evaluating
students’ achievements. They expect that they will acquire the necessary knowledge and tools in evaluation
and measurement and that they will know how to run
reflexive processes and self-supervision in the teaching-learning process and in interactions with students.
As one pedagogical coordinator told us, “We teach
people to go out into the field, and the first thing they
encounter is achievement evaluation in the framework
of the ordinary teacher’s basic role and that they are a
small cog, not yet in evaluation of an organization or
institution or various processes in the institution.”
“The students aren’t going to be professional
evaluators, but we want to turn them into teachers who
are sensitive to the subject: when to use a test and when
to use a written assignment,” says an evaluation lecturer. “I want to encourage them to develop these tools
by themselves.”
Two-thirds of student teachers study evaluation as
part of their pedagogy and instruction lessons, through
feedback from the pedagogic instructors, as part of action research, or in discipline-dependent evaluation
courses (evaluation in literature, evaluation in mathematics, for example). One-third take general evaluation
courses (general evaluation skills), part of which are
strictly educational evaluation (testing and measurement and alternative evaluation methods, such as portfolio, diaries and different kinds of assignments), with
the remainder dealing with program evaluation.
According to the interview findings and analysis of
the syllabi, it seems that in teacher training, evaluation
covers a great many fields and activities. There is diffusion, with no distinction between the principal issues,
tools and different approaches. This leads to over-generalization and an imprecise perception of the subject.
At the same time, great emphasis is placed on evaluation as a means of measuring student achievements. But
here, too, the material studied is, according to both the syllabi and the teachers’ testimony, too limited and
insufficiently focused. In one college, for example,
there are four different courses given to different audiences. Three of these are semester courses of two hours
per week and one is for a year.
The objects of evaluation include the teachers themselves, through reflection and the evaluation of colleagues (mainly in lessons on pedagogy). Two further objects are students (through achievement and alternative evaluation methods) and study programs; these are studied both in disciplinary courses and in general evaluation courses. Another two, taught in general evaluation courses, are evaluation of the school as an institution and evaluation of intervention programs and projects.
Any mention of different evaluation approaches,
such as summative or formative evaluation, is mainly in
regard to conventional or alternative evaluation of students’ achievements. Together with the objectives of
inculcating knowledge in constructing, validating and
administering tests, evaluation students also study various types of test (such as the census METSAV
[Hebrew acronym for School Efficiency and Growth
Indices] tests), feedback, process documentation and
assessing the student’s knowledge, as well as using
learning portfolios and journals as alternative tools.
According to the syllabi, evaluation courses have a
range of objectives, such as providing experience in
constructing a variety of evaluation tools and analyzing
findings, familiarization with the various alternatives in
evaluating achievement, developing a critical approach
and judgment for determining the quality of study material (with the aim of turning students into intelligent
and autonomous consumers of learning programs), internalizing the role of evaluation as part of a teacher’s
work, teaching how to write a seminar paper, and developing students’ theoretical and practical knowledge
as teachers in accordance with “Teaching, Learning,
Evaluation”.
In this situation, it is difficult to assume that significant learning can be achieved. Nor does the integration of evaluation into content-dependent courses
achieve the training objectives. Evaluation is studied in
a field-specific way; students are not taught general
principles that can be applied in another field of knowledge, or in other situations.
Analysis of the syllabi along a historical continuum
shows that, as time has passed, there has been some
development in the use of evaluation tools: evaluation
teachers are using more varied tools (portfolio, reflection, etc.) and more varied concepts (formative, summative and alternative evaluation; student, institution
and program evaluation). However, together with the
development in the use of evaluation concepts, the actual evaluation training of student teachers has declined. Today there are far more discipline-specific courses in evaluating achievement and fewer general, non-discipline-oriented courses that inculcate evaluation tools grounded in methodical, critical thinking.
The impression is that there is great openness to the
changes taking place in the evaluation field—to new
approaches and advanced work methods—but at the
same time, the ability to contain all the new knowledge
in an organized and informed manner is lacking. The
result is that the field has become inundated and, in defense, teaching evaluation has been confined within the
disciplines.
Bibliography
All the syllabi repeat the same Hebrew bibliographic sources. It is true that the range of literature in
Hebrew is not broad, but having said that, the book list
is far from exhaustive.
As for the sources in English, they are few
and sporadic. The principal writers in the field of
evaluation are poorly represented and not consistently
listed. In fact, eight of the 32 syllabi for general
evaluation courses for student teachers between 2004
and 2006 listed no references in English.
School Principals
It is the principals who are the consumers of the
findings and data provided by the evaluation conducted
at various levels in a school, and they are expected to
employ the evaluation for decision-making purposes.
To this end, they must have systemic vision. They must
lead towards developing a data-based decision-making
culture, define systemic targets and criteria for testing
their attainment, ensure that data continue to be obtained so that they can monitor the individual and the
group, and evaluate staff. The school principal must
play a central and significant role in the assimilation of
the in-school evaluation culture. “They will be principals with literacy. That’s a must in the system,” says
one pedagogical coordinator.
You can’t run a system today without evaluation literacy, activating the evaluator, knowing to ask questions, placing question marks against results, and
looking at the data you receive with more open eyes.
Focusing like this will be excellent and it categorically
goes far beyond what is currently planned for evaluation coordinator courses.
Expectations notwithstanding, the existing courses
for training principals in evaluation are few, short and
poor: at one college, training is done in the framework
of an MA course in educational administration. Two
colleges hold a 56-hour extension course (and to drive
this point home, of the 10 extension courses we examined this year at the teacher development centers, not
one was designed for principals). As the population in
question numbers several thousand, this need is not
even beginning to be met.
In-school Evaluation Coordinator
It seems from the interviews that there are great
expectations for the in-school evaluation coordinator. A
study of the reports from the National Task Force (the
Dovrat Commission) reveals a varied and multifaceted
figure who is supposed to deal with the most complex
activities. The in-school evaluators are expected to
come from within the education system: teachers or
other office holders who are intelligent and possess
extensive knowledge and personal qualities like charisma and leadership ability, openness, flexibility of
thought, and creativity. These individuals are expected
to lead a process of introducing the evaluation culture
and to head teams within the school through an orderly
process of change. They must be computer literate and
conversant with measurement tools, and must possess
knowledge and experience in statistics.
The interviewees stressed the importance of attitudes towards research methods and statistics—and of
not flinching from them. These subjects are perceived
as an essential and important part of evaluation, on the
one hand, and as an obstacle that puts people off, on the
other. The interviewees perceived the evaluation coordinator as someone who would introduce structural and
conceptual changes into the system and who, therefore,
must possess leadership qualities, status in the school,
and knowledge in the field of evaluation. These qualities are essential if the coordinator is to lead a move
that is expected to encounter numerous obstacles along
the way, as expressed by an evaluation unit director
engaged in evaluator training:
Evaluation coordinators must be teachers from the
school. They must know the school inside out, be part
of the school’s negotiating team, and not someone
from the outside. I want these people to possess power
in the evaluation field, to have significant status in the
school’s everyday life, and not as visitors. A school’s
evaluation coordinator must be au fait with the
school’s atmosphere, in-school evaluation, really specialize in it, and assist the teachers in the spheres they
request.
After some investigation, we found that the participants who were accepted to the courses for in-school
evaluation coordinators were selected according to their prior training and personal characteristics. However, the declared aims of these courses, the syllabi, and the interview findings reveal a lack of uniformity in the perceptions and definition of the role of
the in-school evaluation coordinator—its limits,
authority and sphere of responsibility. The in-school
coordinator should primarily activate and lead in-school evaluation teams, but what does this mean? The
training programs we looked at were not consistent in
their definition of the occupation or its responsibilities.
In one program, for instance, the coordinator’s role involved developing an in-school evaluation culture and
programs, along with the implementation of study programs based upon external and in-school evaluation
reports. In another, the emphasis was on data and data
processing for decision-making purposes; less emphasis
was placed on instructing and leading teams.
In summary, we found a wide variety of requirements for evaluation coordinators and no distinction
between the nature of the role and the knowledge required to fill it. A partial list of requirements includes
the following:
• the ability to execute a variety of evaluation
activities and to generate reports from the
school’s existing database and external
sources of data, along with initiating discussions based on this information for the purpose of improvement;
• familiarity with data-management software
and the ability to construct and maintain a
data-gathering infrastructure;
• familiarity with methods of evaluating student
achievement, school projects, study programs,
teaching, standards in the fields of content and
optimal climate;
• familiarity with theoretical statistics;
• an understanding of teacher instruction and
supervision capabilities; and
• the ability to support the needs of decision
makers.
Findings: Summary
The colleges’ endeavors in the subject of evaluation are considerable and varied: they offer different
types of training, for different target audiences, with
different training objectives, different contents and different tools. But with variety comes confusion, which is
already built into the field of evaluation, particularly in
Israel. There is insufficient distinction between the different types of evaluation, the different theories and
approaches to evaluation, and the different evaluation
strategies. Most of the evaluation trainers do not come
from the evaluation field and a significant proportion of
them have neither studied nor become proficient in
evaluation.
There is, however, fertile ground for change. All
the people with whom we spoke think that evaluation is
important. The majority are frustrated because, in their
view, the system does not give sufficient recognition to
the field. There is an understanding that evaluation can
and should serve as a tool for learning and enhancement, but the knowledge is lacking.
Our examination of evaluation training at the colleges revealed four major problems:
1. There is no policy regarding evaluation,
2. There is no structured training program,
3. There is a lack of skilled human resources,
and
4. The time dedicated to evaluation training is
not sufficient.
There is No Policy Regarding Evaluation
There is no policy regarding evaluation. This is the
main problem—all the other problems derive from it.
Since evaluation is not "a discipline", there is no
study program and therefore the corpus of knowledge is
unclear. In addition, as discussed in the introduction, there is confusion about evaluation’s different
roles.
Evaluation was not on the colleges’ agenda in any
significant way prior to the establishment of the National Task Force (the Dovrat Commission), and it was
added to the agenda hastily and in a disorganized manner following it. Our findings do not show any attempt
at ordered systemic thinking, prioritizing, or constructing a short- or long-term program. In all matters pertaining to evaluation, the colleges have generally reacted to what was happening at the time and improvised in accordance with the situation, not always proficiently.
Evaluation is not an easy subject to teach; it requires careful thought and planning (Levin-Rozalis &
Rosenstein, 2003; Levin-Rozalis, Rosenstein & Cousins, 2009), but we could find no evidence of any discussions on the subject of evaluation training in any of
the colleges we studied. There is no policy as to what,
how, and how much should be taught. What can be
found in the colleges is there because some individuals
think it is important. The absence of an agreed-upon
corpus of knowledge to guide the work program leaves
decisions about how, what and how much will be
taught to the people teaching evaluation, who, as we
found, lack sufficient knowledge. There is nothing
wrong with a variety of approaches to evaluation if this
variety is based on informed decisions rather than arising out of the evaluation teacher’s often limited knowledge base.
Absence of a Structured Training Program
As one might expect from the lack of policy regarding evaluation, we found no structured, ordered,
and phased program of what should be taught: what
information is required by different types of audience,
what the methods of evaluation are, or what standards
there are. In addition to the multiplicity of names of
courses and the variety of subjects studied, it appears
that there is no clear perception of teacher training in
the academic system, no understanding either of what evaluation in general, and educational evaluation in particular, are, or of the central corpus of knowledge of evaluation that is required for the teacher, the evaluation coordinator or the evaluation consumer, particularly school principals. The course content and the main
concepts imparted to the students present a fuzzy perception of evaluation, of its place in the education system and of the way it is inculcated. We found this to be
true of every college, and it is even more striking when
one looks beyond the colleges.
All the programs at all the colleges focus on how
evaluation is conducted. In contrast, we did not find a
course or lesson in which the students learned to read
and understand evaluation data and to draw conclusions
from them. Conducting evaluation and interpreting the
results of evaluation are two separate fields of knowledge. In our opinion, in a situation in which a system is
evaluated more and more by large-scale assessment, the
ability to understand and interpret the results of evaluation is no less important for teachers than the ability to
conduct an evaluation.
Lack of Skilled Human Resources
This problem is intensified by the fact that instructors have not studied evaluation, and the majority have
never conducted an evaluation. Their knowledge of the
field is eclectic and not always adequate. With no clear
policy, the prospects for change are very slim.
Insufficient Time
The budgetary cuts besetting the college system are
not conducive to the efficient teaching of evaluation.
The structure of teacher training (including evaluation),
which is extremely complex and concentrated into four
years, leaves insufficient time for any kind of learning.
When a decision has to be made between teaching a
discipline and evaluation, evaluation, which is “not a
subject” (i.e., it is not a disciplinary field), comes in
second. Again, budget and policy go together: with no policy, there will be no budget allocation.
Discussion and Conclusions
Are Israeli colleges ready to meet the challenge of
training teachers for the essential change ahead? In
general terms, the answer is no. The knowledge that is
currently in the system is not sufficient to take up the
gauntlet thrown down by the Dovrat Commission.
We have analyzed the situation using Bourdieu’s
(1986, 1999, 2005) concepts, his “field” concept in
particular, to shed a different light on the complexities
of the present situation.
In his use of “field” (le champ), Bourdieu refers to
a structured space that is an arena of power organized
around a specific subject (in our case, evaluation).
Bourdieu developed the idea of different kinds of capital and extended it to categories such as social capital, cultural capital, and symbolic capital.
According to Bourdieu, cultural capital is the most
important and the easiest to manipulate. This is capital
imposed by the elite. Inherited cultural capital (which is
a condition for scholastic success as well as success in
numerous other fields) contributes to a great extent to
the “reproduction” of the class structure. It is also a
product of the education system and influences success
in it (and in society in general). Empirical studies of cultural capital support Bourdieu’s claim regarding its importance (Dumais, 2002; Reay, 1999).
If we examine the field of evaluation in Israel, we
can state at the outset that it is a field in its formative
stages or, more precisely, a field undergoing accelerated, radical, and aggressive processes of change. As
noted above, the field of evaluation in Israel developed
slowly and late, mainly in support of learning processes
of the evaluee and the enhancement of interventions in
various fields. The principal capital was a certain type
of cultural capital that combined knowledge with sensitivity to otherness. In any event, evaluation was a
very marginal field with no economic capital, political
power, or symbolic capital, and thus it was also a field
almost devoid of competitiveness. Professional standards in the field were not generally high and it attempted to build itself up by creating exclusive knowledge for its members through conferences and seminars, all of which were conducted voluntarily.
The decisions of the National Task Force changed
the structure of the field so radically that it might be
said that a new field was actually created in Israel
alongside the old one. We shall mention several notable
changes:
• First, from a marginal field, the field of
evaluation was turned into a central one with a
great presence. The large number of central,
prestigious bodies that are attempting to become part of the field is the best proof of
this.
• Second, the shift of emphasis was to the education field, with evaluation in other fields
being less affected by the process, although the
influences are clearly felt there as well.
• Third, there has been a significant change in
size. Israel is a small country with a population of fewer than eight million people and
about 3,000 schools. Nevertheless, the decision to have an in-school evaluation coordinator, with a Master’s degree in evaluation, in
every single school in Israel changed the
number of evaluators from a group of a few
dozen to potentially thousands.
• Fourth, there has been a change in professional certification. This was a field in which
80% of its members learned their profession
in the course of their work. In a few years, we
will see a highly professional field.
Even though the principal capital has remained
cultural capital (mainly in matters pertaining to knowledge and academic education), here, too, the emphasis
has changed. It is now mainly on professionalism,
along with familiarization with advanced measurement
and evaluation methods. At the same time, the field has
acquired considerable political power while symbolic
and social capital are beginning to be accumulated.
Another important difference is that unlike the old,
egalitarian field, the field currently under construction
is highly stratified. There is stratification that simply
derives from the way in which the field is being reorganized and from the division of roles within it. The
main stratification, however, will apparently be in cultural capital, with the present thrust towards professionalization. It is with good reason that the first documents published following the National Task Force
dealt with this issue (Committee for Measurement and
Evaluation in Education, 2005; Hartaf et al., 2007); the
immediate effect was seen in the multiplicity of extension courses for evaluators that sprouted up all over
Israel. This is where the main battle is currently being
fought.
The quality of the training that evaluators bring to
the field will become a stratifying factor with implications for the type of roles to be filled (and here the
significance is economic capital, political power and
symbolic capital), as well as advancement possibilities
and mobility within the field. The various roles to be
played by evaluation on the ground (in-school evaluators, regional evaluators, supervision, etc.) will have a
similar effect.
Although this is a field whose most important
capital is cultural, and although teachers constitute a
group in which cultural capital is integral, they enter
the evaluation field with very little capital. And even
worse, it is not completely clear if they are part of the
field or whether they stand any chance at all of becoming an active part of it (except in the role of passive
evaluees and victims of potential symbolic violence).
The relevant professional training, which is the currency of the evaluation market in the field being created
today, is almost nonexistent—for either evaluators or
educated evaluees.
In order to be an evaluee without becoming a victim in Israel today, one needs cultural capital (i.e., considerable professional knowledge). One needs to know
how to understand data, even statistical data, that have
undergone advanced and complex processing, how to
draw conclusions from this information and learn from
it, as well as how not to manipulate data for inaccurate
representations (either erroneously or maliciously). For,
as we have shown in the introduction, the political
games and power plays also exist between evaluators
and evaluees. Evaluation in education will
greatly influence teachers’ work. Without recognition
of the implications, without an understanding of the
field, teachers will lack the power to influence change,
to react to changes and to deal with them. This situation
will harm not only the teachers but also the students,
and in the end, the entire system.
The teachers, as a population, are currently victims
of symbolic (and not all that symbolic) violence that
places a question mark against their professional abilities in general and against their legitimacy to participate in this field. But there is still the possibility for
teachers to become powerful players: if professionalism
is the name of the game, they must be professionals; if
the currency is knowledge, then they must possess
knowledge.
Beyond the immediate understanding of student
evaluation, it is important that teachers be “evaluation
literate”, that they be able to distinguish between the
various types of evaluation and understand their significance. The more teachers and educators understand
the meaning of the different types of evaluation, and
the more familiar they become with methods and issues
of measurement and evaluation, the better they will
know how to pose questions, set targets and test them
(or at least understand how the processes are tested by
others). Thus, they will have better control of and influence over not only their own work and evaluation processes but also over evaluation processes that evaluate
them and their students. And they will have a better
understanding of the significance of the evaluation
processes and findings taking place in the system.
Without this knowledge, they will be in the hands of
external experts, some good and fair, others less so.
The significance of this situation is that personnel
in the education system—in every position and at every
level—must know what this is all about. They must
understand what evaluation is, where its power lies and
what it can contribute and enhance, what its weaknesses are and where it is dangerous, but mainly, which
type of evaluation they want, and why. In addition to
being good evaluation professionals, they must also
possess the professional knowledge that will be imprinted on the education system as part of the professional repertory its personnel bring with them. The more professional the attitude of the entire system, including that of the teachers in the classroom, the greater the likelihood that evaluation will be a contributory, enhancing and progressive tool. But the reverse
is also true, and that is a scenario we can ill afford.
The teacher training colleges cannot afford to be in
the situation in which they find themselves today. The
implications are far too grave—for teachers, students,
and the entire education system. If teachers are unable to create and acquire cultural capital—in the form of knowledge that can be carried into the political arena and that can ward off symbolic violence—they
will become pawns in the hands of other players instead
of being significant players in a field with extremely
powerful political rules.
References
American Evaluation Association (2003). Scientifically based evaluation methods: American Evaluation Association response to U.S. Department of Education notice of proposed priority, Federal Register RIN 1890-ZA00, November 2003. Retrieved April 19, 2004 from http://eval.org/doestatement.htm
Bourdieu, P. (1986). The forms of capital. In J. Richardson (Ed.), Handbook of theory and research for the sociology of education (pp. 241–258). Westport, CT: Greenwood.
Bourdieu, P. (1999). On television. New York: New
Press.
Bourdieu, P. (2005). Sociology in question. Tel Aviv:
Rasling. (Hebrew)
Chianca, T. (2004). The global evaluation community, the Brazilian Evaluation Network and the WMU Evaluation Center. Paper presented at the Evaluation Café, The Evaluation Center, Western Michigan University.
Committee for Integration of Internal Evaluation in
Schools (2004). Framework of the National Task
Force for the Advancement of Education in Israel.
Jerusalem: Ministry of Education, Culture and
Sport. (Hebrew)
Committee for Measurement and Evaluation in Education (2005). A framework proposal for study
programs and professional Development—Measurement and evaluation in education. Jerusalem:
Committee for Applied Research in Education, Israeli National Academy of Sciences, Ministry of
Education, Culture and Sport, Yad Hanadiv Foundation. (Hebrew)
Cousins, J.B., & Aubry, T. (2006). Roles for government in evaluation quality assurance: Discussion
paper. Ottawa: Centre of Excellence for Evaluation, Secretariat, Treasury Board of Canada.
Cuban, L. (2007). Hugging the middle: Teaching in an
era of testing and accountability. Education Policy
Analysis Archives 15(1). Retrieved January, 29,
2008 from http://epaa.asu.edu/epaa/v15n1/
Datta, L.-E. (2003). The evaluation profession and the
government. In T. Kellaghan & D. L. Stufflebeam
(Eds), International handbook of educational
evaluation (pp. 345–360). Boston: Kluwer.
Doran, H. C. (2003). Evaluating the consequential aspect of validity on the Arizona Instrument to
Measure Standards. Paper presented at the Annual
Meeting of the American Educational Research
Association, Chicago, April 21–25, 2003.
Dumais, S. A. (2002). Cultural capital, gender and
school success: The role of habitus. Sociology of
Education, 75(1):44–68.
Geertz, C. (1973). The interpretation of culture. NY:
Basic Books.
Greenwald, A. G., Nosek, B. A., & Sriram, N. (2006). Consequential validity of the Implicit Association Test: Comment on Blanton and Jaccard. American Psychologist, 61(1), 27–41.
Guba, E. G. (1978). Toward a methodology of naturalistic inquiry in educational evaluation. Los Angeles: University of California at Los Angeles.
Hartaf, H., Ganor, M., Rom, A., & Shilton, H. (2007). Guidelines for extension course planning. Jerusalem: National Authority for Measurement and Evaluation (NAME), Ministry of Education and Culture. Retrieved January 24, 2009 from http://cms.education.gov.il/EducationCMS/Units/Rama/HachsharatCoachAdam/madrichHishtalmuyot.htm (Hebrew)
Howe, K. R. (2005). The question of educational
science: Experimentism vs. experimentalism.
Educational Theory, 55(3), 307–321.
IAPE. (2002). Summary of member survey. Israeli Association of Program Evaluators. (Hebrew)
Jennings, J., & Rentner, D. (2006). Ten big effects of
the No Child Left Behind Act on public schools.
Bloomington, IN: Phi Delta Kappan. Retrieved
February 5, 2007 from http://www.pdkintl.org/
kappan/k_v88/k0610jen.htm
Jones, L. V., & Olkin, I. (Eds.). (2004). The Nation's
Report Card: Evolution and perspectives.
Bloomington, IN: Phi Delta Kappa Educational
Foundation.
Kfir, D., & Golan-Kook, P. (2000). Integration of a
permanent evaluation system into the Ministry of
Education SHAHAR programs. Jerusalem: Hebrew University of Jerusalem. (Hebrew)
Koretz, D., & Barton, K. (2004). Assessing students
with disabilities: Issues and evidence. Educational
Assessment, 9(1-2), 29–60.
Levin-Rozalis, M. (2003). The differences between evaluation and research. The Canadian Journal of Program Evaluation, 18(2), 1–31.
Levin-Rozalis, M., & Lapidot, O. (2008). Evaluation
training of teachers in Israel. Research report. Tel
Aviv: Mofet Institute Intercollegiate Research
Authority. (Hebrew)
Levin-Rozalis, M., Lapidot, O., & Dover, M. (2006). Evaluation as a tool for teachers and/or the system and its possible effects on the definition, training and functioning of the worthy teacher. Research report, Part 1. Tel Aviv: Mofet Institute. (Hebrew)
Levin-Rozalis, M., & Rosenstein, B. (2003). A
mentoring approach to the one-year evaluation
course. The American Journal of Evaluation,
24(2), 245–259.
Levin-Rozalis, M., Rosenstein, B., & Cousins, J. B. (2009). Precarious balance: Educational evaluation capacity building in the era of globalization. In K. Ryan & J. B. Cousins (Eds.), Sage international handbook on educational evaluation. Thousand Oaks, CA: Sage.
Levin-Rozalis, M., & Shochot-Reich, E. (2008). Professional identity of evaluators in Israel. Canadian Journal of Program Evaluation, 23(1), 141–177.
McDonnell, L. M. (2005). No Child Left Behind and
the federal role in education: Evolution or revolution? Peabody Journal of Education, 80(2), 19-38.
Messick, S. J. (1989). Validity. In R. Linn (Ed.),
Educational measurement (3rd ed., pp. 13–103).
Washington, DC: American Council on
Education.
Nevo, D. (1989). Beneficial evaluation—Evaluation of
educational and social projects. Givatayyim: Pil’i
Press. (Hebrew)
Nevo, D. (2001). School evaluation—A dialogue for
school improvement. Even-Yehuda: Rekhes Press.
(Hebrew)
Patel, M., & Russon, C. (1998). Appropriateness of the
Program Evaluation Standards for use in Africa.
Paper presented to the African Evaluation
Association.
Patton, M. Q. (1997). Utilization-focused evaluation
(3rd ed.). Thousand Oaks, CA: Sage Publications.
Patton, M. Q. (2008). Utilization-focused evaluation
(4th ed.). Thousand Oaks, CA: Sage Publications.
Reay, D. (1999). Linguistic capital and home–school
relationships: Mothers’ interactions with their
children’s primary school teachers. Acta
Sociologica, 42(2), 159–168.
Renzulli, J. S. (1975). A guidebook for evaluation of
programs for the gifted and talented. Ventura,
CA: Office of the Ventura County Superintendent
of Schools.
Russon, C., & Patel, M. (2002). The African evaluation
guidelines: 2002. A checklist to assist in planning
evaluations, negotiating clear contracts, reviewing
progress and ensuring adequate completion of an
evaluation. Evaluation and Program Planning,
25(4), 481–492.
Sarason, S. (1998). Political leadership and
educational failure. San Francisco: Jossey Bass.
Schwartz, R. (1998). The politics of evaluation reconsidered. Evaluation, 4(3), 294–309.
Shochot-Reich, E. (2006). An attempt to define the
evaluation field in Israel as a profession and
characterize evaluators’ professional identity. BA
thesis, Ben-Gurion University, Beer-Sheva. (Hebrew)
Stake, R. E. (2007). NAEP, report cards and education: A review essay. Education Review, 10(1). Retrieved February 8, 2007 from http://edrev.asu.edu/essays/v10n1index.html
Weiss, C. H. (1991). Evaluation research in the political context: Sixteen years and four administrations later. In M. W. McLaughlin & D. C. Phillips (Eds.), Evaluation and education: At a quarter century (pp. 211–231). Chicago: University of Chicago Press.
Weiss, C. H. (1999). The interface between evaluation
and public policy. Evaluation, 5(4), 468–486.
Wholey, J. S. (1983). Evaluation and effective public
management. Boston: Little, Brown.
Wholey, J. S. (1994). Assessing the feasibility and
likely usefulness of evaluation. In J. S. Wholey, H.
P. Hatry, & K. E. Newcomer (Eds.), Handbook of
Practical Program Evaluation (pp. 15–39). San
Francisco: Jossey Bass.
Authors
Miri Levin-Rozalis is a faculty member and head
of the Graduate and Postgraduate Program in Evaluation, Department of Education, at Ben-Gurion University of the Negev in Israel. She is a practicing
evaluator and the co-founder and former president
of the Israeli Association for Program Evaluation.
Her current research interest is the sociology of
evaluation in Israel and the world.
Orit Lapidot is a teacher, program evaluator, and
member of the Research-colleagues' network at the
Mofet Institute in Tel-Aviv. Her research interests
are in the sociology of education.