User Participation in Evaluation - Top-down and Bottom-up Perspectives

__________________________________________________________________________________________
Studies in Educational Policy and Educational Philosophy
E-tidskrift 2003:1
__________________________________________________________________________________________
Associate Professor, Hanne Kathrine Krogstrup
Department of Social Studies and Organisation
Aalborg University, Denmark
Abstract
This article points out different aspects of user participation. It differentiates between top-down and bottom-up oriented user participation. The top-down oriented approach to user participation is described with reference to a concept of measuring user satisfaction, while the bottom-up approach is connected to deliberative democratic evaluation, democratic evaluation, and empowerment evaluation.
User Participation in Evaluation - Top-down and Bottom-up Perspectives
User Participation and Evaluation
Endeavours of the public sector to manage and evaluate social provision have placed user
participation at the head of the agenda. In addition to user participation being instigated by
legislative action in recent years in Denmark, the trend is towards user participation assuming
a position as an institutionalised recipe for organisations, as indicated by Kjell Arne Røvik, a
Norwegian organisation theorist. By this he means […] “a popular recipe which has come to
serve as a model for many organisations” (Røvik 1998:13, own translation).
User participation in evaluation reflects a realisation that the exclusive participation of politicians, management, and street-level bureaucrats in evaluations is insufficient, and that users very much have something to contribute.
The notion of user participation is not unequivocal. The inclusion of users may cover a wide
field of activities, which ranges from filling out questionnaires as part of a survey to
determining the criteria which will form the basis of an evaluation. It is however not the
ambition of this paper to meticulously define the notion of user participation in evaluation.
Rather, it is the intention to introduce models/designs, which are in practice regarded as user
participation, and to throw these into relief by reference to theoretical concepts. The notion of
user participation is hence employed pragmatically with varying rather than unambiguously
conceptualised characteristics.
Top-down Versus Bottom-up
With the purpose of accentuating various aspects of user participation as regards evaluation,
and with inspiration from the field of implementation research, the following will distinguish
between top-down and bottom-up approaches to user participation in evaluation.
According to some critics, such differentiation is artificial and not particularly fruitful in practice. The dichotomy nevertheless provides a certain overview, which helps concentrate attention on the fact that different perspectives on evaluation are not neutral. On the contrary, different perspectives on evaluation embody differences in values, they generate different types of knowledge and operate in different ways, and they take different views of the policy they evaluate. The importance assigned to statements of aims in the evaluation design is one essential difference between a top-down and a bottom-up evaluation (Huldgård 1998:75), and in this paper the user's role in the design is particularly noteworthy. In practice, elements of the two approaches converge and fuse; from a purely theoretical point of view, however, they reflect different social positions, and they are therefore to some extent value-loaded.
The top-down oriented approach is primarily inspired by a target-driven mode of thought and, as regards evaluations, by goal-based or classical evaluation, but also by management theories and the private sector. Top-down oriented approaches are primarily concerned with analyses of the degree to which politically devised policy statements are implemented (Bogason and Sørensen 1998) and rely on the democratic system of government living up to its ideal. Accordingly, politics precedes legislation whilst the civil services, in a loyal and neutral manner, carry decisions into effect (Winter 1998). Hence it is moreover possible to put forward relatively unambiguous evaluation criteria, which are based upon political policy statements. The generated knowledge primarily seeks to disclose whether political goals have been accomplished; control is thus the objective of the evaluation, with the intention of making adjustments so that politically passed resolutions are in conformity with actual practice.
Taking as its point of departure the administrative organisation's ability to function, the bottom-up oriented approach, by contrast, scarcely concerns itself with democratically laid down policy statements. Speaking of user participation in a bottom-up perspective, the focus typically falls on users' experiences of administrative procedures.
Bottom-up oriented approaches to evaluation all agree that the point of implementation
comprises an independent, politically interpretative stage (Lipsky 1980). On that basis, the question of how democracy relates to the stage of implementation is of immediate interest.
Furthermore, there is agreement that knowledge emerges by way of dialogue-oriented methods, especially when it comes to reflections on values, among these the values that users hold. Experience with users' assessments of public services based on open questions (i.e. where the criteria to be assessed have not been defined beforehand) has shown that 80% of the responses (positive and negative) deal with the encounter between users and street-level bureaucrats (Krogstrup and Stenbak 1994).
Top-down Oriented User Participation
Top-down oriented user participation is closely related to the classical evaluation tradition,
but in recent years, taking inspiration from the use of management tools in the private sector,
this approach has experienced a renaissance. The reasoning of the top-down oriented
approach is that experiences from the private sector can, with advantage, be transferred to the public sector. Experience has shown that the most successful businesses pay considerable attention to customer service; they listen to customers and adapt their products accordingly. Applying this mode of thought to the public sector gained headway in England during the 1980s in particular, and it has to a large extent gained a footing in Denmark during the 1990s (Nørgaard Madsen 1992; Krogstrup 1997). In 1983 Griffiths, managing director of the largest chain of supermarkets in England, was asked to analyse the National Health Service (British public health services for psychiatric patients and local government services for the mentally disordered). He reached the conclusion that leaders paid less attention to users than was the case in the private sector, and he argued that the organisation, leaders in particular, should make an effort to obtain knowledge of patients' views and assessments of public services (Wistow and Barnes 1993:281).
It was the intention to demonstrate and secure efficiency in the public sector by use of
methods and techniques taken from the private sector. The two fundamental objectives of this approach are evaluating and measuring effectiveness and responding readily to the wishes and demands of the users (Dahler-Larsen 1999). Top-down oriented user participation
is theoretically based on management theories, especially Total Quality Management (TQM)
and New Public Management (NPM), combined with thoughts and ideas from the classical
evaluation tradition. A number of methods are suggested regarding how leaders, administration, and employees may accomplish the organisation’s goals. These methods are very similar
to methods developed by the private sector in order to promote feedback about customers’
assessments of the service they receive. The methods include, among others, assessments of
customer satisfaction, complaints procedures, market analyses, formulation of quality
standards, customer panels assessing services, opinion surveys, and service declarations. The
vocabulary differs from the vocabulary we are familiar with in evaluation theory, yet it does
comprise fundamental elements of evaluation.
The features of evaluation found in management theories are detectable in the sense that
defined objectives are to determine the guidelines for an organisation’s way of managing
tasks, and supervision of quality standards is to reveal whether the objectives have been implemented. The following principle, put across by the Danish Ministry of Finance, also
reflects this mode of thought: “Rather one measurement than 1000 opinions” (The Ministry of
Finance 1995, own translation).
With the Welfare State undergoing a crisis of efficiency, it is claimed that the public sector has problems managing and keeping down expenses, that it is inefficient, and that it is therefore necessary to check and make certain that taxpayers are indeed provided with the services they have paid for and have been promised.
On the level of local politics it is contended that political decisions must be made evident to the management and administration to ensure a corresponding implementation, and that citizens' rights and duties must be made evident and clear so that users can assess whether services are consistent with these decisions.
Arguments in favour of assessing whether public services do in fact carry out the intentions are not in short supply, and it is certainly not contested here that the problems in question do indeed exist. By and large, such assessments prompt organisations and staff to focus attention on their own performance, and hence they may prompt reflections which, from the point of view of the users, may in fact result in improvements of the services (Krogstrup 2000).
As mentioned earlier, user participation in evaluations that take their point of departure in decisions and intentions at the top of the organisation is, among other things, based on assessments of customer satisfaction. And there is a close connection between assessments of customer satisfaction and the notions of quality, costs, and management. When measuring customer satisfaction, and hence quality, a typical course of action can be described as follows: First of all, financial margins and main goals are established on a political level. Next, the management of the particular organisation defines criteria and standards for the accomplishment of goals. Criteria comprise operational, specific quality goals, whereas standards reflect the satisfactory level of quality. The criteria form the basis of a questionnaire whereby the extent to which users find the services satisfactory is assessed by way of quantitative methods. On the basis of this quality control, the organisation's quality standards are weighed against the users' quality assessments to see if they are equivalent. The organisation can then make attempts to remedy any discrepancies (The National Association of Local Authorities in Denmark, The Danish National Board of Health, Leth, Nørgaard Madsen in Krogstrup 1997:68).
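
To make the course of action concrete, here is a minimal sketch in Python of the final quality-control step. The criteria, standards, scores, and threshold logic are hypothetical illustrations; the procedure above prescribes no particular data format.

    # Illustrative sketch only: the criteria, standards, and scores below are
    # hypothetical, and the article prescribes no particular data format.
    from statistics import mean

    # Top-down defined criteria, each with a standard: the satisfaction level
    # (on a 1-5 scale here) that management regards as satisfactory.
    standards = {
        "waiting_time": 4.0,
        "caseworker_availability": 3.5,
        "clarity_of_information": 4.0,
    }

    # Users' questionnaire responses per criterion (one score per respondent).
    responses = {
        "waiting_time": [3, 4, 2, 3, 4],
        "caseworker_availability": [4, 4, 5, 3, 4],
        "clarity_of_information": [2, 3, 3, 2, 4],
    }

    # Weigh the users' assessments against the organisation's standards and
    # flag the discrepancies that the organisation would then try to remedy.
    for criterion, standard in standards.items():
        observed = mean(responses[criterion])
        status = "meets standard" if observed >= standard else "DISCREPANCY"
        print(f"{criterion}: observed {observed:.2f} vs standard {standard} -> {status}")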
Assessments of user satisfaction, as here described in general terms, have a strong point in the
sense that they involve the users in an attempt to disclose various facets of a given service – in
other words, the users are consulted, though only within the given framework of the top-down
defined criteria. It is arguably beneficial to assess the extent to which a given service matches
political and management goals; and what is equally important is that such assessments may
reveal whether users in reality receive the service that politicians wish/promise to provide. In
addition, it is a reasonably manageable task to carry out the evaluation, it brings forth the kind
of data that most politicians require, and it makes different assessments relatively comparable.
Top-down oriented user participation can be regarded as a variant of the concept of "performance measurement" as described in English-language literature. Characteristic of the prototypical ideal is that the top of the organisation has defined the criteria being assessed, and the users are cast as respondents who serve the purpose of clarifying whether the defined criteria of desirable quality (performance indicators) have been met.
Critique of Top-down Oriented Evaluation
Evaluation through performance measurement and by use of performance indicators has been
intensely criticised internationally (Winston 1999; Perrin 1999; Bernstein 1999). The criticism
is in many respects similar to that of goal-based evaluation. One essential question in this
discussion is whether the criticism primarily applies to features of the concept itself, or
features of the context in which it is employed. In the Danish debate opposing camps of pros
and cons appear, resembling the debate in international evaluation literature where a paradigmatic dispute seems to be taking place. In Denmark as well as internationally, experience
shows that most street-level bureaucrats do not believe that indicators, which have been
established at the top of the organisation, reflect reality.
From an epistemological point of view, the criticism is that the users are not really included; rather, it is a matter of pseudo-participation when users are only allowed to answer the
questions that those in charge of establishing criteria find relevant. Users do not have the
opportunity to express how they experience the services. The argument is that a focus on
social systems’ demands in preference to users’ demands brings about a democratic shortfall
(Krogstrup 2000).
It is generally agreed that it is possible to counterbalance this criticism by including street-level bureaucrats and users in the process of deciding on the criteria (Perrin 1999; Bernstein 1999), and this does in fact also happen in practice. Indeed, there are cases where all (or some) of the criteria have been generated on the basis of a dialogue with users and/or street-level
bureaucrats, and there are cases where the method is accompanied by a more qualitative involvement of users. In this connection, the criticism put forward is similar to the criticism levelled against the stakeholder model: that not all stakeholders are equally powerful, and that the stakeholders at the bottom of the management chain are bound to become the losers of the game. How the criteria are established will be a contextual and empirical question, and it will
to some extent be based on an interpretation of how democracy works as well as pragmatic
external and internal organisational considerations. Also, the number of variant forms of the
type of evaluations outlined here can be interpreted as an attempt to compensate for an
internal criticism, which can be summarised as follows:
There is a general dissatisfaction with simple, single key performance indicators, but very strong
organisational pressure to develop them. When reflecting on their own area of work, many of
those interviewed felt that attempts to sum up what they do in those simple terms were difficult
if not impossible […] 'What gets measured gets done' is really true – and many call it goal
displacement, but it is widely acknowledged as a real problem. (Perrin 1999:103)
In general, top-down oriented user participation is criticised for having a naive understanding
of the way democracy operates. This argument builds upon the fact that political goals are
typically broad and vaguely formulated, and as a result public servants or consultants, who
cannot be held politically responsible, implicitly come to formulate policy because they establish criteria and standards.
Bottom-up Oriented User Participation
Given that bottom-up oriented approaches have a shorter history, and that they have not
become institutionalised as a technique to the same extent as top-down oriented user participation, it is difficult to present these in a pure form. Likewise, the criticism is not quite as
elaborate as e.g. the criticism of performance measurement and the use of performance
indicators.
While top-down user participation is based on arguments about efficiency and management,
bottom-up oriented approaches are inspired by a lack of confidence that the democratic
system of government fulfils its ideal and automatically instigates democratic processes.
Ontologically, it is postulated that postmodern society generates a democratic shortfall, and that evaluation may be a factor that reinforces this tendency (Wistow and Barnes 1993:283). To exemplify, at the Canadian Evaluation Conference 2000, Marie-Andrée Bertrand raised the quite fundamental question of whether we indeed have the right to evaluate and intervene in the common life. With reference to Bourdieu, she pointed out that evaluation contains elements of violence, as evaluations inherently comprise an assessment from a certain perspective, and it is the evaluator who possesses the power (Marie-Andrée Bertrand, Université de Montréal: L'évaluation comme productrice de sens dans la construction des interventions publiques: un rôle de plus en plus fragile).
Xi Chen, University of Illinois, presents another argument. He tells of a linguistic directive issued in China, which stated that Mandarin was to be the principal language in primary schools in Canton. The two languages differed just as much as English and Spanish, he claimed. Consequently, a democratic shortfall emerged, and discrimination against children who could not speak Mandarin took place. Hence it was important to assess the consequences and the scope of the directive from a democratic perspective (the Canadian Evaluation Conference, Montreal 2000).
Some bottom-up approaches primarily seek legitimacy in an argument of knowledge, while
others mainly ascribe an argument of value to the development and employment of user
participation in evaluation.
The gist of the argument of knowledge is that the users do not necessarily hold the ultimate or only accurate knowledge about a given social service, and they do not by definition have the best possibility for assessing the service that they receive. However, they do have a unique knowledge of the service, since they alone have experienced it at first hand, and thus they may contribute significantly to an understanding and development of "social" services. This approach aims at user participation that accommodates and incorporates users' rationality when assessing the public sector's services (Krogstrup and Tjalve 1999).
The argument of value draws attention to the fact that users, and in particular those who are
most deprived, do not enjoy the same democratic rights as do other citizens, and it is therefore
important that they are endowed with an exceptional position when assessing the way in
which the social sector operates. The argument of value is found in democratic evaluation
(Wistow and Barnes 1993) and in empowerment evaluation (Fetterman 1996; Mithaug 1996).
Essential to bottom-up oriented user participation is the opportunity for users to communicate their understanding of problems and solutions on the basis of their own rationality. Consequently, the applied methods must be adjusted to the users' capabilities.
It is important to note that only very few of the evaluation methods which are characterised as
bottom-up approaches include the users as the only stakeholders. Rather, the users are
included as one group of stakeholders among others, and the accumulation of user knowledge forms part of the total accumulation of knowledge generated in the course of the evaluation. Empowerment evaluation, however, which seeks to include users in pursuit of an emancipatory aim, is an exception. The reason why user participation is relatively prominent in the discussion and is able to cause a stir is, firstly, that it is not regarded as entirely legitimate within certain areas. Secondly, the role of the users in evaluation can be problematic, depending on how the purpose of user participation is interpreted. A third possible
explanation is that the inclusion of underprivileged users is particularly challenging as regards
traditional methods, and it necessitates pedagogical adjustments in view of e.g. the users’
communicative and intellectual capabilities.
The following presents four models (deliberative democratic evaluation, democratic evaluation, empowerment evaluation, and the UPQA method), which arguably reflect a bottom-up approach.
Deliberative Democratic Evaluation
In terms of value, deliberative democratic evaluation takes the position that, as an influential
institution, evaluation should support the realisation of a democratic society. It appears that
deliberative democratic evaluation primarily embraces a purpose of learning with the
intention of attaining a democratic society, also on the level of institutions, and it attaches
evaluation to a broader socio-political and moral structure (Greene 2000:14) in order to ensure
that policy debates include all relevant stakeholders, among these the users.
In reference to this, three requirements for the evaluation design are defined: Deliberation,
dialogue, and inclusion. Deliberation is defined as reflexive reasoning over relevant themes,
problems, and questions, and the aim is to identify preferences and values. Contrary to e.g.
goal-based evaluation, these questions are brought up for discussion, and the evaluation is
initially goal-free. The approach is dialogic in the sense that stakeholders and evaluators
engage in a dialogue through the entire process of evaluation with the intention of portraying
the stakeholders’ ideas and viewpoints as comprehensively as possible. In relation to a
discussion about user participation in evaluation, the requirement for inclusion becomes
interesting. The requirement for inclusion emphasises the necessity of including all relevant
stakeholders in the evaluation design
[…] so that relevant interests are represented and so that there is some balance of power among
these interests, which often means representing the interests of those who might be excluded from
the discussion, because their interests are most likely to be overlooked. (House and Howe 2000)
Deliberative evaluation contains qualities considerably similar to those found in the stakeholder model (Guba and Lincoln 1989). However, it differs primarily due to the explicit demand for inclusion. One question which House and Howe find important to ask in an evaluation is whether any stakeholders have been excluded. The authors themselves reply that sometimes an important group has been excluded, and that it is most often not a powerful, influential group but rather a poor, powerless minority group (House and Howe 1999:6).
House and Howe do not contribute an actual evaluation model but rather a framework,
which establishes as a criterion the involvement of users along with the other stakeholders in
the evaluation. They consider deliberative democratic evaluation an ideal worth pursuing, and
although this is difficult, it “does not mean that it cannot be a guide” (House and Howe
2000:9).
Democratic Evaluation
Everitt and Hardiker (1996) present a more far-reaching approach to democratic evaluation.
The criteria of evaluation emphasise that practice is compatible with needs negotiated by way
of democratic and fair processes, which enable everyone “to flourish and enjoy well-being”
(Everitt and Hardiker 1996:176). Against this background, standards that specify when
this is actually the case are laid down.
Democratic evaluation has set itself the task of enabling the Welfare State's institutions to evaluate themselves and to develop practice in the direction of the 'good', with the intention of ensuring that inequality, power structures, and practice are transformed to make allowances for everyone to thrive as active citizens. The aim of the evaluation is to effect social change in the direction of the 'good'. It is on the basis of such a measure of value that the evaluation is to make assessments:
Practice is judged to be ‘good’ if it meets these criteria. Knowing that power is always present
cautions against thinking in terms of the ‘good’, preferring to think of the state of democracy,
fairness and equality as becoming, judging practice for its working in the direction of the ‘good’,
i.e. in working towards the implementation of democratic, fair and equal processes to bring about
equality. Practice is ‘poor’ if it makes no attempt to meet the criteria of democracy, fairness and
equality. It is ‘corrupt’ if it is anti-democratic, autocratic, unfair and if it treats those that it is there
to serve as though they are ‘other’, not equal to ‘us’. (Everitt and Hardiker 1996:176)
While deliberative democratic evaluation puts forward “democratic requirements” for the evaluation design, Everitt and Hardiker emphasise that the purpose of evaluation is to assess the
democratic dimensions of the implementation.
Empowerment Evaluation
At the heart of empowerment evaluation is the assumption that every human being possesses
individual and unique capacities, interests, and needs, which deserve attention. It is believed
that every person deserves an equal opportunity to express his or her unique potential, and
that there ought not to be superior mechanisms which categorise and thereby define people's
needs and desires.
Advocates of empowerment evaluation highlight the importance of any person being able to
take responsibility for his or her life, and of having an opportunity to formulate the
premises on which it rests. To ensure fair chances, participation in such a process necessitates
that each individual’s capabilities are taken into account with regard to making choices and
clarifying his or her self-defined needs and interests.
Empowerment can be regarded as both a strategy and a value, and it can serve as a tool for
forming the basis of problem-solving in organisations where the interaction is characterised
by competing views and attitudes, and where the actors, among these the users, are offered the
possibility of direct political involvement (Nielsen, Kristoffersen, and Riemann 2000).
Empowerment evaluation goes through the following procedure: the participants establish goals on the basis of a discussion about the strengths and weaknesses of the services. The participants in the evaluation process then decide on and develop strategies for the implementation of the established goals, and decide on the documentation required to assess whether the established goals have been accomplished (Fetterman 1996 and Mithaug 1996; cf. furthermore Krogstrup 1999).
Empowerment evaluation may take place at an individual as well as an organisational level,
involving users as well as employees at the bottom of the organisational structure. Empowerment evaluation contains an emancipatory assessment of the extent to which the
evaluation, as a resource of empowerment and social change, serves those who are the least
powerful in the evaluated context.
The UPQA Method
Several evaluation methods have been developed in a Danish context, and user participation
appears to be a common denominator. One of these methods is called the UPQA method
(User Participation in Quality Assessment). Users are assigned the role of acting as triggers
for learning in the process of evaluation. The UPQA method emphasises users’ assessments
of policy as an important source in terms of questioning the way it is put into practice.
Challenging the institutional order and reflection are key words for organisational learning.
The following seeks to describe the principles found in the process that provokes reflection
and challenges the institutional order.
The reflection is anchored in a process where users of e.g. social provision are asked, in a group interview, to express and explain their satisfaction or dissatisfaction with the services they are offered. A range of experiences and subjective quality assessments will arise out of this group interview, and they can be systematised under thematic headings. Seen from a user perspective, quality most often manifests itself in the relations between users and street-level bureaucrats (Lipsky 1980), and hence users will often call attention to experiences and make assessments which deal with this relationship.
To exemplify with a purely hypothetical theme on the subject of quality: users might find it difficult to talk to caseworkers because they experience that caseworkers have a tendency to define both problems and solutions. Consequently, users feel deprived of independent initiative.
Street-level bureaucrats (here caseworkers) are confronted with the users’ thematic quality
assessments with the intention of asking the caseworkers, in a group interview, to explain
what they believe are the reasons for the users’ experiences of quality. Street-level bureaucrats may e.g. point out that they increasingly experience users as lacking initiative, and thus
they often consider it their responsibility to “take initiatives so that at least something
happens”. An inconsistency between users’ needs and the practice of social provision has
been discovered. Furthermore, street-level bureaucrats could refer to experiences where, time
and again, the management has refused to implement users’ initiatives. Hence, yet another inconsistency has been discovered.
The statements from the group interview with street-level bureaucrats are now presented,
partly to the users and partly to the management with the object of having them respond to
these statements. Let me exemplify by taking a closer look at the involvement of the management. The statements from users and street-level bureaucrats are systematised and presented
to the management in a group interview in order to search out their opinions of the causes.
Perhaps they identify politicians as central actors with regard to quality. Politicians are then
confronted with a summary of the statements from users, street-level bureaucrats, and
management with the intention of obtaining their opinions of the causes of statements coming
from these actors. Perhaps the politicians redirect the discussion towards the street-level
bureaucrats, who are subsequently asked to respond to the politicians’ statements, and so
forth. Thus the “hunt” for inconsistencies continues in keeping with the themes on quality that
were identified by the users (Krogstrup 1997).
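
As a purely schematic reading of these rounds, the Python sketch below passes the users' theme along a route of stakeholder groups and records each group's explanation. The groups, theme, and statements are hypothetical stand-ins for what are, in the UPQA method, group interviews rather than function calls.

    # Purely schematic: groups, theme, and statements are hypothetical stand-ins.
    def group_interview(group, theme, prior_statement):
        """Stand-in for a real group interview: the group explains what it
        believes are the causes behind the statement presented to it."""
        return f"{group}'s explanation, on the theme '{theme}', of: {prior_statement}"

    def upqa_rounds(theme, first_statement, route):
        """Confront each stakeholder group on the route with the most recent
        statement, in keeping with the theme identified by the users."""
        statements = [first_statement]
        for group in route:
            response = group_interview(group, theme, statements[-1])
            statements.append(response)  # the next group responds to this
        return statements

    # The route mirrors the example in the text: street-level bureaucrats ->
    # management -> politicians -> back to the street-level bureaucrats.
    theme = "caseworkers define both problems and solutions"
    route = ["street-level bureaucrats", "management", "politicians",
             "street-level bureaucrats"]
    for statement in upqa_rounds(theme, "users' quality assessment", route):
        print(statement)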
The approaches described above vary in terms of the degree of importance attached to user participation and in terms of objectives. However, they share the viewpoint that the public sector suffers from a democratic shortfall, and their methods are dialogue-oriented. With the exception of empowerment evaluation, the models seek to include not only users but also other stakeholders. Despite the apparent attention to users, this does not imply that they are assigned an exceptional position amongst stakeholders with regard to the importance attached to their statements.
Critique of Bottom-up Oriented Evaluation
The critique of the described approaches is multi-faceted, and its substance is closely connected to the perspective from which it is voiced. It is not possible here to recap every point of critique; only some of the main points will be mentioned. First of all, it has been criticised that the democratic system of government is neglected in bottom-up approaches to user participation. It is emphasised that the public sector has a responsibility to take account of social and collective considerations, and it is rather precarious if user participation in evaluation results in a situation where the implementation of policy is assessed and adjusted solely on the basis of users' and user groups' special interests.
This line of argumentation furthermore brings forward the problem that a dialogue gives
priority to strong users at the expense of underprivileged users, and consequently advocates of
a bottom-up approach undermine their own foundation by creating a democratic shortfall in
relation to the last-mentioned group. This argument raises a question about the extent to
which users, all things considered, are able to participate in a dialogue (Karlsson 1996:177-179; Levin 1996:10). Partly, it is questioned whether users are capable of participating in a
direct dialogue with members of the organisation, which may be problematic due to relations
of power and authority, the users’ limited resources, etc. And partly it is questionable
whether, as a starting point, all users have indeed realised what they want and need, whether
they have formed an understanding of their own situation, whether they are able to assess
social provision and point at potentialities, and so forth. In this connection it is furthermore
questionable whether users have sufficient knowledge of alternative ways of providing social
services, and whether users are aware of the consequences of different options (Krogstrup and
Tjalve 1999).
Also, the use of concepts such as deliberation, dialogue, inclusion, and empowerment is criticised for lacking clear conceptualisation, which allows for interpretations that may easily legitimate abuse of power in evaluation or, less abrasively phrased, may legitimate that the evaluation rests on the evaluator's premises. Among other things, it is highlighted that empowerment evaluation, when conducted by members of the organisation, can never go beyond the institutional order from which users are supposedly to be emancipated and which they should be able to perceive critically. As a final main point of critique, it is often noted that the evaluator possesses power by virtue of his or her partaking in the construction of the knowledge that is generated. This power is problematic; it is a source of contingency and of unscientific elements, since the evaluator not only evaluates data but also conducts and controls the process, and hence there is a risk of manipulation of underprivileged users in particular.
However, these criticisms and reservations do not apply exclusively to user participation in evaluation; they are generally levelled against formative evaluation as such.
Learning Perspectives
It is possible to distinguish between two different types of learning processes: single-loop learning and double-loop learning. Top-down oriented user participation primarily reflects single-loop learning, while the bottom-up orientation can be regarded as double-loop learning. In conclusion, let me elaborate on this. Single-loop learning is characterised by the organisation's ability to remain stable in a changing world. The tool is fault finding: searching for solutions by relating to both internal and external contexts, members of the organisation locate faults, which they then correct in conformity with the existing norms for proper action. Single-loop learning is satisfactory provided that the objective is to "correct faults" by changing the organisation's strategies and assessments within the existing rationality (Argyris and Schön 1978:18-20). Learning processes arising from top-down oriented user participation are examples of single-loop learning: politicians and management establish goals, and the degree to which goals are accomplished is assessed via quantitative surveys of user satisfaction. In view of this, defects as regards quality are identified (fault finding), and staff members subsequently work out operational goals for the elimination of defects. The learning perspective of this approach implies that the organisation keeps the course that has been determined through criteria and standards. Surveys of user satisfaction thus assume a character of control of the organisation's adherence to the established course. The course that the organisation has laid down is not challenged. Hence this type of survey tends to legitimate and strengthen the existing rationality rather than challenge it.

While single-loop learning reflects the ability to discover and correct faults in relation to a given rationality, double-loop learning reflects the organisation's ability to see a situation from different points of view and question the existing rationality (Morgan 1988:93). More specifically, the question is whether the organisation is able to see that, theoretically, there are numerous possible solutions to its problems, and whether this point of view means that the organisation's solutions are challenged. Double-loop learning is thus characterised by the creation of a new understanding of possibly conflicting needs, conditions, and consequences in such a way that this understanding is embedded in the organisation and is not just encountered by the individual actor through the correction of faults (Argyris and Schön 1978:18-29). In order for this to happen, the organisation must be able to face changes in the environment open-mindedly and be able to challenge the latent level in a quite fundamental way (Morgan 1988:96).
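
To make the distinction tangible, the following Python sketch builds on the thermostat image often attached to Argyris and Schön's distinction. The temperatures and the revision rule are arbitrary and purely illustrative, not part of the article's argument.

    # Schematic illustration only; the temperatures and the revision rule are
    # arbitrary, chosen purely to contrast the two loops.
    def single_loop(temperature, setpoint=20.0):
        """Fault finding: correct the deviation, never question the setpoint."""
        return setpoint - temperature  # heating (+) or cooling (-) required

    def double_loop(temperature, occupants_comfortable, setpoint=20.0):
        """First question the governing norm itself - is the setpoint right? -
        and only then correct against the (possibly revised) setpoint."""
        if not occupants_comfortable:
            setpoint = temperature + 1.0  # revise the norm (arbitrary rule)
        return setpoint, single_loop(temperature, setpoint)

    print(single_loop(18.0))                               # 2.0: keep the course
    print(double_loop(18.0, occupants_comfortable=False))  # (19.0, 1.0): new course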
Concluding Remarks
The choice of evaluation design and method, including top-down or bottom-up oriented user participation, must depend on the type of knowledge and information the organisation is looking for, as well as on the purpose that user participation can and must serve. Important objections in relation to specific initiatives of user participation often concern the fact that it is unclear which approach is being employed or could/should be employed, what type of knowledge is generated by way of the chosen approach, and what conclusions can be drawn on that basis.
The assessment of the quality of an evaluation is in practice often ontologically based. Accordingly, questions concerning the quality of an evaluation are in danger of being reduced to arguments of attitude towards user participation; that is, whether users should be included or not, and how to include them. In place of arguments of attitude, which may bring about endless discussions and no authoritative conclusions, it is possible to turn to the philosophy of science and seek the answers through the concepts of validity (does the evaluation in fact investigate the things it claims to investigate?), reliability (if the evaluation were carried out by another evaluator, would it reach the same result?), and finally a discussion of the extent to which it is possible to generalise the results of the evaluation. It is beyond the intentions of this paper to go into an elaborate discussion of these concepts, and it is perhaps redundant to point out that the concepts are defined in different ways depending on e.g. whether the generated data are qualitative or quantitative. Striving for validity and reliability, it is essential that the criteria of the assessment are explicit so that the reader has an opportunity to criticise, reflect on, and discuss the presented evaluation results as well as the recommendations that may follow from the evaluation.
It is often the case that evaluation data about user satisfaction is presented as definitive
knowledge without clarifying how criteria have been established, and without paying much
attention to the users’ perspective on the evaluated services.
Hanne Kathrine Krogstrup is Associate Professor at Aalborg University, Denmark. She has developed the UPQA model (User Participation in Quality Assessment), which is a dialogue- and learning-oriented evaluation model. She is co-editor of a Danish publication, "Tendencies in Evaluation", and her latest book, "Evaluation Models", describes different evaluation models: classical effect evaluation, realistic evaluation, performance measurement, responsive evaluation, stakeholder evaluation, user participation in evaluation, empowerment evaluation, etc.
References:
Argyris, Chris and Donald A. Schön (1978): Organizational Learning: A Theory of Action Perspective. Addison-Wesley Publishing Company.
Bernstein, David J. (1999): Comments on Perrin's "Effective Use and Misuse of Performance Measurement". In: American Journal of Evaluation, Vol. 20, No. 1, pp. 85-99.
Bogason, Peter and Eva Sørensen (1998): Samfundsforskning Bottom-up - Teori og metode. Gylling: Roskilde Universitetsforlag.
Dahler-Larsen, Peter (1999): Den rituelle refleksion - om evaluering i organisationer. Odense: Odense Universitetsforlag, 3rd printing.
Everitt, Angela and Pauline Hardiker (1996): Evaluating for Good Practice. Macmillan.
Finansministeriet (1995): Effektive institutioner - værktøjer til velfærd. Schultz.
Fetterman, David M. (1996): Empowerment Evaluation: An Introduction to Theory and Practice. In: Fetterman, David M., Shakeh J. Kaftarian and Abraham Wandersman (eds): Empowerment Evaluation: Knowledge and Tools for Self-Assessment and Accountability. Thousand Oaks: Sage.
Greene, Jennifer C. (2000): Challenges in Practicing Deliberative Democratic Evaluation. In: New Directions for Evaluation, Spring 2000. Jossey-Bass.
Guba, Egon G. and Yvonna S. Lincoln (1989): Fourth Generation Evaluation. Newbury Park, London and New Delhi: Sage.
House, Ernest R. and Kenneth R. Howe (1999): Values in Evaluation and Social Research. California: Sage.
House, Ernest R. and Kenneth R. Howe (2000): Deliberative Democratic Evaluation. In: New Directions for Evaluation, Spring 2000. Jossey-Bass.
Huldgård, Lars (1998): Bløde mål og evaluering i bottom-up. In: Bogason, Peter and Eva Sørensen (eds): Samfundsforskning Bottom-up - Teori og metode. Gylling: Roskilde Universitetsforlag.
Karlsson, Ove (1996): Att Utvärdera - mot vad? Stockholm: HLS Förlag.
Krogstrup, Hanne Kathrine and Else Stenbak (1994): Socialpsykiatri mellem system og bruger. Report 6, Projekt Socialpsykiatri 15-M. Glumsø: SUS.
Krogstrup, Hanne Kathrine (1997): Brugerinddragelse og organisatorisk læring i den sociale sektor. Århus: Systime.
Krogstrup, Hanne Kathrine (1999): Det handicappede samfund - om brugerinddragelse og medborgerskab. Århus: Systime.
Krogstrup, Hanne Kathrine and Jakob Tjalve (1999): Top-down og bottom-up orienteret brugerinddragelse. In: Krogstrup, Hanne Kathrine: Det handicappede samfund - om brugerinddragelse og medborgerskab. Gylling: Systime.
Krogstrup, Hanne Kathrine (2000): Utilsigtede konstitutive konsekvenser af at "styre" humanprocessing løsninger ved hjælp af standarder. Article in press. Institut for Sociale Forhold og Organisation, Aalborg Universitet.
Levin, Morten (1996): The Quest for Quality in Participatory Inquiry. Paper. Department of Organization and Work Science, The Norwegian University of Science and Technology.
Lipsky, Michael (1980): Street-Level Bureaucracy: Dilemmas of the Individual in Public Services. New York: Russell Sage Foundation.
Madsen, Ole Nørgaard (1992): Kvalitet som mål i offentlig virksomhed. Århus: Forlaget Centrum.
Mithaug, Dennis E. (1996): Fairness, Liberty and Empowerment Evaluation. In: Fetterman, David M., Shakeh J. Kaftarian and Abraham Wandersman (eds): Empowerment Evaluation: Knowledge and Tools for Self-Assessment and Accountability. Thousand Oaks: Sage.
Morgan, Gareth (1988): Organisasjonsbilder. Oslo: Universitetsforlaget.
Nielsen, Paw Holze, Ole Sloth Kristoffersen and Søren Riemann (2000): Evaluering og kvalitetsudvikling i folkeskolen - Udviklingen af et formativt demokratisk evalueringskoncept med kritisk udgangspunkt i praksis. Master's thesis, Aalborg University, cand.scient.adm. programme.
Perrin, Burt (1999): Performance Measurement: Does the Reality Match the Rhetoric? A Rejoinder to Bernstein and Winston. In: American Journal of Evaluation, Vol. 20, No. 1, pp. 101-111.
Rothstein, Bo (1994): Vad Bör Staten Göra? Stockholm: SNS Förlag.
Røvik, Kjell Arne (1998): Moderne organisasjoner - Trender i organisasjonstenkningen ved tusenårsskiftet. Bergen-Sandviken: Fagbokforlaget.
Winston, Jerome (1999): Performance Indicators - Promises Unmet: A Response to Perrin. In: American Journal of Evaluation, Vol. 20, No. 1, pp. 95-99.
Winter, Søren (1998): Implementering og effektivitet. Viborg: Systime.
Wistow, Gerald and Marian Barnes (1993): User Involvement in Community Care - Origins, Purpose and Application. In: Public Administration, Vol. 71, Autumn.
__________________________________________________________________________________________
© The text may be freely copied for non-commercial purposes provided that a full reference is given.
Krogstrup, Hanne Kathrine. 2003: User Participation in Evaluation - Top-down and Bottom-up Perspectives. In: Studies in Educational Policy
and Educational Philosophy: E-tidskrift, 2003:1. <http://www.upi.artisan.se>.
___________________________________________________________________________________________