THE IMPACT OF THE EU ACCESSION PROCESS ON THE ESTABLISHMENT OF EVALUATION CAPACITY IN BULGARIA AND ROMANIA
JULIAN KNOTT‡
ABSTRACT
Policy and programme evaluation is fast emerging as a norm of international governance and is
seen by many as an important tool in promoting a more accountable and results-oriented public
sector. As a consequence, many governments and international organisations have sought to export its benefits by making it a pre-condition for both bilateral and multilateral financial assistance.
This paper analyses the extent to which the European enlargement process has impacted on the
evaluation capacities of Bulgaria and Romania. It examines the institutional and organisational
changes that have affected the demand, supply and utilisation of evaluation in these two countries.
It argues that the accession process has played a significant part in introducing the vocabulary, systems, and structures of evaluation, mainly through the conditionalities associated with the pre-accession programmes and the European Structural Funds. However, as things stand, evaluation
remains primarily associated with EU programmes and has still not been adopted systematically
into national structures and practices.
Keywords: accession process; conditionalities; European Union; governance; policy
evaluation
1. INTRODUCTION
Following the collapse of communism in Central and Eastern Europe (CEE), the region has undergone a dramatic transformation, moving away from centrally planned
economies towards democratic market economies. This process was marked by the historic accession of eight CEE countries to the European Union (EU) on the first of May 2004.
Although Bulgaria and Romania did not form part of this first wave due to some outstanding compliance issues, they are currently on track to enter the EU on the first of
January 2007.1
Accession to the EU requires compliance with an extensive set of conditions commonly known as the Copenhagen Criteria2 and the community acquis,3 both of which
‡ Research assistant at the School of Public Policy, University College London. For questions or comments, please contact: j.knott@ucl.ac.uk
1 European Parliament Resolution on the Accession of Bulgaria and Romania, Strasbourg: 14 June 2006. http://www.europarl.europa.eu/sides/getDoc.do;jsessionid=6FD18228762B3F90C3044733CD1249D0.node1?pubRef=//EP//TEXT+TA+P6-TA-2006-0262+0+DOC+XML+V0//EN (accessed on 01/08/06).
2 The Copenhagen Criteria referred firstly to a set of political criteria, namely the establishment of stable institutions guaranteeing democracy, the rule of law, human rights, and respect for protection of minorities; secondly, a set of economic criteria, namely a functioning market economy and the capacity to cope with competition and market forces in the EU; and lastly to the capacity to take on the obligations of membership, including adherence to the objectives of political, economic and monetary union (COM 2006).
place huge pressures on Candidate Countries to pursue a wide variety of reforms in a
number of different policy areas. Given the enormity of the transition and the great demands that were being placed upon the candidates in CEE, the last enlargement became a
long and drawn out process. As a means of facilitating this process and in order to provide assistance in key areas of importance, the European Commission introduced a series
of programmes, namely the PHARE, SAPARD and ISPA programmes. Thus the process
of accession was characterised by a long process of legal transposition as well as a series of
political and economic reforms. As well as assisting the candidate countries with these
issues, the pre-accession programmes also acted as “learning by doing” exercises in preparation for the European Structural Funds,4 which become available on accession. The introduction of these programmes placed a series of new responsibilities and obligations on
the candidate countries in terms of management procedures in areas such as tendering,
contracting and payment. In addition, in line with the EU financial regulation, there was
also a requirement to monitor and conduct ex-ante, interim and ex-post evaluations of all
EU expenditure.5 Therefore, there have been strong influences and incentives for candidates to develop capacity in these areas. Although the concept of evaluation has been
largely embraced in the EU fifteen6 and enshrined in the practices of project cycle management,7 it was a relatively new concept for the ex-Soviet countries of Central and Eastern Europe, which had no mechanisms for the systematic evaluation of policies, programmes,
and projects outside the scope of EU funds. These countries, according to Hyatt, were
culturally more familiar with the concepts of policing and control than they were with the
softer “learning” or “accountability” driven perceptions of evaluation.8 Therefore, moving
from these conceptions towards the prevalent EU and international norms of evaluation
required a considerable shift in terms of institutions, organisations, systems, structures,
and expertise. As policy and programme evaluation is increasingly seen by many as an
important tool in promoting a more accountable and results-oriented public sector, many
governments and international organisations have sought to export its benefits by making
it a pre-condition for both bilateral and multilateral financial assistance. While there is no
“right” way of exporting the values of evaluation, there are certainly lessons that can be
drawn from the EU enlargement process concerning the promotion of evaluation as best
practice in international governance.
It is in this context that this paper analyses the European enlargement process in
terms of its impact on the evaluation capacity of two acceding countries; Bulgaria and
Romania. For the purpose of this study, capacity is measured by looking at the institutional and organisational changes that have taken place affecting the demand, supply, and
utilisation of evaluation. The paper starts from a theoretical perspective, looking at the
main mechanisms by which the EU impacts on domestic institutions and policy. It then
goes on to explore the issue of evaluation, its significance as an emerging European norm
and the key factors required in developing evaluation capacity. The paper then looks empirically at how the EU has impacted on evaluation capacity in these two countries. The
findings section is divided into three parts, looking firstly at the role of the pre-accession
programmes in introducing the practice of monitoring and evaluation, secondly, the influence of the Structural Funds programmes in consolidating this process and finally, the implications for evaluation outside the scope of EU programmes.
3 The Acquis Communautaire consists of the detailed laws and rules of the EU, which are based on the founding treaties of the European Union, mainly those of Rome, Maastricht, Amsterdam and Nice (COM 2006).
4 The term “Structural Funds” in this paper will refer to the new Structural Funds Regulation that will be in operation from 2007-2013. This provides a funding scheme targeted towards the three objectives of convergence; regional competitiveness and employment; and European territorial cooperation.
5 COM (2002), p. 12.
6 K. J. Lönnroth, “Challenges for Evaluation in an Enlarged Europe,” Plenary Feedback Session, Fifth European Conference on Evaluation of the Structural Funds (Budapest: 26-27 June 2003).
7 COM (2004).
8 J. Hyatt and H. Simons, “Cultural Codes: Who Holds the Key? The Concept and Conduct of Evaluation in Central and Eastern Europe,” Evaluation vol. 5, no. 1 (1999).
2. ASSESSING THE IMPACT OF THE EU ON DOMESTIC INSTITUTIONS AND POLICY
The process whereby domestic change occurs as a result of EU influence is often referred to as “Europeanization,” defined slightly more precisely by Hix and Goetz as “a
process of change in national institutional and policy practices that can be attributed to
European integration.”9 Schimmelfennig and Sedelmeier identify the predominant factors
involved in EU-driven change (see Table 1).10 They outline two explanatory models: the “external incentives” or conditionality model, and the “social learning” or socialization model. Based on work by March and Olsen, and consistent with the debate
between rational choice institutionalism and sociological institutionalism, these models
differentiate between a logic of consequences and a logic of appropriateness as drivers of
change.11 The logic of consequences makes the assumption that rational actors seek to
maximise their welfare through strategic actions. The logic of appropriateness, on the
other hand, assumes that actors will be motivated by internalized identities, values, and
norms. The following sections explore these different models drawing on a variety of literature.
Table 1: Alternative Mechanisms of Europeanization12

                         Logic of rule adoption
                         Logic of consequences      Logic of appropriateness
EU drivers of change     Conditionality             Socialization
2.1 Rule Adoption through Conditionality
One of the primary focuses of the literature on European enlargement has been on
the issue of conditionality, defined by Schimmelfennig and Sedelmeier as “a bargaining
strategy of reinforcement by reward, under which the EU provides external incentives for
a target government to comply with its conditions.”13 As well as its application in recent
EU enlargements, conditionality has also been an instrument commonly used by other international institutions such as the World Bank and the International Monetary Fund in
the context of development assistance programmes and the provision of loans. Following
a rationalist bargaining model, EU conditionality is based on a system of power asymmetry,14 which enables the Commission to demand compliance with the Copenhagen Criteria
and the community acquis in exchange for membership of EU.
Whereas conditionality has often been focused on economic issues, its use in the
context of the EU accession process has seen a controversial move to the political and in-
9 S. Hix and K. Goetz, “Introduction: European Integration and National Political Systems,” West European Politics vol. 23, no. 4 (2000): p. 27.
10 F. Schimmelfennig and U. Sedelmeier, “Introduction: Conceptualizing the Europeanization of Central and Eastern Europe,” in The Europeanization of Central and Eastern Europe, eds. F. Schimmelfennig and U. Sedelmeier (Cornell: Cornell University Press, 2005).
11 J. March and J. Olsen, Rediscovering Institutions: The Organizational Basis of Politics (New York: New York Free Press, 1989).
12 Adapted from Schimmelfennig and Sedelmeier (2005), p. 8.
13 F. Schimmelfennig and U. Sedelmeier, “Governance by Conditionality: EU Rule Transfer to the Candidate Countries of Central and Eastern Europe,” Journal of European Public Policy vol. 11, no. 4 (2004): p. 662.
14 Schimmelfennig and Sedelmeier state that bargaining power is a result of “the asymmetrical distribution of: (1) information; (2) the benefits of a specific agreement compared with those of alternative outcomes or ‘outside options’” (2005, p. 11).
stitutional domain.15 Given this increasing tendency, the Commission’s influence on candidate country governance during the recent and on-going accession processes has been
well beyond the sway that it has had in previous enlargements.16 Indeed, previous accessions consisted of much shorter processes, involving far less scrutiny over the institutional
and governance systems of prospective members. It is therefore generally accepted that,
as far as the recent and on-going enlargement processes are concerned, the EU has displayed its potential to influence the candidate countries through these mechanisms of
conditionality. Grabbe identifies five main mechanisms used by the EU to effect change
through conditionality and the accession process.17 These are categorised as: (1) gate
keeping; (2) benchmarking and monitoring; (3) the provision of legislative and institutional templates; (4) money: aid and technical assistance; and (5) advice and twinning. According to Grabbe, the gate keeping function is the most powerful conditionality tool as it
represents the EU’s ability to control the accession process in terms of when the negotiations are started, when the relevant chapters of the acquis are closed, and ultimately when
a country is accepted into the Union.18 The other four conditionality tools, however, capture the more intermediate and perhaps more operational means that the Commission
uses to exert influence. The above factors will be further examined later in this paper in
order to obtain an empirical understanding of the impact of these mechanisms on the establishment of evaluation capacity in Bulgaria and Romania.
The EU accession process has been characterised by two main forms of conditionality: democratic conditionality, embodied by the Copenhagen Criteria, and acquis conditionality. Measuring the fulfillment of these requires various levels of interpretation. As
the Copenhagen Criteria are quite ambiguous in nature, consisting of fairly broad concepts, any decision on their fulfillment is extremely subjective and strongly driven by the
political environment within the EU. Indeed, Checkel points out that the politicised nature of conditionality means that there is not always a strong correlation between conditionality and compliance.19 The acquis on the other hand is more detailed and its progress
is often easier to measure. However, the degree to which this is possible is largely dependent on the level of specification in the policy area concerned.20 Conditionality is not
uniform, and its strength varies considerably depending on the policy field and the degree
to which the acquis in that area is “thick” or “thin.”21 In some areas, the lack of institutional templates and specific guidance from the Commission has meant that candidates
have been unsure of how to implement the required changes. Vague guidance such as
“prepare strategy” without additional substantive guidelines on what specific areas
should be addressed has been prevalent.22 In addition, due to the ambiguous nature of
much of the formal conditionality, compounded by the ad hoc advice coming from the
Commission, it would appear that the conditionalities, although theoretically uniform,
have led to a more divergent set of outcomes.23
Building on this concept of unevenness, it is also important to recognise the existence
of both formal and informal conditionality. Whereas much conditionality is embodied
and formally presented through the Copenhagen Criteria and community acquis, during
15 J. Checkel, “Compliance and Conditionality,” ARENA working papers 00/18, prepared for delivery at the 2000 Annual Meeting of the American Political Science Association (Washington DC: 2000).
16 H. Grabbe, “A Partnership for Accession? The Implications of EU Conditionality for the Central and East European Applicants,” Robert Schuman Centre Working Paper 12/99, San Domenico di Fiesole (European University Institute, 1999).
17 Ibid., p. 1020.
18 Ibid.
19 Checkel (2000).
20 M. Brusis, “The Instrumental Use of European Union Conditionality: Regionalization in the Czech Republic and Slovakia,” East European Politics and Societies vol. 19, no. 2 (2005).
21 J. Hughes et al, “Conditionality and Compliance in the EU’s Eastward Enlargement: Regional Policy and the Reform of Sub-national Government,” Journal of Common Market Studies vol. 42, no. 3 (2004): p. 525.
22 Grabbe.
23 K. H. Goetz and H. Wollmann, “Governmentalizing Central Executives in Post-Communist Europe: A Four-Country Comparison,” Journal of European Public Policy vol. 8, no. 6 (2001).
the day-to-day operations of the accession process, there are a number of pressures that
are exerted by actors within the Commission that are aimed at influencing certain policy
outcomes, most notably in areas where the acquis is “thinnest.”24 The Commission is
therefore able to use this increased ambiguity and flexibility in order to influence its
counterparts towards the policy options that it favours.
Conditionality, however, cannot be described as the unique driver of change. Brusis
characterises conditionality more as affecting the opportunity structures faced by domestic
actors, and he argues that conditionality is often a facilitating force rather than a decisive
one.25 Checkel states “there is a need to broaden the conceptual toolkit when considering
the causal nexus between conditionality and national compliance.”26 Building on this, the
following section focuses on alternative or complementary explanations of rule adoption.
2.2 Rule Adoption through Socialisation
The section on conditionality above has focused mainly on the coercive mechanisms
used by the Commission to influence institutional and policy change in the candidate
countries. Although conditionality will be the main focus of this paper, it is also important to explore alternative models of EU-driven change, notably the process of socialisation.
Social constructivism makes an important contribution to the debate on EU-driven
change. Schimmelfennig and Sedelmeier sum up the socialisation argument very succinctly, proposing “a government adopts EU rules if it is persuaded of the appropriateness
of EU rules.”27 They identify the issues of legitimacy, identity, and resonance as being key
factors in this persuasion process. They argue that if these factors are undermined there
will be more reluctance to conform to the rules. For example, regarding legitimacy, they
argue that this decreases in cases where rules are not uniform and are not consistent for
old and new members of the EU.28 Risse characterises the constructivist interpretation in
terms of the pressures that make people try to “do the right thing” as opposed to necessarily behaving in a welfare maximising manner.29 In this respect, candidate countries are
compelled to conform to EU norms because of the pressure of wanting to be seen to be behaving appropriately, or in similar ways to existing members.
Checkel argues that when looking at compliance with EU norms, it is not possible to
focus solely on material incentives, as proposed by the rationalist position, but rather to
include factors such as social learning, socialisation and social norms.30 Indeed, Checkel makes the point that it is not necessary to explain rule adoption in the EU by either rationalist or constructivist approaches alone; rather, it is useful to consider how both interpretations can contribute to understanding this phenomenon.
DiMaggio and Powell argue that modern society is characterised by increasing similarity in the forms and practices of organisations, which they call “institutional isomorphism.”31 Of particular relevance to this study is their identification of the coercive
mechanism of isomorphism. This consists of both formal and informal pressures that are
exerted upon organisations. This mechanism incorporates the conditionality driven rule
adoption as described above, but also allows for a constructivist interpretation. They
point out that changes which are driven by coercive means, i.e. EU conditionality, can
24
Hughes et al, p. 525.
Brusis.
26
Checkel (2000), p. 1.
27
Schimmelfennig and Sedelmeier (2005), p. 18.
28
Ibid., p. 19.
29
T. Risse, “Lets Argue!: Communicative Action in World Politics,” International Organization, vol. 54,
no. 1 (2000): p. 4.
30
J. Checkel, “Why Comply? Social Learning and European Identity Change,” International Organisation vol. 55, no. 3 (2001).
31
P. DiMaggio and W. Powell, “The Iron Cage Revisited: Institutional Isomorphism and Collective
Rationality in Organisational Fields,” American Sociological Review vol. 48, no. 2 (1983).
25
sometimes be slightly “ceremonial” in nature. However, they go on to show that, even if
these changes are not initially deep-rooted, the legal constraints with which they are associated can considerably affect the behaviour and structure of the recipient organisations.
Using this approach, it can be seen that whereas conditionality is useful in explaining
institutional and policy changes, it is not always so helpful in explaining what determines
the success or failure of the implementation of these changes. In the case of EU enlargement, considerable focus has been placed on the task of transposing large amounts of legislation. However, the degree to which this legislation is implemented, and the degree of
success of this process, are factors of key significance. The socialisation argument can help
to explain successful policy adoption and implementation in terms of the acceptance of,
and identification with, norms and values in the case of EU policies, irrespective of
whether or not these were imposed through conditionality.
3. EVALUATION CAPACITY DEVELOPMENT
“The more that we [public administrations] know about how our programs are functioning, the effects they are having, and at what cost, the more likely we are to search out
ways of making them more efficient and more effective. To a substantial degree, this
knowledge is the public-sector manager’s surrogate for the profit-and-loss statement of the
business sector.”32
Policy and programme evaluation has become an increasingly popular tool in the
public sector, often forming part of the recent reform programmes and being seen as going
hand in hand with the move towards a more results-oriented public sector.33 Chelimsky
distinguishes between two main functions of evaluation: the learning function and the accountability function.34 Leeuw and Sonnichsen emphasise the first of these, highlighting
that evaluation acts as an important feedback mechanism used in organisational learning,
which provides decision-making information for the purpose of corrective actions either at
project, programme or policy level.35
Much of the literature on evaluation capacity development has been associated with
external assistance and funding. As many have noted, evaluation has often been introduced as a precondition for receiving financial assistance.36 Although evaluation is often
promoted as best practice governance, many overstretched governments in developing
and transition economies are sceptical that expending already scarce resources on such
activities represents a beneficial option. However, despite such reservations, there is a
growing consensus that the development of an “evaluation culture” is a significant means
of improving the performance of governments.37 In addition, regarding the issue of resources, international donors often provide part of the financial resources necessary for
conducting evaluation, which considerably increases incentives.
A critical aspect is the way in which evaluation is integrated into the organisational
and administrative processes. Indeed, as Darlien points out, unless the conducting of
evaluation becomes institutionalised, its occurrence, and certainly its use, tends to be random.38 Thus, in order for evaluation to be conducted systematically, it is necessary to
32 Havens (1992) in R. Pablo Guerrero, “Evaluation Capacity Development in Developing Countries: Applying the Lessons from Experience,” in Building Effective Evaluation Capacity, eds. R. Boyle and D. Lemaire (New York: Transaction, 1999), p. 178.
33 R. Boyle and D. Lemaire (eds.), Building Effective Evaluation Capacity (New York: Transaction, 1999).
34 E. Chelimsky, Programme Evaluation: Patterns and Directions (Washington DC: American Society for Public Administration, 1985).
35 F. Leeuw and R. Sonnichsen, “Introduction: Evaluations and Organizational Learning: International Perspectives,” in Can Governments Learn: Comparative Perspectives on Evaluation and Organisational Learning, eds. F. Leeuw, R. Rist and R. Sonnichsen (London: Transaction, 2000).
36 Pablo Guerrero.
37 K. Mackay, “Institutionalization of Monitoring and Evaluation Systems to Improve Public Sector Management,” ECD Working Paper Series, No. 15 (Washington DC: World Bank, 2006).
38 Darlien (1990) in Leeuw and Sonnichsen.
build sufficient capacity within the institutions and organisations of the public administration.
The World Bank Operations Evaluation Department39 characterises the development
of evaluation capacity in terms of the four pillars shown in Box 1 below.
Box 1: The Four Pillars of Evaluation Capacity Development
1) Institutional capacity: a move from less efficient to more efficient accountability rules and incentives;
2) Organisational capacity: the tailoring and adaptation of the organisational architecture of monitoring and evaluating government entities to the new and more efficient accountability rules and
incentives;
3) Information & communication technology (ICT) capacity: using informatics for better and timelier information on results;
4) Human capacity: through training in monitoring and evaluation, but targeted at the skills that are
suited to the particular institutional and organisational context, and will thus actually be used
and reinforced after they are imparted.
This characterisation is important because it emphasises that evaluation capacity development is about more than simply training. Indeed it splits up areas that are often grouped
together, such as institutions and organisations. The institutional situation of a country in
terms of its formal and informal rules, norms, and values is key in shaping its policies. In
terms of the establishment of evaluation capacity, without certain institutional changes,
endless organisational re-configurations may be useless. Therefore attention must be
given to both these levels, as opposed to seeking quick fixes through organisational
changes.40 Evaluation is about information, and consequently appropriate systems must
be put in place to systematically measure indicators upon which to gauge the success of
programmes and policies. Without this element, evaluation becomes a far less effective
tool. The final point regarding human capacity is clearly crucial. Evaluation is dependent
upon expertise, both in its management and conduct. Accordingly, as well as developing
sustainable systems for building in-house evaluation capacity, emphasis must also be
placed on stimulating capacity within the professional community.
As mentioned above, the development of evaluation capacity has often been associated with external demand, such as the conditions imposed by donor agencies. Boyle has
observed this link between the imposed regulations relating to the management of the
European Structural Funds and the establishment of evaluation capacity in Ireland.41 Indeed, Toulemonde has observed the influence of the European Structural Funds on the
creation of Europe-wide evaluation functions.42 However, undoubtedly one of the key
factors of successful evaluation capacity development is the extent of domestic demand for
evaluation information.43 In this respect, external demand for evaluation is not a substitute for strong internal demand from the national parliaments, public administrations, and
general public.
Toulemonde argues that demand cannot be taken as a given and must be created and
progressively developed.44 He identifies three principal methods of creating demand for
evaluation, which he characterises as “carrots,” “sticks,” and “sermons.” The “carrots,” or
the creation of demand by incentives, he argues, can take the form of budgetary or career
39 S. Schiavo-Campo, “Building Country Capacity for Monitoring and Evaluation in the Public Sector: Selected Lessons of International Experience,” ECD Working Paper Series No. 13 (Washington DC: World Bank, 2005).
40 Ibid.
41 R. Boyle, “Evaluation Capacity Development in the Republic of Ireland,” ECD Working Paper Series No. 14 (Washington DC: World Bank, 2005).
42 J. Toulemonde, “Incentives, Constraints and Culture-Building as Instruments for the Development of Evaluation Demand,” in Building Effective Evaluation Capacity, eds. R. Boyle and D. Lemaire (New York: Transaction, 1999).
43 Mackay.
44 Toulemonde.
incentives. The budgetary dimension follows the logic of Niskanen: where budgets are
available, civil servants take interest.45 The career incentives refer to the incentives behind
following a specialised career as an evaluation expert. The “sticks,” as alluded to previously, refer primarily to compulsory evaluation, as exemplified by the European Structural Funds. However, this can also entail giving evaluation authority: giving power to
the evaluators to ask specific questions, to request certain information, and to oblige people to use evaluation results (for example by threatening budget cuts in the case of failure
to comply).
Needless to say, the creation of sustainable demand for evaluation requires a mix of
both incentives and constraints. However, the third factor proposed by Toulemonde, the
“sermon,” falls into a different category. This refers to a state whereby evaluation is no
longer principally conducted in response to incentives and constraints, but because of a
true subscription to the norms and values of evaluation. In this case, evaluation takes
place because of the presence of an “evaluation culture.” Similarities can be drawn between this typology and the one identified above by Schimmelfennig and Sedelmeier.
4. FINDINGS
In order to explore some of the questions posed above and to look empirically at how
the accession process has impacted on the evaluation capacity of Bulgaria and Romania,
this paper has drawn on a variety of sources. Firstly, a review of both European Commission and national documentation relating to evaluation was carried out. Secondly, a series
of semi-structured elite interviews was conducted with European Commission officials
responsible for evaluation at the Directorates General (DG) for Enlargement, and for Regional Policy, as well as evaluators working on the evaluation of the pre-accession programme, “PHARE.” Thirdly, a questionnaire was distributed to relevant stakeholders
involved with the evaluation process in Bulgaria and Romania, such as representatives
from the national evaluation units and members of the European Commission Delegations. This questionnaire also served as an interview guide. In terms of the people targeted for the interviews and questionnaires, in view of the technical and specialised nature
of the subject, an elite approach was taken whereby only specialists in the field were contacted. Similarly, the actors within the Commission who are the most operationally involved in evaluation matters in Bulgaria and Romania are those from DG Enlargement
and DG Regional Policy, and therefore members of their evaluation units were targeted.
Appendix 1 shows the typology used for the definition and measurement of the dependent and independent variables. The results are presented below.
4.1 Pre-accession Programmes: An Introduction to Evaluation
Prior to the start of the accession process and the introduction of the pre-accession
programmes, evaluation was not a systematically used management tool in the Governments or public administrations of Bulgaria and Romania. Although there was a strong
tradition of reporting in both countries, the concept of evaluation as a “learning” mechanism46 was not a familiar one. There was also little demand for evaluation from either the
executive or legislative branches of government. Consequently, the institutional apparatus and culture of evaluation was largely missing.
4.1.1 Evaluation Demand
Demand for evaluation was created formally through the introduction of the PHARE,
ISPA and SAPARD programmes, and the conditionality associated with this funding.
45 W. J. Niskanen, Bureaucracy and Representative Government (Chicago: Aldine Publishing Company, 1974).
46 Chelimsky.
The monitoring and evaluation of activities are specified in Article 8 of the PHARE Regulation.47 Article 27 of the EU’s financial regulation48 stipulates that all EU programme expenditure must be subject to monitoring and evaluation. Chapter 28 of the community
acquis, referring to financial control obligations, specifies the need, among other things, to
perform ex-ante, ongoing and ex-post evaluations of all EU expenditure. Therefore, with
the introduction of pre-accession assistance in Bulgaria and Romania came these additional responsibilities.
This formal demand is also complemented by responses in both interviews and
questionnaires (see Figure 1) that identified the accession process as being the most significant factor in the establishment of an evaluation function. The next most significant
factor was considered to be internal demand for improved decision-making, followed by
demand from national parliament. Although there was one respondent in Romania who
identified EU accession as least important, this does not fit with the legal and institutional
structures surrounding evaluation in this country. Nor does it explain why evaluation
was initially only associated with the EU funds, and not the national budget.
Figure 1: Factors Influencing the Development of an Evaluation Function in Bulgaria and Romania
[Stacked bar chart showing, for Bulgaria and Romania, the percentage of respondents rating each of three factors (internal demand for improved decision-making information; demand from national Parliament; EU accession) as most, less, or least important.]
4.1.2 Pre-accession Evaluation Structures
A gradual decentralisation of monitoring and evaluation activities took place during
the pre-accession period. This started with the delegation of monitoring responsibilities as
part of the decentralised implementation system (DIS).49 The National Aid Coordinator
(NAC) structures50 took over the responsibility for monitoring under the guiding framework of the Joint Monitoring Committee (JMC) System. This system was established in
2000/200151 for the purpose of “supervising the progress of EU-funded assistance programmes (PHARE, ISPA and SAPARD) towards their objectives and coordinating their
activities.”52 Meeting once a year, the JMC was responsible for proposing corrective actions regarding the activities, management, and technical and financial aspects of programmes; reallocations of funds within programmes; and revision of contracting and
disbursement periods for specific projects.53 A number of monitoring sub-committees
47 European Commission Interim Evaluation Guide, 2004.
48 COM (2002), p. 12.
49 COM (2006).
50 These bodies were established in Bulgaria and Romania in the late 1990s as central coordinators of external assistance. The Management of EU Funds Directorate at the Ministry of Finance is the interlocutor in Bulgaria and, in Romania, the Managing Authority for Community Support Framework at the Ministry of Public Finance.
51 The JMC system was formalised by an official mandate on 4 July 2002.
52 JMC Mandate, COM (2002), p. 1.
53 Ibid.
(SMSC) were required to report to the JMC and to provide information at the sector level.
These SMSCs were also key sources of data for the evaluation exercise, and acted as forums for the discussion of evaluation results. According to the interviewees, these structures provided an initial exposure to the practice of evaluation for many of the members of
the public administrations. Many of the programme and project managers within
implementing authorities and line ministries participated in the evaluation exercises54 (often through interviews or by commenting on draft reports), and therefore also gained a
perspective on the role of evaluation within the programme cycle.
As the evaluation function was managed by the EC in Brussels, and external contractors conducted the evaluation work, there were very few budgetary incentives for departments to initiate evaluation activities. Similarly, the lack of formal evaluation
competence outside of the contact units at the Ministries of Finance meant that there were
few possibilities to pursue work in the area of evaluation within the public administrations. This is set to change with both countries about to receive Extended Decentralised
Implementation System (EDIS) accreditation, which will give them responsibility for the
ex-ante control of the programmes, as well as their interim evaluation.55 For both countries, this decentralisation will represent a first experience in the management of an
evaluation function. Both national evaluation units will have to organise and contract independent evaluators to undertake the evaluation of the remainder of the pre-accession
programmes. This is in many ways a stepping-stone towards the responsibilities that they
will assume on accession regarding the evaluation of the Structural Funds.
4.1.3 Monitoring Information Systems and Indicator Measurement
Good-quality data and adequate monitoring systems are fundamental to a well-functioning evaluation process. Various attempts have been made to establish monitoring information systems for the pre-accession funds in Bulgaria and Romania. Despite the fact
that systems were introduced, there were, and there remain, problems with their application at many levels. While the basic financial monitoring of allocation and disbursement
figures tends to be adequate, there has been a lack of systematic monitoring of predetermined indicators. Thus, systems have often been designed without the basic building blocks consisting of the precise definition of the types and level of information required. In addition, there has not been sufficient attention paid to assuring that roles are
allocated within project units for the systematic completion of monitoring information. As
a consequence, monitoring has often been an ad-hoc activity overly reliant on process description as opposed to result measurement based on indicators. This is an area that will
require significant improvement under the Structural Funds. In particular, there will be a
need to adapt the systems in order to cope with monitoring at the more strategic Operational Programme level.
4.1.4 Evaluation Capacity-Building Activities
Both the Bulgarian and Romanian public administrations have received support in
order to assist them in building their monitoring and evaluation capacities, predominantly for specific programmes such as PHARE and the Structural Funds, but also for national purposes.
Most capacity-building activities were delivered through the medium of training
seminars and workshops, provided within the context of the interim evaluation exercise
and financed by the Commission. However, specific twinning and technical assistance
projects also provided more long-term assistance.
54 Over 70 sectoral interim evaluations and two ex-post evaluations have taken place in Bulgaria and Romania since 2001.
55 COM (2006).
Bulgaria has directed the majority of its assistance towards the pre-accession programmes (see Figure 2), perhaps unsurprisingly as preparations for EDIS have been a key
concern in the lead up to accession. Romania, on the other hand, has been slightly more
forward thinking in focusing much of its capacity-building preparations on the Structural
Funds.
Figure 2: Direction of Capacity-Building Assistance
[Bar chart comparing the share of capacity-building assistance in Bulgaria (BG) and Romania (RO) directed towards the pre-accession programmes, the Structural Funds, national funds, and other purposes.]
As well as the formal training workshops and seminars, and twinning and technical
assistance projects, there has also been considerable know-how transfer from the evaluation unit at DG Enlargement (this could be considered as the informal conditionality), as
well as from the external evaluators. In addition to the exposure given to evaluation through
the involvement of sectoral line ministries in the evaluation process, the external evaluator, in collaboration with the NAC services, at the instruction of the DG Enlargement
evaluation unit, embarked on a shadowing process. In Bulgaria, two staff from the NAC
services shadowed evaluators on an interim evaluation of the economic development
sector. In addition, the leader of the external evaluation team is engaging in regular
meetings with the heads of the SMSC Secretariats in order to explain the purpose of the
interim evaluation reports, and how they fit in with their monitoring work. In Romania,
staff from the NAC services were seconded to the external evaluation team to shadow the
production of an evaluation of the economic and social cohesion sector.
In both cases, external evaluators have worked in cooperation with twinning and
technical assistance partners in order to ensure a coordinated approach to the know-how
transfer exercises. Respondents to the questionnaire and interviews were also satisfied
that the different capacity-building activities were delivering consistent messages (see
Figure 3).
Figure 3: Capacity-Building Activities Provided Consistent Advice Contributing Towards the Development of a National Evaluation Capacity
[Bar chart of Bulgarian and Romanian responses on a five-point scale from strongly agree to strongly disagree.]
4.1.5 Involvement within Evaluation Networks
The Evaluation Advisory Group, initiated in 2002, has been the primary networking structure in place during the pre-accession process.56 It was established in order to provide a forum for exchanging good practices between Member States and Candidate Countries in developing monitoring and evaluation capacity. It had three key objectives.
Firstly, to promote the development of medium-term National Strategies for building local
monitoring and evaluation capacities (which would also cover the national public funds).
Secondly, to support candidate countries (via exchange of experience) in their preparation
of draft short-term Action Plans for development of local monitoring and evaluation capacities, within the framework of decentralisation of monitoring and evaluation. Thirdly,
to prepare a Guide to Good Practices in Monitoring and Evaluation Capacity Building, including national strategies, monitoring and evaluation models, capacity building strategies, and means to boost monitoring and evaluation capacity.57
Bulgaria and Romania participated in this advisory group, which was successful in
opening doors for communication and learning with the Member States. However, in
terms of the achievement of the first two fairly ambitious objectives, the group appears to
have had little impact. The group was initiated in 2002, and the beginnings of evaluation
strategies are only now starting to emerge. It would seem that the spark for activities in
the area of evaluation strategy development derives more from the approaching accession
deadline than from the activities of the group. Once Bulgaria and Romania become members of the EU, their respective central evaluation units will become members of the DG
Regional Policy evaluation network, where they will be able to benefit from Member State
experiences of Structural Fund evaluation. In addition, the evaluation unit in Romania
intends to become a member of the European Evaluation Society, which has a large membership including governments from all over the EU as well as a number of professional organisations.
4.2 The Potential for Institutionalisation Through the European Structural Funds
Despite helping to raise the profile of evaluation, it is hard to say that evaluation has
been institutionalised through the management and implementation of the pre-accession
programmes. The Structural Funds, although still in a preparatory phase in Bulgaria and
Romania, have the potential to consolidate the experience gained during the accession
process and embed evaluation in a wider institutional and organisational context. Given
the huge budgets associated with the Structural Funds, providing 336.1 billion Euros for
the 2007-2013 programming period,58 they represent an extremely significant influence on
the institutional and organisational structures of recipient countries.
4.2.1 Evaluation Demand
Similar to the pre-accession funds, the demand for evaluation in the framework of the
Structural Funds comes from the EU conditionality associated with this funding. The
Structural Funds Regulation59 sets out specific requirements in the field of monitoring
(Articles 62-65) and evaluation (Articles 45-47). In this respect, Bulgaria and Romania will
take over the responsibility for both ex-ante and mid-term evaluation of the National
56 EMS (2004).
57 Ibid.
58 This budget refers to all countries in the EU; however, 78% of this is allocated under the “convergence” objective, the majority of which, since the recent enlargement, has shifted east towards the new Member States (COM 2004, p. 22).
59 COM (2004).
Strategic Reference Framework (NSRF)60 and Operational Programmes (OP),61 while the
remit of ex-post evaluation remains with the EC.
In Bulgaria, Council of Ministers Decisions for the appointment of a central coordinating unit, Structural Fund Managing Authorities, and intermediate bodies are currently
under approval. These decisions refer to the monitoring and evaluation of both the National Strategic Reference Framework and the Operational Programmes. In Romania, the
Managing Authority for the NSRF was set up at the Ministry of Public Finance through
Government Decision 403/3004.62 This decision assigns responsibility for the evaluation
function to each managing authority, as well as the central coordinating unit.
4.2.2 Evaluation Structures for the Structural Funds
A number of capacity-building initiatives have taken place
during the pre-accession period (outlined in the previous section above). It is generally
felt that these activities, and the capacity that was built up under these structures, will
form the basis for future evaluation activities (see Figure 4).
Figure 4: Evaluation Capacity Built Up During Pre-accession Process Will Form the Basis for the Evaluation Function of the Structural and National Funds
[Bar chart of Bulgarian and Romanian responses on a five-point scale from strongly agree to strongly disagree.]
However, unlike the evaluation system under the pre-accession funds, the Structural
Fund evaluation will be predominantly delegated to the various managing authorities, but
will be co-ordinated by central bodies within the Ministries of Finance, which remain the
main evaluation structures within Bulgaria and Romania. Therefore, each of these
authorities will have to set up the appropriate evaluation structures, including monitoring
committees, and will be responsible for overseeing the progress towards reaching the objectives of the operational programmes.
Within the Structural Funds Budget, there is a “technical assistance” allocation,
which can be used to finance the preparatory, monitoring, administrative and technical
support, evaluation, audit, and inspection measures that are specified in the Structural
Funds Regulation.63 This is likely to give the authorities concerned a considerable incentive to develop both the monitoring and evaluation functions not only at the central level,
but also within the sectoral managing authorities concerned. This financial commitment
60 The strategy document ensures that Community structural aid is consistent with the Community strategic guidelines, and identifies the link between Community priorities, on the one hand, and national and regional priorities in order to promote sustainable development, and the national action plan on employment, on the other hand (COM 2004, p. 33).
61 Operational programmes are targeted towards one of the Structural Fund objectives and usually address sectoral issues.
62 H. Curley and E. Perianu, “Assessment of the Evaluation Culture in Romania,” project document for Technical Assistance for Programming, Monitoring and Evaluation, Romania: RO 2003/005-551.03.03.04 (2006).
63 COM (2004).
will help to place evaluation as a permanent fixture on the organisational charts of the
managing authorities, thus creating new hubs for career development in this area. It can
therefore be seen that demand for evaluation under the Structural Funds will be more
widespread than under the pre-accession system.
4.2.3 The Local Evaluation Market
The sustained budget and timescale of the Structural Funds and the concomitant requirement for evaluation have the potential to spark the development of a local market for
evaluation services. A local professional community in this area is yet to be established,
mainly because of the lack of demand for external evaluation from the administrations in
Bulgaria and Romania. However, this is also due to the centralised nature of the evaluation of the pre-accession funds that have been managed by the EC in Brussels.
With the Bulgarian and Romanian public administrations taking on the responsibility
of evaluation and being given considerable scope to shape these functions to suit their national contexts, there could be an increased demand for local evaluators who can respond
to local needs. However, this process may take some time, as evaluation is a specialised
subject area, involving complex methodologies combined with a specialised knowledge of
a variety of policy fields. This raises the importance of involving the academic sector in
the development of evaluation capacity, as well as establishing professional bodies such as
national evaluation societies that can disseminate best practice guidelines, and provide a
mechanism for communication between both public and private evaluation professionals.
4.3 Impact Beyond EU Programmes: Moving Towards an Evaluation Culture?
The two sections above have identified both the pre-accession funds and the EU
Structural Funds as being key drivers of evaluation in Bulgaria and Romania. Whilst the
pre-accession funds acted as an introduction to evaluation, the forthcoming Structural
Funds hold the potential to embed evaluation within a wider and more sustainable
framework. The institutional and organisational changes have largely been made to accommodate the EU regulations, and progress has been made in terms of the development
of information systems and the creation of human capacity for evaluation. However, these
structures relate predominantly to the EU funding mechanisms, and the demand would
appear, at least in the first instance, to be largely external. Therefore, has there been any
impact beyond the EU funds in terms of evaluation demand and capacity?
The answer to this question is largely mixed. In Bulgaria, the legislation currently in force concerning the management of public investments stipulates requirements for good financial
governance in the public sector and for compliance with the principles of effectiveness and
efficiency. Additionally, it outlines requirements for the introduction of uniform rules in
planning and management of investments (EU and international standards for project
management), irrespective of the origin of funding. Despite this, however, there is little
evidence that evaluation is applied systematically to national programmes and policies.
Bulgaria is yet to develop a national evaluation strategy, and the efforts of the evaluation
unit are currently consumed with preparations for the decentralization of the evaluation of
PHARE in the short term, as well as preparations for the evaluation of the Structural
Funds in the medium term.
In Romania, there seems to be mounting interest in evaluation from a number of different parties. A national strategy for evaluation is currently under development, which
will address not only the evaluation of the EU budget, but also ways in which the EU
practices can be incorporated into the national system. In a recent assessment of Romanian
evaluation culture, the authors observed that there are a number of parties driving the
evaluation process within the country.64 These include the General Secretariat of the Government (in a bid to improve the public planning policy process); the Ministry of Public
64 Curley and Perianu.
Finance (in order to implement the Single Action Plan); the Chancellery of the Prime Minister (looking to improve economic forecasting and planning); the Parliament (looking to
improve the ex-ante analysis of legislation); and finally the Supreme Audit body (looking
to expand its role from purely financial auditing to a performance audit role including
evaluation questions). Despite this interest, as things stand, a formal legal framework
making evaluation a compulsory exercise is still lacking.
As Toulemonde states, evaluation pursues “different paths in a movement towards
maturity.”65 Therefore, it should not be expected that there is a fixed route towards
achieving evaluation capacity and the panacea of evaluation culture. Both countries are
just starting their journeys as Member States. Considering that for the majority of current
Member States evaluation was only truly institutionalised after several years of Structural
Funds implementation, it seems too early to make definite judgements on how the process
of membership has impacted upon them in this respect. Hyatt notes that the evaluation of
EU funds tends to be very focused on accountability,66 which strongly shapes how countries in CEE see it. Therefore, in order for a broader evaluation culture to take hold, it will
be necessary for administrations to design their own, purpose-built evaluations, which reflect more of a learning perspective and less of a bureaucratic, procedural approach. Both
Bulgaria and Romania are showing signs that evaluation is becoming more than just an
external obligation. The language of evaluation is being increasingly used, and its value as
a management tool is becoming more evident. However, it remains to be seen whether
this talk will be translated into national policy. For the time being, the fact remains that
the key driver of evaluation is the EU.
5. CONCLUSIONS
This paper concludes that the accession process has been significant in terms of the
establishment of evaluation capacity in Bulgaria and Romania. It has argued that the concept and vocabulary of evaluation were initially introduced to these countries through the
pre-accession programmes. Furthermore, preparations for the implementation and management of the Structural Funds helped to embed these principles. These processes have
been strongly guided through the use of institutional and organisational templates, technical assistance and twinning support as well as training and development. EU formal
conditionality has been a key feature in this process, underpinning institutional and organisational changes.
Although a number of changes have taken place in the area of evaluation, they remain largely associated with the EU funding process. In this respect, steps are being
taken, and strategies are being put in place to introduce systematic evaluation of EU
funding. However, what remains to be seen is whether the principles embedded in the
management of the EU budget will be embraced and systematically introduced into
mechanisms at the national policy level. This is largely dependent on the extent to which
the norms and values of evaluation are embraced, and the extent to which demand for
transparency and accountability is generated internally rather than externally.
Evaluation, as it stands, is rather mechanistic and bureaucratic, and until the association of evaluation with external control and compliance is removed, there is unlikely to be
a deeper-rooted evaluation culture established. Given the increased autonomy of the Bulgarian and Romanian public administrations in the area of evaluation following accession,
there is likely to be a move away from this rigid approach and towards more tailored exercises that fit the needs of the local institutions concerned. It may, however, take some
time to assess the overall impact of this change in governance, and to assess the degree to
which the norms and values of evaluation have been embraced. Clear lessons can be
drawn from this example concerning the promotion of evaluation as good governance
through the provision of financial and technical assistance.
65 Toulemonde, p. 10.
66 Hyatt and Simons.
REFERENCES
Boyle, Richard. “Professionalizing the Evaluation Function - Human Resource Development and the Building of Evaluation Capacity.” In Building Effective Evaluation Capacity, edited by Richard Boyle and Donald Lemaire. London: Transaction
Publishers, 1999.
----------. “Evaluation Capacity Development in the Republic of Ireland.” ECD Working
Paper Series No. 14: June 2005. Washington DC: World Bank, 2005.
Boyle, Richard, Donald Lemaire, and Ray Rist. “Introduction: Building Evaluation Capacity.” In Building Effective Evaluation Capacity, edited by Richard Boyle and Donald
Lemaire. London: Transaction Publishers, 1999.
Brusis, M. “The Instrumental Use of European Union Conditionality: Regionalization in
the Czech Republic and Slovakia.” East European Politics and Societies vol. 19, no. 2
(2005): pp. 291–316.
Checkel, J. “Compliance and Conditionality.” ARENA working papers 00/18. Prepared
for delivery at the 2000 Annual Meeting of the American Political Science Association, Washington DC, 31 August – 3 September 2000.
----------. “Why Comply? Social Learning and European Identity Change.” International
Organisation vol. 55, no. 3 (2001): pp. 553-588.
Chelimsky, E. Programme Evaluation: Patterns and Directions. Washington DC: American
Society for Public Administration, 1985.
----------. “Thoughts for a New Evaluation Society.” Evaluation vol. 3, no. 1 (1997): pp. 97–109.
Commission of the European Communities. “Communication to the Commission: Focus
on results: strengthening evaluation of Commission activities.” SEC (2000)1051: 26
July 2000.
----------. “European Governance: White Paper.” Brussels, COM (2001) 428 final: 25 July
2001.
----------. “Council Regulation (EC, Euratom) No 1605/2002 of 25 June 2002 on the Financial Regulation Applicable to the General Budget of the European Communities.”
Official Journal of the European Communities.
----------. “Aid Delivery Methods, Vol. 1. Project Cycle Management Guidelines.” Brussels
2004.
----------. “From Pre-accession to Accession: Interim Evaluation of PHARE Support Allocated in 1999-2002 and implemented until November 2003.” Directorate General for
Enlargement, Evaluation Unit, March 2004.
----------. “Proposal for a Council Regulation laying down general provisions on the European Regional Development Fund, the European Social Fund and the Cohesion
Fund.” Brussels, COM (2004) 492 final: 14 July 2004.
----------. “Main Administrative Structures Required for Implementing the Acquis.” Informal Working Document, May 2005.
----------. “Europa Glossary.” Available online: http://europa.eu.int/scadplus/glossary/
index_en.htm
----------. “Financial Assistance: Decentralisation.” DG Enlargement. Available online:
http://europa.eu.int/comm/enlargement/pas/phare/decentralisation.htm
----------. “Financial Assistance: Pre-accession Assistance.” DG Enlargement. Available online: http://europa.eu.int/comm/enlargement/financial_assistance/index_en.htm
Curley, H. and E. Perianu. “Assessment of the Evaluation Culture in Romania.” Project
document for Technical Assistance for Programming, Monitoring and Evaluation.
Romania: RO 2003/005-551.03.03.04, 2006.
Davis, D. F. “Do You Want a Performance Audit or a Programme Evaluation?” Public
Administration Review vol. 50, no. 1 (1990): pp. 35-41.
DiMaggio, P. and W. Powell. “The Iron Cage Revisited: Institutional Isomorphism and
Collective Rationality in Organisational Fields.” American Sociological Review vol. 48,
no. 2 (1983): pp. 147-160.
European Parliament Resolution on the Accession of Bulgaria and Romania. Strasbourg: 14 June 2006. Available online: http://www.europarl.europa.eu/sides/getDoc.do;jsessionid=6FD18228762B3F90C3044733CD1249D0.node1?pubRef=//EP//TEXT+TA+P6-TA-2006-0262+0+DOC+XML+V0//EN
Goetz, K. H. and H. Wollmann. “Governmentalizing Central Executives in Post-Communist Europe: A Four-Country Comparison.” Journal of European Public Policy
vol. 8, no. 6 (2001): pp. 864–887.
Grabbe, H. “A Partnership for Accession? The Implications of EU Conditionality for the
Central and East European Applicants.” Robert Schuman Centre Working Paper
12/99. San Domenico di Fiesole (FI): European University Institute, 1999.
----------. “How Does Europeanization Affect CEE Governance? Conditionality, Diffusion
and Diversity.” Journal of European Public Policy vol. 8, no. 6 (2001): pp. 1013–1031.
----------. “European Union Conditionality and the Acquis Communautaire.” International
Political Science Review vol. 23, no. 3 (2002): pp. 249–268.
Hix, S. and K. Goetz. “Introduction: European Integration and National Political Systems.” West European Politics vol. 23, no. 4 (2000): pp. 1-26.
Hughes, J., G. Sasse and C. Gordon. “Conditionality and Compliance in the EU’s Eastward Enlargement: Regional Policy and the Reform of Sub-national Government.”
Journal of Common Market Studies vol. 42, no. 3 (2004): pp. 523–551.
Hyatt, J. and H. Simons. “Cultural Codes: Who Holds the Key? The Concept and Conduct
of Evaluation in Central and Eastern Europe.” Evaluation vol. 5, no. 1 (1999): pp.
23–41.
Leeuw, Frans and Richard Sonnichsen. “Introduction: Evaluations and Organizational
Learning: International Perspectives.” In Can Governments Learn: Comparative Perspectives on Evaluation and Organisational Learning, edited by Frans Leeuw, Ray Rist and
Richard Sonnichsen. London: Transaction, 2000.
Lönnroth, K. J. “Challenges for Evaluation in an Enlarged Europe.” Plenary Feedback
Session, Fifth European Conference on Evaluation of the Structural Funds. Budapest:
26-27 June 2003.
Mackay, Keith. “Institutionalization of Monitoring and Evaluation Systems to Improve
Public Sector Management.” ECD Working Paper Series, No. 15. Washington DC:
World Bank, 2006.
March, James and Johan Olsen. Rediscovering Institutions: The Organizational Basis of Politics. New York: Free Press, 1989.
Niskanen, William J. Bureaucracy and Representative Government. Chicago: Aldine Publishing Company, 1974.
North, D. C. Institutions, Institutional Change and Economic Performance. Cambridge: Cambridge University Press, 1990.
Pablo Guerrero, R. “Evaluation Capacity Development in Developing Countries: Applying the Lessons from Experience.” In Building Effective Evaluation Capacity, edited by
Richard Boyle and Donald Lemaire. New York: Transaction, 1999.
Republic of Bulgaria Council of Ministers. “Decision 204 of 14 April 2006 on the Update of
the System for Monitoring of Pre-accession Programmes Funded by the European
Union and Evaluation of the PHARE Programme.” Sofia: 2006.
Risse, Thomas. “‘Let's Argue!’: Communicative Action in World Politics.” International Organization vol. 54, no. 1 (2000): pp. 1-39.
Rist, Ray. “Linking Evaluation Utilization and Governance: Fundamental Challenges for
Countries Building Evaluation Capacity.” In Building Effective Evaluation Capacity, edited by Richard Boyle and Donald Lemaire. New York: Transaction, 1999.
Schiavo-Campo, S. “Building Country Capacity for Monitoring and Evaluation in the
Public Sector: Selected Lessons of International Experience.” ECD Working Paper Series No. 13. Washington DC: World Bank, 2005.
Schimmelfennig, F. and U. Sedelmeier. “Governance by Conditionality: EU Rule Transfer
to the Candidate Countries of Central and Eastern Europe.” Journal of European Public
Policy vol. 11, no. 4 (2004): pp. 661–679.
----------. “Introduction: Conceptualizing the Europeanization of Central and Eastern
Europe.” In The Europeanization of Central and Eastern Europe, edited by F. Schimmelfennig and U. Sedelmeier. Ithaca, NY: Cornell University Press, 2005.
Toulemonde, J. “Incentives, Constraints and Culture-Building as Instruments for the Development of Evaluation Demand.” In Building Effective Evaluation Capacity, edited by
Richard Boyle and Donald Lemaire. New York: Transaction, 1999.
Varone, F., S. Jacob and L. De Winter. “Polity, Politics and Policy Evaluation in Belgium.”
Evaluation vol. 11, no. 3 (2005): pp. 253–270.