THE DIFFUSION OF
REGULATORY IMPACT ANALYSIS
BEST PRACTICE OR LESSON-DRAWING?
Claudio M. Radaelli
Professor of Public Policy,
School of Social and International Studies
Bradford University (UK)
On leave at the
RSC Forum, European University Institute until June 2003
ADDRESS FOR CORRESPONDENCE (as of 1 July 2003)
School of Social and International Studies
Bradford University
Bradford BD7 1DP, West Yorkshire, UK
e-mail: c.radaelli@bradford.ac.uk
TO APPEAR IN
EUROPEAN JOURNAL OF POLITICAL RESEARCH
2003
Abstract
This article presents the main results of a research project on regulatory impact
analysis (RIA) in comparative perspective. Its main theoretical thrust is to explore the
limitations of the conventional analysis of RIA in terms of de-contextualised best
practice and to provide an alternative framework based on the lesson-drawing
literature. After having discussed how demand and supply of best practice emerge in
the OECD and the European Union, some analytic (as opposed to normative) lessons
are presented. The main lessons revolve around the politics of problem definition, the
nesting of RIA into wider reform programmes, the political malleability of RIA, the
trade-off between precision and administrative assimilation, the roles of networks and
watchdogs, and institutional learning. The conclusions discuss the implications of the
findings for future research.
BACKGROUND AND MOTIVATION
This article deals with the diffusion of regulatory impact analysis (RIA) in nine
countries and the European Union (EU)1. The diffusion of RIA has been remarkably
widespread. In the early 1990s, only a handful of OECD countries were using RIA, but
by 1996 more than half of OECD countries had adopted this tool. At the beginning of
2001, 14 out of 28 countries were using impact assessment comprehensively, and
another 6 were using it selectively, for some types of regulations.
RIA includes a range of methods (from full cost-benefit analysis to simpler checklists,
see OECD 1997a) which can be used flexibly to measure ex ante the impact of
proposed regulatory policies on social welfare or on selected target populations, such
as small businesses, companies, non-profit organisations, and public administration.
The aim of impact assessments is to enhance the empirical basis of political decisions.
1. The nine countries are Australia, Canada, Denmark, France, Germany, Mexico, the Netherlands, UK,
and USA. This article draws on a research project funded by the Italian government on regulatory
impact analysis in comparative perspective. Detailed empirical information on the individual countries
and the full list of primary source materials are contained in the final report of the project (Radaelli
2001). The project was based on primary source materials, interviews with RIA officers and
independent experts, interviews with European Commission and OECD-PUMA officers, and scientific
literature review. At the moment of finalising this article (Summer 2002) supplementary information
was gathered via phone and e-mail with the aid of officers based at the Commission and at the Danish,
German, and British governments.
The idea is not to substitute political decisions with technocratic solutions, but to
inform the decision-making process with empirical knowledge2. Another important
aim of RIA is to make the regulatory process more transparent and accountable.
Indeed, RIA cannot be reduced to a document – the usually concise paper containing
the data on costs and benefits and the choice of a regulatory option. Quite the
opposite, RIA consists of a series of steps which guide the regulatory process, raising
issues and questions3.
This is not an article on the pros and cons of RIA, but it is important to state up-front
that impact assessment has also attracted criticisms. Some rudimentary forms of RIA,
such as compliance cost assessment, may bias the policy process towards the interests
of a single constituency, such as the business community (Froud et al. 1998). One
could also question the theories of regulation upon which RIA is based (James 2000).
Distributional effects are difficult to handle in impact assessments (Lutter 2001).
Further, there is a full range of criticisms concerning specific methodologies of RIA,
especially cost-benefit analysis4. Some have criticised the attempts to measure the
so-called ‘priceless goods’ (such as human life or wilderness). Others have raised
objections to the fact that cost-benefit analysis, although it acknowledges the
reparative costs of environmental hazards, does not identify the infringement of rights
of those who are harmed; and monetary valuation in sensitive areas has been
considered unreliable and immoral (Virani and Graham 1998; House 2000; Thiele
2000). On a more radical tone, Thiele observed that cost-benefit analysis ignores
‘threats to other species, to biodiversity, and to other components of human welfare,
such as mental health, spiritual well-being, and social stability’ (Thiele 2000:554).
2. The British RIA is extremely clear on this point. The Minister signing the impact assessment form
declares that the benefits justify (not outweigh) the costs. The choice of the verb ‘to justify’ reflects the
element of political evaluation.
3. Typically, these steps include the following: identification of the purpose and intended effect of the
regulation, consultation, analysis of alternative options (including the option of not regulating),
comprehensive assessment of costs and benefits of the major options, monitoring & evaluation, and
final recommendation to the Minister.
4. For a comprehensive treatment of cost-benefit analysis see Boardman et al. (2001) and Adler and
Posner (2001).
Economists have suggested some solutions to these criticisms, from the use of
distributional weights to the adoption of principles and criteria for the economic
valuation of ‘priceless goods’ (Adler and Posner 2001). Additionally, one can extend
the analysis of costs to ‘social stability’ and other items listed by Thiele (although
measuring spiritual well-being is a very tall order indeed). But the main point is that,
in the absence of RIA, the decision to regulate or not (and to regulate with one
instrument or another) does not become easier: the problems of distribution, fairness,
equity, and threats to the environment and biodiversity would still be there. The main
difference is that the decision-maker would have to address these problems with
less empirical information. Add to this the fact that empirical information for RIA can
be produced by techniques different from cost-benefit analysis, such as multi-criteria
analysis. Criticisms of cost-benefit analysis do not automatically apply to all types of
RIA.
Be that as it may, the fact is that RIA has now become a common tool in the
regulatory policy process. This diffusion process provides a formidable example of
cross-national learning, often aided by the use of the OECD Public Management
Service (PUMA) as a platform for transfer (Radaelli 2001). For most countries, the
introduction of RIA is an example of policy change. Change can be quite radical for
countries that have limited experience of economic analysis of regulation, poor
consultation mechanisms, and opaque regulatory processes. A considerable amount of
learning may be needed. How does one foster learning, then?
There are two main pathways to learning (Hemerijck and Visser 2001). The classic
pathway for learning is domestic. A country facing a prolonged crisis of its traditional
policies may learn in the sense of changing policy instruments or even policy
paradigms (Hall 1993). This type of learning can be less than perfect, due to several
factors such as bounded rationality, complacency, and the tendency to conceal failure
for political reasons (Hemerijck and Visser 2001). Most importantly, it typically
follows a major crisis or policy failure. Otherwise, political systems tend to act
conservatively and incrementally.
Can a country learn without failure? This is where the second pathway, that is
cross-national learning, enters the scene. As Hemerijck and Visser (2001) argue,
cross-national learning has potential in that it can stimulate learning ahead of failure. It is
within this second pathway that one finds a contrast between a certain use of best
practice & benchmarking (what I would call, following Hemerijck and Visser, a
de-contextualised approach) and the more interpretative and context-sensitive approach
of lesson-drawing. There is no doubt that, at least in RIA circles, best practice &
benchmarking are by far more popular than context-sensitive lesson-drawing. In this
article I will argue for lesson-drawing instead.
At the outset, however, I wish to make two remarks5. To begin with, benchmarking
and best practice form a technique for policy learning with its own history – a history that
does not start or end with RIA6. This article does not deal with the whole story of
benchmarking, but with its use in RIA.
The second remark is that the real issue is not about the black and white contrast
between totally de-contextualised prescriptions and interpretative lessons. The issue is
about different degrees of contextualisation. We are talking about the position along a
continuum, not about an absolute dichotomy. As Rose has shown (2001; 2002), the
problem is that contextualisation is rather low in some policy circles working on
policy diffusion across countries. In the case of RIA, the catalogue of best practice
designed by the OECD7 and, more generally, the discussion among experts is closer to
the pole of de-contextualised benchmarking than to the lesson-drawing pole. The
language of best practice has now become the lingua franca of RIA.
Thus, the issue of contextualisation is the main motivation behind this article. The
organisation of the article is quite simple. I will first make my theoretical claims and
then present basic information on the diffusion of RIA. Next, I will present the main
5. I am grateful to an anonymous reviewer for these remarks.
6. See Yasin (2002) for a review and Richardson (2000) for benchmarking in the EU.
7. The OECD lists the following: maximise political commitment, design the institutional architecture
with great care, programme training, use flexible yet coherent methods, develop strategies of data
collection and implement them effectively, selectivity of RIA, integrate RIA with the regulatory
process (by preparing impact assessments at the beginning of the process), involve citizens, groups and
companies, and conduct impact assessments both on new proposals and on existing regulations.
results of the analysis in the ‘lesson-drawing mode’. A final section will provide
some concluding thoughts.
IN SEARCH OF LEARNING
What’s wrong with best practice then? Actually, there are many good things to say
about this approach. It is inherently comparative, as best practice distils the many
experiences of several countries into a manageable synthesis. The approach is also
useful as a point of departure for learning. RIA best practice (as defined by the
OECD) has the merit of highlighting the critical areas to monitor.
Indeed, I am not taking issue with the concept of best practice per se, but with
de-contextualised benchmarking. The latter designates an approach to best practice which
is normative, insensitive to context, and prone to silencing debate. By
‘normative’ I mean lists of best practice used as rigid instructions. By ‘insensitive to
context’ I mean best practice presented as universal laws which are supposed to work
under different institutional and political conditions. But in public policy, the ceteris
paribus assumption of universal laws does not hold: the ‘other conditions’ are not
equal (Rose 2002).
Finally, a de-contextualised approach to benchmarking can silence debate by
assuming that there is one best way of doing things. Instead, best practice should
stimulate discussion. Notwithstanding several caveats and reminders, the risk of
de-contextualised benchmarking looms large in the debate on RIA. OECD precepts
suggest that impact assessment should follow one template and that the same elements
will deliver success independently of the context. Benchmarking exercises that take
place at the OECD routinely do not go much further than peer reviews of draft
reports. They do not provide a real forum for learning from different, context-sensitive
national experiences. They do not air the views of NGOs, trade unions, and
other social actors on key issues such as consultation, social welfare, and fairness in
the RIA process.
The idea of measuring success by dint of de-contextualised benchmarking is yet
another obstacle to learning. One could simply rate countries by looking at a list of
best practice and give five stars to countries implementing all practices, four stars to
those that implement 90 per cent of the practices, and so on. The idea would be that –
no matter what country one is looking at – the catalogue tells us where success lies.
This definition of success is not sensitive to history and time. Specific institutional
and political circumstances are neglected because of the assumption of
total fungibility of best practice (Rose 2001). However, in all processes of
administrative innovation and regulatory reform there are elements that cannot be
transferred from one country to another without taking into account institutional
legacies, state traditions, and the dominant legal culture. The fact that institutions
shape innovations such as RIA is a fundamental obstacle to the total fungibility
hypothesis. De-contextualised models are significantly silent on the ‘institutional
infrastructure and the political conflicts’ associated with the transfer of reforms from
one country to another (Hemerijck and Visser 2001:21).
Further, at the theoretical level there is no certainty that one single practice in country
A (no matter how ‘good’ it may be) will be decisive in country B. In administrative
and regulatory innovations, there are many ways of achieving success. Functional
equivalents are not rare, indeed. To continue with the limitations of best practice:
even if one could demonstrate that practices X, Y, and Z are instrumental to the
success of countries A, B, and C, there is no certainty that they would deliver success
when transplanted in country D. The reason is that success seems to be the
product of an alchemy of elements. Put differently, there is a holistic dimension of
success that cannot be ignored.
Another point: the ‘one-size-fits-all’ approach to reform seems to ignore that RIA is
not about products traded in a market. The definition of success is easier in the case of
a product. But RIA is a process, whose success is defined – according to the
OECD (1997b: 203) – as ‘making institutions think differently’. This is an ambitious
definition of success – one for which a ‘product-oriented’ approach is least useful.
One may then wonder why the language of best practice is so popular in OECD and
EU circles. One reason is that best practice is immensely attractive for political
political reasons. Let us consider the supply of best practice. In policy arenas where
the major aim is to produce positive-sum games, the identification of best practice
maximises consensus. Country X is happy to remain in a club if the club
acknowledges that X has produced at least one best practice! Neither the OECD nor the
EU is like a domestic political system where the preferences of government and
opposition clash in parliament. Domestic parliamentary arenas have some features of
negative-sum games - under normal circumstances, if the government wins the
opposition loses. But the OECD and the EU work differently. In policy areas where
innovation and reform are the most important goals, the OECD and the EU strive to
maintain the consensus of all participants. Hence best practice can be used to allocate
some ‘credits’ to each and every participant. This is how the supply of best practice
emerges.
What about the demand then? To begin with, policy-makers (but also public
management consultants involved in policy diffusion; Pollitt 2001: 942) need to
simplify a complex reality and show that innovation is possible. Thanks to RIA best
practice, a ‘community of discourse’ (Pollitt 2001:939) gets its own vocabulary and
concepts to handle reality. Policy-makers also need to convince sceptics that success
can be defined, that there are clear mechanisms linking certain steps to success, and
that the spread of success is relatively unconstrained. Hence best practice catalogues
provide a useful (argumentative) incarnation of these dimensions of success.
From the viewpoint of the individual government participating in the process, the
adoption of best practice maximises legitimacy for policy change. This is a
phenomenon well known to scholars of organisational behaviour (DiMaggio and
Powell 1991). Emulation stems from the need to cope with uncertainty by imitating
best practice which is perceived to be legitimate and successful. The trouble is that
imitation of models does not necessarily yield efficiency, although it may well
produce legitimacy. The emphasis on best practice may thus generate the diffusion of
legitimacy rather than efficiency. This conclusion is paradoxical for the advocates of
best practice, who certainly place efficiency high in their list of priorities.
Economic models of herding add that imitation is rational when information is costly
(Bikhchandani et al. 1992, 1998). The presence of ‘catalogues’ (of best practice)
strengthens the role of informational clues demanded by actors coping with
uncertainty. When a catalogue goes beyond a certain threshold of public acceptance,
there are incentives to conform to the ‘conventional wisdom’ rather than trying to
‘re-invent the wheel’. The example of transition economies is indicative of how
governments involved in multi-dimensional processes of innovation may appreciate
the presence of RIA catalogues8. Herding, however, may act as an obstacle to
learning. Political models of herding add other obstacles to learning. Imitation may be
based on the prestige of the model (not necessarily on the understanding of the
model), bandwagon effects, and ‘concealed inertia’ in which the old wine is preserved
by dressing it up in new bottles (Hemerijck and Visser 2001; Pollitt and Bouckaert
2000).
The upshot of this analysis is that the explanation of success is typically complex – a
holistic phenomenon. To import one or even several best practices from other
countries does not guarantee success. Typically, successful RIA reforms hinge on (a)
a set of variables and (b) the institutional mechanisms connecting the elements of the
RIA process. Individual practices can produce a good or bad RIA, depending on the
alchemy provided by factors such as – to anticipate some of the themes discussed below –
the politics of problem definition, the balance between administrative acceptability
and the precision of impact assessments, the strength of the RIA network, the balance
of carrots and sticks, and the presence or absence of institutional learning. By
dissecting RIA into unrelated components, one therefore risks underestimating the
importance of the holistic components.
In contrast to the approach of best practice, the lesson-drawing approach is more
amenable to holistic analysis in terms of connectedness and interaction among the
various elements. In addition, although the theory of lesson-drawing is still in its early
days, there has been some interesting theoretical work in this area (Page 2000; Rose
8. An example is provided by the SIGMA guide to policy analysis and impact assessment; see SIGMA
(2001).
1991, 2001, 2002; Dolowitz & Marsh 1996; Jacoby 2000; and, from an organisational
perspective, Czarniawska 1997; DiMaggio & Powell 1991). By contrast, it is difficult
to find solid theoretical frameworks behind the best practice approach. In addition, the
lesson-drawing approach can be more precise than best practice in crucial issues. Take
the example of institutional design. The OECD has included in its list the
recommendation ‘think carefully about the institutional architecture’. This is indeed a
very important point, but what does it mean precisely? It is only by considering
exemplary lessons (both positive and negative lessons) that the point can be made
clearer and aid cross-national learning.
More importantly still, the literature on lesson-drawing has two advantages. Firstly, it
is aware of the obstacles and limitations of cross-national learning. Consequently, it
can provide criteria to avoid or limit them. By contrast, the best practice approach
runs the risk of trumpeting success. This success-oriented approach may produce
some detrimental effects described by organisational theory. In fact, the preoccupation
with success and the desire to imitate the ‘best of the class’ via competitive
benchmarking can spawn cascades of adoption of useless innovations9. Thus, the bias
toward success of RIA benchmarking can be a hindrance to learning.
Secondly, lesson-drawing stresses the importance of contextualised learning. By
contrast, de-contextualised best practice externalises ‘the costs of applying advice
onto policy-makers who carry the burden of figuring out how to endogenise the
political realities left out in abstract prescriptions’ (Rose 2002:5).
A CURSORY VIEW OF THE DATA
Let us start with some empirical information arising out of the case studies. I will use
tables 1 to 5 for this purpose10. Table 1 covers the broad, long-term political
commitment to RIA. Impact assessment is a standard tool in most countries. Even
9. This counter-intuitive result was obtained by dint of computational experiments by Strang and Macy
(2001).
10. Some items of the tables are similar to the ones produced by the Commission for a ‘Best procedure
workshop’ on 26 June 2001. However, the information included in the tables presented here is the
result of empirical research undertaken for the project upon which the article is based, whereas the
Commission relied on information provided by the governments.
countries with (comparatively speaking) weak public administrations such as Mexico
can afford it. The partial exceptions to the rule are France, Germany, and the EU.
However, France and the EU provide evidence of recent attempts to adopt a proper
RIA system (EU) or to reform the disappointing existing instruments (France).
Political commitment in Germany has been dwindling since the mid-1990s. A ten-point
checklist (the blue checklist) was introduced in 1984. Momentum for RIA was
reached in 1995 with the ‘advisory commission for the lean state’, but the RIA system
eventually chosen (the so-called Böhret system, named after its
designer) has not gone further than an experimentation stage. The poor
institutionalisation of RIA in Germany has less to do with the alleged complexity of
the Böhret system than with the lack of quality control and monitoring of the RIA
process.
Training is an important aspect of the long-term political commitment. The USA does
not appear to invest in federal training programmes for RIA, but one should
consider that in this country impact assessment is targeted towards executive federal
agencies (not towards the law-making process in Congress). Training takes place at
the level of regulatory agencies. Training is limited or absent or ‘in preparation’ in
France, Germany, the EU, and the Netherlands. These are all cases in which RIA has
not yet delivered major results (France, Germany, and the EU) or has not embraced
wholeheartedly quantitative methodologies such as cost-benefit analysis (the
Netherlands).
Table 2 provides information on the regulatory process and ‘quality control’. As will
be shown later, ‘control’ does not mean ‘powers of a central RIA unit’. However, the
adoption of central units is quite normal in the countries considered here. Typically,
the central unit is within (or very close to) core government. In the USA, the Office of
Information and Regulatory Affairs (OIRA) is located within the Office of
Management and Budget. OIRA has some 25 professionals reporting to the
Vice-President. In the UK, the Regulatory Impact Unit of the Cabinet Office has some 50
professionals.
The quality of RIA hinges on mechanisms of accountability and the systematic
evaluation of the results achieved by impact assessments. The publicity of RIA is an
important aspect of accountability. However, annual reports on RIA, indicators on the
costs and benefits of impact assessments, and systematic evaluation are still limited.
In the US, the debate is fuelled by the work of think tanks and independent
economists who re-calculate federal RIAs and test their limits (Hahn 1999). In other
countries the debate is much less sophisticated.
The scope of RIA is not the same in every country, as shown by table 3. But there is a
common approach to selectivity. RIA costs time, institutional energy, and human
resources. It would be a mistake to cover every aspect of legislation. Countries such
as Australia have a very comprehensive RIA, but other countries (such as Canada,
Britain, and the Netherlands) are more selective and produce less than 200 impact
assessments per year. In the USA there is an element of selectivity in that RIA covers
the regulations of executive federal agencies, but does not apply to the standard
parliamentary process.
The potential of RIA in terms of assessment of existing laws is exploited in Canada,
the USA, Australia, and Britain. Governments have to manage a complex legacy of
regulations. Even the most innovative government will soon discover that the main
task is to manage the legacy of the past (Rose and Davies 1994). Consequently, ex-ante
RIAs have a limited impact on the regulatory process if mechanisms of ex-post
control (of the costs and benefits of existing regulations) are lacking.
Turning to methodology (table 4) there is convergence towards an eminently
quantitative approach, but there are also several exceptions. Indeed, there are different
views of what the balance between the quantitative and qualitative should be. The
Commission (2002:16), for example, acknowledges the role of quantitative analysis,
but adds that qualitative dimensions of impacts are also important and that an
important role of RIA is to highlight the great trade-offs of regulatory choices. The
Commission argues that the results cannot always be expressed ‘in one single figure
reflecting the net benefit or cost’. Technical guidelines are in preparation, however,
and they may change the final balance between the quantitative and qualitative
dimensions.
Consultation is a vital component of impact assessment (table 5). Its role varies
according to the dominant policy style. Pluralist countries have a different approach to
consultation than corporatist countries. The specialisation of consultation is apparent
if one observes the growth of specific instruments, such as consumer polls, focus
groups, and business test panels (BTPs).
The adoption of the same instrument, however, allows for considerable variability in
terms of performance. As shown in Radaelli (2001), instruments such as business test
panels can be imported selectively to enhance the legitimacy of poor RIA systems.
They may be used in a context very different from the one of the country of origin,
with the aim of avoiding radical reform by injecting a bit of good practice into
otherwise bad systems of consultation. The result can be a delay in the process of
change. This observation reminds us of the limitations of de-contextualised best
practice. As argued above, the lesson-drawing approach can overcome these
limitations. It is to the lessons that we now turn11.
SEVEN RIA LESSONS
THE POLITICS OF PROBLEM DEFINITION
RIA appears to be a typical solution in search of its problem. In fact, the problems to
which RIA is associated differ widely across countries. RIA is an attempt to tackle the
problem of competitiveness in Australia. It becomes a solution to the problem of
credibility in the process of liberalisation and economic integration (via NAFTA) in
Mexico. It certainly was a solution to the problem of ‘rolling the state back’ in the
early days of compliance cost assessment in the UK. It is an instrument geared
towards the general aim of simplification and the ‘slim state’ in Germany. It is a way
the EU tries to cope with the problem of legitimacy of its regulatory system.
The politics of problem definition shows that the introduction of RIA in different
systems is part of a process of interpretation. Impact assessment systems take on a
11. See Radaelli (2001) for the full presentation of the ten case studies from which the lessons are drawn.
specific meaning and ‘speak’ to the minds of policy-makers and public opinion in a
language that is framed by the changing priorities of economic policy in time and
space.
NESTING RIA INTO WIDER REFORM PROGRAMMES WITH HIGH
POLITICAL VISIBILITY
The politics of problem definition rings another important bell. In fact, in order to
make an impact on elites and public opinion (and, ultimately, to make an impact on
the reform of the regulatory system), RIA needs a broader vehicle, a ‘big’ national
problem which attracts political attention. Momentum for RIA is guaranteed by the
presence of key governmental programmes. The case of the MDW programme in the
Netherlands - the Cabinet programme on Competition, Deregulation and Legislative
Quality - illustrates how RIA can increase its visibility and consolidate its presence if
linked to major governmental initiatives.
In Germany, RIA has been so far ‘nested’ into the wider simplification programme.
However, a definition of RIA in terms of simplification may run the risk of not being
able to exploit the full potential of this instrument. As shown by the OECD (1997b),
simplification is the least sophisticated way to tackle the reform of regulation. It
remains to be seen whether simplification can provide a political vehicle robust
enough to maintain momentum for RIA. The answer from Germany is
not very encouraging. The case of France seems to suggest that RIA, when not
anchored to key reform initiatives, fails to achieve the salience necessary for consolidation.
The point is important because RIA is not a one-off reform, such as giving
independence to the central bank. It necessitates long-term political determination.
THE POLITICAL MALLEABILITY OF RIA
The comparison sheds light on the political malleability of RIA. The point can be
illustrated by juxtaposing the Dutch and the Danish case. In Denmark, domestic
institutions seem to be resilient enough to ‘digest’, ‘metabolise’, and accommodate
RIA without changing the main principles and rules of administrative practice (such
as the responsibility of individual ministers and the principle of cooperation across
departments). More importantly still, the adoption of RIA has not changed the main
patterns of interaction between government and society. Institutions geared towards
social consensus have been able to transform RIA into yet another tool of
neo-corporatist policy-making. Consensus-building is preferred to the idea of putting the
decision-maker in front of clear-cut empirical assessments12. This is the reason – I
would argue - underlying three key choices in the Danish experience. Firstly,
responsibility for the RIA process is diffused. Secondly, monitoring and quality
control are also diffused, even if this may create a degree of incoherence in the whole
policy process. Thirdly, partial estimates of costs and benefits (produced and checked
by individual departments) are not channelled through a full cost-benefit analysis13, so
that compromise and trade-offs are always possible, until the last minute. Judged from
the Danish perspective, RIA seems to enter domestic regulatory processes only by
following the institutional riverbeds.
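To see what is at stake in this Danish choice, it helps to recall what a full cost-benefit analysis would do with such partial estimates. In the standard textbook formulation (see Boardman et al. 2001), if $B_t$ and $C_t$ denote the estimated benefits and costs of a regulatory proposal in year $t$, discounted at rate $r$ over a horizon of $T$ years, the departmental estimates are aggregated into a single net present value:
\[
\mathit{NPV} \;=\; \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^{t}}
\]
and the alternative with the highest net present value is recommended. It is precisely this reduction to a single final ‘number’ that the Danish choices avoid, thereby keeping compromise possible.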
However, the relation can also go in the other direction: RIA can be used to change
political institutions. In the Netherlands, RIA has been used deliberately to put
pressure on the neo-corporatist policy style. Here the government gave RIA the task
of breaking the cosy relationships between administration, unions, and employers,
and of bringing the wider perspective of ordinary citizens, small firms, and other
stakeholders back into the policy process. As the OECD (1999a: 5) puts it, ‘the
traditional decision-making structures and procedures have been changed in such a
way that possibilities were created for more influence on public decisions by citizens
who were directly concerned or who otherwise had an interest in those decisions’.
The lesson is that RIA can be used for ambitious projects of institutional change.
12 This argument is based on Radaelli (2001: chapter 7), OECD (2000), and the official websites of the
Danish departments in charge of RIA. I am grateful to an anonymous Danish officer who provided
additional information in May 2002, although she is not responsible for the conclusions I draw in this
article.
13 In principle, regulators should follow some principles of cost-benefit analysis. On balance, however,
the Danish RIA is more qualitative than quantitative (see Table 4).
THE TRADE-OFF BETWEEN TECHNICAL PRECISION AND
ADMINISTRATIVE ASSIMILATION
One hurdle facing the assimilation of RIA concerns the attitude of the administration.
Different departments may conflict over who is really in charge of RIA. ‘Intrusive’
central units lack the vital support of departments. Denmark and the Netherlands
provide a lesson here.
The lesson this time is about the trade-off between co-ordination among departments
(that is, inter-departmental consensus) and the quality of instruments. Both the
Netherlands and Denmark have achieved a considerable degree of co-operation
among departments: no department is really ‘in charge’ of RIA, although the
Department of Economic Affairs seems to have taken the lead in the Netherlands and
the Department of Finance in Denmark. At least three departments co-operate in the
Dutch process, and their preferences are reflected in the three sets of questions
included in the checklists. Even the Dutch Helpdesk is more a promoter of
co-ordination among departments than a central unit or controller of RIA. The powers
of the Helpdesk have been deliberately kept to a minimum, after an early stage in
which the interventionist approach of the Helpdesk created more than one problem
for the RIA process (OECD 1999b: 135). Learning from that experience, the
government has since turned the Helpdesk into something closer to a service unit than
to a central process co-ordinator.
Network-building and administrative co-operation are remarkable results of these two
experiences. They have come at a cost, however. The price of administrative
consensus is a degree of vagueness in the instruments used for RIA. The three
checklists used in the Netherlands do not produce more than qualitative assessments
of important items such as benefits. Even when quantitative, the checklists do not
produce a single final ‘number’. The same happens in Denmark, where, as we have
seen, partial estimates do not flow into a cost-benefit analysis supporting the
suggestion of a final choice. There is a clear trade-off between inter-administrative
co-operation and network-building, on the one hand, and the precision, effectiveness,
and efficiency of the instruments, on the other.
The risk of ignoring this trade-off is to design ‘technically perfect’ RIA systems that
disintegrate when they hit the road of implementation. A cautious designer should be
aware of the trade-off before setting the targets of RIA. One possible mistake is to
copy the most powerful central units: they may not take off in different administrative
contexts, especially where intra-administrative conflict is a real issue.
OF NETWORKS AND WATCHDOGS
Talking about central units, a frequently asked question is: ‘What is the power of the
central RIA unit? Can it reject assessments compiled by the departments if they fall
short of certain requirements?’ This type of question is in a sense misleading. The
problem is less one of the power of individual actors and more one of the strength
and responsibility of the whole network. Of course, central units are useful in that
they corroborate the process, provide information and advice, and perform quality
checks. But even where central units have gained in importance, their role is to
monitor the quality of the whole network. They are not there to give marks to the
departments. The USA and the UK show that quality is preserved by the function of
control (a function diffused throughout the network via multiple checks and
balances), not by the ultimate ‘controller’. The presence of different and apparently
redundant actors in the British RIA process (to mention a few: the regulatory impact
unit, the better regulation task force, the small business service, the panel for
regulatory accountability) is geared towards this objective. In the USA, a measure of
institutional competition is the best guarantee that the system is under control. Indeed,
the issue goes beyond control. It goes further, in the direction of distributing
responsibility and therefore making all actors feel more responsible and alert. In
Canada, control and responsibility are delivered by a ‘regulatory game’ wherein
sanctions and incentives, flexibility and key principles, and discretion and rules are
well balanced.
A final remark on the issue of control. An exclusive emphasis on ‘the ultimate power
of the watchdog’ may transform RIA into a sort of punishment for administrators who
are already working under stress and with limited resources. If this is the case, it is
useful to combine the logic of control (and sticks) with the logic of incentives. RIA
should become a process capable of distributing incentives. Seen from the angle of a
civil servant, RIA should appear as an opportunity to be rewarded rather than as a
hurdle that may bring sanctions.
INSTITUTIONAL LEARNING AS MAIN TARGET OF RIA
Learning processes are fundamental in impact assessment systems. RIA needs time,
and there is nothing wrong in starting from limited objectives and then learning by
doing. Even the hyper-ideological first Thatcher government (a government that sent
reading lists of public choice classics to civil servants; Bosanquet 1981) was more
empiricist and realistic than one would have thought. Upon closer inspection, the
introduction of compliance cost assessment in the UK was not the unfolding of an
ideological teleology, but an attempt to get to grips with the fundamentals of impact
assessment. Of course, compliance cost assessment was conceptually wrong (for the
reasons explained by Froud et al. 1998). It was also politically wrong, because it is
impossible to build a credible regulatory process around one single constituency.
Indeed, legitimacy soon became the Achilles’ heel of compliance cost assessment.
However, trial and error enabled refinement of both the regulatory philosophy and the
accuracy of the appraisal systems.
Clearly, the methodology of compliance cost assessment was not impeccable, the
requirements often bland, and the political impact controversial. However, one should
assess it by placing it within a context of institutional learning. Instead of trying,
unrealistically, to produce an ‘ideal’ regulatory process by dint of full-blown
cost-benefit analysis, the pioneers of RIA in Britain settled for a realistic strategy based on
a less ambitious exercise with precise and limited objectives. In fact, the realistic goal
of the Conservative governments was not to bring the regulatory process closer to the
ideal of an economics textbook, but to force regulators to pay attention to what they
were believed never to have considered: the compliance costs of business. Compliance
cost assessment was coherent with the government’s perception of the main weakness
of the regulatory process: the lack of accountability of regulators towards business.
Learning in politics takes place within a political context, however. Although some
progress was made with the introduction of Regulatory Appraisal in 1996, it was only
with the Blair government that the political objective changed: from ‘setting fire
to regulation’ to striking a balance between the protection of consumers and citizens,
on the one hand, and the reduction of the regulatory burden on businesses, on the
other. Crucial in this learning process was the acknowledgement that the ‘bonfire of
regulations’, a goal of the Thatcherite years, was not the right target. The process of
learning in the UK includes the acknowledgement that, no matter what the ideological
fury of governments may be, regulation is here to stay in the contemporary state.
There are indeed structural reasons why the bonfire of regulations is not feasible at all
(Majone 1990).
To conclude with two lessons, the British experience shows that one should look for
the impacts of RIA systems at the level of the regulators. A fundamental target for
RIA is its impact on regulators: their cognitive beliefs, their attitude to consultation,
their understanding of alternatives to command-and-control regulation, and so on. A
second lesson is that institutional learning is crucial to the development of impact
assessment systems. An implication is that the results of RIA have to be assessed
over a period of a decade or so, not in the short term.
LEARNING FROM NEGATIVE LESSONS
In different guises, France, Germany, and the EU (the less developed cases of impact
assessment) show the fragility of the system when political determination, monitoring
and quality control, and standards for consultation are lacking.
Until recently, the EU’s negative lesson has been one of dissipated institutional
energy. The proliferation of instruments, initiatives, and pilot projects has not enabled
the EU to match the average level of regulatory quality of the OECD countries (EPC
2001; Mandelkern Group 2001). There have been several attempts to introduce partial
forms of impact assessment in the EU policy process, but only in 2002 did the
Commission make the commitment to introduce a single, integrated form of RIA
(Commission 2002). One reason for this fragmentation is the administrative
‘pillarisation’ of the EU (the different Directorates-General do not share a single
regulatory culture). Another is the lack of a central focal point for impact assessment
within the Commission, a sore point addressed with specific remedies by recent
proposals (Commission 2002). Administrative fragmentation, therefore, results in the
fragmentation of the reform process. The problem is compounded by a legalistic
attitude that interprets RIA in terms of better legislation, whereas the real problem is
governing regulatory policy (Radaelli 1999).
Arguably, the main negative lesson provided by the EU concerns the danger of
skirting around the real issue, that is, institutional design. Overall, the design of RIA
is just one component of the general plan to enhance the democratic legitimacy,
accountability, and transparency of European regulation. This raises the question of
the most appropriate institutional design within which RIA can be efficient,
legitimate, and credible. One possible option is to make the whole regulatory system
more credible and more legitimate by looking at independent agencies as the key to
institutional design (Yataganas 2001). RIA would then be eminently a task for
independent regulatory agencies, with the Commission in charge of monitoring and
quality control. Another option is to keep regulatory functions within the
Commission, but then, arguably, the Secretary General should be strengthened.
Otherwise RIA would remain fragmented.
Instruments such as RIA do not perform in an institutional vacuum. One must ask
whether the current institutional design of EU regulation provides the most
appropriate environment for the development of RIA. If the EU does not tackle the
institutional question, progress will be at best piecemeal and the legitimacy and
credibility of RIA will remain modest.
CONCLUSIONS
This article has argued that a lesson-drawing approach provides useful insights. By
contrast, de-contextualised lists of best practice have three limitations. Firstly, they
see the tree but not the forest. Secondly, even when they come close to the
suggestions of the lesson-drawing approach, they remain rather vague. Thirdly, by
focusing exclusively on success, best practice ignores the useful contribution of
negative lessons and may trigger inefficient cascades of adoption (Strang and Macy
2001).
The contrast, as mentioned above, is not between total de-contextualisation and very
high sensitivity to context. The real issue is one of degrees of contextualisation. The
rather abstract prescriptions of the OECD list of best practice (see footnote 7) can be
usefully complemented by the lessons drawn in this article. For example, the
prescription ‘maximise political commitment’ (OECD list) can be complemented by
the consideration of the policy problems and major reform programmes in which RIA
is nested. The precept on institutional design becomes less abstract when one looks at
the lessons about the political malleability of RIA, the trade-offs between assimilation
and precision, and the relationships between central watchdogs and networks.
Although complementary, the two approaches differ in their emphasis on normative
recommendations and in their definition of success. For the best practice approach (at
least in the form used in the debate on RIA), there is one list of instructions to follow
everywhere. The lesson-drawing approach is less bothered by ‘instructions’ and more
interested in different pathways to success. Indeed, the very notion of success and
achievement is plural rather than singular: in some countries RIA is considered
successful because it produces good cost-benefit analyses, while other countries look
at the implications for institutional reform, state-society relations, and administrative
consensus. Bringing institutions into the analysis of RIA diffusion has the advantage
of considering the prismatic dimension of success.
The lessons also shed light on the limitations of RIA rather than trumpeting success.
Issues of institutional learning, credibility, and legitimacy are rarely discussed in lists
of best practice. Yet the failure of RIA is often the result of systems that hinder
learning, lack credibility, and suffer from poor legitimacy. Most governments do not
report on the costs of their impact assessment systems. In most countries (the US
being an exception) there is no debate on RIA outside the circles of public
administration and government. This is an element of fragility. The
institutionalisation of RIA needs a community of stakeholders and independent
experts willing to check and question the impact assessments produced by the
government.
On balance, the evidence presented here differs from the best practice approach in
another respect, namely the role of theories of the policy process. Benchmarking and
RIA best practice do not say much about the policy process, or they assume a
rational-synoptic theory. The evidence on administrative assimilation and the role of
institutions speaks volumes on the limitations of rational-synoptic theories, in which
RIA fills the information gaps of the decision-maker. The idea is one of a linear
process in which a problem exists, information is lacking, RIA produces information,
and the decision-maker can eventually decide. The lessons illustrated in this article,
instead, chime with a different view of the policy process, one in which bounded
rationality, learning by doing, administrative adaptation, and evolution are the most
important features.
Having said that, it would be simplistic to conclude that the designer of a RIA system
can simply import lessons. History does not provide lessons, but a stock of ambiguous
evidence in search of interpretation. The literature on lesson-drawing and policy
transfer stresses the processes of interpretation, institutional adaptation, and
translation of lessons. There are indeed many mistakes that a designer can make when
importing lessons. But the focus of this article is not normative (that is, it does not
claim to present a catalogue of lessons ready to be used) but analytic. The question of
how policy-makers can learn from analytic lessons needs a separate treatment. One
issue for further research on this topic is that very high contextualisation allows no
generalisation at all. This reduces the scope for cross-national learning. Accordingly,
the suggestion for further research is not to go all the way down the continuum from
high generalisation to extreme contextualisation, but to learn how to draw lessons
without bracketing context14.
14 See Rose (2001; 2002) for suggestions along these lines.
Acknowledgements
I wish to acknowledge the insights and empirical research produced by the researchers
who worked with me in the comparative project funded by the Italian government in
2001: Alessandra Caldarozzi, Sabrina Cavatorto, Fabrizio de Francesco, Alessandra
Francesconi, François Lafond, Salvatore Monni, and Francesco Sarpi. Antonio La
Spina (scientific director of the wider pilot project on RIA of the Italian government)
and Pia Marconi (Presidency of the Council of Ministers) provided inspiration and
support. Ed Page organised the presentation of a draft of this article to an audience of
practitioners and academics at CARR, LSE, 11 March 2002. He also provided
extremely valuable suggestions for turning the LSE draft into an academic article.
Three EJPR reviewers and Morten Kallestrup sent perceptive comments and
criticisms. Finally, I wish to thank the RIA officers who made this project feasible by
giving access to primary source materials and by arranging meetings and interviews.
The usual disclaimer applies.
References
Adler, M.D. and E.A. Posner (2001) (Eds.) Cost-Benefit Analysis. Legal, Economic, and Philosophical
Perspectives, Chicago and London: University of Chicago Press.
Bikhchandani, S. Hirshleifer, D. and I. Welch (1992) ‘A theory of fads, fashion, custom, and cultural
change as informational cascades’, Journal of Political Economy, 100: 992-1026.
Bikhchandani, S. Hirshleifer, D. and I. Welch (1998) ‘Learning from the behavior of others:
Conformity, fads, and informational cascades’, Journal of Economic Perspectives, 12: 151-170.
Boardman, A.E., Greenberg, D.H., Vining, A.R. and D. Weimer (2001) Cost-Benefit Analysis:
Concepts and Practice, 2nd edition, Upper Saddle River, NJ: Prentice Hall.
Bosanquet, N. (1981) ‘Sir Keith’s reading list’ Political Quarterly, 52(3): 324-341.
Commission of the European Communities (2002) ‘Communication from the Commission on Impact
Assessment’ COM(2002) 276 Final, 5 June 2002.
Czarniawska, B. (1997) Narrating the Organization. Dramas of Institutional Identity, Chicago and
London: University of Chicago Press.
DiMaggio P. and W.W. Powell (1991) ‘The iron cage revisited: institutional isomorphism and
collective rationality in organizational fields’, in W.W. Powell and P. DiMaggio (eds.) The New
Institutionalism in Organizational Analysis, Chicago: University of Chicago Press, 63-82.
Dolowitz, D. and D. Marsh (1996) 'Who learns from whom: a review of the policy transfer literature',
Political Studies, XLIV: 343-57.
EPC - European Policy Centre (2001) Regulatory impact analysis: Improving the quality of EU
regulatory activity, EPC Occasional Paper, Brussels, September 2001.
Froud, J., R. Boden, A. Ogus and P. Stubbs (1998) Controlling the Regulators, Basingstoke:
Macmillan.
Hahn R. (1999), “Regulatory Reform: Assessing the Government’s own Numbers”, AEI-Brookings
Joint Center for Regulatory Studies, Working Paper 99-6 (July 1999).
Hall, P. (1993) ‘Policy Paradigms, Social Learning and the State. The Case of Economic Policy-Making in Britain’, Comparative Politics, 25(3) April: 275-296.
Hemerijck, A. and J. Visser (2001) Learning and mimicking: How European welfare states reform,
typescript.
House, E. R. (2000), ‘The Limits of Cost Benefit Evaluation’, Evaluation, Vol. 6(1): 79-86.
Jacoby, W. (2000) Imitation and Politics: Redesigning Modern Germany, Ithaca and London: Cornell
University Press.
James, O. (2000) ‘Regulation inside government. Public interest justifications and regulatory failures’,
Public Administration, 78(2): 327-343.
Lutter, R. (2001) Economic analysis of regulation in the US: What lessons for the European
Commission?, Report to the Enterprise Directorate-General, European Commission, 26 June 2001.
Majone, G.D. (1990) (ed) Deregulation or Re-regulation? Regulatory Reform in Europe and the United
States, London: Pinter.
Mandelkern Group on Better Regulation (2001) Final report, 13 November 2001, typescript.
National Audit Office (2001) Better Regulation: Making good use of regulatory impact assessments,
Report by the Comptroller and Auditor General, HC 329 Session 2001-2002, 15 November 2001.
OECD (1997a) Regulatory Impact Analysis. Best Practices in OECD Countries, OECD Publications:
Paris.
OECD (1997b) The OECD Report on Regulatory Reform: Thematic Studies, OECD Publications,
Paris.
OECD (1999a) The Netherlands, Project “Strategic Review and Reform” Country Paper, OECD, Paris,
available in OECD web site (www.oecd.org). Accessed in March 2001.
OECD (1999b) Regulatory Reform in the Netherlands, OECD Publications, Paris.
OECD (2000) Regulatory Reform in Denmark, OECD Publications, Paris.
Page, E. (2000) Future Governance and the literature on policy transfer and lesson drawing,
Introduction prepared for the ESRC Future Governance Programme Workshop on Policy Transfer, 28
January 2000, Britannia House, London.
Pollitt, C. (2001) ‘Convergence: The useful myth?’ Public Administration, 79(4): 933-947.
Pollitt, C. and G. Bouckaert (2000) Public Management and Reform. A Comparative Analysis, Oxford:
Oxford University Press.
Radaelli, C.M. (1999) 'Steering the Community regulatory system: the challenges ahead', Public
Administration, 77(4): 855-71.
Radaelli, C.M. (2001) (ed.) ‘Regulatory impact analysis in comparative perspective’ (in Italian),
Soveria Mannelli, Rubbettino Editore.
Richardson, K. (2000) Big business and the European agenda, Sussex Working Paper no.35, University
of Sussex, September 2000.
Rose, R. (1991) 'What is lesson-drawing?', Journal of Public Policy, 11: 3-30.
Rose, R. (2001) Ten steps in learning lessons from abroad, Hull: ESRC Future of Governance papers
no.1, 2001.
Rose, R. (2002) ‘When all other conditions are not equal: The context for drawing lessons’ to appear in
C. Jones Finer (Ed) Social Policy Reform in Socialist Market China: Lessons for and from Abroad’
Aldershot: Ashgate.
Rose R. and P. Davies (1994) Inheritance in Public Policy: Change without Choice in Britain, New
Haven: Yale University Press.
SIGMA (2001) Improving policy instruments through impact assessment, Sigma paper n.31, Paris:
OECD-PUMA.
Strang, D. and M.W. Macy (2001) ‘In search of excellence: Fads, success stories, and adaptive
emulation’, American Journal of Sociology, 107(1) July: 147-182.
Thiele, L. P. (2000), 'Limiting risks: Environmental ethics as a policy primer', Policy Studies Journal,
28(3): 540-57.
Virani, S. and S. Graham (1998), Economic Evaluation of Environmental Policies and Legislation,
Final Report prepared for European Commission Directorate General III (Industrial Affairs) by Risk &
Policy Analysts Limited, Project: Ref/Title J236/CBA/CEA.
Yasin, M.M. (2002) ‘The theory and practice of benchmarking: then and now’, Benchmarking, 9(3):
217-243.
Yataganas, X.A. (2001) Delegation of regulatory authority in the European Union. The relevance of the
American model of independent agencies, Jean Monnet working paper no.3/01, Harvard Law School.
Table 1 – Political determination
(Y = yes; N = no; NA = not available or issue in progress)

                                  Mex   Can   US      Aus
Political commitment to RIA       Y     Y     Y       Y
Guide to RIA                      Y     Y     Y       Y
Training programmes for RIA       NA    Y     N(19)   Y

15 Although the French RIA is not well developed, there have been recent attempts to assess the
current state of affairs and to make proposals for improvement.
16 Since the mid-1990s the political determination to develop RIA has declined.
17 Momentum for the development of a proper RIA system in the EU was reached only recently, in
2002.
18 Technical guidelines for impact assessment and training programmes are in preparation
(Commission 2002).
19 Training sessions proposed by the Office of Management and Budget in 1998.

Table 2 – Quality control of RIA
(Y = yes; N = no; NA = not available or issue in progress)
Indicators: central body for quality control and monitoring of the RIA process; regulatory review of
the OECD; annual report on RIA published by the government; RIA sent to parliament before the bill
is discussed; RIA results made public when laws are published; public debate on the quality of the
RIA process; annual cost of RIA is known.

20 The Department of Finance checks almost exclusively the financial and administrative impact.
21 There is no central unit for RIA in the Netherlands. A small helpdesk assists the departments.
Control is diffuse rather than concentrated in this country.
22 Each department presents a performance report to Parliament.
23 There is no annual report on RIA. However, the Better Regulation Task Force monitors the state of
the art, undertakes specific inquiries (for example on economic regulators), and produces an annual
report. In 2001 the quality of RIA in the UK was assessed by an independent report of the National
Audit Office (2001).
24 Report on ‘Legislation and business’. The report covers the administrative and economic burden on
business of new legislation.
25 In 1997 the Commission published a report on business impact assessment, but this report was not
evaluative. The annual Better Lawmaking report of the Commission is not focused on impact
assessment. The Mandelkern Group has produced a succinct analysis of impact assessment in the EU
and a list of proposals.
26 In the US, RIA is applied to the proposed regulations of executive federal agencies. It does not
involve laws made in Congress.
27 The debate is focused on the poor results of impact assessment in France and on how to improve
them.
28 The budget of the Regulatory Impact Unit is known, of course. But there are no data on the total
costs of RIA (departmental workload, for example).
29 An estimate of the cost of a possible integrated impact assessment system is contained in EPC
(2001).

Table 3 – Scope of RIA
(Y = yes; N = no; NA = not available or issue in progress)
Indicators: RIA is selective; RIA of single MPs’ bills; regional or subnational RIAs; RIA of existing
laws; specific RIA procedure for EU regulation.

30 Criteria defined in Executive Order 12866.
31 The Australian RIA is very broad and comprehensive. However, there are explicit circumstances in
which RIA is not compulsory. When conflict arises over the necessity to undertake an impact
assessment, the Office of Regulation Review decides.
32 This should happen but, in practice, it does not.
33 There is no federal obligation to produce impact assessments at the state level, but some states have
adopted a RIA system.
34 In a sense, SLIM (Simpler Legislation for the Internal Market) comes close to some objectives of
RIA. However, SLIM is about simplification. It does not produce impact assessments.

Table 4 – Methodology
(Y = yes; N = no; NA = not available or issue in progress)
Indicators: assessment of alternative options (both regulatory and non-regulatory); RIA covers the
costs of companies; RIA covers the benefits of companies; RIA covers the costs and benefits for
citizens; RIA covers the impact on public administration; RIA is eminently quantitative; detailed
methodology for the quantitative analysis of costs; detailed methodology for the quantitative analysis
of benefits.

35 As the main emphasis is on red tape, the most common data on benefits refer to the benefits arising
from the abolition of red tape. Quantitative information on other types of costs exists, but it is less
systematic.
36 This is one of the goals of the Dutch RIA, but overall the level of quantitative analysis remains poor
by comparative standards.
37 Most Danish RIAs do not cover this dimension, although it is not neglected in the analysis of
proposed environmental regulations and in the analysis of red tape.
38 For RIAs with major impact.
39 Partial estimates exist, but they do not provide a comprehensive analysis of costs and benefits. The
qualification of the Danish RIA as more qualitative than quantitative is confirmed by the information
sent by the Danish government to the Commission for the Best Procedure workshop on 26 June 2001.
40 The Commission (2002: 16) argues for a balance between quantitative and qualitative analysis.
Technical guidelines are in preparation.
41 Costs of firms. The expected administrative burdens are analysed by dint of different types of
business test panels (test panels, focus panels, and test groups).

Table 5 – Consultation
(Y = yes; N = no; NA = not available or issue in progress)
Indicators: common guidelines for consultation (for all departments); results of consultation are made
public; specific procedures for the consultation of certain stakeholders (e.g. business test panels,
consumer polls, focus groups); systematic (as opposed to episodic) use of the internet.