Concepts and Ideas
Monitoring and Evaluation in the practice of European Cohesion Policy 2014+
- European Regional Development Fund and Cohesion Fund –
This draft is a paper for discussion between DG Regional Policy and the Member States' evaluation network in April 2011. The paper will be revised taking into account the
discussions and – later in 2011 – the regulatory proposals of the European Commission for a
future Cohesion Policy.
The common indicators in annex 1 are set to become part of a regulation (most likely the ERDF regulation or an implementing regulation).
Note that the definitions for "result" and "impact" differ from the current practice.
Table of contents
1. Key concepts
1.1. Intervention logic of a programme as starting point
1.2. Monitoring and evaluation
1.2.1. Monitoring
1.2.2. Evaluation
1.2.2.1. Impact evaluation
1.2.2.2. Implementation evaluation
1.2.2.3. The evaluation of integrated programmes
2. Standards for evaluations
3. Practical points for the 2014-20 programming period
3.1. Programming
3.2. Ex ante evaluation of Operational Programmes
3.3. Monitoring of Operational Programmes and of the partnership contract
3.4. Evaluation during the programming period
3.5. Evaluation plan
3.6. Ex post evaluation
Glossary
Annexes
1 List of common indicators
2 Examples of result indicators
3 A structure for standards
4 Recommended reading
30 March 2011
1. Key concepts
This first section explains a common understanding of concepts and key terminology as a
basis for the remainder of the paper.
1.1. Programming as starting point: results and result indicators [1]
The starting point of designing any public intervention is to identify a problem or a need to be
addressed. In essence, as there will always be a multitude of real or perceived needs, the decision on which needs should be tackled is the result of a deliberative social process (a "political decision"). Part of this process is also to define the desired situation to be reached, as a change from the current one. A public intervention will often aim at more than
one result. For instance, investment in the railway network might aim to improve the
accessibility of a region and to reduce the burden on the environment.
The intended result, or simply result, is the specific dimension of the well-being and progress of people that motivates policy action, i.e. that which is to be modified by the interventions designed.
Once a result has been chosen it must be represented by appropriate measures. This can be
done by selecting one or more result indicators. Examples for the above case of railways are a
reduction in travel time, a reduction in CO2 emissions and fewer traffic fatalities.
Result indicators are variables that provide information on specific aspects of results that lend themselves to measurement.
A precise definition of result indicators aids understanding of the problem and the policy need, and will facilitate a later judgement about whether or not objectives have been met. In this context it can be useful to set targets for result indicators. Concentration of resources on a limited number of interventions is obviously helpful to achieve such clarity.
Having identified needs and a desired result does not yet mean that the public intervention has been fully designed: in most cases, different factors can drive a change. Any policymaker must analyse these factors and decide which ones will be the object of public policy. In other words, an intervention logic must be established. For example, if a reduced number of traffic accidents is the result indicator of a programme, safer roads, a modal shift towards rail or better driver behaviour could be assumed to change the situation. The programme designer must clarify which of these factors the programme intends to affect. The specific activity of programmes can typically be captured by output indicators.
Outputs are the direct products of programmes; they are intended to contribute to results.
Often it can be useful to illustrate an intervention graphically by a logical framework. As
mentioned above, such a stylised representation of a programme should reflect that an
intervention can have several results to be addressed and that several outputs can lead to these
changes. Equally, it can be useful to differentiate the result(s) by affected groups and time
horizons.
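
As an illustration only, such a logical framework can be written down as a simple data structure. The following Python sketch is our own; the field names and the railway example values are assumptions, not part of the paper or of any regulation:

from dataclasses import dataclass, field

@dataclass
class ResultIndicator:
    # A measurable aspect of an intended result (see the definitions above).
    name: str
    unit: str
    baseline: float | None = None   # value at the start of the period, where known
    target: float | None = None     # desired value, where one can sensibly be set

@dataclass
class Intervention:
    # A stylised intervention logic: outputs intended to contribute to one
    # or more results, differentiated by affected group and time horizon.
    name: str
    outputs: list[str]
    results: list[ResultIndicator]
    affected_groups: list[str] = field(default_factory=list)
    time_horizon: str = ""

# The railway example from the text: one intervention, several results.
railway = Intervention(
    name="Investment in the railway network",
    outputs=["km of new railway line", "km of upgraded railway line"],
    results=[
        ResultIndicator("travel time between main cities", "minutes"),
        ResultIndicator("CO2 emissions from transport", "tons CO2eq per year"),
        ResultIndicator("traffic fatalities", "persons per year"),
    ],
    affected_groups=["commuters", "freight operators"],
    time_horizon="medium term",
)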
[1] This section benefits from the methodological note "Outcome indicators and targets" produced for DG Regional Policy by an expert group led by F. Barca and P. McCann. In this paper, the meaning of the term "result" is the same as "outcome" in the Barca/McCann paper.
Impact is the contribution of the outputs supported by the policy to the change in the result indicator.
Differences of concepts and terms between 2007-13 and 2014+
In 2007-13, impact meant the ultimate effect of the intervention, in most cases after a significant time lapse. Member States and the Commission dedicated resources to distinguishing results (direct or short-term effects) from impacts, without sufficient attention to the question of how values for these categories could be obtained.
The new approach shifts the accent at all stages in the process to the policy objectives being
targeted. This enhances evaluability, as clarity of intended changes and ex ante identification
of evaluation methods means that the results of the policy can be monitored and evaluated.
Real policy decisions are driven by needs of a different nature, be they "impacts" or "results". The new approach brings evaluation guidance closer to this real decision making.
In large parts of the literature, "impact" means the effect of the intervention net of other influences on a certain variable, regardless of whether that variable belongs to outputs, results or impacts in the traditional sense.
The new approach follows the argumentation in the literature. The aim is again to centre
attention on the question of evaluability.
Taken together, Cohesion policy is set to become more result-oriented with programmes that
are designed to deliver and to be evaluated.
1.2 Monitoring and evaluation: support to management and capturing effects
The public expects managing authorities to fulfil two essential tasks when running a
programme:
- to deliver the programme in an efficient manner and to be accountable for this (the management of a programme), and
- to verify with credibility whether a programme has delivered the desired effects.
We will argue below that monitoring is a tool serving foremost the management purpose, while evaluation contributes to both tasks.
Policy learning is an overarching objective of all evaluations.
1.2.1 Monitoring
To monitor means to observe. Monitoring of outputs means to observe whether desired
products are occurring and whether implementation is on track. In general, the outputs
measured are the direct and near-term consequences of project activities.
Cohesion policy programmes are implemented in the context of multilevel governance with a
clear demarcation of roles and responsibilities. The actors in this system – implementing
agencies, managing authorities, the national and the EU level - differ in their information
needs to be met by monitoring. One of the tasks at the European level is to aggregate certain information across all programmes in order to be accountable to the Council, the Parliament, the Court of Auditors and EU citizens in general for what Cohesion Policy resources are spent on. This is the task of the common indicators, mostly output indicators, defined at EU level.
Monitoring also observes changes in the result indicators (policy monitoring). Tracking
the values of result indicators allows a judgement on whether or not the indicators move in the
desired direction, in other words, if needs are being met. If they are not, this can prompt
reflection on the appropriateness and effectiveness of interventions or, indeed, on the
appropriateness of the result indicators chosen.
The values of result indicators, both for baselines and at later points in time, can in some cases be obtained from national or regional statistics [2]. In other cases it might be necessary to carry out surveys or to use other observation techniques.

[2] These statistics are also used for benchmarking Member States, regions, programmes or kinds of activities, however without the purpose of linking the change in these statistics to the programmes' interventions.
1.2.2 Evaluation
Changes in the result which actually take place stem from the actions co-financed by the public intervention, for example by the Funds, as well as from other factors. In other words, knowing the difference between the situation before and after the public intervention in most cases does not equal knowing the effect of the public intervention.

Change in result indicator = contribution of intervention + contribution of other factors
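
In symbols (our notation, added for illustration): if $Y_0$ is the baseline value of a result indicator, $Y_1$ its observed value after the intervention, and $Y_1^0$ the hypothetical value that would have been observed without the intervention, then

\[
\underbrace{Y_1 - Y_0}_{\text{observed change}}
= \underbrace{(Y_1 - Y_1^0)}_{\text{contribution of intervention}}
+ \underbrace{(Y_1^0 - Y_0)}_{\text{contribution of other factors}}
\]

Monitoring observes only the left-hand side; impact evaluation tries to estimate the first term on the right, which requires reconstructing the unobservable $Y_1^0$.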
1.2.2.1 Impact evaluation – capturing effects
To disentangle the effects of the intervention from the contribution of other factors and to
understand the functioning of a programme is the task of impact evaluation. Two distinct questions are to be answered:
- Did the public intervention have an effect at all and, if yes, how big – positive or negative – was this effect? The questions are: Does it work? Is there a causal link?
- Why does an intervention produce intended and unintended effects? The goal is to answer the "why does it work?" question.
Sometimes, we can provide quantified evidence that an intervention works. More often,
evaluations can provide judgements on whether the intervention worked or not. In both cases
it is preferable to design a methodology which uses more than one method ("triangulation"
suggests using 3!).
The importance of theory-based impact evaluations stems from the fact that a great deal of information besides the quantifiable causal effect is useful to policy makers to make decisions and to be accountable to citizens. The question of why a set of interventions produces effects, intended as well as unintended, for whom and in which context, is as relevant and important as the "made a difference" question, and at least as challenging. This approach does not produce a number; it produces a narrative. Theory-based evaluations can provide a precious and rare commodity: insights into why things work, or don't. The main focus is not a counterfactual ("how things would have been without") but rather a theory of change ("how things should logically work to produce the desired change"). The centrality of the theory of change justifies calling this approach theory-based impact evaluation.
Typical methods are the use of administrative data, literature reviews, case studies, interviews, surveys and other qualitative methods. Often-mentioned approaches are realist evaluation, general elimination methodology and participatory evaluation. The evidence marshalled during such an evaluation, both of quantitative and qualitative nature, should enable the evaluator to answer the evaluation questions and to provide a judgement on the success of the public intervention. As for all other evaluations, this judgement will be based on imperfect information. What is important is that the evidence base is good enough to allow decision making with the degree of certainty necessary for the intervention under consideration.
Counterfactual impact evaluations have the potential to provide a credible answer to the
question "Does it work?". The central question of counterfactual evaluations is rather
narrow—how much difference does a treatment make—and produces answers that are
typically numbers, or more often differences, to which it is plausible to give a causal
interpretation based on empirical evidence and some assumptions. Is the difference observed
in the outcome after the implementation of the intervention caused by the intervention itself,
or by something else? Evaluations of this type are based on models of cause and effect and
require a credible and rigorously defined counterfactual to control for factors other than the
intervention that might account for the observed change.
Typical methods are difference-in-differences, regression discontinuity design, propensity score matching, the use of instrumental variables and randomised controlled trials. The existence of baseline data and of information on the situation of supported and non-supported beneficiaries at a certain point in time after the public intervention is a critical precondition for the applicability of counterfactual methods.
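
As a purely illustrative sketch of the first of these methods, here is a two-period difference-in-differences calculation in Python. The data, column names and simple setup are our assumptions, not from the paper; a real evaluation would also need a defensible parallel-trends argument:

import pandas as pd

# Hypothetical firm-level panel: the result indicator (employees) observed
# before (2014) and after (2018) the intervention, for supported (treated=1)
# and non-supported (treated=0) firms.
df = pd.DataFrame({
    "firm_id":   [1, 1, 2, 2, 3, 3, 4, 4],
    "year":      [2014, 2018] * 4,
    "treated":   [1, 1, 1, 1, 0, 0, 0, 0],
    "employees": [10, 16, 8, 13, 12, 15, 9, 11],
})

# Mean outcome per group and period.
means = df.groupby(["treated", "year"])["employees"].mean()

# Difference-in-differences: the change in the treated group minus the
# change in the control group. Under the parallel-trends assumption this
# isolates the contribution of the intervention from other factors.
did = (means[1, 2018] - means[1, 2014]) - (means[0, 2018] - means[0, 2014])
print(f"Estimated effect: {did:.2f} employees per firm")  # 5.5 - 2.5 = 3.0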
Note that counterfactual methods can typically be applied only to some interventions (e.g. training, enterprise support), i.e. relatively homogeneous interventions with a high number of
beneficiaries. If a public authority wishes to estimate the effects of interventions for which
counterfactual methods are inappropriate, other methods can be used. For the transport
example, this could be an ex post cost-benefit analysis or a sectoral transport model.
Ideally, counterfactual impact evaluations and theory based evaluations should
complement each other. While they should be kept separate methodologically, policymakers
should use the results of both sets of methods as they see fit. Even assuming that counterfactual methods proved that a certain intervention worked, and could even put a number on this, this is still a finding about one intervention under certain circumstances [3].
More qualitative, "traditional" evaluation techniques are needed to understand to which
interventions these findings can be transferred and what determines the degree of
transferability.
Impact evaluations of both types are carried out during and after the programming period (ex post evaluation). A well-defined set of impact evaluations during a programming period also means that the often-cited problem of "late" ex post evaluations loses importance.
[3] For example, an intervention might not work because of circumstances that can easily be modified; this understanding will not be provided by a pure counterfactual evaluation.
The ex ante evaluation of programmes can be understood as a kind of theory-based
evaluation, testing the strength of the theory of change and the logical framework before the
programme is implemented.
Are counterfactual methods another burden on beneficiaries?
The data requirements for counterfactual impact evaluation do not need to be burdensome. In
fact, the counterfactual method is at its best when relatively simple indicators are considered,
such as:
- Patent applications [4]
- Number of employees [5]
- Investment and turnover [6]
These data are already collected from firms, whether by the tax and labour authorities, by patent offices or by databases such as AMADEUS. The only remaining data burden falls not on firms but on managing authorities (who should be able to specify which firms were assisted by which instrument, and by how much, in order to construct the treated and control groups).
As a result, in terms of burden on beneficiaries, counterfactual impact evaluation is far less demanding than more traditional methods, such as monitoring data (which require reporting by firms) and beneficiary surveys (which require firms to respond to interviews and questionnaires).
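
To make the last point concrete, here is a minimal sketch of how a managing authority's records could be combined with an administrative register to build the two groups. The file and column names are hypothetical, chosen for illustration:

import pandas as pd

# Hypothetical inputs: the managing authority's record of assisted firms
# (firm, instrument, amount) and an administrative register that already
# holds the indicators of interest; firms report nothing extra.
assisted = pd.read_csv("assisted_firms.csv")   # columns: firm_id, instrument, grant_eur
register = pd.read_csv("firm_register.csv")    # columns: firm_id, year, employees, turnover

# Flag each firm in the register as treated (assisted) or potential control.
register["treated"] = register["firm_id"].isin(assisted["firm_id"]).astype(int)

treated_group = register[register["treated"] == 1]
control_group = register[register["treated"] == 0]
print(len(treated_group), "treated rows,", len(control_group), "control rows")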
1.2.2.2 Implementation evaluation – the management side
Implementation evaluations look at how a programme is being implemented and managed. Typical questions are whether potential beneficiaries are aware of the programme and have access to it, whether the application procedure is as simple as possible, whether there are clear project selection criteria, whether there is a documented data management system, whether the results of the programme are communicated, etc.
The methods of implementation evaluation are similar to those of theory-based evaluations. Evaluations of this type typically take place early in the programming period.
Is there an ideal evaluation guaranteeing valid answers?
As illustrated by the example of impact evaluations, all methods and approaches have their strengths and weaknesses. All evaluations need:
- to be adapted to the specific question to be answered, and to the subject of the programme and its context;
- whenever possible, to look at evaluation questions from different viewpoints and with different methods (the principle of triangulation);
- to have costs justified by the possible knowledge gain; when deciding on an evaluation, it needs to be considered what is already known about the intervention.
In sum: a mixed-method approach is the best way to evaluate.

[4] Counterfactual Impact Evaluation of Cohesion Policy. Work Package 2: examples from support to innovation and R&D. DG REGIO, 2011.
[5] Ex post evaluation of the ERDF. Work Package 6c, DG REGIO, 2010.
[6] Ex post evaluation of the ERDF. Work Package 6c, DG REGIO, 2010, and Counterfactual Impact Evaluation of Cohesion Policy. Work Package 1: examples from enterprise support. DG REGIO, 2011.
To date, Cohesion Policy evaluations have tended to focus more on implementation issues than on capturing the effects of interventions. For the 2014+ period, the Commission wishes to redress this balance and encourage more evaluations at EU, national and regional level which
explore the impact of Cohesion Policy interventions on the well-being of citizens, be it
economic, social or environmental or a combination of all three. This is an essential element
of the strengthened results-focus of the policy.
1.2.2.3 The evaluation of integrated programmes
The evaluation of integrated programmes covering a range of different but interlinked
interventions represents a special challenge. A possible strategy is to evaluate first the constituent components of an integrated programme. If their effectiveness can be
demonstrated, it becomes more plausible that the whole programme is delivering on its
objective.
As a second element, evaluators could assess whether the intervention logic and objectives of
the different components fit with each other and make synergies likely to occur.
Thirdly, it is possible to apply methods that assess the effect of the integrated package as a
whole. Traditionally this has been undertaken by macroeconomic models. Other methods are
also being tested, for example counterfactual methods comparing the development of
supported with non-supported regions. As noted above, a combination of methods is likely to
be most effective.
2. Standards for evaluations
In order to ensure the quality of evaluation activities, the Commission recommends that Member States and regions base their work on clearly identified standards, whether established by themselves or taken from the European Commission, national evaluation societies, the OECD or other organisations. Most of the standards converge on principles
such as the necessity of planning, the involvement of stakeholders, transparency, use of
rigorous methods, independence and dissemination of results. A possible structure with some
explanations is provided in annex 3.
We recommend the consultation of the following sources:
- Quality of an evaluation report: EVALSED, The Guide.
http://ec.europa.eu/regional_policy/sources/docgener/evaluation/evalsed/guide/designing_implementing/managing_evaluations/quality_en.htm
- Website of the European Evaluation Society: it provides access to the standards of national
evaluation societies.
http://www.europeanevaluation.org/library/evaluation-standards.htm
- OECD, 1992. Principles for evaluation of development assistance.
http://www.oecd.org/dataoecd/31/12/2755284.pdf
3. Practical points for the programming period 2014-20
The intention of this section is to give (future) programme managers some pragmatic ideas on what is required for the monitoring and evaluation of Cohesion Policy, taking into account the ideas and principles sketched out in the previous section of this paper and what has already been presented in the 5th Cohesion Report.
3.1 Programming
Programmes with a clear identification of the changes sought, concentrated on a limited number of interventions, are a decisive condition for efficient and effective monitoring and evaluation throughout the programming period.
3.1.1 Clear objectives as key condition for effective monitoring and evaluation
Each priority (or sub-priority) should identify the socio-economic phenomenon that it intends to change – the result – and one (or a very few) result indicators that best express this intended change. Each priority should express the direction of the desired change (e.g. a reduction or growth in the value of the result indicator). Setting a quantified target or a range for the addressed result indicator, or for the contribution of the programme, might be possible in selected cases.
Each result indicator needs a baseline value. A baseline is the value of a result indicator at the beginning of the programming period (for example, the number of start-ups per year for a priority that intends to drive up the number of start-ups in a region). Information about the activity of a programme in the past does not constitute a baseline value (for example, the number of start-ups supported in the past).
Annex 2 lists examples of result indicators for different thematic priorities. It should be noted
that these are structured on the basis of the EU2020 Strategy and demonstrate credible
intermediate steps which show the link between actions on the ground and the EU2020
headline targets.
3.1.2 Provisions for monitoring in Operational Programmes
Output indicators should cover all parts of a programme. The indicators first need to capture
the content of individual interventions. Within this frame, Member States should use
indicators from the list of common indicators whenever appropriate (see annex 1).
The programme should set targets for output indicators for the effective end of the
programming period. In most cases, this would mean setting targets for 2022. Baselines for
output indicators would not be required.
Output indicators should be linked to categories of expenditure.
3.2 Ex ante evaluation of Operational Programmes
An ex ante evaluation, as a rule under the responsibility of the future managing authority,
should appraise the following elements in order to improve the quality of operational
programmes:
- the justification for the thematic priorities selected, and their consistency with the EU2020 strategy, the Common Strategic Framework and the partnership contract;
- the relevance and clarity of the proposed result indicators and output indicators;
- the plausibility of the targets for the indicators and of the explanation concerning the contribution of the outputs to the results;
- the consistency between the allocated financial resources and the targets for output indicators;
- the administrative capacity for management and implementation of the operational programme;
- the quality of the monitoring system, and of how necessary data will be gathered to carry out evaluations.
3.3 Monitoring
3.3.1 Monitoring of Operational Programmes – the annual report
Discussions in the monitoring committee and annual reports are key elements of the monitoring of an Operational Programme. The use of quantitative information is one of the tools. To date, annual reports have followed a "checklist approach". The key change for the future would be that reports should analyse the information presented.
The annual report should:
- provide information about the implementation of a programme (see the sketch after this list). Besides financial data, this could require providing cumulative values for output indicators, starting from the second year of implementation. Both actual and expected values would be necessary. Transmission of such data is an obligatory part of annual reports;
- if possible, report progress towards the desired result. This means providing values for the result indicators of programmes, taken either from statistics or from information sources specific to the priority (such as dedicated surveys), at particular points in time. Note that such values encompass both the contribution of the programme and the contribution of other factors;
- provide a qualitative analysis of the contribution of the programme towards the change in result indicators, using financial data, output indicators and managerial knowledge gained during implementation. When they become available, evaluations will provide additional information and insights to be used here;
- analyse why objectives are being achieved or not, and judge whether priorities and the programme are on track.
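
A trivial sketch of the cumulative-values point above, in Python (the indicator, years and figures are invented for illustration):

import pandas as pd

# Hypothetical yearly values of one output indicator (km of new road).
df = pd.DataFrame({
    "year":     [2015, 2016, 2017],
    "actual":   [12.0, 20.0, 15.0],   # km actually built in each year
    "expected": [15.0, 20.0, 20.0],   # km planned for each year
})

# Annual reports would transmit cumulative actual and expected values,
# starting from the second year of implementation.
df["actual_cum"] = df["actual"].cumsum()
df["expected_cum"] = df["expected"].cumsum()
print(df[df["year"] >= 2016])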
3.4 Evaluation during the programming period
Evaluation during the programming period should follow the evaluation plan adopted at the
beginning of the programming period, reflecting and adapting to the changing needs of the
individual programmes.
Ideally, all three types of evaluation – theory-based evaluation, counterfactual evaluation and implementation evaluation – would play their role. Implementation evaluations supporting the smooth delivery of a programme are more likely to be useful in the early stages of implementation. Evaluations capturing the effect of priorities and looking into their theory of change are more likely to occur at a later stage. This can include an examination of the impacts of similar interventions in a previous programming period.
Each priority should be covered at least once by an impact evaluation.
A summary evaluation in 2020 could wrap up the main evaluation findings and key information from the monitoring system. One of its main purposes would be to feed the ex post evaluation under the lead responsibility of the European Commission.
3.5 The evaluation plan
It is the purpose of an evaluation plan to improve the quality of evaluations carried out during
the programming period.
3.5.1 Establishing an evaluation plan
After adoption of the Operational Programme, the Member State or region would adopt an
evaluation plan. The plan would be sent to the Commission for information.
3.5.2 Elements of an evaluation plan
An evaluation plan at national or regional level could include the following elements (a stylised example follows the list):
- an indicative list of evaluations to be undertaken, their subjects and rationale;
- the methods to be used for the individual evaluations and their data requirements;
- provisions ensuring that data required for certain evaluations will be available or will be collected;
- a timetable;
- the human resources involved;
- the indicative budget for implementation of the plan.
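
Purely as an illustration of these elements, one entry of such a plan could be represented as follows in Python (all names and values are our own invention, not a prescribed format):

from dataclasses import dataclass

@dataclass
class PlannedEvaluation:
    # One entry of an indicative evaluation plan; the fields mirror the
    # elements listed above.
    subject: str
    rationale: str
    method: str
    data_requirements: list[str]
    data_provisions: str      # how the required data will be made available
    scheduled_year: int
    human_resources: str
    budget_eur: int

example_entry = PlannedEvaluation(
    subject="SME grant scheme under priority 2",
    rationale="large budget share; effects so far unknown",
    method="counterfactual impact evaluation (difference-in-differences)",
    data_requirements=["list of assisted firms", "firm register 2014-2018"],
    data_provisions="data-sharing agreement with the statistical office",
    scheduled_year=2018,
    human_resources="external evaluator plus steering group",
    budget_eur=80_000,
)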
3.5.3 Annual review of evaluation plan
The Monitoring Committee would review the evaluation plan once per year and adopt
necessary amendments. Note that the existence of an evaluation plan would not exclude the
possibility of ad hoc evaluations.
3.6 Ex post evaluation
The purpose of the ex post evaluation would be to obtain a view of the programming period as a whole. It should be able to tell what has been achieved and answer the question of whether it was worth doing.
The ex post evaluation could be a responsibility of the Commission, in collaboration with Member States, to be finished by 2022. The ex post evaluation would be facilitated by evaluations of
Member States and the Commission during the programming period, especially by the Member States' summary of evaluations in 2020.
3.7 Transparency
All evaluations should be made public, preferably via the internet. English abstracts are recommended to allow for a European exchange of evaluation findings.
Glossary
Baseline
The value of the indicator before the policy intervention at stake is undertaken.
Causality
The (unknown) relation which links a given output of a policy intervention, produced by spending financial resources (the inputs), to a change in results.
Common indicators
A list of indicators with agreed definitions and measurement units, to be used where relevant in Operational Programmes, ensuring comparability.
Evaluation
Evaluation is the systematic collection and analysis of information about the characteristics and results of programmes and projects as a basis for judgements, to improve effectiveness and/or inform decisions about current and future programming.
Impact
The effect of a policy intervention on a result. The assessment of impacts
cannot be derived from the mere observation of changes in the result indicators
compared to the baseline, since many other factors can contribute to these
changes. It rather requires dedicated techniques, reconstructing causality, or
comparing the population affected by the intervention with a “similar”
population not affected (counterfactual impact evaluation).
Indicator
A variable that provides quantitative or qualitative information on a
phenomenon. It normally includes a value and a measurement unit.
Method
Methods are families of evaluation techniques and tools that fulfill different
purposes. They usually consist of procedures and protocols that ensure
systemisation and consistency in the way evaluations are undertaken. Methods
may focus on the collection or analysis of information and data; may be
quantitative or qualitative; and may attempt to describe, explain, predict or
inform action. The choice of methods follows from the evaluation questions
being asked and the mode of enquiry - causal, exploratory, normative etc.
Methodology
Most broadly, the overall way in which decisions are made to select methods, based on different assumptions about what constitutes knowing (ontology) and what constitutes knowledge (epistemology), and, more narrowly, how this can be operationalised, i.e. interpreted and analysed (methodology).
Result
The specific dimension of the well-being of people (as consumers, workers,
entrepreneurs, savers, family or community members, etc.) that motivates
policy action, i.e. that is expected to be modified by the interventions designed
and implemented by a policy. Examples are: the improvement in mobility
pursued by building transport infrastructures; the increased competence pursued
by providing additional or modified training; the reduced credit rationing of SMEs pursued by providing them with subsidised loans.
Result indicator
An indicator describing a specific aspect of a result, a feature which can be
measured. Examples are: the time needed to travel from W to Y at an average
speed, as an aspect of mobility; the results of tests in a given topic, as an aspect
of competence; the share of firms denied credit at any interest rate, as an aspect
of banks’ rationing.
Output indicator
An indicator describing the "physical" effect of spending resources through policy interventions. Examples are: the length, width or quality of the roads built; the number of extra teaching hours provided by the intervention; the capital investment made by using subsidies.
Planned output
The planned “physical” effect of spending resources through policy
interventions.
Theory of change
An assumption of how spending financial resources (the inputs) for producing a
planned output causes a change in some results.
Annex 1
Proposed List of Common Indicators
For all indicators, targets, committed and actual values will be required. Most of the listed indicators are output indicators.
Physical Infrastructure

Transport
1. Roads – Total length of newly built roads (km)
2. Roads – Total length of reconstructed or upgraded roads (km)
3. Railway – Total length of new railway line (km)
4. Railway – Total length of reconstructed or upgraded railway line (km)
5. Urban transport – Average number of passengers served by improved urban transport services (persons)

Environment
6. Water supply – Additional population served by improved water supply (persons)
7. Wastewater treatment – Additional population served by improved wastewater treatment (population equivalent)
8. Sewage – Total length of new sewage built (km)
9. Solid waste – Annual waste recycling capacity (tons)
10. Risk prevention – Additional population benefiting from flood protection measures (persons)
11. Risk prevention – Additional population benefiting from forest fire protection and other protection measures (persons)
12. Land rehabilitation – Total surface area of rehabilitated land (km2)

Energy
13. Renewables – Energy produced from renewable sources (MWh)
14. Energy efficiency – Decrease in energy consumption, annual average (MWh)
15. GHG reduction – Estimated decrease of GHG in CO2 equivalents (tons of CO2eq)

ICT
16. Infrastructure – Population covered by broadband access of at least 30 Mb/s (persons)

Social Infrastructure
17. Childcare & education – Service capacity (persons)
18. Health – Service capacity (persons)
19. Housing – Population benefiting from improved conditions (households)
20. Cultural heritage – Number of visitors at supported sites (persons)

Urban Development
21. Area of renovated / newly developed open space in urban areas (square meters)
22. Area of renovated / newly developed commercial buildings in urban areas (square meters)
23. Area of renovated / newly developed housing in urban areas (square meters)

Productive investment

Enterprise
24. Number of enterprises receiving financial support (including grants, loans, venture capital) (enterprises)
25. Number of enterprises receiving non-financial support (enterprises)
26. Number of new enterprises supported (enterprises)
27. Private investment matching EU support in SMEs (including grants, loans, venture capital) (EUR)
28. Number of jobs created in assisted SMEs (number)

Innovation and RTD
29. Number of cooperation projects between enterprises and research institutions (number)
30. Number of research jobs created in assisted entities (number)
31. Private investment matching EU support in innovation or R&D projects (EUR)
32. Increase in number of patents of supported companies (number)
33. Increase in number of publications of supported institutions (number)

Tourism
34. Number of visitors to supported attractions (persons)
Annex 2
Examples of result indicators
This list is a suggestion of the Commission services, using the work of an expert group [7].
These examples reflect the thematic priorities of Cohesion policy as defined in the
regulations. Sources are identified where they are available. These indicators can be used by
the public authorities responsible for the development of a programme if the indicator is
relevant for the policy actions. The fact that an indicator appears in this list does not
necessarily mean that it would appropriately reflect the activities of a specific programme or
that it can be directly applied to a given priority.
Strengthening research, technological development and innovation
1. Percentage of SMEs introducing product or process innovations (Eurostat-CIS)
2. Percentage of SMEs introducing marketing or organisational innovations (Eurostat-CIS)
3. Growth of employment in knowledge intensive sectors (Eurostat-CIS)
4. Increase in human resources in science and technology (Eurostat) or R&D personnel
at the regional level (Eurostat CIS, RIS)
5. Business expenditure on R&D (BERD) as a percentage of GDP (Eurostat)
6. New to firm sales of all enterprises as a % of turnover (Eurostat, RIS) [8]
7. Added RTD/innovation private investment in supported companies (excluding
matching EU support)
(RIS: research and innovation scoreboard; CIS: Community Innovation Survey)
Enhancing accessibility to and use and quality of information and communication
technologies
1. Percentage of households subscribing to broadband networks providing access
between 30 and 100 Mb/s
2. Percentage of households subscribing to broadband networks providing access
above 100 Mb/s
3. Percentage of enterprises purchasing and selling on-line (Digital Agenda
Scoreboard/Eurostat).
4. Percentage of public procurement expenditure carried out through eProcurement
Removing obstacles to the growth of SMEs
1. Number of start-ups (Eurostat)
2. Growth in employment in small firms (Eurostat)
3. Gross Fixed Capital Formation in SMEs (Eurostat)
[7] Expert working group led by F. Barca and P. McCann, 2011.
[8] New or significantly improved products to the firm as a percentage of total turnover. These products are not new to the market. Sales of products that are new to the firm but not new to the market are a proxy for the use or implementation of products (or technologies) already introduced elsewhere. This indicator is thus a proxy for the degree of diffusion of state-of-the-art technologies.
Supporting in all sectors the shift towards a low-carbon, resource efficient and climate
resilient economy
1. Energy intensity of the economy (Eurostat)
2. Greenhouse gas emissions by sector
Promoting renewable energy sources
1. Share of renewables in gross final energy consumption (Eurostat)
2. Electricity generated from renewable sources
Promoting sustainable transport
1. Energy consumption of transport relative to GDP (Eurostat)
2. GHG per passenger km travelled in public transport / energy use of passenger km
travelled
3. Modal shift - traffic flow of freight using the new infrastructure and withdrawn from
road (Eurostat)
Removing bottlenecks in key network infrastructures
1. Time saved – reduction in journey time
2. Accessibility gains - accessibility allowed by new /upgraded transport infrastructure
Correcting and preventing unsustainable use of resources
1. Greenhouse gas emissions of non ETS sector, base year 1990 (Eurostat / EEA – for
EU2020 headline indicator)
2. Total greenhouse gas emissions by sector (Eurostat / EEA); see also Air Emissions
Accounts by activity (NACE industries and households) (Eurostat)
3. Renewable energy production (for EU2020 headline indicator)
4. Energy performance of buildings (see Directive 2002/91/EC; for EU2020 headline
indicator)
5. Urban population exposure to air pollution by particulate matter (Eurostat / EEA)
6. Treatment of waste (Eurostat, NUTS 1)
7. Resident population connected to wastewater collection and treatment systems
(Eurostat)
8. Urban wastewater treatment with at least secondary treatment (Eurostat); see also
Treatment capacity of wastewater treatment plants – design capacity, actual
occupation (Eurostat)
9. Resident population connected to public water supply (Eurostat)
Social infrastructure
Education and childcare infrastructure
1. School dropout rate or Early leavers from education and training (Europe 2020
Headline target, National level)
2. Formal child care by duration and age group (EU SILC, National level)
Health infrastructure
1. Self-reported unmet need for medical examination or treatment (EU SILC, National
level)
2. Average distance of general clinic / hospital units from population centroid or
Average travel time to hospital services
Housing infrastructure
1. Overcrowding rate [9] by age / gender / poverty status / degree of urbanisation
2. Housing cost overburden rate [10] by age / gender / poverty status (EU SILC, National level)
3. Housing deprivation rate (EU SILC, National level)
[9] A person is considered as living in an overcrowded household if the household does not have at its disposal a minimum number of rooms equal to: one room for the household; one room per couple in the household; one room for each single person aged 18 or more; one room per pair of single people of the same sex between 12 and 17 years of age; one room for each single person between 12 and 17 years of age not included in the previous category; one room per pair of children under 12 years of age.
[10] Percentage of the population living in a household where the total housing costs (net of housing allowances) represent more than 40% of the total disposable household income (net of housing allowances).
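
The overcrowding definition in footnote [9] is effectively a small room-counting algorithm. A sketch in Python (our illustration, not an official Eurostat implementation; the caller is assumed to have already partitioned household members into the categories of the definition):

import math

def minimum_rooms(couples, singles_18_plus, paired_same_sex_12_17,
                  other_singles_12_17, children_under_12):
    """Minimum number of rooms a household needs under footnote [9].
    'couples' counts pairs; the other arguments count persons."""
    rooms = 1                                        # one room for the household
    rooms += couples                                 # one room per couple
    rooms += singles_18_plus                         # one per single person aged 18+
    rooms += math.ceil(paired_same_sex_12_17 / 2)    # one per same-sex pair aged 12-17
    rooms += other_singles_12_17                     # one per remaining 12-17-year-old
    rooms += math.ceil(children_under_12 / 2)        # one per pair of children under 12
    return rooms

# Example: a couple with children aged 15, 9 and 7 needs
# 1 (household) + 1 (couple) + 1 (15-year-old) + 1 (two under-12s) = 4 rooms.
print(minimum_rooms(couples=1, singles_18_plus=0, paired_same_sex_12_17=0,
                    other_singles_12_17=1, children_under_12=2))  # -> 4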
Annex 3
A structure of standards [11]:
A) Evaluation activities must be appropriately organised and resourced to meet their
purposes.
1. Programmes should use an evaluation function with a clearly defined responsibility for
co-ordinating evaluation activities.
2. For this evaluation function, human and financial resources must be clearly identified
and proportionately allocated.
3. Each programme must clearly define the procedures for the involvement of
stakeholders.
B) Evaluation activities must be planned in a transparent way so that evaluation results are
available in due time.
1. An evaluation programme is to be prepared by the evaluation function in consultation
with stakeholders.
2. All activities must be periodically evaluated in proportion to the allocated resources and the expected impact.
3. The timing of evaluations must enable the results to be fed into decisions on the design
and modification of activities.
C) Evaluation design must provide objectives and appropriate methods and means for
managing the evaluation process and its results.
1. A steering group should be set up for each evaluation to advise on the terms of
reference, to support the evaluation work and take part in assessing the quality of the
evaluation.
D) Evaluation activities must provide reliable and robust results.
1. The evaluation must be conducted in such a way that the results are supported by
evidence and rigorous analysis.
2. All actors involved in evaluation activities must comply with principles and rules
regarding conflict of interest.
3. Evaluators must be free to present their results without compromise or interference.
4. The final evaluation reports must as a minimum set out the purpose, context,
questions, information sources, methods used, evidence and conclusions.
5. The quality of the evaluation must be assessed on the basis of the pre-established
criteria.
[11] Adapted from: Evaluation standards of the European Commission. Communication to the Commission from Ms Grybauskaite in agreement with the President. Responding to Strategic Needs: Reinforcing the use of evaluation. Brussels, 2007.
E) Evaluation results must be communicated in such a way as to ensure the best use of the results.
1. Evaluation results must be communicated effectively to all relevant decision-makers
and stakeholders.
2. The evaluation results must be made publicly available; summary information should
be prepared.
Annex 4
Recommended reading
1. EVALSED. An online resource providing guidance on the evaluation of socioeconomic
development.
http://ec.europa.eu/regional_policy/sources/docgener/evaluation/evalsed/guide/index_en.htm
2. Impact Evaluation and Development. NONIE - Network of networks on impact
evaluation.
http://www.worldbank.org/ieg/nonie/
3. Outcome indicators and targets. Methodological note produced for DG Regional
Policy by the High Level Group led by F. Barca and P. McCann.
http://ec.europa.eu/regional_policy/sources/docgener/evaluation/performance_en.htm
4. Evaluation standards of the European Commission. Communication to the
Commission from Ms Grybauskaite in agreement with the president. Responding
to Strategic Needs: Reinforcing the use of evaluation. Brussels, 2007.