Sixth European Conference on the Evaluation of Cohesion Policy, Warsaw, 30
November – 1 December 2009.
Workshop 6: Telling the Story: Using Case Studies
Chairman’s Introduction
Harvey Armstrong, University of Sheffield
Introduction to the Speakers
Welcome to workshop 6, which will focus on the use of case study evaluation methods for
European Cohesion Policy. I am particularly pleased to be able to participate in
this workshop and to act as its chairperson. This is partly because case study evaluation has at
last been given a deserved role at the heart of the evaluation process, but also because its
precise role within the wider evaluation process remains a hotly argued one. The workshop
discussion therefore promises to be both important and controversial, characteristics which I
have come to savour at conferences.
The workshop is unusually fortunate in having such a strong and experienced group of
speakers to kick-start our discussions. The running order for the presentations is as follows:
Veronica Gaffey is Head of the Evaluation Unit at DG Regional Policy [biopic here]. Her
paper will focus on the case studies which have been, and are still being, undertaken as part of
the ex post evaluation of the 2000-2006 Objective 1 and 2 programmes. It sets out both the
intellectual basis of case study evaluation of EU regional policy and DG Regional Policy’s
expectations of what constitutes high quality case study work.
Julie Pellegrin is Director of the Regional Development Unit at CSIL – Centre for Industrial
Studies, Milan [biopic here]. Her paper draws upon CSIL’s experience of running a major
programme of 24 regional case study evaluations undertaken as part of the ex post evaluation
of the 2000-2006 ERDF. This presentation is of particular interest not only because of the
scale of the case study work and the expertise which both Julie and CSIL have brought to
bear, but also because Work Programme 4 (Structural Change and Globalisation), under
which this work was conducted, represents an unusually challenging thematic context for
case study evaluation.
Thomas Stumm is Managing Director of EureConsult S.A. Echternach/Luxembourg [biopic
here]. His paper draws upon his enormous experience in thematic, national, regional and
local evaluation of the EU Structural Funds dating back to the 1994-1999 programming
period to distil a series of highly pertinent lessons for conducting high quality case study
evaluations, with robust proposals for future developments for this type of work.
The ‘rules of engagement’ for the workshop are that I shall invite each speaker in turn to
make their presentations (for approximately 20 minutes each). I would ask conference
delegates to kindly hold back their questions until all three speakers have presented. I will then throw
open the workshop for comments and questions. Could each delegate please identify
themselves and give their affiliation before making their intervention. I will take groups of
three or four interventions before asking the speakers to respond. John Walsh, Deputy Head
of Thematic Coordination at DG Regional Policy’s Innovation Unit, has kindly agreed to act
as Rapporteur to close the session.
Some Opening Comments
Few, if any, people engaged in early attempts to evaluate regional policy would even in their
wildest dreams have imagined that by 2009 evaluation would have become so wide-ranging
and holistic. When I first began to dabble in regional policy evaluation in the early 1970s, the
ERDF did not exist and Moore and Rhodes had just produced their first seminal evaluation of
British regional policy (Moore and Rhodes, 1973). This work was typical of the times. The
‘quantitative revolution’ was in full swing, regional policy had simple, economic objectives
(reduced unemployment) and positivist econometric evaluation methods were in vogue
(modified shift-share analysis).
A brief look back over our shoulders at the origins of regional policy evaluation is important
in the context of this workshop for two reasons. First, it shows just how far evaluation
methods have come since those early days. Case study evaluation methods, for all of their
problems and complexities, do represent a further positive step forward. The
Commission can claim great credit in helping to push the evaluation agenda forward to where
we are today. Second, however, history is also important because we are still saddled with the
legacy of some of the fierce controversies which accompanied the move forward from
positivist evaluation. Case study evaluation methods unfortunately remain tangled up in the
continuing clash of ideologies. This can sometimes cloud the debate.
Let me try to justify this assertion by exercising my prerogative as chairman to set out what I
suspect will be some of the key elements around which our discussions will turn:
Home, sweet home.
At what point do case studies best fit within a modern, iterative evaluation system? This
seems to me to be a classic example of where the legacy of the historical debates still lingers.
The focus of much of the case study evaluation literature remains on the need for them to be
seen as fully-fledged pieces of research in their own right. This is done by setting up a
counterpoint argument that back in the bad old days of regional policy evaluation, case
studies were done as pilot studies ahead of the main (positivist, quantitative) evaluation
exercise. Moreover, their role was to serve positivism by providing information upon which
testable hypotheses could be built. Conclusion: they used to be ex ante evaluation and now
they no longer need to be.
This is superficially a very nice finding for our workshop, since the focus here is on the use
of case studies as part of the ex post evaluation of the 2000-2006 programmes. Unfortunately,
there seem to me to be two problems with this view of the history of evaluation:
As someone who lived through the quantitative revolution this is not as I remember it.
I do not remember case studies being used widely to generate hypotheses for
subsequent evaluations. I do remember qualitative analysis being used in this manner,
usually in the form of interviews and literature reviews, but not case studies.
Moreover, fully-fledged case studies also existed in large numbers well before the
1970s. They were, for example, the stock-in-trade of what at that time we called
‘regional geography’ and many of these studies incorporated government policy
impact analysis. We seem to be in thrall here to something of a caricature of the past.
More importantly, the caricature may be obscuring what seems to me to be the key
issue facing the placement of case study research – resource allocation. Case studies
are extremely costly in terms of both money and time. This is inevitable since they are
in-depth, context-dependent, multi-method and holistic pieces of work. The key issue
is not whether they can be used at different stages, but rather where exactly the
limited resources available should be focused. In terms of this workshop, this amounts
to asking whether the ex post stage is the most efficient point to allocate the limited
case study resources. This in turn invites discussion on the merits of case studies at
the ex ante stage in particular, but also at interim stages too.
Intellectual foundations.
Case study evaluation has emerged, as we have seen, from dissatisfaction with positivist,
quantitative evaluation methods. Evalsed, and the MEANS guidance which pre-dated it, both
side-step the ongoing controversy between, on the one side, unreconstructed positivists (still
strong in disciplines such as Economics) and, on the other, constructivist and realist
evaluators. Evalsed advice to evaluators is, of course, to holistically engage with the best
aspects of each (e.g. quantitative and theoretical rigour from positivism, responsive and
interactive engagement from constructivism and ‘context-mechanism-outcome’ process
identification from realism). This advice has always seemed to me to be a wonderfully
heart-warming example of the Commission’s ability to generate compromise, but also curiously
internally inconsistent. There can be no compromise between constructivism and positivism
since constructivism always has been, and to this day still is, fiercely critical of the existence of
objective knowledge. While I am sure that today we will not wish to revisit these
philosophical controversies and, as is the case with all three of our papers today, be prepared
to go along with the Evalsed compromise, the intellectual foundations do raise an important
issue: how best can responsive, interactive and drawn out social investigation approaches of
constructivism and realism be effectively integrated into case studies? How can we avoid the
case studies being conducted as one-off free-standing pieces of evaluation? How can we best
engage partners and stakeholders iteratively in the case study research?
Generalization from case studies.
Of particular importance in the philosophical debate has been the key issue of how far it is
possible to generalize results from a single case study. Unreconstructed positivists such as
myself say ‘no’, but there are many, including many of the key proponents of case study
evaluation who say ‘yes’. This may seem not to be a particularly important issue here today
since the ex post evaluation of Objective 2 has opted for a scenario of generalization from
multiple case studies. This decision does not, however, in itself solve the generalization issue
because there remain key questions concerning:
How many case studies are necessary before we can generalize?
What criteria should be used to select cases? In particular, should the criteria seek out
‘representative’ regions or deliberately unrepresentative ones? The latter are
frequently seen by the proponents of case study evaluation as being of more use and
interest. Flyvbjerg (2006), for example, argues that “when the objective is to achieve
the greatest possible amount of information on a given problem or phenomenon, a
representative or random sample may not be the most appropriate strategy. This is
because the typical or average case is often not the richest in information” (p. 229). In
this case, he argues, extreme/deviant, maximum variation, critical or paradigmatic
cases may be a better choice. Support for extreme or critical case studies remains
strong and much continues to be claimed possible from such case studies (e.g. Koenig,
2009). If this is accepted, then do we need a mix of ‘representative’ and
‘non-representative’ types? Our papers today contain much that is interesting and new on
this issue.
What degree of ‘top-down’ management of the content of the different case studies is
necessary to maximise generalization and by what methods can this be best achieved?
Here again our papers today have much to say which is new and interesting.
What case studies bring to the table.
This is an issue which I find particularly fascinating. It seems to me that one of the great
strengths of case study evaluation for Cohesion Policy lies in its ability to circumvent some
of the most intransigent inherent problems which have bedevilled its evaluation. Most in this
room I am sure have been as exercised as I have with these problems over the years. Let me
pick out three which I think are particularly important:
For econometric analysis, the idiosyncratic boundaries of Objective 2 (and many
Objective 1) regions severely limit the usefulness of harmonised EU regional data.
Case studies allow the more extensive and often better quality (e.g. much longer time
series) national, regional and local data sources to be brought to bear, including
secondary data from local surveys.
Multiple stakeholder partnerships, drawing down multiple funding streams, with
varying levels and types of engagement in programmes, and with the partnerships
changing (usually expanding) from one period to another. These are all highly
desirable characteristics of Cohesion Policy but they deliver serious headaches for
evaluators seeking to disentangle cause and effect and the relative importance of the
ERDF in this melange. Case studies, particularly once the brave decision is taken not
to try to cover all the Objective 1 or 2 regions, allow the necessary resources to be
brought to bear to get inside the programmes.
The lack of synchronisation between programming periods and the length of time
necessary for modern regional policies realistically to be expected to bear fruit. I have
done a little work myself on this issue in UK programmes. Many environmental,
infrastructure (e.g. road) and innovation policies require at least 20 years to bear
fruit, as markets adjust and business and household relocation decisions take place.
Yet even the 2000-2006 programme (the longest programming period since the great
reforms of 1989) falls well short of this. Case studies do not need to be constrained by
the start dates of the programming periods. They can look as far back as is appropriate,
a merit of particular importance where key programme Priorities and Measures have
been rolled forward from one period to another.
Methodology overload?
Paradoxically, one of the great strengths of case study evaluation seems also to be one of its
greatest challenges. The combination of quantitative and qualitative research methods, the
resource focus allowing many different types of data and qualitative information to be
accessed, the ability to build in an interactive process engaging communities and stakeholders
– these are all great attributes of case studies. But how do we avoid asking too much of the
evaluators? In the case studies to be discussed by our speakers great efforts have been made
to go beyond the usual core of management, administrative, financial and target monitoring
information sets to draw in specific regional data sources and undertake specific beneficiary
surveys. But how are these to be best combined? And what weight or emphasis should be
given to each component? To paraphrase Murray Saunders in his keynote paper at the start of
this conference, case studies offer an immense opportunity for evaluations which incorporate
“narratives, vignettes, depictions, modelling and statistical analysis”, but we cannot do them
all, or do them all equally well. Hard choices must be made.
Audiences for case study evaluations.
Finally, there is the important issue of for whom the evaluations are conducted. Our speakers,
correctly, focus on the need for an appropriate and well-written narrative approach in case
study evaluations. This in itself is by no means an easy task. For a start, narrative writing
skills are in very short supply. In addition, the author faces a daunting task since “each
narrative portion should integrate evidence from different data elements … [and] any
narrative accounts should be organised around the substantive topics of the case study”, not
individual reports, notes etc. (Yin, 1981, p.60). Worse still, the author may find that a
compromise may be needed between a desire to stick closely to the facts and the wish to
enhance or ‘exaggerate’ a little for narrative purposes (Costantino and Greene, 2003).
There are also the perennial wider issues of how the results of case study evaluation can be
most effectively taken beyond the usual audiences of the Commission, stakeholder
partnerships and national governments. What are the various key audiences? How best can
they be reached?
Select Bibliography
Costantino, T.E. and Greene, J.C. (2003), ‘Reflections on the use of narrative in evaluation’,
American Journal of Evaluation, Vol. 24(1), pp. 35-49.
Flyvbjerg, B. (2006), ‘Five misunderstandings about case study research’, Qualitative Inquiry,
Vol. 12(2), pp. 219-245.
Koenig, G. (2009), ‘Realistic evaluation and case studies: Stretching the potential’,
Evaluation, Vol. 15(1), pp. 9-30.
Moore, B. and Rhodes, J. (1973), ‘Evaluating the effects of British regional economic policy’,
Economic Journal, Vol. 83, pp. 87-110.
Yin, R.K. (1981), ‘The case study crisis: Some answers’, Administrative Science Quarterly,
Vol. 26(1), pp. 58-65.