Qualitative Research in Political Science
4 Volumes, published 2016
Editors: Joachim Blatter/Markus Haverland/Merlijn van Hulst
SAGE Library of Political Science

Note: This is a preprint of the editors' introductions to the four volumes. For the lists of contents of the four volumes please follow this link: https://uk.sagepub.com/en-gb/eur/qualitativeresearch-in-political-science/book245301#contents

VOLUME I: BACKGROUNDS, PATHWAYS AND DIRECTION OF QUALITATIVE METHODOLOGY

Joachim Blatter/Merlijn van Hulst/Markus Haverland

General Introduction and Overview

Introduction

In Political Science, as well as in neighboring and overlapping disciplines such as Sociology, Public Administration, Policy Studies and International Relations, we have witnessed an explosion of reflections on qualitative research and methodology in recent years.
The establishment of sections or committees devoted to qualitative research in disciplinary associations, such as the Qualitative and Multi-Method Research Section of the American Political Science Association or the Research Committee on Concepts and Methods of the International Political Science Association, the setting up of summer schools, for instance at the Maxwell School of Syracuse University, and the creation of professorships dedicated to teaching qualitative methods in Social/Political Science departments all indicate that qualitative research is becoming as reflective and professionalized as quantitative research did earlier on. Consequently, selecting 'major works' has become more difficult, but at the same time more important, in order to provide some guidance through the bewildering array of qualitative research. Given the recent wave in qualitative methodology, our compendium includes works that played a major role in the history of qualitative research, but it contains even more articles and book chapters that we perceive as representing the most recent state of the art. The title of this collection is qualitative research and not qualitative methods; this terminology points to the fact that, especially in qualitative research, the development and application of methods and techniques is strongly embedded in reflections on ontology and theory, and aligned to specific stances with respect to epistemology and methodology. We equate qualitative research primarily with qualitative methodologies, since the latter term lies at the crossroads between philosophical debates on ontology and epistemology on one side and concrete methods and techniques of data collection/production and data analysis/interpretation on the other.
Given the explosion of methodological reflections in recent years, we present the 'major works' not as a linear historical account; rather, we start with a book that can be seen as a major trigger or background of the recent wave of work on qualitative Political Science. This book is indicative of the hegemonic position that quantitative methodology and research had acquired at the end of the 20th century (at least in the United States), and this, in turn, explains why qualitative research is most often characterized by distinguishing it from quantitative research. When the eminent US-American scholars Gary King, Robert O. Keohane and Sidney Verba presented their Designing Social Inquiry: Scientific Inference in Qualitative Research in 1994, guided unabashedly by the methodological principles of quantitative research, it triggered some strong reactions in the United States (parallel to and in line with the Perestroika Movement in the American Political Science Association, see Monroe 2005). We present some of the most important reactions from US scholars, who argue for a specific qualitative methodology that is set apart from quantitative methodology primarily through distinct analytic tools (e.g., logic and set theory). Given the strong standing of US-American scholars in Political Science, these alternatives to the quantitative-statistical template have begun to dominate our understanding of what qualitative research in political science means. Although many European scholars follow the US-American lead, some scholars (mostly, but not only, from Europe) defend an understanding of qualitative research that builds on European hermeneutic traditions. They agree with the dominant American view on qualitative research that we should align 'ontology and methodology' (Hall 2003), but not, as Hall suggested, by tracing processes in order to reveal causal sequences and mechanisms; rather, they do so by combining social constructivist theorizing with interpretative methods.
The approaches presented by these scholars constitute a much more radical and consistent alternative to the quantitative-statistical template. Given the dominant position of quantitative methodology in Political Science, it seems quite reasonable to define the specifics of qualitative research by contrasting it to quantitative research (e.g., Mahoney and Goertz 2006, Goertz and Mahoney 2012). Nevertheless, such an approach not only tends to downplay the internal differentiation within qualitative research; the wish to be recognized by quantitative scholars also leads to a very limited departure from that hegemonic approach. Our major goal in this compendium, then, is to present the broad spectrum of qualitative methodology and research that we are witnessing in contemporary Political Science (and its neighboring disciplines).

Orientation

In this introduction we want to go beyond a mere demonstration of existing diversity. We aim to provide an orientation. Therefore, not only do we include two pieces in Volume I that categorize the diversity of approaches in qualitative research (Koivu and Damman 2014) and in case study research (Blatter and Haverland 2014), we also use this introduction to provide our own system of classification. It builds on these two pieces but expands them through further differentiations and through a kind of classification that is in line with qualitative principles, that is, overlapping instead of exclusive categories. In Figure 1.1, we locate qualitative approaches in a two-dimensional space. The first dimension captures ontological positions to which specific methodological approaches and concrete techniques can be aligned. Ontology refers to 'the claims or assumptions that a particular approach to social inquiry makes about the nature of social reality - claims about what exists, what it looks like, what units make it up and how these units interact with each other' (Blaikie 1993: 6).
Within the ontological dimension we can distinguish a holistic and a particularistic pole. Holism represents the presumption that the interactions of the units are strongly influenced by the entire web of interactions, which means that the whole has an ontological status of its own and cannot be reduced to its particular elements. Particularism signifies the claim that the individual units and their internal traits are the sole or at least the main determinants of those interactions. The whole has no ontological status of its own, but is seen as a product of the particular elements and their interactions. Epistemology denotes 'a view and justification for what can be regarded as knowledge - what can be known, and what criteria such knowledge must satisfy in order to be called knowledge rather than belief' (Blaikie 1993: 7).[1] The concept that represents one pole within the epistemological dimension is conceptual and theoretical 'coherence' - a criterion that until now has only been aligned with interpretative research (Gerring 2003: 2). Indeed, it is the quintessential criterion for interpretative approaches, as the following quote from Charles Taylor, one of the main references for interpretivists in the Anglo-Saxon world, illustrates: 'Interpretation, in the sense relevant to hermeneutics, is an attempt to make clear, to make sense of an object of study. […] The interpretation aims to bring to light an underlying coherence or sense' (Taylor 1971: 3). We will show in the following that theoretical or conceptual coherence also plays an important role in other qualitative methodological approaches. 'Correspondence' is the quality criterion that represents the opposing conceptual pole in the epistemological dimension: the assumption that the truth of knowledge can be established in its fit with or correspondence to 'objectively existing' external reality.
This is the epistemological bedrock of positivism or naturalism and has been aligned primarily with quantitative research. As we will see, this criterion is also the underlying understanding of scientific knowledge in many qualitative approaches.

In Figure 1.1 we locate the methodologies in those places where they belong from an ideal-typical point of view. Ideal-types in Max Weber's sense are functionally coherent configurations of specific components of a concept. In our case the term 'methodology' represents the concept. The three components of such functionally integrated concepts are the presumptions (a), the goals (b) and the means (c) of social research. Accordingly, a coherent methodology (a) is based on specific ontological presuppositions about the social world, (b) aims at specific kinds of knowledge guided by distinct epistemological quality criteria for good social science, and (c) contains concrete techniques of data collection/production and data analysis/interpretation. In practice, other combinations of presuppositions, research goals and techniques might be found. Therefore, it is important to emphasize the ideal-typical nature of our presentation. In Figure 1.1 we present only those configurations from the set of all possible combinations of these three features that form a coherent whole, in the sense that from a functional point of view these expressions of the three components tie into each other most consistently and productively. Furthermore, it is important to realize that each methodology is represented not by a single point in the two-dimensional space, but by a plane, which indicates that there are different expressions of each methodology, each still coherent but with different leanings in respect to ontology and epistemology. As a result, the methodological ideal-types are not mutually exclusive categories, but overlapping concepts.
[1] We must add here that constructivist thinkers might object to the last part of this definition, arguing that scientific knowledge is a kind of belief itself, even if it is generated through a particular practice that gives it a special status.

In Figure 1.1 we have tried to select a term that serves as a short and often used signifier for each methodological approach (upper left corner) and to formulate the core aspect of the analytic technique that characterizes the methodology (lower right corner).

Figure 1.1: Aligning qualitative methodologies and techniques to ontological presuppositions and epistemological principles

The comparable cases strategy aims to isolate the causal effect of an independent variable (IV) on a dependent variable (DV). It thereby assumes that the IV exhibits a causal power that works in a uniform way within a specified population of similar cases and independently of the causal power of other variables. This strongly particularistic ontology is combined with a clear commitment to testing whether an expected hypothetical relationship between IV and DV finds correspondence in empirical data. Such a test does not rely on any abstract or comprehensive theory; the hypothesis on the relationship between IV and DV can be deduced from social science theories, but often it refers simply to a contested claim in the scientific or practical discourse. A good comparative case study buttresses the causal hypothesis with arguments on why a specific value of the IV should lead to a specific value of the DV, but it does not trace empirically whether the outcome is indeed a result of the presumed causal pathway.
Instead, the methodological strategy is to select cases in such a way that it is possible to hold constant all other potential factors of influence (denoted as control variables), so that formal logic and the variation of IV and DV values across cases serve as a solid ground for drawing causal inferences for cases with the same scope conditions.

Configurational comparative analysis (introduced first as Qualitative Comparative Analysis) aims to identify multiple configurations of conditions that constitute or cause a specific kind of phenomenon/outcome.[2] Causal configurations represent a less particularistic ontology in comparison to IVs. Nevertheless, since configurational comparative analyses focus on the mere co-existence of conditions (and outcomes) and do not analyze whether the conditions within a configuration fulfill their causal work in a merely additive or in a truly interactive way (characterized by a principled non-substitutability of its components, see Blatter and Haverland 2014: 93/94), the step towards holism is very limited. The same is true with respect to the epistemological quality criteria. Correspondence is clearly the more important criterion. Coherence comes into play only if studies start with theory-based configurative hypotheses, or when scholars discuss and judge the inductively gained results with reference to consistent theories and prioritize those revealed configurations or pathways which fit a consistent theoretical framework.

Causal-process tracing (as defined by Blatter and Haverland 2014) aims to specify the social mechanisms and the temporal sequences which lead to a specific outcome/event. By focusing on social mechanisms, this methodology goes a step further away from the particularistic ontology of variable-centered research than those methodologies that focus on configurations of conditions.
Causal mechanisms are specific kinds of causal configurations - characterized by an asymmetric, logically sequential interaction of different kinds of social mechanisms (situational mechanisms, action-formation mechanisms and transformational mechanisms), by including a basic theory of human action/behavior/communication, and by involving multiple levels of analysis. These features represent an ontology that lies about halfway between radical particularism and consequent holism: it no longer accepts the additive understanding of causality that cannot be ruled out with set-theoretical techniques, but it is still quite particularistic in the sense that the overall working of a causal mechanism is seen as the result of the sequential working of each individual social mechanism. Causal-process tracing also takes an in-between position with respect to epistemology. Its techniques can be used to test whether a mechanism that has been deduced from a social theory (or from other sources) finds empirical correspondence in social reality. But they can also be used to make better sense of a relationship between the factors of influence and the outcome, by providing deeper insights into how the factors work together (by going down to a lower level of analysis) and by delivering a denser picture of the social process (by analyzing the temporal sequences). Process tracing thus contributes to the recognition of coherent causal mechanisms. Since this allows for the possibility that a revealed causal mechanism consists of social mechanisms from different theories, it gives less weight to coherence than congruence analysis, the methodological approach that we describe next. Causal-process tracing and congruence analysis are located about halfway between our conceptual poles, but process tracing tilts a bit more towards particularism and correspondence, whereas congruence analysis is slightly more in line with holism and coherence.
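For readers unfamiliar with the set-theoretic vocabulary used above, a purely illustrative sketch may help. The following Python snippet is our own illustration, with invented case data and condition names; it is not drawn from the works discussed here. It computes the crisp-set consistency measures for the sufficiency and necessity of a single condition, as commonly used in the QCA literature:

```python
# Illustrative crisp-set analysis (hypothetical data, not from any study).
# Each case records two binary conditions (A, B) and a binary outcome.

cases = {
    "c1": (1, 1, 1),
    "c2": (1, 0, 1),
    "c3": (0, 1, 0),
    "c4": (1, 1, 1),
    "c5": (0, 0, 0),
}

def consistency_sufficiency(cond_index):
    """Share of cases showing the condition that also show the outcome."""
    with_cond = [c for c in cases.values() if c[cond_index] == 1]
    if not with_cond:
        return None
    return sum(1 for c in with_cond if c[2] == 1) / len(with_cond)

def consistency_necessity(cond_index):
    """Share of cases showing the outcome that also show the condition."""
    with_outcome = [c for c in cases.values() if c[2] == 1]
    if not with_outcome:
        return None
    return sum(1 for c in with_outcome if c[cond_index] == 1) / len(with_outcome)

print(consistency_sufficiency(0))  # condition A as sufficient: 1.0
print(consistency_necessity(0))    # condition A as necessary: 1.0
print(consistency_sufficiency(1))  # condition B as sufficient: about 0.67
```

A consistency of 1.0 means the condition forms a perfect subset (sufficiency) or superset (necessity) of the outcome; real applications involve many conditions, truth tables and Boolean minimization, for which dedicated QCA software exists.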
[2] Set-theoretical approaches can be used for descriptive or for causal purposes. In the following, we concentrate on causal applications in order to make the line of argument less complex.

Congruence analysis aims to use a plurality of theories to understand and explain different facets of specific cases of interest, but even more it aims to use case studies in order to contribute to the struggle among paradigmatic perspectives for recognition in scientific and practical discourses. In a congruence analysis, a broad set of empirical observations is compared to different sets of expectations that are derived from distinct comprehensive theories. Congruence analysis is based on a holistic understanding of theory as a specification of a paradigm, which in turn is a comprehensive and consistent worldview (ideology and/or ontology). From a coherent theory we can deduce not only co-variational or configurational hypotheses and causal mechanisms, but also constitutive propositions about social reality. Nevertheless, we can also detect some aspects of a particularistic ontology: the emphasis on the plurality of divergent theoretical perspectives that have to be applied to empirical analysis, and the assumption that theories are abstract schemas which are created primarily through distinction from other theories within the scholarly discourse. In contrast to interpretative methodologies, proponents of congruence analysis assume and propose a strong differentiation between social theory and social practice; they assume that scholars are primarily 'situated in' scientific discourses and only secondarily in social realities. Furthermore, in contrast to interpretative approaches, congruence analysis as a methodology is not tied to any specific theoretical presuppositions.
It can be conducted by applying individual-rationalist theories as well as social constructivist theories; indeed, one of its main advantages is that it can serve as a bridge between theoretical paradigms (but only if these theories are disentangled from fundamentalist ontological and epistemological presuppositions). In epistemological terms, too, it takes an in-between position. The term 'congruence' points to the weight that is given to the correspondence between abstract theory and concrete observations; but both abstract concepts and concrete information are also judged according to how coherent they are with respect to other concepts and other pieces of information.

Ethnographic studies aim to reconstruct interpretative practices that we find in different times, places and cultures. Ethnographic approaches start with the holistic ontological assumptions that human action is always meaningful and that action and meaning cannot be separated (e.g., by treating them as two distinct variables). When it comes to epistemology, they assume that social science concepts should not be seen as neutral instruments that we can use in an objective fashion in order to measure and compare social reality, but that they should result from a dialog and from the 'fusion of horizons' between scholar and social actor (Bevir and Kedar 2008). This results in a rather inductive (or abductive) approach that not only strengthens the role of social actors in the process of knowledge creation, but also gives some weight to correspondence as a relevant quality criterion. That is, to most ethnographers it is important that the interpretations they come up with make some sense to the people they have studied. At the same time, the empirical 'groundedness' of ethnographic studies helps researchers to highlight the diversity and fluidity of meaning and meaning-making.
(Con)textual analysis is a short term that we use to bundle together those interpretative studies that focus on the role and use of language and communication in social reality. In contrast to the ethnographic strand of interpretivism, ontological holism here does not refer primarily to the micro-level of analysis (the interdependence of action and meaning), but more to the macro-level of analysis, in as much as it is assumed that intersubjective communicative structures and processes (e.g., discourses, narratives, traditions) are the major underlying entities which constitute and cause individual meanings and activities. Although textual analyses often focus on written texts, they might include other sorts of text (interviews, images, etc.) and extend to social action and practices more generally (cf. Taylor 1971). Epistemologically, these approaches question not only the neutrality of social science concepts, but even more the objectivity of the social world (Bevir and Kedar 2008). Selected concepts not only have a strong bearing on the results of the empirical research; the results of scholarly work also impinge on social reality, which makes it necessary for scholars to reflect on their own normative position. In consequence, correspondence is relegated to the sidelines when it comes to evaluating the knowledge-claims of these approaches, and coherence takes precedence. With their often well-developed background in critical theories, they seem to lean more on theory than on data. At the same time, the anti-essentialism that distinguishes them from earlier critical theories (like 'old-school' Marxism) makes them sensitive to actual 'texts'.
Language-focused researchers value coherence even more than ethnographically oriented interpretivists do, since they highlight the transformational character of knowledge: they are searching for knowledge that provides coherent and therefore persuasive accounts of social reality in order to criticize and change current social practices.

Overlaps

We have stressed the fact that our systematic overview of qualitative methodologies works with overlapping concepts. Thereby we reflect the fact that in methodological advice and in research practice we find divergent specifications of the six methodological approaches that we laid out as functionally coherent ideal-types. For example, it has become quite common to equate the methodology of process tracing with the application of so-called 'process tracing tests', which involve the systematic evaluation of diagnostic evidence from within a case with the objective of evaluating a hypothesis about the case. These tests (e.g., hoop tests and smoking gun tests) have become more and more elaborated through the application of the logic and the visualization techniques of set theory (e.g., Mahoney and Vanderpoel 2015: 72-77, see Volume III). Nevertheless, the more set theory has begun to shape the description of process tracing, the more this methodology gets aligned to a particularistic ontology and departs from the location of our ideal-type in the middle of our conceptual field. Proponents of these tests do not define causal mechanisms as configurations of social mechanisms, but focus on single mechanisms; they claim that a single observation can undermine a hypothesis; and they focus on reflecting on the 'strength' of individual necessary or sufficient causal factors - and no longer on revealing the underlying interaction between configurations of causal factors, as was the case when the term process tracing was popularized by George and Bennett (2005).
Such an understanding of process tracing pushes this methodology towards the position within our conceptual field where we have located set-theoretical methods. We have included another example of overlap in the last volume. The piece by Yanow (1992), in particular, combines the empirical groundwork typical of ethnographic work with the kind of textual analysis that leans more on theory. Since different tendencies exist within all six methodological approaches, there are overlaps among all neighboring approaches. Nevertheless, from a functional point of view, not all logically possible combinations of ontological presuppositions, epistemological principles and methodological approaches make sense. A method that builds on holistic presuppositions and highlights correspondence as its quality criterion would have to be seen as inconsistent, and the same is the case for a method that starts with particularistic assumptions and focuses on the creation of coherent concepts or theories.

The structure of Volume I and the distinct foci of Volumes II to IV of this compendium on 'major works' in qualitative Political Science are guided by this overview and its structuring principles. Volume II is dedicated to the first two methodologies: the comparable cases strategy and configurational comparative analysis. Volume III contains the foundations and techniques for the two methodologies of within-case analysis: causal-process tracing and congruence analysis. Volume IV is concerned with the interpretative approaches and includes various pieces that represent ethnographic studies and (con)textual analyses. For Volume I we present an overview below. But first we want to make explicit the criteria that guided our selection of the major works. Second, we want to point to similar compendiums, which provide a web of knowledge in which this compendium is embedded and has its special place.
Selection Criteria and Further Important Works in Similar Compendiums

We applied two major criteria when deciding which works should be included: relevance and diversity. We have tried to include those works which have become recognized within the methodological discourse and which influence the practice of qualitative research in Political Science and adjoining disciplines. Nevertheless, given that much of this work has developed only over the last ten years, it is not yet fully clear which works will strongly influence the practice of qualitative research in the coming decades. Furthermore, selecting the works according to their ranking in citation indices would certainly not be in the spirit of qualitative research. Therefore, we gave equal if not more weight to the second selection criterion, in as much as we try to delineate and represent the different strands of qualitative research. A further criterion that we applied when choosing among alternative options is what we might call 'non-self-referentiality'. In the end, methods and methodologies are not ends in themselves; they should help us to get a better understanding of the social world and to make better decisions when we shape the social world. Much as we perceive the establishment of summer schools and professorships for qualitative research as a sign of progress and maturation, we think that methodological discourses should not be disconnected from philosophical, theoretical and practical ones. Good qualitative research is based on a sound methodology, but even more it leads to the accumulation of useful knowledge. Finally, we want to point to the fact that this compendium is embedded in a wider web of similar endeavors.
Given the important role that SAGE plays in publishing and promoting (qualitative) social science methodology, it comes as no surprise that three other SAGE compendiums come closest to our endeavor: the 'SAGE Handbook of Case-Based Methods' (2009), edited by David Byrne and Charles Ragin; the volumes that Mark Bevir edited under the title 'Interpretive Political Science' (2010); and the volumes that Paul Atkinson and Sara Delamont edited as 'SAGE Qualitative Research Methods' (2011). Scholars interested in qualitative research in Political Science are well advised to have a look at these compendiums as well.

Overview of Volume I

In Volume I, we include works from scholars that represent the most important reactions in the US to the challenge that the book by King, Keohane and Verba (1994) presented to qualitative research. The most comprehensive response is collected in Brady and Collier (2004). We have included the first chapter, in which the editors (and Seawright) present the main thrust of the book. They argue that we need to appreciate that qualitative research has a separate toolkit, while at the same time we need to work towards common standards for utilizing the tools (Brady, Collier and Seawright 2004, see also Adcock and Collier 2001). Ragin (2000) supplements this position by explicating his view of cases as configurations, as opposed to a variable-oriented view of cases. These reactions lay the groundwork for configurational approaches and process-tracing methods. To these reactions we add works that guide us towards a more radical departure from the quantitative-statistical template. We include a seminal article by Charles Taylor (1971), the Canadian philosopher who transported the Continental European hermeneutical traditions into the Anglo-Saxon world. His text is one of the main points of reference for those who stress the importance of meaning in the social (or human) sciences.
Furthermore, we include two recent contributions by scholars who defend interpretative research. According to the first, Bent Flyvbjerg (2006), '[s]ocial science has not succeeded in producing general, context-independent theory and, thus, has in the final instance nothing else to offer than concrete, context-dependent knowledge' (p. 223). The second, Dvora Yanow (2003), argues that it is necessary to understand interpretive research in its own terms, starting from its distinctive ontological and epistemological presuppositions.

In the next section we present scholarly work that differentiates methodological approaches. The first article, by Mahoney and Goertz (2006), focuses on the differences from quantitative work and shows a limited understanding of qualitative work. The second, by Koivu and Damman (2014), presents the larger picture by differentiating four qualitative approaches, ranging from what the authors call 'quantitative emulation' through 'eclectic pragmatism' and 'set-theoretic' to 'empirical interpretivist'. The last piece, written by Blatter and Haverland (2014), provides further clarification by showing how the classic comparable cases strategy emulates quantitative principles, and by substituting the diffuse category 'eclectic pragmatism' with two distinct kinds of methodologies which are based on within-case analysis: causal-process tracing and congruence analysis.

The works we selected for the overarching topic of concept formation reflect once again the entire spectrum of qualitative research. With a piece from Sartori (2009, originally published in 1984) we include the classic perspective on categorization and concept formation. The second article, written by Collier and Levitsky (1997), represents the results of a distinct approach to concept formation that David Collier started at the beginning of the 1990s, when he introduced 'radial categories' and 'family resemblance categories' in an article with James Mahon (Collier and Mahon 1993).
In the third article, Bevir and Kedar (2008) review and criticize these approaches from a radical antinaturalist or interpretivist perspective. Finally, we include two pieces that address an important issue that has come up in qualitative methodology: the call for greater transparency. Lupia and Elman (2014) introduce and explain the data access and research transparency (DA-RT) project that stimulated an extensive debate about adequate ways to make qualitative research more trustworthy. The conclusion that Büthe and Jacobs (2016) wrote in the Newsletter of the American Political Science Association Organized Section on Qualitative and Multi-Method Research represents the current state of this debate. The fact that we include three contributions from this Newsletter in this compendium and refer to further articles published there in our introductions indicates how central this Newsletter has become for the scholarly exchange on qualitative methodology in Political Science.

References (beyond the pieces that are included in the four volumes):

Adcock, R. and D. Collier (2001). Measurement Validity: A Shared Standard for Qualitative and Quantitative Research. American Political Science Review 95(3): 529-546.

Atkinson, P. and S. Delamont (eds.) (2011). SAGE Qualitative Research Methods. London: Sage.

Bevir, M. (ed.) (2010). Interpretive Political Science. London: Sage.

Byrne, D. and C. Ragin (eds.) (2009). SAGE Handbook of Case-Based Methods. London: Sage.

Collier, D. and J. E. Mahon (1993). Conceptual "Stretching" Revisited: Adapting Categories in Comparative Analysis. American Political Science Review 87(4): 845-855.

Gerring, J. (2003). Interpretations and Interpretivism. Newsletter of the American Political Science Association Organized Section on Qualitative and Multi-Method Research 1(2), Fall 2003: 2-6.

Goertz, G. and J. Mahoney (2012). A Tale of Two Cultures: Qualitative and Quantitative Research in the Social Sciences. Princeton and Oxford:
Princeton University Press.
Monroe, K. R. (2005). Perestroika! The Raucous Rebellion in Political Science. Yale University Press.

VOLUME II. CAUSAL REGULARITIES, CROSS-CASE COMPARISONS, CONFIGURATIONS

Markus Haverland/ Joachim Blatter/Merlijn van Hulst

Introduction and Overview

Volume II focuses on comparative approaches - approaches that traditionally took center stage in qualitative methodology in Political Science. It includes two seminal treatments of comparative case study designs and refinements of these designs, and it extensively covers the important issue of case selection. These entries focus on the comparison of a few cases (‘small N’). From the 1980s on, attempts have been made to increase the number of cases while preserving the advantages of qualitative research. These so-called configurational approaches, which were pioneered by Ragin, will be the topic of the next section (see also Ragin, Volume I). We conclude with an entry that translates the configurational view back to small-N comparative designs.

Seminal Treatments of Case Comparisons

The first section of Volume II is dedicated to two classical statements on comparative case study design. Comparing political units has been a common strategy of political science research since Aristotle’s study of the constitutions of Greek city-states, and reflections on why and how to compare have been written here and there throughout the last centuries. However, it was Lijphart’s article ‘Comparative Politics and the Comparative Method’ (1971) and Przeworski and Teune’s ‘Logic of Comparative Social Inquiry’ (1970) that became, and remain, the major reference points for case comparisons, and they have hence been selected as starting points for this volume. Lijphart argues that the comparative case study, or, as he called it, ‘the comparative method’, is based on the same logic as the experiment and is similar to the statistical method, save that it deals with a smaller number of cases.
He provides a very systematic discussion of how to tackle the ‘small N - many variables’ problem, including selecting ‘comparable cases’ (see also Lijphart 1975). Like almost all comparative case study designs, his ‘comparable cases design’ draws on Mill’s methods for causal inference, here specifically on Mill’s ‘method of difference’ and ‘method of concomitant variations’ (see also Mill 1875). At about the same time, Przeworski and Teune presented their ‘Most Similar Systems Design’ (MSSD) and their ‘Most Different Systems Design’ (MDSD). Implicitly, the MSSD is based on the same inferential methods discussed by Mill as Lijphart’s ‘comparable cases strategy’, whereas the MDSD combines Mill’s method of ‘concomitant variation’ with his ‘method of agreement’. Strictly speaking, the MDSD is not a comparative case study design but a mixed research design. It operates on two levels: a lower level, typically individuals, and a higher level (the system level, e.g., countries). The core idea is to first establish an explanation for behavior on the individual level and then to analyze whether this explanation holds across different systems. As there are typically very many individuals studied, while the number of systems is relatively small, the first stage is executed through quantitative research and the second stage takes the form of a comparative (qualitative) case study.

Refinements of Case Comparisons

The two treatments of case comparisons discussed above have occasionally given rise to critical discussions and refinements in the two decades that followed (e.g., Frendreis 1983, DeFelice 1986). A particularly well-thought-out and rich contribution is Faure’s ‘Some Methodological Problems in Comparative Politics’ (1994). Among other things, he elaborates on the potential of the abovementioned comparative case study designs by building on Popper’s distinction between hypothesis generation (context of discovery) and hypothesis testing (context of justification).
He also discusses variations of the MSSD and the MDSD. Comparative case study designs typically compare cases at one point in time. Given the sensitivity of qualitative political science to processes and related phenomena such as time, timing, and sequence, this is unfortunate. In his ‘On Time and Comparative Research’, Bartolini (1993) has taken up the challenge of bringing temporal variation into comparative designs and discusses how to actually define temporal units, how to go about across-time generalization, and how to deal with historical multicollinearity.

Case Selection

No issue related to comparative case study design has attracted as much methodological debate as the issue of case selection. There are stern warnings from scholars working in the quantitative tradition not to ‘select on the dependent variable’ (see, e.g., Geddes 1990; King, Keohane and Verba 1994: 128-37). In other words, cases should not be chosen on the basis of the similarity of their outcomes. If a researcher is interested in the successful adaptation of countries to globalization, he or she should not select successful cases only. Done this way, the design would suffer from very serious selection bias. Probably the most comprehensive and balanced discussion of the issue comes from Collier and Mahoney, scholars versed in both quantitative and qualitative research. In their ‘Insights and Pitfalls: Selection Bias in Qualitative Research’ (1996), they elaborate on the nature and implications of selection bias in quantitative research and then discuss the implications for qualitative research. They show, for instance, that selection bias is a problem even when scholars do not care about generalization, and that while selection bias might lead to an underestimation of a causal effect in quantitative research, it might lead to an overestimation of the importance of an explanation in qualitative research.
They also argue, perhaps controversially, that selection bias cannot be overcome by within-case analysis. Although the authors also lay out strategies to avoid or at least mitigate selection bias, the general thrust of the article is that selection bias is a serious problem that needs to be properly understood by qualitative researchers. The most straightforward way to avoid selection bias is to select cases randomly. However, qualitative researchers are often interested in testing hypotheses concerning exceptional phenomena such as wars. The relationship between two countries is most of the time characterized by ‘non-war’. Choosing a few cases randomly would most likely yield no case in which the phenomenon of interest is present. In this situation the researcher is better advised to select one or a few cases where the outcome has occurred, hence ‘on the dependent variable’, and to compare them with one or a few cases where the outcome has not occurred. Goertz and Mahoney (2004) have developed guidelines for how to select those cases of non-occurrence, or in their words ‘negative cases’. Crucially, they argue that among the very many negative cases, some are more relevant for testing a theoretical argument than others. Researchers should select negative cases where the outcome could ‘possibly’ have occurred, and they operationalize their ‘possibility principle’ through a number of rules that they illustrate with examples from empirical studies. The next entry of this section, an article by Seawright and Gerring (2008), presents a synthesis of the case selection techniques discussed in the abovementioned contributions. In addition, they explicitly link case selection to an ex ante quantitative analysis of the larger population of which the case (or cases) is a case.
They distinguish ‘typical’, ‘diverse’, ‘extreme’, ‘deviant’, ‘influential’, ‘most similar’ and ‘most different’ cases and describe by which large-N method each type of case can be identified.

Configurational Approaches

The contributions discussed so far take correlations between variables (Mill’s concomitant variation) as the crucial ingredient for causal inference. We now turn to Configurational Comparative Methods (CCM), which build on an alternative approach to causation, based on necessary and sufficient conditions and increasingly couched in the language of set theory. This family of approaches is inherently linked to the name of Ragin, who invented the first of its kind, called Qualitative Comparative Analysis (QCA, Ragin 1987). The methods are tailor-made for studying a medium number of cases, typically between 20 and 50. Crucially, cases are conceived as configurations of conditions and outcomes. The goal is to identify the (combinations of) (necessary and/or sufficient) conditions that lead to a certain outcome (see also Ragin in Volume I and Mahoney and Sweet Vanderpoel in Volume III). Within a combination of conditions, the effect of one condition depends on the effect of the others (conjunctural causation). The approach allows for the possibility that there is more than one (combination of sufficient) condition(s) for an outcome; in other words, different paths might lead to the same outcome (multiple conjunctural causation or equifinality, Berg-Schlosser et al. 2009, Wagemann and Schneider 2010). The configurational analysis is increasingly couched in set-theoretic terminology. For instance, to say that a configuration C is sufficient for an outcome O implies that cases that have C are always in the set of cases that have O. In other words, C is a subset of O. A necessary configuration C implies that C is always present when O is present, hence O is a subset of C.
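The subset relations just described can be illustrated with a small sketch. The case names and set memberships below are invented for illustration; the point is only that sufficiency and necessity reduce to subset checks over the sets of cases exhibiting a configuration C and an outcome O.

```python
# Hypothetical illustration of the set-relational view of causation:
# C is sufficient for O when the C-cases form a subset of the O-cases,
# and necessary for O when the O-cases form a subset of the C-cases.

cases_with_C = {"case1", "case2", "case3"}           # cases exhibiting configuration C
cases_with_O = {"case1", "case2", "case3", "case4"}  # cases exhibiting outcome O

def is_sufficient(c_set, o_set):
    """C sufficient for O: every case with C also has O (C is a subset of O)."""
    return c_set <= o_set

def is_necessary(c_set, o_set):
    """C necessary for O: every case with O also has C (O is a subset of C)."""
    return o_set <= c_set

print(is_sufficient(cases_with_C, cases_with_O))  # True: all C-cases are O-cases
print(is_necessary(cases_with_C, cases_with_O))   # False: case4 has O without C
```

In this invented example C is sufficient but not necessary: case4 reaches the outcome along a different path, which is precisely the equifinality the configurational approach allows for.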
We start this section with a chapter by Berg-Schlosser and others (including Ragin) in an edited volume on configurational comparative methods (2009). The chapter locates QCA in the discussion on small-N and large-N comparisons and lays out key features of this approach, such as the interactive relationship between cases and theory, the notion of multiple conjunctural causation (see above), the quest for parsimony while acknowledging complexity, and how ‘modest’ generalization is possible. As the whole idea of set relations might be difficult to grasp for both quantitative and qualitative researchers, we have also included a chapter of Ragin’s monograph ‘Redesigning Social Inquiry’ (2008) in which he lays out the basic ideas of set relations in social science research and how a set-relational view differs from a correlational view. We have also included a chapter from the same book that briefly and concisely lays out the idea of fuzzy sets that underlies Fuzzy Set QCA (fsQCA), a refinement of the initial QCA (which has in the meantime been renamed Crisp Set QCA; csQCA). Briefly, QCA assumes dichotomous values. A (combination of) condition(s) or an outcome is either present or absent; in set-theoretic terms, they are either members of a set or not. Fuzzy Set QCA, by contrast, allows for values in between. A (combination of) condition(s) might be ‘neither fully in nor fully out of a set’, or ‘mostly but not fully in’. While the entries in this section so far have either provided overviews or foundations of configurational approaches, the next two entries deal with two issues more specifically. The first issue is how to select the conditions that enter the configurational analysis. Amenta and Poulsen (1995) discuss various strategies, such as relying on all explanations offered in the literature or focusing on variables that have proved statistically significant.
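The crisp/fuzzy distinction sketched above can be made concrete with a minimal example. The membership scores below are invented for illustration; the standard fuzzy generalization of the subset relation (a case's membership in C never exceeds its membership in O) is the only piece taken from the fuzzy-set literature.

```python
# Hypothetical sketch of crisp vs. fuzzy set membership.
# In csQCA a case is simply in (1) or out (0) of a set; in fsQCA it can be
# 'mostly but not fully in' (e.g. 0.8) or 'neither fully in nor fully out' (0.5).

crisp_C = {"case1": 1, "case2": 0}                       # dichotomous membership
fuzzy_C = {"case1": 0.8, "case2": 0.3, "case3": 0.5}     # graded membership in C
fuzzy_O = {"case1": 0.9, "case2": 0.4, "case3": 0.5}     # graded membership in O

def fuzzy_subset(c, o):
    """Fuzzy analogue of 'C is a subset of O': for every case, its
    membership in C does not exceed its membership in O."""
    return all(c[case] <= o[case] for case in c)

print(fuzzy_subset(fuzzy_C, fuzzy_O))  # True: consistent with C being sufficient for O
```

With dichotomous scores the check collapses back to the ordinary crisp subset relation, which is why fsQCA can be read as a strict generalization of csQCA.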
They are very critical of these and other strategies and advocate a strategy they call the ‘conjunctural theory approach’. They rightly argue that to exploit the analytical potential of configurational methods, the theories that inform the selection of conditions should themselves have a configurational character. Amenta and Poulsen discuss such a theory in the area of welfare state development. According to this theory, generous welfare states will develop when voting rights are present, traditional party organizations are absent, and either administrative powers are present or a left-center government is in power (p. 33). Configurational medium-N comparisons face a challenge similar to that of their small-N counterparts: how to deal with the temporal dimension, with time, timing and sequence? Baumgartner has developed ‘coincidence analysis’ (CNA), an approach that is related to QCA but that presents a new algorithm which, according to the author, allows for analyzing causal chains more rigorously than QCA (Baumgartner 2009; for a critique see Thiem 2015). We have chosen to include his article with Epple (Baumgartner and Epple 2014) because it not only outlines the idea of this approach but also offers an empirical application: an analysis of the causal dependencies between factors that have led to the Swiss minaret ban, based on a comparison of the 26 Swiss cantons. Furthermore, it shows how technically sophisticated this strand of qualitative methodology has become. We conclude this section with Wagemann and Schneider’s (2010) survey of recent developments and the current methodological agenda. They touch upon many issues, and there is no space here to discuss any of them in detail, so we limit ourselves to mentioning a few. They describe, for instance, innovations that help bridge the necessarily deterministic character of set-theoretical arguments with the analysis of ‘notoriously noisy social science data’ (p. 389).
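Amenta and Poulsen's conjunctural welfare-state theory, summarized above, can be written as a single Boolean expression over conditions - the form in which a configurational analysis operates. The function and case values below are a hypothetical rendering for illustration, not part of the original study.

```python
# Hypothetical Boolean rendering of the conjunctural theory described above:
# a generous welfare state develops when voting rights are present,
# traditional party organizations are absent, and either administrative
# powers are present or a left-center government is in power.

def generous_welfare_state(voting_rights, party_orgs, admin_powers, left_center_gov):
    """Conjunctural (configurational) hypothesis as a Boolean expression."""
    return voting_rights and not party_orgs and (admin_powers or left_center_gov)

# Two invented cases: in the first, all theory conditions are met;
# in the second, traditional party organizations block the outcome.
print(generous_welfare_state(True, False, True, False))   # True
print(generous_welfare_state(True, True, True, True))     # False
```

The expression makes the conjunctural character visible: no single condition produces the outcome on its own, and the disjunction captures the two alternative paths the theory allows.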
One innovation is the so-called ‘consistency and coverage measures’ that have been developed to deal with problems such as measurement error. Another development they address is the emphasis on the analysis of necessary rather than sufficient conditions. And finally, they sketch the development of means to better visualize the results of comparative configurational analyses, a topic which is currently on the agenda.

Correlations versus Configurations

While the small-N comparative case study designs discussed in the first two sections of this volume are based on a correlational view of causation, the designs for analyzing a medium N just discussed take a configurational view, focusing on necessary and sufficient conditions increasingly expressed in set-theoretic terms. Yet set theory can also inform small-N comparisons (see also Dion 1998). We conclude this volume with a chapter of Rohlfing’s monograph on case study research (2012). In this chapter, he contrasts comparative case study designs based on a correlational view of causation with comparative case study designs that are based on set relations. He does so for three research goals: hypothesis building, hypothesis testing and hypothesis modification. Readers are invited to compare this contribution with the first article of this volume to appreciate just how much progress has been made in the last four decades in the domain of comparative qualitative research.

References (beyond the pieces that are included in the four volumes):

Baumgartner, M. (2009). ‘Inferring Causal Complexity.’ Sociological Methods and Research 38(1): 71-101.
DeFelice, G. E. (1986). ‘Causal Inference and Comparative Methods.’ Comparative Political Studies 19(3): 415-437.
Dion, D. (1998). ‘Evidence and Inference in the Comparative Case Study.’ Comparative Politics 30(2): 127-145.
Frendreis, J. (1983). ‘Explanation of Variation and Detection of Covariation: The Purpose and Logic of Comparative Analysis.’ Comparative Political Studies 16(2): 255-272.
Geddes, B. (1990). ‘How the Cases You Choose Affect the Answers You Get: Selection Bias in Comparative Politics.’ Political Analysis 2(1): 131-150.
King, G., R. O. Keohane, and S. Verba (1994). Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton: Princeton University Press.
Lijphart, A. (1975). ‘The Comparable-Cases Strategy in Comparative Research.’ Comparative Political Studies 8(2): 158-177.
Mill, J. S. (1875). A System of Logic, Ratiocinative and Inductive: Being a Connected View of the Principles of Evidence and the Methods of Scientific Investigation. London: Longmans, Green, Reader, and Dyer.
Ragin, C. (1987). The Comparative Method: Moving Beyond Qualitative and Quantitative Strategies. Berkeley: University of California Press.
Ragin, C. (2000). Fuzzy-Set Social Science. Chicago: University of Chicago Press.
Thiem, A. (2015). ‘Using Qualitative Comparative Analysis for Identifying Causal Chains in Configurational Data: A Methodological Commentary on Baumgartner and Epple (2014).’ Sociological Methods & Research, DOI: 10.1177/0049124115589032.

VOLUME III. MECHANISMS, TIME AND WITHIN-CASE ANALYSIS

Joachim Blatter/ Markus Haverland/ Merlijn van Hulst

Introduction and Overview

Volume III is dedicated to methodological foundations and techniques which have always been major features of qualitative research, but which have only recently been recognized by methodologists: the goal of tracing empirically the social and causal mechanisms which lead to a specific outcome; the epistemological and theoretical relevance of timing and causal sequences; and the various approaches to within-case analysis with their specific techniques of data collection/creation and of data analysis/interpretation. Process tracing has become the most well-known technique of within-case analysis, but it is not the only one.
Those who want to link within-case analysis to basic social theory are better advised to turn toward congruence analysis - a methodology that serves as a bridge between the realism that underlies most descriptions of process tracing and methods which are based on social constructivist ontologies and interpretative epistemologies (see Volumes I and IV).

Social and Causal Mechanisms

The first section of Volume III is dedicated to shedding some light on the term ‘mechanism’, for which we find plenty of different understandings in the literature. A ‘smallest common denominator approach’ leads to an understanding of mechanisms as ‘pathway[s] or process[es] by which an effect is produced or purpose is accomplished’ (Gerring, 2008: 178). Many proponents of mechanism-based explanations claim that variable- or condition-centered methods of cross-case analysis do not provide any account of why and how a specific factor causes a specific outcome. They claim that this is true for the research practice in many correlational large-N studies, in many configurational medium-N studies, and in many qualitative comparative case studies. Nevertheless, it is not true when we look at the methodological advice on how to do these cross-case analytical studies (see Volume II). There it becomes clear that a good correlational, configurational or comparative cross-case analysis should be based on a theoretical specification of the mechanisms which link the independent variables or the causal conditions with the dependent variable or the outcome. The crucial difference between research designs that focus on cross-case analysis and those which concentrate on within-case analysis lies in the fact that in the latter designs the social and causal mechanisms are traced empirically. In the former designs, by contrast, the mechanisms are specified only theoretically and the empirical work is limited to testing or revealing regular conjunctions or successions among variables across cases.
We have selected works that are helpful for making mechanism-based research an endeavor that contributes to the joint accumulation of knowledge, in the sense of building and refining abstract theories, and that avoids a situation in which each study adds its own idiosyncratic mechanism to an already bewildering array of causal factors or processes labeled mechanisms. The most important reference for those who share this goal is Hedström and Swedberg’s introductory essay to their volume on Social Mechanisms (1998). They are certainly not the first and not the only ones to propose the understanding of causal mechanisms that we lay out in a moment - it seems worthwhile to mention that Merton, Coleman and Elster have been pioneers of a mechanism-centered approach to the social sciences. Nevertheless, the Hedström/Swedberg volume provides a link between theoretical and methodological discourses. Based on Coleman’s macro-micro-macro model for explaining collective social action, Hedström and Swedberg (1998: 22-23) distinguish three basic kinds of social mechanisms: situational mechanisms, which specify how social structures shape individual desires, beliefs and action opportunities; action-formation mechanisms, which specify how a specific combination of individual desires, beliefs and action opportunities generates a specific action; and transformational mechanisms, which specify how institutional and social structures and processes transform individual actions into some kind of collective outcome. According to such an understanding of social mechanisms, a fully fledged explanation of a social or political process is based on ‘configurational thinking’ - something that Ragin has introduced as a foundation for new methods of cross-case analysis (QCA and Set Theory, Ragin, 2008, see Volume II).
In an approach that focuses on the empirical specification of causal mechanisms through within-case analysis, ‘configurational thinking’ means that an explanation demands the specification of three different kinds of social mechanisms (which have a logical and temporal sequence and are mutually non-substitutable).[3] Given its major proponents, such an understanding of causal mechanisms - which is based on a macro-micro-macro configuration of three social mechanisms - is often associated with a specific theory, namely Rational Choice theory. Nevertheless, the works we have selected in order to shed light on the concept of ‘mechanism’ show that Hedström and Ylikoski (2010) are right when they claim that such a configurational and multilevel model of explanation implies neither a commitment to rational choice theory nor to any other strong version of ‘methodological individualism.’ Instead, mechanism-based explanation can be combined with quite different theories and methods (Mayntz, 2004). The three articles we have selected to shed light on the meaning of social and causal mechanisms represent slightly different directions. In 2002, Mayntz edited a volume in which important philosophers of science (Daston, Mitchell), eminent political scientists (e.g., Thelen, Levi, Scharpf) and sociologists (Esser) lay out how closely connected the search for mechanisms is with the presumption of contingent causality, with the goal of contingent generalization and with the idea of ‘middle-range theories.’ In the article we selected for this compendium, Mayntz (2004) provides convincing statements on disputed topics - among others, on the observability of mechanisms.
Like variables or conditions, most mechanisms are abstract concepts which cannot be directly observed; ‘[b]ut it is in principle possible to observe the operation of a given mechanism in a specific instance, as it is possible to “observe” analytic constructs via the indicators operationalizing them’ (Mayntz, 2004: 243). Second, she makes clear that mechanisms should be seen as ‘building blocks’ for theories and makes a plea to develop further concepts for the micro-macro link (the transformational mechanisms in Hedström and Swedberg’s terminology) in order to balance the implicit micro- or agency-bias in the macro-micro-macro model. Falleti and Lynch (2009) follow up on this plea and present a sample of causal mechanisms. They order these mechanisms according to the level of analysis to which they refer and attempt to go beyond methodological individualism by aligning mechanisms with different causal agents: individuals, collectives or social systems (Falleti and Lynch, 2009: 1149-1150). They define ‘causal mechanisms as portable concepts that explain how and why a hypothesized cause, in a given context, contributes to a particular outcome’ (Falleti and Lynch, 2009: 1143). We would like to highlight that this definition implies configurational thinking. It is probably not by accident that Falleti and Lynch use the term ‘hypothesized cause’ and not the term ‘theorized cause’, because their approach is in line with a tradition of proposing ‘middle-range theories’, defined by Boudon (1991: 520) as ‘a set of statements that organize a set of hypotheses and relate them to segregated observations’. The last article we include in order to provide guidance for a productive understanding and application of mechanisms represents the principled alternative to such an approach, which often ends up equating theories with hypotheses.
Bennett (2013) provides a first attempt at connecting mechanism-based research with more general, paradigmatic theories (in International Relations). Such an attempt recognizes that paradigmatic theories serve as important cognitive and communicative devices that structure scientific and practical discourses. With the combination of the three major paradigms (Realism, Liberalism and Constructivism) and four kinds of mechanisms (he adds a direct macro-macro link to the three links presented by Hedström and Swedberg), Bennett aims at ‘fostering cumulative theorizing and knowledge about politics’ (Bennett, 2013: 473).[4]

[3] Often, the term ‘causal mechanism’ is used synonymously with the term ‘social mechanism.’ Blatter and Haverland (2014) argue that it is better to call the entire configuration of the three social mechanisms a ‘causal mechanism.’ This avoids terminological confusion (applying two terms for the same meaning) and it highlights the configurational nature of mechanism-based explanations.

Timing and Sequences

The second and closely connected reason why within-case analysis became more important during the last decades has to do with the fact that timing and sequences received more attention in Political Science theories. Part of the reason is that logical sequences play an important role in Game Theory, which made strong inroads into Political Science (Hall, 2003, see Volume I). But the more substantial reason has to do with the fact that globalization renders one of the fundaments of classic comparative analysis (the independence of the units of analysis) implausible. Instead, the focus shifted to the study of diffusion processes, but also to the study of mechanisms that lead to path dependencies. Pierson has contributed much to shedding theoretical light on the influence of time and timing on politics (Pierson, 2004).
In the article we selected for this compendium, Pierson (2000: 74) explains that path-dependent sequences are characterized by mechanisms which lead to self-reinforcement. Furthermore, he highlights the characteristics of path-dependent processes and identifies some mechanisms which produce positive feedback loops. The two following articles illustrate how differently the theoretical recognition of time finds its way into methodology. Brady’s (2000) short article on the US presidential election in the year 2000 has already become a classic for methodologists who want to illustrate the fruitfulness of within-case analysis. He shows when and why an explanation that is based on causal-process observations is more convincing than a statistical analysis that is based on dataset observations. His argumentation depends crucially on timing: those who had already voted could not have been swayed by (false) information provided by the TV stations 10 minutes before the polls closed. What makes (objective or natural) time such an important epistemological foundation for analyzing processes is the fact that movement in time - in contrast to movement in space - is unidirectional and irreversible (Grzymala-Busse, 2011). Grzymala-Busse (2011) addresses timing, but adds further aspects of temporality which can play a role in the analysis of mechanisms and processes: duration, tempo and acceleration. She emphasizes the relevance of subjective perceptions and experiences of time, and stresses that the description of temporal developments or sequences alone is not sufficient for a convincing explanation. Nevertheless, certain temporal developments make certain mechanisms more likely or unlikely, and sometimes even impossible.
Such an emphasis on the connection between specific forms of temporality and specific mechanisms shows once again how much mechanism-based explanations and techniques of within-case analysis are based on ‘configurational thinking.’ Interestingly, such an emphasis on the relations and interdependencies among causal factors, mechanisms and contexts is getting lost in those approaches which do not focus on aspects of time but on sequences; this can be observed in the last article of this section. Mahoney and Vanderpoel (2015) show how set diagrams can be useful for making sequence analysis and counterfactual analysis more stringent and more comprehensible. Sequence analysis is a method that aims ‘to assess the relative importance of causes in sequences composed of any combination of necessary and/or sufficient conditions, including a chain of necessary conditions in this instance’ (Mahoney and Vanderpoel, 2015: 82). The main goal is no longer to show that the entire set of conditions is necessary and/or sufficient for the outcome; the focus is instead on revealing which of those conditions is the most important one.

[4] We should mention, though, that Bennett (2013: 470) himself argues that his approach represents middle-range theorizing. The different interpretation is due to the fact that the term ‘middle-range theory’ is notoriously slippery and has been introduced in order to set oneself apart from ‘grand theorizing’ à la Talcott Parsons. This leaves room for different interpretations of what ‘middle-range theorizing’ is. We have selected two texts which represent different options: whereas the approach provided by Falleti and Lynch comes closer to defining it as ‘a bundle of hypotheses’ in the sense of Boudon (1991), the article by Bennett shows that mechanisms can be seen as core elements of paradigmatic theories within specific fields of research.
This represents a turn toward more particularistic research goals and a turn away from coherence as a quality criterion for qualitative research (see Volume I). This method was first presented by Mahoney, Kimball and Koivu (2009), but we include the more recent article since it contains another important message for those who do within-case analysis: visual illustrations are very helpful both with respect to analytic stringency and with respect to presenting results in an accessible and convincing form.

Within-Case Analysis: Approaches, Techniques and Practices

This section includes two classics - Eckstein (1975) and George and McKeown (1985) - and an array of more recent articles, which provide different, but often overlapping, prescriptions on how to do sound within-case analysis. In 1975, Eckstein laid out the first comprehensive reflection on the role of case studies in Political Science. He distinguished five types of case study and reflected on their contribution to theory building: configurative-idiographic studies, disciplined-configurative studies, heuristic studies, plausibility probes, and crucial case studies. In contrast, George and McKeown (1985) discuss different kinds of case study strategies: controlled comparisons, the congruence procedure, and process tracing. In addition, they provide advice on how to implement the method of structured, focused comparison. These strategies and methods have been restated in George and Bennett (2005), which in turn triggered the further specification of these methodological approaches. Checkel’s (2006) and Hall’s (2006) short contributions represent the methodological reflections of influential scholars who helped to establish a theoretical perspective in Political Science that highlights the role of ideas and social constructions in political processes. It is important to highlight that they have done so without turning towards a hermeneutic or interpretative methodology.
Instead, Checkel (2006: 363) emphasizes that ‘process tracers are well placed to move us beyond unproductive “either/or” meta-theoretical debates to empirical applications in which both agents and structures matter as well as are explained and understood through both positivist and post-positivist epistemological-methodological lenses’ (emphasis by Checkel). Hall (2006) presents a methodological approach that he calls ‘systematic process analysis’ and argues that this approach leads to ‘theory-oriented explanations’ in contrast to ‘historically specific explanations’ and to ‘multivariate explanations.’ It is a strongly deductive approach, and with reference to Lakatos, Hall (2006: 310) argues that ‘research in social science is most likely to advance when it focuses on a “three-cornered fight” among a theory, a rival theory, and a set of empirical observations.’ Bennett’s article represents the currently dominant understanding of process tracing. According to him, ‘[p]rocess tracing involves the examination of “diagnostic” pieces of evidence within a case that contribute to supporting or overturning alternative explanatory hypotheses’ (Bennett, 2010: 208). At the heart of such an approach stand four different kinds of empirical tests, focusing on evidence with different kinds of probative value: straw-in-the-wind tests, hoop tests, smoking-gun tests and tests which are doubly decisive in discriminating among rival hypotheses. In 2015, Bennett and Checkel published a volume in which they take stock of the progress that has been made in developing process tracing into a sophisticated analytical technique and in applying it in practice. We have selected Schimmelfennig’s (2015) contribution to this volume because he not only follows the advice that the editors formulate as ‘best practices’ in Article 1 of the volume, but also adds valuable insights for making process tracing effective.
Furthermore, he illustrates the practice of process tracing with important examples from the field of European integration. Blatter and Blume (2008), in contrast, argue that there is more than one single approach to within-case analysis. In the tradition of Alexander George, they differentiate between causal-process tracing and congruence analysis, a differentiation that has been developed further by Blatter and Haverland (2014, see Volume I). In contrast to the Anglo-Saxon mainstream, causal-process tracing is seen as an analytic technique that focuses on evidence that provides dense and deep insights into the interactions of causal factors, and that takes spatio-temporal continuity and contiguity as epistemological foundations for drawing causal inferences. Such an understanding of causal-process tracing adheres strongly to ‘configurational thinking’, which means that the goal is not to reveal the causal strength of distinct factors of influence, but to shed light on how the various factors of influence joined forces to produce the outcome of interest. Congruence analysis, in contrast, focuses on testing predictions which are derived from abstract theories. In this respect, it comes close to what others call process tracing. The difference is that congruence analysis is a methodology that attempts to link case study research to comprehensive (even paradigmatic) theories and not just to specific hypotheses at a low level of abstraction. The last work included in this volume delivers something that is very rare in the literature: Beach and Brun Pedersen (2013) not only present a comprehensive approach to process tracing based on a mechanismic understanding of causality; in the selected article, they also lay out standards and practical advice on how to collect and use data in order to turn observations into evidence.
This kind of advice is especially valuable for the practice of within-case analysis, since these studies depend strongly on the ingenuity of the researcher to discover non-standardized information and to use these observations for making convincing statements with respect to the existence or relevance of abstract causal mechanisms and the temporal unfolding of causal processes.

References (beyond the pieces that are included in the four volumes):

Bennett, A. and Checkel, J.T. (2015). Process Tracing: From Metaphor to Analytic Tool. Cambridge University Press.

Blatter, J. and Haverland, M. (2014). Designing Case Studies: Explanatory Approaches in Small-N Research. Paperback edition. Palgrave Macmillan.

Boudon, R. (1991). Review: What Middle-Range Theories Are. Contemporary Sociology, 20(4): 519-522.

George, A.L. and Bennett, A. (2005). Case Studies and Theory Development in the Social Sciences. Cambridge, MA/London: MIT Press.

Gerring, J. (2008). The Mechanismic Worldview: Thinking Inside the Box. British Journal of Political Science, 38(1): 161-179.

Hall, P.A. (2003). Aligning Ontology and Methodology in Comparative Politics. In Mahoney, J. and Rueschemeyer, D. (Eds.) Comparative Historical Analysis in the Social Sciences. Cambridge, UK and New York: Cambridge University Press, pp. 373-404.

Hedström, P. and Ylikoski, P. (2010). Causal Mechanisms in the Social Sciences. Annual Review of Sociology, 36: 49-67.

Mahoney, J., Kimball, E. and Koivu, K.L. (2009). The Logic of Historical Explanation in the Social Sciences. Comparative Political Studies, 42(1): 114-146.

Mayntz, R. (2002). Akteure-Mechanismen-Modelle: Zur Theoriefähigkeit makro-sozialer Analysen [Actors, Mechanisms, Models: On the Theoretical Capacity of Macro-social Analyses]. Frankfurt/New York: Campus Verlag.

Pierson, P. (2004). Politics in Time: History, Institutions, and Social Analysis. Princeton and Oxford: Princeton University Press.

VOLUME IV:
INTERPRETIVE AND CONSTRUCTIVIST APPROACHES

Merlijn van Hulst/Joachim Blatter/Markus Haverland

Introduction and Overview

Volume IV of this compendium on Major Works in Qualitative Research in Political Science deals with the interpretive and constructivist approaches to the study of politics, for the sake of brevity here called ‘interpretivism’. These approaches have a strong family resemblance when it comes to their ontological and epistemological assumptions (Bevir 2003). Over the last 30 years, interpretivism in Political Science and elsewhere has been strengthened not only at the level of shared general assumptions, but also regarding its ability to spell out what these general assumptions mean for doing research in terms of concrete ways of accessing, constructing and analyzing data, and in judging the quality of the products of research (Brower et al. 2000; Dodge et al. 2005; Yanow and Schwartz-Shea 2013). As interpretivism will be less familiar to many readers than the approaches introduced in the other volumes, we allow ourselves some additional space to sketch the ontology and epistemology that ground it.

Ontology

A first issue to tackle is the ontology from which interpretivists depart. First, a basic assumption of interpretivism is that human actors give meaning to, i.e. interpret, their worlds (Blumer 1969). Meanings include not only the beliefs actors have, but also how they feel about things and how they value them. Meanings are manifested in and communicated through language, but they are also embedded in human practices and artifacts more broadly. The interest in the making and manifestation of meaning leads interpretivists to study both process (storytelling, framing, and the like) and product (metaphors, discourses, frames, narratives and more). In turn, human actors act on the basis of the meaning things have for them (Hay 2011). Actors thus not only create meaning; meaning also guides them in their actions.
Second, meanings are often inter-subjective and durable within a certain context. As human actors are socially embedded (Taylor 1971; Wedeen 2010), they come to form certain beliefs, hold certain values and experience certain feelings in large part as a result of their engagement with others. This starts with their enculturation in early life and continues during later stages. Meanings also become part of social practices and of the artifacts that are created through these, like the language of everyday political debates and policy documents. When we investigate practices and artifacts, we should therefore always keep in mind that “categories, presuppositions, and classifications referring to particular phenomena [are] manufactured rather than natural” (Wedeen 2010: 260). Beliefs, values and feelings are also embedded in physical objects, like government buildings, community centers and statues. Meanings typically become institutionalized and part of a system of meanings (Wedeen 2010). This is also the reason why we could call interpretive and constructivist approaches holistic (see Volume I of this compendium). Interpretivists are of the opinion that the elements of the social world are tightly bound together and that the whole of social reality is not merely the sum of its parts. Third, because meanings are constructed and the context in which they are used changes over time, interpretivists argue that their content is inherently contingent and multiple. Meanings depend on the way human actors interpret what they experience, and this may vary from one community to the next. Also, as reality unfolds, communities and their contexts change, and meanings change with them. Meanings are also often struggled over (Stone 2002; Yanow 1992). And if they are not, one might wonder what meaning-making forces keep them in place.
In the end, interpretivists claim, meanings can always become political and are therefore an important topic for the study of politics.

Epistemology

A second issue to deal with is epistemology. As noted, interpretivists study meaning and meaning-making as they take place in a certain context. That is a first epistemological principle. As Hay (2011: 168) puts it: “Political subjects behave the way they do because of the beliefs and understandings they hold. […] political analysts must seek to establish the beliefs which motivate an actor’s conduct - or, put slightly differently, they must seek to reconstruct the meanings to actors of the conduct in which they engage.” Interpretivists also investigate the processes through which these meanings are communicated or manifested. Interpretivists often put human actors at the center of their research. That is, they try to understand the perspective of the human individuals whom (or whose artifacts) they interpret. Some, however, would stress that meaning-making processes and the products of meaning-making can or should be studied apart from human actors themselves, and would grant human actors only limited agency. Actors, in this view, merely occupy ‘a subject position’ given to them by meaning structures beyond their control. Second, interpretivists think that in the study of meaning it is impossible to conclusively determine the truth. As meanings and meaning-making are bound up with communities and contexts, and both of these change over time, they argue, the study of meanings and meaning-making will not establish universal laws of the social world. Furthermore, research necessarily takes a certain perspective on the world. For interpretivists it is hence not possible to construct a ‘view from nowhere.’ That is to say, as beliefs are social constructions, so are scientific claims. For interpretivists, political scientists are meaning-making creatures, just like the human actors they investigate.
Political scientists interpret people’s interpretations (Taylor 1971). The personal and socio-cultural background and prior experiences of the researcher(s) then become important in the production of scientific knowledge. In addition, the use of research methods such as interviews and surveys entails a (co-)production of data. Data generated in such a way, interpretivists argue, were not just ‘brute facts’ lying around, waiting for the researcher to collect them. This also implies that judging the quality of scientific knowledge according to its level of correspondence to an external reality, the main quality criterion for many positivist approaches, does not make sense for interpretivists (see the introduction to Volume I of this compendium). That does not mean that interpretivists believe that in the practice of interpretive science there is no way to judge quality and that ‘anything goes.’ Indeed, some ways of working are inter-subjectively agreed upon within the community of researchers (Schwartz-Shea 2013). They are believed to lead to better science. A final assumption has to do with the normative stance of researchers using interpretive-constructivist approaches. The necessarily perspectival nature of science problematizes the fact-value distinction. In the positivist paradigm, the researcher’s values should not be part of the procedure through which she arrives at the truth. An interpretivist, however, would argue that if doing science means taking a perspective, then choosing to look at phenomenon A and not at phenomenon B, to use instrument A and not B, or to talk to actor A and not B, at least implicitly equals valuing A over B. Even if the choice to ‘do’ A and not B might be a rather pragmatic one, ‘giving value’ takes place all of the time.
For interpretivists, scientific practices, like other human practices, are part of a struggle over what is meaningful in life. Scientific knowledge, as a result, is “historically situated and entangled in power relationships” (Wedeen 2010: 260). Interpretivists often believe that because neutrality is impossible, they can legitimately invest their efforts in contributing to a more democratic or more just society. In line with this, some interpretivists think it is the researcher’s task to uncover hidden meanings and meaning-making processes, in an effort to critically reflect on them. In addition, ethnographers in particular invest time and effort to ‘get close’ to actors in the field, reconstructing the ways human actors look at their worlds, because they believe it is part of the researcher’s normative task to do so.

Meaning-focused Research

Above we have explained what the study of meaning-making entails in general. An introduction to interpretive thinking that elaborates these ideas is Hay’s (2011), the first text in the first section. This text deals with the ontology, epistemology and methodology of a hermeneutic interpretivism, and includes valid criticism. What interpretivists should and do admit is that there is large diversity among them. Debates relate to subtle as well as more fundamental ontological and epistemological principles, e.g. the matter of the (lack of) agency of actors. Glynos and Howarth (2007), for example, differentiate between their own poststructuralist approach and a more hermeneutic approach they encounter in the work of Bevir and Rhodes (2003). Wagenaar (2011) recently stimulated the debate in interpretive studies with a threefold division of meaning-focused policy analysis: hermeneutic meaning, discursive meaning and dialogical meaning. Dodge et al.
(2005), the second text in this section, also present three approaches in interpretive inquiry, in this case specific narrative approaches: narrative as language, narrative as knowledge, and narrative as metaphor. Differences, such as those that Dodge et al. (2005) present, can be partly understood from differences in the ‘material’ studied. One can hardly be surprised that a researcher studying historical materials stretching over decades constructs a different image of the world of politics than a researcher who studies recently erupted political debates over the use of a plot of land. This is not to say that such studies might not connect, that the results of one are incompatible with those of the other, or that both could not be part of a larger research project (Dodge et al. 2005). Differences might also simply reflect the different foci of researchers studying similar material differently. From the same conversation, one researcher might reconstruct political discourses while the other might point at the particular interpretation of political events that a certain person makes. The last text in our first section, a chapter from Deborah Stone’s (2002) Policy Paradox, gives the reader a handle on this diversity. This text uses clear examples to show (in the spirit of Murray Edelman) how policy problems are symbolically presented through the use of language.

Ethnography and Field Methods

The second section of Volume IV is dedicated to ethnography and field methods. Ethnography, through immersion, intends to develop a thick description of the site under study (Schatz 2009). It aims ‘… to chronicle aspects of lived experience and to place that experience in conversation with prevailing scholarly themes, problems and concepts’ (Wedeen 2010: 257). Participant observation, ethnography’s main method, has a long history in Political Science, especially in early studies of bureaucracy in the period 1940-1960.
Fenno’s (1986) research is well known in Political Science for its methodological statements on observation, although it is not as interpretively grounded as much of the more recent ethnographic work. Fenno points at the way participant observation, which in his research took the form of ‘following politicians around and talking with them as they go about their work’ (Fenno 1986: 3), allows us to understand the contextual and sequential nature of politics. Wedeen’s (2010) recent article offers a welcome state of the art. It does not just describe ethnography; it elaborates on its critical stance and connects it to the wider domain of interpretive social science. Participant observation, the element that distinguishes an ethnographic strategy from others, in particular allows researchers to study social practices as they are performed. In this volume we have included a text (published in a general methodology journal) by Wolfinger (2002). This text supports the researcher in getting a concrete grip, through the writing of fieldnotes, on what people do. But as important as participant observation is to ethnography, the interaction with people in the field often takes the form of an interview. We have therefore, and because interviewing is used widely throughout the wider realm of qualitative research, also included a short text by Leech (2002) on this method.

Discourse and Metaphor Analysis

A second set of research strategies we want to highlight here are discourse and metaphor analysis. Discourse analysis has been used in a large variety of forms throughout the social sciences. For an excellent overview of sources in Critical Discourse Analysis, the reader might look at the Sage volumes edited by Wodak (2013). Discourse analysts in Political Science, often inspired by post-structuralist thinking, typically define discourses as historically contingent systems of signification, which (partly) produce what they represent (Milliken 1999; Glynos and Howarth 2007).
Milliken (1999), included in this section of the volume, put discourse analysis on the agenda of International Relations research. Her text can be considered a very successful effort to provide an overview of discourse analysis in IR at the end of the previous millennium. Milliken, in addition, reflects on the way discourse analysts look at their work, labelling discourse analysis “a post-positivist project that is critically self-aware of the closures imposed by research programs and the modes of analysis scholars routinely use in their work and treat as unproblematic” (Milliken 1999, p. 227). In the same year as Milliken, Diez (1999) published a text that underlined the importance of language in the study of European governance. This text, the second in this section, presents discourse analysis in the form of three ‘moves’: first, a move following the work of Austin, who pointed at the performative character of language - words do something; second, a move following the work of Foucault, who stressed the force of discursive context and how it imposes meaning; and third, a move following the work of Derrida, who reminded us that change is possible only if meanings are not fixed forever and “if the lines of contestation between various discourses are allowed to shift” (Diez 1999, p. 606). Laffey and Weldes (2004), subsequently, urge us not to confuse discourse analysis with mere content analysis of language. They propose to see discourses as socio-cultural resources that are used by people and which use people, and as structures of meaning-in-use. Meanings, for instance, are first created, then temporarily fixed and finally anchored in institutions and social relations. Discourse analysis studies such processes and their products.
In the end, as the authors do not forget to mention, discourse analysis is always (also) “about power and politics because it examines the conditions of possibility for practices, linguistically or otherwise” (Laffey and Weldes 2004, p. 29). Metaphor analysis is another such type of analysis; it has been practiced in Political Science at least from the 1980s onwards. As the last text in this section, we have included Yanow’s (1992) ‘Supermarkets and Culture Clash.’ Metaphor analysis, as practiced by Yanow, shows how metaphors used in policy making and organizations are more than decoration and more than purely literary devices. They are vehicles for ways of thinking that help to structure policy and organizational practices. With her exemplary, ethnographically generated analysis, Yanow leads us from the language we hear or read to the culture that underlies it.

Framing and Narrative Analysis

In terms of structure and agency, studies that look at political discourse often start their explorations on the structure side, while ethnographic approaches start with “situated actors”. Both, however, intend to move beyond the structure-agency dichotomy, struggling with the question of what it actually is that triggers change or (alternatively) keeps things in place. In the broad middle ground between discourse and ethnography we find narrative and framing approaches, which focus on political actors as “framers” and “storytellers” who use language to both depict and shape their everyday realities. Rein and Schön (1986), whose text is the first in this section, present the basic ideas underlying frame analysis. Although they also use the concept of (policy) discourse, for them this is merely a background concept to point at “the interactions of individuals, interest groups, social movements and institutions through which problematic situations are converted into policy problems” (Rein and Schön 1986, p. 4).
Drawing on their own earlier work and that of pioneers like Bateson and Goffman, they argue that actors in the policy discourse use frames to select, organize and make sense of a complex reality in order to act on it. A frame, then, is a perspective from which a troublesome situation can be dealt with. As frames are often part of our natural, taken-for-granted world of speaking and thinking, they are not always easy to uncover. What Rein and Schön hoped for was the possibility of uncovering frames and reframing them in ways that would bring actors together. Van Hulst and Yanow (2016), in their recent “answer” to Rein and Schön, are critical of the possibility of reframing in the way Rein and Schön said it might take place. They urge frame analysts to move from frames to framing, meaning that they should take into account the thoroughly dynamic and political character of framing. Reconstructing frames, they argue, might divert attention from the (sub-)processes through which frames are constructed. Van Hulst and Yanow find anchor points in sense-making, selecting, naming and categorizing, and storytelling.

To introduce narrative analysis, we have included two texts that represent overview and application. Patterson and Monroe (1998), the first of the two texts on narrative in this section, were among the first to give a thorough introduction to narrative in the discipline. Like frames and framing, narrative and storytelling point at “the ways in which we construct disparate facts in our own worlds and weave them together cognitively in order to make sense of our reality” (Patterson and Monroe 1998, p. 315). Narratives or stories are an invaluable tool to navigate “the myriad of sensations that bombard us daily” (ibid.). What differentiates narratives most clearly from the more general concept of frames is that they are about the temporal ordering of events from a narrator’s perspective.
Often focusing on the noncanonical, remarking on the exceptional, the unusual and the breach of norms, narratives work as cognitive and cultural devices. They (re)create coherence (where coherence may not necessarily exist) and explanation for their authors and audiences. In the final text in this section, then, Maynard-Moody and Musheno (2006) situate stories, and the work they do, in the research process, interviewing in particular. This text reflects on and describes the way they understood the world of front-line workers through the (interview) stories these workers told them. Importantly, Maynard-Moody and Musheno not only describe the ways they came to select interviewees and did their (narrative) interviews; they also explain the difficulties of analyzing stories and bringing out the larger patterns that a set of stories might tell.

Quality Issues

The last section of Volume IV deals with the quality of interpretive and constructivist approaches. Up until this section, the texts in this volume have dealt with interpretivist approaches in general or with particular research strategies, methods and analyses. And although Milliken (1999) pointed at the need for discourse analysis to establish itself as a “normal” research programme, and Dodge et al. (2005) have argued that interpretive research is able to combine rigor and relevance, none of the texts has dealt with the particular issue of quality (see also Schwartz-Shea 2013). In the first text in this section, Brower, Abolafia and Carr (2000) discuss the criteria for good qualitative (in our view, ‘interpretive’-minded) research. Basing themselves on an analysis of research published in public administration, they argue that research is convincing when it is authentic, its analyses are deemed plausible, and it “engages readers to think deeply and exercise critical judgment” (Brower et al. 2000, p. 382).
Cecelia Lynch’s text, the second and final one in this section, volume and compendium, focuses on reflexivity. It explores the ethical relationship between the people doing research and the research subjects. It deals with the observation that researchers’ worldviews shape research questions, methods and results, and that this poses ethical demands in terms of an ongoing, critical reflection on the researcher’s position.

References (beyond the pieces that are included in the four volumes):

Bevir, M. (2003). Interpretivism: Family resemblances and quarrels. Qualitative Methods, 1(2), 9-13.

Bevir, M., & Rhodes, R. (2003). Interpreting British governance. London: Routledge.

Blumer, H. (1969). Symbolic interactionism: Perspective and method. London: Prentice Hall.

Brower, R. S., Abolafia, M. Y., & Carr, J. B. (2000). On improving qualitative methods in public administration research. Administration & Society, 32(4), 363-397.

Diez, T. (1999). Speaking ‘Europe’: The politics of integration discourse. Journal of European Public Policy, 6(4), 598-613.

Dodge, J., Ospina, S. M., & Foldy, E. G. (2005). Integrating rigor and relevance in public administration scholarship: The contribution of narrative inquiry. Public Administration Review, 286-300.

Fenno, R.F. (1986). Observation, context, and sequence in the study of politics. American Political Science Review, 80(1), 3-15.

Glynos, J., & Howarth, D. (2007). Logics of critical explanation in social and political theory. London: Routledge.

Hay, C. (2011). Interpreting interpretivism interpreting interpretations: The new hermeneutics of public administration. Public Administration, 89(1), 167-182.

Laffey, M., & Weldes, J. (2004). Methodological reflections on discourse analysis. Qualitative Methods, 2(1), 28-31.

Leech, B. L. (2002). Asking questions: Techniques for semistructured interviews. Political Science & Politics, 35(4), 665-668.

Lynch, C. (2008). Reflexivity in research on civil society: Constructivist perspectives.
International Studies Review, 10(4), 708-721.

Maynard-Moody, S., & Musheno, M. (2006). Stories for research. In D. Yanow & P. Schwartz-Shea (Eds.), Interpretation and method: Empirical research methods and the interpretive turn. Armonk, NY: M.E. Sharpe, ch. 18, pp. 316-330.

Milliken, J. (1999). The study of discourse in international relations: A critique of research and methods. European Journal of International Relations, 5(2), 225-254.

Patterson, M., & Monroe, K. R. (1998). Narrative in political science. Annual Review of Political Science, 1(1), 315-331.

Rein, M., & Schön, D. A. (1986). Frame-reflective policy discourse. Beleidsanalyse, 4, 4-18.

Schatz, E. (Ed.) (2009). Political ethnography. Chicago and London: University of Chicago Press.

Schwartz-Shea, P. (2013). Judging quality. In D. Yanow & P. Schwartz-Shea (Eds.), Interpretation and method, 2nd ed. Armonk, NY: M. E. Sharpe.

Stone, D. A. (2002). Policy paradox: The art of political decision making. Chapter 6: Symbols, pp. 136-162. New York and London: W.W. Norton and Company.

van Hulst, M., & Yanow, D. (2016). From policy “frames” to “framing”: Theorizing a more dynamic, political approach. The American Review of Public Administration, 46(1), 92-112.

Wagenaar, H. (2011). Meaning in action: Interpretation and dialogue in policy analysis. Armonk, NY: M. E. Sharpe.

Wedeen, L. (2010). Reflections on ethnographic work in political science. Annual Review of Political Science, 13, 255-272.

Wodak, R. (2013). Critical discourse analysis. London: Sage.

Wolfinger, N.H. (2002). On writing fieldnotes: Collection strategies and background expectancies. Qualitative Research, 2(1), 85-93.

Yanow, D. (1992). Supermarkets and culture clash: The epistemological role of metaphors in administrative practice. The American Review of Public Administration, 22(2), 89-109.

Yanow, D., & Schwartz-Shea, P. (Eds.). (2013). Interpretation and method, 2nd ed. Armonk, NY: M. E. Sharpe.