We also disagree with DSI’s treatment of process-tracing as simply another means of increasing the number of observable implications of a theory. In fact, process-tracing is fundamentally different from statistical analysis because it focuses on sequential processes within a particular historical case, not on correlations of data across cases. This has important implications for theory testing: a single unexpected piece of process-tracing evidence can require altering the historical interpretation and theoretical significance of a case, whereas several such anomalous cases may not greatly alter the findings concerning statistical estimates of parameters for a large population.
DSI’s arguments on all these methodological issues may be appropriate to statistical methods, but in our view they are ill-suited or even counterproductive in case study research. Finally, we differ with DSI on a presentational issue that is primarily pedagogical but has important implications: there is an unresolved tension in DSI between the authors’ emphasis on research objectives that address important theoretical and policy-relevant problems and the fact that many of the examples used to illustrate various points in DSI are either hypothetical or entail research objectives so simple that they are unlikely to interest sophisticated research specialists.
This gap is aggravated by the fact that many of the hypothetical and actual examples are drawn from quantitative, not qualitative, research. DSI cites very few qualitative research studies that in its authors’ view fully or largely meet the requirements of its methods or deserve emulation, nor do the authors cite their own works in this regard.32 This is not surprising, since both Gary King and Sidney Verba are quantitatively oriented researchers. On the other hand, Robert Keohane’s voluminous research is largely qualitative in character, and, surprisingly, none of his previous studies is cited in Designing Social Inquiry as an example of the methods advocated therein.33
In contrast, in the present volume we present numerous examples of qualitative research on important policy-relevant problems, including research we ourselves have done. We do so not to imply that our own or others’ work is methodologically flawless or worthy of emulation in every respect, but because the hardest methodological choices arise in actual research. Illustrating how such choices are made is vitally important in teaching students how to proceed in their own work. In addition, understanding methodological choices often requires close familiarity with the theories and cases in question, which reinforces the value of drawing examples from one’s own research.
Certainly King, Keohane, and Verba deserve the fullest praise and appreciation for their effort to improve qualitative research. DSI, despite our many disagreements with it, remains a landmark contribution. It is not alone in viewing the goals, methods, and requirements of case studies partly from the viewpoint of statistical methods. We choose to critique DSI in such detail not because it is the starkest example of this phenomenon, but because its clarity, comprehensiveness, and familiarity to many scholars make it an excellent vehicle for presenting our contrasting view of the differences, similarities, and comparative advantages of case study and statistical methods. In the next sections we define case studies and outline their advantages, limitations, and trade-offs, distinguishing between criticisms that in our view misapply statistical concepts and critiques that have real merit regarding the limits of case studies.