The quantitative and qualitative research traditions can be thought of as distinct cultures marked by different values, beliefs, and norms. In this essay, we adopt this metaphor to contrast the two traditions across 10 areas: (1) approaches to explanation, (2) conceptions of causation, (3) multivariate explanations, (4) equifinality, (5) scope and causal generalization, (6) case selection, (7) weighting observations, (8) substantively important cases, (9) lack of fit, and (10) concepts and measurement. We suggest that an appreciation of the alternative assumptions and goals of the traditions can help scholars avoid misunderstandings and contribute to more productive “cross-cultural” communication in political science.
This article discusses process tracing as a methodology for testing hypotheses in the social sciences. With process tracing tests, the analyst combines preexisting generalizations with specific observations from within a single case to make causal inferences about that case. Process tracing tests can be used to help establish that (1) an initial event or process took place, (2) a subsequent outcome also occurred, and (3) the former was a cause of the latter. The article focuses on the logic of different process tracing tests, including hoop tests, smoking gun tests, and straw in the wind tests. New criteria for judging the strength of these tests are developed using ideas concerning the relative importance of necessary and sufficient conditions. Similarities and differences between process tracing and the deductive-nomological model of explanation are explored.
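The logic of these tests can be stated in terms of necessary and sufficient conditions: a hoop test treats passing as necessary (failure eliminates the hypothesis, passage merely lets it survive), a smoking gun test treats passing as sufficient (passage confirms, failure only weakens), and a straw in the wind test is neither. The following minimal Python sketch encodes that logic; the class and function names are illustrative assumptions, not from the article.

```python
from dataclasses import dataclass

@dataclass
class Test:
    name: str
    necessary: bool   # failing the test eliminates the hypothesis
    sufficient: bool  # passing the test confirms the hypothesis

def verdict(test: Test, passed: bool) -> str:
    """Return the inferential consequence of a test result for a hypothesis."""
    if passed:
        return "hypothesis confirmed" if test.sufficient else "hypothesis survives"
    return "hypothesis eliminated" if test.necessary else "hypothesis weakened"

hoop = Test("hoop", necessary=True, sufficient=False)
smoking_gun = Test("smoking gun", necessary=False, sufficient=True)
straw_in_the_wind = Test("straw in the wind", necessary=False, sufficient=False)

print(verdict(hoop, passed=False))              # hypothesis eliminated
print(verdict(smoking_gun, passed=True))        # hypothesis confirmed
print(verdict(straw_in_the_wind, passed=True))  # hypothesis survives
```

The asymmetry between the two boolean flags mirrors the asymmetry the article develops: a test's strength depends on whether passing it is necessary, sufficient, both, or neither for the causal claim.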
A central challenge in qualitative research is selecting the "negative" cases (e.g., nonrevolutions, nonwars) to be included in analyses that seek to explain positive outcomes of interest (e.g., revolutions, wars). Although it is widely recognized that the selection of negative cases is consequential for theory testing, methodologists have yet to formulate specific rules to inform this selection process. In this paper, we propose a principle--the Possibility Principle--that provides explicit, rigorous, and theoretically informed guidelines for choosing a set of negative cases. The Possibility Principle advises researchers to select only negative cases where the outcome of interest is possible. Our discussion elaborates this principle and its implications for current debates about case selection and strategies of theory testing. Major points are illustrated with substantive examples from studies of revolution, economic growth, welfare states, and war.
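Operationally, the Possibility Principle acts as a filter on the universe of cases: positive cases enter the analysis, while negative cases enter only if the outcome was possible in them. The sketch below illustrates this filter in Python; the case records and the `possible` field are hypothetical stand-ins for a theory-informed possibility judgment, not data from the paper.

```python
# Each record marks whether the outcome (e.g., revolution) occurred and
# whether, per theory, it was possible. Values are illustrative only.
cases = [
    {"name": "Case A", "outcome": True,  "possible": True},
    {"name": "Case B", "outcome": False, "possible": True},   # valid negative case
    {"name": "Case C", "outcome": False, "possible": False},  # excluded as irrelevant
]

positives = [c for c in cases if c["outcome"]]
negatives = [c for c in cases if not c["outcome"] and c["possible"]]

print([c["name"] for c in positives])  # ['Case A']
print([c["name"] for c in negatives])  # ['Case B']
```

Cases like Case C, where the outcome was never possible, are set aside rather than counted as disconfirming evidence, which is the principle's central methodological payoff.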
Qualitative analysts have received stern warnings that the validity of their studies may be undermined by selection bias. This article provides an overview of this problem for qualitative researchers in the field of international and comparative studies, focusing on selection bias that may result from the deliberate selection of cases by the investigator. Examples are drawn from studies of revolution, international deterrence, the politics of inflation, international terms of trade, economic growth, and industrial competitiveness. The article first explores how insights about selection bias developed in quantitative research can most productively be applied in qualitative studies. The discussion considers why qualitative researchers need to be concerned about selection bias, even if they do not care about the generality of their findings, and it considers distinctive implications of this form of bias for qualitative research, as in the problem of what is labeled “complexification based on extreme cases.” The article then considers pitfalls in recent discussions of selection bias in qualitative studies. These discussions at times get bogged down in disagreements and misunderstandings over how the dependent variable is conceptualized and what the appropriate frame of comparison should be, issues that are crucial to the assessment of bias within a given study. At certain points it becomes clear that the real issue is not just selection bias, but a larger set of trade-offs among alternative analytic goals.