Even after a quarter-century of debate in political science and sociology, representatives of configurational comparative methods (CCMs) and those of regressional analytic methods (RAMs) continue talking at cross purposes. In this article, we clear up three fundamental misunderstandings that have been widespread within and between the two communities, namely that (a) CCMs and RAMs use the same logic of inference, (b) the same hypotheses can be associated with one or the other set of methods, and (c) multiplicative RAM interactions and CCM conjunctions constitute the same concept of causal complexity. In providing the first systematic correction of these persistent misapprehensions, we seek to clarify formal differences between CCMs and RAMs. Our objective is to contribute to a more informed debate than has taken place so far, which should eventually lead to progress in dialogue and more accurate appraisals of the possibilities and limits of each set of methods.
To date, hundreds of researchers have employed the method of Qualitative Comparative Analysis (QCA) for the purpose of causal inference. In a recent series of simulation studies, however, several authors have questioned the correctness of QCA in this connection. Some prominent representatives of the method have replied in turn that simulations with artificial data are unsuited for assessing QCA. We take issue with both positions in this impasse. On the one hand, we argue that data-driven evaluations of the correctness of a procedure of causal inference require artificial data. On the other hand, we show that all previous attempts in this direction have been defective. For the first time in the literature on configurational comparative methods, we lay out a set of formal criteria for an adequate evaluation of QCA before implementing a battery of inverse-search trials to test how this method performs in different recovery contexts according to these criteria. While our results indicate that QCA is correct when generating the parsimonious solution type, they also demonstrate that the method is incorrect when generating the conservative and intermediate solution types. In consequence, researchers using QCA for causal inference, particularly in human-sensitive areas such as public health and medicine, should immediately discontinue employing the method's conservative and intermediate search strategies.
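The inverse-search logic described in this abstract can be sketched in a few lines: fix a known data-generating structure, derive ideal data from it, run a recovery step, and check whether everything returned is entailed by the target. The toy recovery procedure below (collecting sufficient two-literal conjunctions) and all names are illustrative assumptions, not the authors' actual test battery.

```python
from itertools import product, combinations

# Known ground truth for the trial: A*B + c -> Y
def target(a, b, c):
    return (a and b) or (not c)

# Exhaustive, noise-free data derived from the target
rows = list(product((0, 1), repeat=3))
data = [(r, int(target(*r))) for r in rows]

# Toy recovery step: collect every conjunction of two literals
# (uppercase = presence, lowercase = absence) that is sufficient
# for the outcome in the data.
literals = [("A", lambda r: r[0]), ("a", lambda r: not r[0]),
            ("B", lambda r: r[1]), ("b", lambda r: not r[1]),
            ("C", lambda r: r[2]), ("c", lambda r: not r[2])]

sufficient = []
for (n1, f1), (n2, f2) in combinations(literals, 2):
    if n1.upper() == n2.upper():      # skip contradictory pairs like A*a
        continue
    if all(y for r, y in data if f1(r) and f2(r)):
        sufficient.append(n1 + "*" + n2)

# Correctness criterion of an inverse search: every conjunction the
# recovery returns must be entailed by the target structure.
print(sorted(sufficient))  # → ['A*B', 'A*c', 'B*c', 'a*c', 'b*c']
```

Every conjunction recovered here either contains A*B or the absence of C, so each is genuinely entailed by the target; a recovery returning, say, a*B would count as incorrect.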
For many years, sociologists, political scientists, and management scholars have readily relied on Qualitative Comparative Analysis (QCA) for the purpose of configurational causal modeling. However, this article reveals that a severe problem in the application of QCA has gone unnoticed so far: model ambiguities. These arise when multiple causal models fare equally well in accounting for configurational data. Mainly due to the uncritical import of an algorithm that is unsuitable for causal modeling, researchers have typically been unaware of the whole model space. As a result, practically all QCA studies published in the last quarter century run an indeterminable risk of having presented findings that their data did not warrant. Using hypothetical data, we first identify the algorithmic source of ambiguities and discuss to what extent they affect different methodological aspects of QCA. By reanalyzing a published QCA study from rural sociology, we then show that model ambiguities are not a mere theoretical possibility but a reality in applied research, which can assume such extreme proportions that no causal conclusions whatsoever are possible. Finally, the prevalence of model ambiguities is examined by performing a comprehensive analysis of 192 truth tables across 28 QCA studies published in applied sociology. In conclusion, we urge that future QCA practice ensure full transparency with respect to model ambiguities, both by informing readers of QCA-based research about their extent and by employing algorithms capable of revealing them.
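A minimal sketch of what such a model ambiguity looks like, using a hypothetical truth table over three crisp conditions (the configurations and both models are invented for illustration): two structurally different three-conjunct models fit the same data perfectly, so the data alone cannot discriminate between them.

```python
from itertools import product

# Hypothetical truth table: outcome Y = 1 for these configurations of (A, B, C)
positive = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)}

# Two distinct minimal models, each a disjunction of three
# two-condition conjunctions (lowercase = absence of a condition):
# model 1: a*b + A*C + B*c
model_1 = lambda a, b, c: (not a and not b) or (a and c) or (b and not c)
# model 2: b*C + A*B + a*c
model_2 = lambda a, b, c: (not b and c) or (a and b) or (not a and not c)

# Both models reproduce the outcome column exactly.
for row in product((0, 1), repeat=3):
    expected = row in positive
    assert bool(model_1(*row)) == expected
    assert bool(model_2(*row)) == expected
print("both models fit the truth table perfectly")
```

An algorithm that reports only one of these two models silently hides half the model space, which is exactly the transparency problem the abstract raises.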
We present QCA, a package for performing Qualitative Comparative Analysis (QCA). QCA is becoming increasingly popular with social scientists, but none of the existing software alternatives covers the full range of core procedures. The QCA package now fills this gap. After mapping the method's diffusion, we introduce some of the package's main capabilities, including the calibration of crisp and fuzzy sets, the analysis of necessity relations, the construction of truth tables, and the derivation of complex, parsimonious, and intermediate solutions.
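As an illustration of what fuzzy-set calibration involves, here is a sketch of the direct calibration method commonly attributed to Ragin: raw values are mapped onto membership scores in [0, 1] via log-odds scaled by three anchors. This is not the package's own code, and the anchor values are invented for the example.

```python
import math

def calibrate(x, excl, cross, incl):
    """Direct calibration sketch: map a raw value x onto fuzzy membership
    using an exclusion anchor, a crossover point, and an inclusion anchor.
    Log-odds are scaled so the inclusion anchor maps to ~0.95 and the
    exclusion anchor to ~0.05 (log-odds of +3 and -3)."""
    if x >= cross:
        log_odds = 3.0 * (x - cross) / (incl - cross)
    else:
        log_odds = -3.0 * (cross - x) / (cross - excl)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical example: calibrating GDP per capita with
# anchors 2500 (exclusion), 10000 (crossover), 20000 (inclusion)
print(round(calibrate(20000, 2500, 10000, 20000), 2))  # → 0.95
print(round(calibrate(10000, 2500, 10000, 20000), 2))  # → 0.5
print(round(calibrate(2500, 2500, 10000, 20000), 2))   # → 0.05
```

The crossover point is the value of maximal ambiguity (membership exactly 0.5), which is why it must be set substantively rather than statistically.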
Background: Implementation of multifaceted interventions typically involves many diverse elements working together in interrelated ways, including intervention components, implementation strategies, and features of local context. Given this real-world complexity, implementation researchers may be interested in a new mathematical, cross-case method called Coincidence Analysis (CNA) that has been designed explicitly to support causal inference, answer research questions about combinations of conditions that are minimally necessary or sufficient for an outcome, and identify the possible presence of multiple causal paths to an outcome. CNA can be applied as a standalone method or in conjunction with other approaches and can reveal new empirical findings related to implementation that might otherwise have gone undetected.
Methods: We applied CNA to a publicly available dataset from Sweden with county-level data on human papillomavirus (HPV) vaccination campaigns and vaccination uptake in 2012 and 2014 and then compared CNA results to the published regression findings.
Results: The original regression analysis found vaccination uptake was positively associated only with the availability of vaccines in schools. CNA produced different findings and uncovered an additional solution path: high vaccination rates were achieved by either (1) offering the vaccine in all schools or (2) a combination of offering the vaccine in some schools and media coverage.
Conclusions: CNA offers a new comparative approach for researchers seeking to understand how implementation conditions work together and link to outcomes.
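The two solution paths reported in the Results can be written as a single Boolean disjunction and checked case by case. The county records below are invented stand-ins for illustration, not the actual Swedish data.

```python
# Hypothetical county-level records (crisp 0/1 conditions, invented values)
counties = [
    {"all_schools": 1, "some_schools": 0, "media": 0, "high_uptake": 1},
    {"all_schools": 0, "some_schools": 1, "media": 1, "high_uptake": 1},
    {"all_schools": 0, "some_schools": 1, "media": 0, "high_uptake": 0},
    {"all_schools": 0, "some_schools": 0, "media": 1, "high_uptake": 0},
]

def solution(c):
    # Disjunction of the two paths reported in the abstract:
    # all schools, OR some schools combined with media coverage
    return c["all_schools"] or (c["some_schools"] and c["media"])

# In this toy data the model is both sufficient and necessary
# for the outcome: it matches the outcome column in every case.
assert all(bool(solution(c)) == bool(c["high_uptake"]) for c in counties)
print("solution covers all cases")
```

Note how the third record (some schools, no media) blocks the single-condition regression story: offering the vaccine in only some schools is not sufficient on its own, which is exactly the conjunctural finding CNA is built to detect.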
The search for necessary and sufficient causes of some outcome of interest, referred to as configurational comparative research, has long been one of the main preoccupations of evaluation scholars and practitioners. However, only the last three decades have witnessed the evolution of a set of formal methods that are sufficiently elaborate for this purpose. In this article, I provide a hands-on tutorial for qualitative comparative analysis (QCA), currently the most popular configurational comparative method. In drawing on a recent evaluation of patient follow-through effectiveness in Lynch syndrome tumor-screening programs, I explain the search target of QCA, introduce its core concepts, guide readers through the procedural protocol of this method, and alert them to mistakes frequently made in QCA's use. An annotated replication file for the QCApro extension package for R accompanies this tutorial.
The analysis of necessary conditions for some outcome of interest has long been one of the main preoccupations of scholars in all disciplines of the social sciences. In this connection, the introduction of Qualitative Comparative Analysis (QCA) in the late 1980s has revolutionized the way research on necessary conditions has been carried out. Standards of good practice for QCA have long demanded that the results of preceding tests for necessity constrain QCA's core process of Boolean minimization so as to enhance the quality of parsimonious and intermediate solutions. Schneider and Wagemann's Theory-Guided/Enhanced Standard Analysis (T/ESA) is currently being adopted by applied researchers as the new state-of-the-art procedure in this respect. In drawing on Schneider and Wagemann's own illustrative data example and a meta-analysis of thirty-six truth tables across twenty-one published studies that have adhered to current standards of good practice in QCA, I demonstrate that, once bias against compound conditions in necessity tests is accounted for, T/ESA will produce conservative solutions, and not enhanced parsimonious or intermediate ones.
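Necessity tests of the kind discussed in this abstract are typically scored with a consistency measure; a common fuzzy-set formula is sum(min(x_i, y_i)) / sum(y_i), which asks how far the outcome is a subset of the condition. The sketch below assumes that formula, with invented membership scores.

```python
def necessity_consistency(x, y):
    """Fuzzy-set consistency of X as necessary for Y:
    sum of min(x_i, y_i) over the cases, divided by the sum of y_i.
    A value of 1.0 means Y never exceeds X in any case."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

# Invented membership scores for four cases
X = [0.9, 0.8, 0.7, 1.0]
Y = [0.6, 0.8, 0.4, 0.9]
print(round(necessity_consistency(X, Y), 2))  # → 1.0 (X fully necessary here)

# A condition that falls below the outcome in a case loses consistency:
print(round(necessity_consistency([0.2, 0.8], [0.6, 0.8]), 2))  # → 0.71
```

The bias the abstract points to concerns compound conditions: a disjunction of conditions can pass this test even when no single condition does, so a necessity protocol that only screens individual conditions will miss it.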