This article proposes combining a popular evaluation approach, contribution analysis (CA), with an emerging method for causal inference, process tracing (PT). Both are grounded in generative causality and take a probabilistic approach to the interpretation of evidence. The combined approach is tested on an evaluation of a teaching programme's contribution to improving girls' school performance, and is shown to be preferable to either CA or PT alone. The proposed procedure shows that established Bayesian principles and PT tests, based on both science and common sense, can be applied to assess the strength of qualitative and quali-quantitative observations and evidence collected within an overarching CA framework, thus shifting the focus of impact evaluation from 'assessing impact' to 'assessing confidence' (about impact).
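The PT tests mentioned above can be made concrete with a short sketch. The snippet below encodes the commonly used four-test typology (hoop, smoking gun, doubly decisive, straw in the wind) in terms of two properties of a piece of evidence: its certainty (expected if the hypothesis is true) and its uniqueness (unlikely if the hypothesis is false). This is a minimal illustration of the standard typology, not code from the article.

```python
# A minimal sketch of the four standard process-tracing tests, classified by
# two properties of a piece of evidence:
#   certain -- the evidence is expected if the hypothesis is true
#   unique  -- the evidence is unlikely if the hypothesis is false

def classify_pt_test(certain: bool, unique: bool) -> str:
    """Map the certainty/uniqueness of evidence to a process-tracing test."""
    if certain and unique:
        return "doubly decisive"    # passing confirms; failing disconfirms
    if certain:
        return "hoop test"          # failing disconfirms; passing merely keeps H alive
    if unique:
        return "smoking gun"        # passing strongly confirms; failing says little
    return "straw in the wind"      # weakly suggestive either way

print(classify_pt_test(certain=True, unique=False))  # -> hoop test
```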
Commissioners of impact evaluation often place great emphasis on assessing the contribution made by a particular intervention to achieving one or more outcomes, commonly referred to as a 'contribution claim'. Current theory-based approaches fail to provide evaluators with guidance on how to collect data and assess how strongly or weakly such data support contribution claims. This article presents a rigorous quali-quantitative approach to establishing the validity of contribution claims in impact evaluation, with explicit criteria to guide evaluators in collecting data and in measuring confidence in their findings. Coined 'Contribution Tracing', the approach is inspired by the principles of Process Tracing and Bayesian Updating, and attempts to make these accessible, relevant and applicable for evaluators. The Contribution Tracing approach, aided by a symbolic 'contribution trial', adds value to theory-based approaches in impact evaluation by: reducing confirmation bias; improving the conceptual clarity and precision of theories of change; providing more transparency and predictability to data-collection efforts; and ultimately increasing the internal validity and credibility of evaluation findings, particularly of qualitative statements. The approach is demonstrated in the impact evaluation of the Universal Health Care campaign, an advocacy campaign aimed at influencing health policy in Ghana.
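The Bayesian Updating at the heart of Contribution Tracing can be shown in a few lines. In the sketch below, the prior, sensitivity and type I error values are illustrative placeholders, not figures from the Ghana campaign; the calculation is the standard Bayes' rule update of confidence in a contribution claim.

```python
# A minimal sketch of Bayesian Updating for a contribution claim.
# All numbers are illustrative placeholders, not figures from the Ghana case.

def update_confidence(prior: float, sensitivity: float, type1_error: float) -> float:
    """Posterior P(claim | evidence) via Bayes' rule.

    prior        -- confidence in the claim before seeing the evidence
    sensitivity  -- P(evidence | claim true)
    type1_error  -- P(evidence | claim false)
    """
    joint_true = prior * sensitivity
    joint_false = (1.0 - prior) * type1_error
    return joint_true / (joint_true + joint_false)

# Highly 'unique' evidence (rarely observed if the claim were false) raises
# confidence sharply, even from a neutral prior:
print(round(update_confidence(prior=0.5, sensitivity=0.6, type1_error=0.05), 2))  # 0.92
```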
This article argues that Qualitative Comparative Analysis (QCA) can be a useful method in case-based evaluations for two reasons: a) it is aimed at causal inference and explanation, leading to theory development; b) it is strong on external validity and generalization, allowing for theory testing and refinement. After a brief introduction to QCA, the specific type of causality handled by QCA is discussed. QCA is shown to offer improvements over Mill's methods by handling asymmetric and multiple-conjunctural causality in addition to counterfactual reasoning. It thereby allows explicitly separate analyses of necessity and sufficiency, recognizing the relevance of causal packages as well as single causes, and of multiple causal paths leading to the same outcome (equifinality). It is argued that QCA can generalize findings across small, medium and large numbers of cases.
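The asymmetric, separate treatment of necessity and sufficiency described above can be illustrated with the standard fuzzy-set consistency formulas (following Ragin). The membership scores in the sketch below are hypothetical, chosen purely for illustration.

```python
# A minimal sketch of the standard fuzzy-set QCA consistency measures.
# Membership scores in [0, 1] are hypothetical, purely for illustration.

condition = [0.9, 0.7, 0.2, 0.8, 0.1]  # hypothetical memberships in condition X
outcome   = [0.8, 0.9, 0.3, 0.7, 0.4]  # hypothetical memberships in outcome Y

def consistency_sufficiency(x, y):
    """How far X is a subset of Y: sum(min(x, y)) / sum(x)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def consistency_necessity(x, y):
    """How far Y is a subset of X: sum(min(x, y)) / sum(y)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

# The two scores differ, reflecting QCA's asymmetric view of causality:
print(round(consistency_sufficiency(condition, outcome), 2))  # 0.93
print(round(consistency_necessity(condition, outcome), 2))    # 0.81
```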
Qualitative comparative analysis (QCA) is gaining ground in evaluation circles, but the number of applications is still limited. In this article, we consider the challenges that can emerge during a QCA evaluation by drawing on our experience of conducting one in the field of development cooperation. For each stage of the evaluation process, we systematically discuss the challenges we encountered and suggest how they can be addressed. We believe that sharing such lessons learned can help evaluators become more familiar with QCA, shedding light on what is to be expected when considering QCA for an evaluation, while reducing unfounded fears and promoting awareness of traps and requirements. The article should be insightful, and potentially inspirational, for both commissioners and evaluators.
This article presents an innovative evaluation design which was used to evaluate the Swiss Environmental Impact Assessment. The design is new in that it amalgamates the realistic approach to evaluation with the method of Qualitative Comparative Analysis (QCA), the two of which are conspicuously similar. They share a complex view of causality, a generative perspective, a theory-driven approach to empirical observation and a limited claim to generalization. These conceptual parallels, as derived from the literature, are described in the first section, after a short introduction to realistic evaluation and the method of QCA. The following empirical section exemplifies their joint application and tackles the problems encountered. Based on this experience, the initial theoretical parallels are then reviewed. The article concludes that, under certain conditions, realistic evaluation and QCA provide a powerful tandem to produce empirically well-grounded context-sensitive evidence on policy instruments.
This IDS Bulletin is the first of two special issues presenting contributions from the event 'Impact Innovation and Learning: Towards a Research and Practice Agenda for the Future', organised by IDS in March 2013. The initiative, as well as these two issues, represents a 'rallying cry' for impact evaluation to rise to the challenges of a post-MDG/post-2015 development agenda. This introduction first articulates what these challenges are, and then summarises how the contributors propose to meet them through methodological and institutional innovation. Increasingly ambitious development goals, multiple layers of governance and multiple lines of accountability require adequate causal inference frameworks, more modest expectations about the span of direct influence any single intervention can achieve, and awareness of multiple types of bias. Institutions themselves need to be researched, and to become more impact-oriented and learning-oriented.
This article discusses the integration of a diagnostic lens in qualitative or mixed-methods evaluations, arguing that this will improve quality: in particular, the transparency, credibility and reliability of evaluation findings. We start by unpacking the notion of evaluation quality and pointing out the typical weaknesses of qualitative methods. We then introduce the basic notions of diagnostic approaches and how they relate to theory-based evaluation, process tracing and Bayesian updating, arguing for the merits of a formal Bayesian approach founded on the confusion matrix, which, among other things, reduces confirmation bias and conservatism. The article draws parallels between the process-tracing tests and elements of the confusion matrix.
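To make the confusion-matrix framing concrete: the sketch below derives sensitivity, specificity and a likelihood ratio from hypothetical cell counts and uses them to update prior odds. The counts and the prior are illustrative assumptions, not data from the article.

```python
# A minimal sketch of the confusion-matrix framing. Cell counts describe how
# often the evidence is observed when the hypothesis is true or false; all
# numbers are hypothetical.

tp, fn = 45, 5    # hypothesis true:  evidence observed / not observed
fp, tn = 10, 40   # hypothesis false: evidence observed / not observed

sensitivity = tp / (tp + fn)                         # P(E | H)         = 0.90
specificity = tn / (fp + tn)                         # P(not-E | not-H) = 0.80
likelihood_ratio = sensitivity / (1 - specificity)   # = 4.5

# Updating prior odds by the likelihood ratio makes the inference mechanical,
# countering both confirmation bias and conservatism:
prior = 0.3
posterior_odds = (prior / (1 - prior)) * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)
print(round(posterior, 2))  # 0.66
```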