The International Initiative for Impact Evaluation (3ie) is an international grant-making NGO promoting evidence-informed development policies and programmes. We are the global leader in funding, producing and synthesising high-quality evidence of what works, for whom, how, why and at what cost. We believe that using better and policy-relevant evidence helps to make development more effective and improve people's lives.

3ie systematic reviews

3ie systematic reviews appraise and synthesise the available high-quality evidence on the effectiveness of social and economic development interventions in low- and middle-income countries. These reviews follow scientifically recognised review methods, and are peer-reviewed and quality assured according to internationally accepted standards. 3ie provides leadership in demonstrating rigorous and innovative review methodologies, such as theory-based approaches suited to informing policy and programming in the dynamic contexts and challenges of low- and middle-income countries.
Calls for rigorous impact evaluation have been accompanied by the quest to find out not just what works but why. It is widely accepted that a theory-based approach to impact evaluation, one that maps out the causal chain from inputs to outcomes and impact and tests the underlying assumptions, will shed light on the why question. But application of the theory-based approach remains weak. This paper identifies six principles for successful application of the approach: (1) map out the causal chain (programme theory); (2) understand context; (3) anticipate heterogeneity; (4) evaluate impact rigorously using a credible counterfactual; (5) conduct rigorous factual analysis; and (6) use mixed methods.

Keywords: impact evaluation, theory-based research, mixed methods, Bangladesh, nutrition, programme theory
We provide a 'how to' guide to undertaking systematic reviews of effects in international development, by which we mean the synthesis of literature relating to the effectiveness of particular development interventions. Our remit includes determining the review's questions and scope, literature search, critical appraisal, methods of synthesis including meta-analysis, and assessing the extent to which generalisable conclusions can be drawn using a theory-based approach. Our work draws on the experiences of the International Initiative for Impact Evaluation's (3ie's) systematic reviews programme.
This explanation and elaboration document is intended to accompany the PRISMA-E 2012 statement and the PRISMA statement, to improve users' understanding of the reporting guideline. The PRISMA-E 2012 reporting guideline is intended to improve the transparency and completeness of reporting of equity-focused systematic reviews. Improved reporting can lead to better judgments of applicability by policy makers, which may result in more appropriate policies and programs and may contribute to reductions in health inequities.
Evidence and Gap Maps (EGMs) are a systematic evidence synthesis product that displays the available evidence relevant to a specific research question. EGMs are produced following the same principles as a systematic review, that is: specifying a PICOS, conducting a comprehensive search, screening against explicit inclusion and exclusion criteria, and undertaking systematic coding, analysis and reporting. This paper provides guidance on producing EGMs for publication in Campbell Systematic Reviews.
A debate on approaches to impact evaluation has raged in development circles in recent years. This paper contributes to this debate through discussion of four issues. First, I point out that there are two definitions of impact evaluation. Neither is right or wrong, but they refer to completely different things; there is no point in methodological debates unless the parties agree on a common starting point. Second, I argue that there is confusion between counterfactuals, which are implied by the definition of impact evaluation adopted in this paper, and control groups, which are not always necessary to construct a counterfactual. Third, I address contribution versus attribution, a distinction that is also definitional: critics mistake claims of attribution to mean claims of sole attribution. I then consider accusations of being 'positivist' and 'linear', which are, respectively, correct and unclear. Finally, I suggest that these arguments do not imply a hierarchy of methods; rather, quantitative approaches, including RCTs, are often the most appropriate methods for evaluating the impact of a wide range of interventions, with the added advantage of allowing cost-effectiveness or cost-benefit analysis.