2015
DOI: 10.1080/19439342.2015.1068354
Aid at the frontier: building knowledge collectively

Abstract: This paper articulates how programme evaluation generally, and impact evaluation specifically, contributes to good governance, not by replacing politics, but by informing it. We argue that institutions with the mandate to accelerate progress in the developing world through aid transfers are particularly well suited to fund impact evaluations. We argue, in fact, that funding impact evaluations through a collective vehicle like the International Initiative for Impact Evaluation (3ie) should be a primary focus of…

Cited by 11 publications (8 citation statements)
References 21 publications (18 reference statements)
“…Development agencies already pursue a plethora of different tasks today and have varied strengths in addressing them. Whereas some agencies will continue to pursue a role in implementing programs and projects in developing countries, others may increasingly emphasize their knowledge generation and dissemination functions and diminish their financial or operational roles (Levine & Savedoff, ). In addition to knowledge, advisory, or financing functions, the tasks of agencies of the future may relate to geographical areas of intervention, sectoral specialization, or their role in intergovernmental relations as entities facilitating coordination among varied bureaucracies.…”
Section: Discussion: The Future of Development Agencies
confidence: 99%
“…Examples of evaluations employing ToC analysis are found in assessments of most blended finance instruments and mechanisms, including equity and debt (Ogunforwora, 2020 [54]; PIDG, 2020 [55]; IFC, 2020 [39]; Jackson, 2013 [56]), guarantees (Carnegie Consult, 2016 [42]; Hansen, Rand and Winckler Andersen, 2020 [40]; USAID, 2013 [43]), development impact bonds (Joynes, 2019 [57]), performance-based grants (Jackson and Alvarez, 2018 [58]) and structured funds (Koenig and Jackson, 2016 [22]; Orth et al., 2020 [59]).…”
Section: Approaches and Methods
confidence: 99%
“…KFEs contributed to a shift away from an input/output paradigm, which tracks resources used and deliverables produced to judge success, towards a paradigm focused on outcomes and causal attribution. This shift continues, and remains one of the crucial contributions of KFEs to global development practice (Levine et al. 2015). Several development practitioners interviewed for this paper remarked that implementers are becoming increasingly conversant in evaluation concepts; other respondents highlighted increasingly serious approaches to impact measurement among pioneering funders such as the UK Department for International Development (DfID), the Development Impact Ventures initiative of the United States Agency for International Development (USAID) and the Global Innovation Fund.…”
Section: Channel 2: Influencing Development Discourse
confidence: 99%
“…Academic publications reward the novelty of the intervention being tested, relevance to development theory and use of innovative methodologies, incentivising researchers to push implementers towards evaluations with these characteristics. In contrast, policymakers and practitioners tend to want evidence on operational topics that may seem mundane to researchers (Dhaliwal et al. n.d.; Levine et al. 2015). In a 2011 interview, Dean Karlan acknowledged the divergent priorities of researchers and implementers, noting there are ‘tests that are not academically interesting but…”
Section: Weak Link 1: Differing Evaluator and Implementer Priorities
confidence: 99%