2011
DOI: 10.1080/19439342.2011.587017

When does rigorous impact evaluation make a difference? The case of the Millennium Villages

Abstract: When is the rigorous impact evaluation of development projects a luxury, and when a necessity? We study one high-profile case: the Millennium Villages Project (MVP), an experimental and intensive package intervention to spark sustained local economic development in rural Africa. We illustrate the benefits of rigorous impact evaluation in this setting by showing that estimates of the project's effects depend heavily on the evaluation method. Comparing trends at the MVP intervention sites in Kenya, Ghana, and Ni…

Cited by 46 publications (20 citation statements)
References 23 publications
“…In contemporary conversation on evaluation methodologies in development aid, evidence appears to be connected to experimental evaluation settings and, moreover, to measurable and thus easily comparable research results. The main arguments supporting experimental evaluations state that they increase the rigour of, and decrease the selection bias in, knowledge production (White and Bamberger, 2008; Clemens and Demombynes, 2010). Experimental impact evaluation utilizes the logic of impact evaluation typical of medical trials (Banerjee, 2007, pp.…”
Section: Current Methodological Controversies In Evaluating NGOs In D…
mentioning
confidence: 99%
“…If one looked at outcome measures in those non-participant communities, in some cases they also experienced improvements during these years. Thus, it is unclear whether the Millennium Villages had any additional effect on observed improvements or whether those would have been observed anyway (Clemens and Demombynes, 2010; Clemens, 2011a, b; McKenzie, 2011; Wanjala and Muradian, 2013).…”
mentioning
confidence: 95%
“…In our case, IOCC provided greenhouses to every household that met the eligibility criteria in every community targeted; all nonbeneficiaries are, by design, different from beneficiaries. Clemens and Demombynes (2010) make the point that before-and-after evaluations can be biased when there is no comparison to untreated groups, but when beneficiaries are purposefully selected by groups with local knowledge, even trends from untreated groups provide little guidance to the evaluator. Therefore, it is necessary to evaluate the benefits by comparing beneficiaries to other beneficiaries and exploiting the programme rollout to identify gains and patterns of gains from treatment.…”
mentioning
confidence: 99%
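
The methodological point running through the abstract and the quoted statements above — that a simple before-and-after comparison absorbs improvements that would have happened anyway, while a comparison against untreated sites does not — can be illustrated with simulated data. The sketch below is only a minimal illustration under assumed numbers: the effect sizes, sample sizes, and variable names are hypothetical and are not taken from the paper or from any of the citing studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: outcomes improve everywhere over time (a secular trend),
# and treated villages receive an additional true effect of 2.0 units.
n = 1000                    # villages per group (assumed)
secular_trend = 5.0         # improvement common to treated and untreated sites (assumed)
true_effect = 2.0           # additional improvement caused by the intervention (assumed)

baseline_treated = rng.normal(50, 5, n)
baseline_control = rng.normal(50, 5, n)
followup_treated = baseline_treated + secular_trend + true_effect + rng.normal(0, 2, n)
followup_control = baseline_control + secular_trend + rng.normal(0, 2, n)

# Naive before-and-after estimate: change in treated villages only.
# It absorbs the secular trend and overstates the project's effect.
before_after = followup_treated.mean() - baseline_treated.mean()

# Difference-in-differences estimate: treated change minus untreated change.
# The common trend cancels, leaving approximately the true effect.
did = before_after - (followup_control.mean() - baseline_control.mean())

print(f"Before-and-after estimate:   {before_after:.2f}")  # roughly 7.0 (trend + effect)
print(f"Difference-in-differences:   {did:.2f}")            # roughly 2.0 (effect only)
```

Under these assumed numbers, the before-and-after estimate is inflated by the common trend, which is the core reason the paper argues that the choice of evaluation method materially changes the estimated impact of the Millennium Villages intervention.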