The Most Significant Change (MSC) technique is a dialogical, story-based technique. Its primary purpose is to facilitate program improvement by steering work towards explicitly valued directions and away from less valued ones. MSC can also contribute to summative evaluation through both its process and its outputs. The technique involves a form of continuous values inquiry whereby designated groups of stakeholders search for significant program outcomes and then deliberate on the value of these outcomes in a systematic and transparent manner. To date, MSC has largely been used for the evaluation of international development programs, having initially been developed for the evaluation of a social development program in Bangladesh (Davies, 1996). This article provides an introduction to MSC and discusses its potential to add to the basket of choices for evaluating programs in developed economies. We provide an Australian case study and outline some of the strengths and weaknesses of the technique. We conclude that MSC can make an important contribution to evaluation practice. Its unusual methodology and outcomes make it ideal for use in combination with other techniques and approaches.
Because of the global scale and diversity of their work, international aid agencies face major problems when attempting to represent their plans and evaluate their achievements. In this second of two articles looking at types of change processes, the focus is on complex processes of change that include mutual influence, parallel processes and feedback loops. Four practically oriented arguments are put forward for using a network perspective to represent these processes: the broad applicability of a network framework, its scalability, the range of measurement and descriptive tools available and the multidisciplinary body of theory and research available to inform agencies’ theories of change. Networks are then contrasted with hierarchies as background metaphors, and implications are identified for the monitoring and evaluation of development projects. In this article relevant examples have been drawn from the author’s consultancy experience with development aid programmes in Bangladesh and Ghana.
In 2012 the United Kingdom's Department for International Development (DFID) funded a review of the literature on Evaluability Assessments, which was undertaken by Rick Davies. Although the review focused on practical guidance, Evaluability Assessments are not unproblematic. They involve an additional layer of cost and procedure; the diversity of evaluation approaches is a challenge to any categorical judgement about evaluability; the reach of an evaluability assessment can become over-extended; and evaluability questions asked of individual projects may not be so easily applied to larger portfolios of projects. The purpose of this article is to give more attention to these problematic aspects of Evaluability Assessments. In doing so it does not seek to revise the broad conclusions of the Working Paper, which unambiguously encouraged the wider use of Evaluability Assessments.