This IDS Bulletin is the first of two special issues presenting contributions from the event 'Impact Innovation and Learning: Towards a Research and Practice Agenda for the Future', organised by IDS in March 2013. The initiative, as well as these two issues, represents a 'rallying cry' for impact evaluation to rise to the challenges of a post-MDG/post-2015 development agenda. This introduction first articulates what these challenges are and then summarises how the contributors propose to meet them through methodological and institutional innovation. Increasingly ambitious development goals, together with multiple layers of governance and lines of accountability, call for adequate causal inference frameworks, more modest expectations of the span of direct influence a single intervention can achieve, and awareness of multiple types of bias. Institutions themselves need to be researched and to become more impact-oriented and learning-oriented.
Ray Pawson and Nick Tilley first sketched out their ideas on realist evaluation some 25 years ago, in a paper presented at the 1991 annual conference of the American Society of Criminology in San Francisco. The book Realistic Evaluation followed in 1997. Since then, Ray Pawson in particular, along with many collaborators, has developed the ideas further and pushed them in new directions. Realistic Evaluation has so far garnered more than 5,000 citations according to Google Scholar. Other key publications using or developing realist evaluation have also been widely cited, several having appeared in previous issues of Evaluation. In the past quarter century realist evaluation has gained considerable traction, moving from the heretical product of a pair of sociologist newcomers to evaluation to a commonly accepted and adopted methodology, both for conducting individual evaluations and for synthesizing evidence. The question 'What works for whom in what circumstances?' is now widely asked, although its basis in realist evaluation is often forgotten. The importance of identifying causal mechanisms, and the contextual contingency of their activation (or deactivation), is recognized in much evaluation. Although the early examples used by Pawson and Tilley were drawn mainly from crime, evaluations framed in realist terms are now found across a wide array of domains, including public health, health services research, education, environmental studies and international development. Developments and use in health-related research are especially notable. The contemporary significance of 'complexity', often expressed in terms of multiple, contextualized causal pathways, has given realist evaluation a further important impetus.
This IDS Bulletin is the second of two that follow an Institute of Development Studies event seeking to define an agenda for the research and practice of development impact evaluation. It focuses on exploring the potential of systems ideas and complexity concepts to meet the increasingly complex challenges of an increasingly ambitious development agenda. In particular, the contributions seek to: (a) redefine ‘learning’ according to the number of ‘learning loops’ involved; (b) understand how to identify the most relevant impact evaluation questions; (c) simulate system states in two sectors (leather and health) following the implementation of (combinations of) different policy options and other events; and finally, (d) shake the foundations of the impact evaluation institutional system, recommending that the notions of multiple perspectives and system boundaries be fully embraced and that the system ultimately transition from an ‘evaluation industrial complex’ to an ‘evaluation adaptive complex’. While the issue is a step in the right direction, much more work remains to be done.
This special issue of Evaluation edited by Oliver James and Deborah Wilson addresses an important topic for evaluators: the spread of performance management systems in the public sector. This is seen by many (but not all) as a form of evaluation practice and certainly merits careful analysis and evaluation. The special issue editors have brought together key articles from a major UK research programme that also has a comparative and international dimension. As James and Wilson themselves note, this issue builds on debates that have already featured in the pages of this journal and will hopefully stimulate further debates in future issues.
Elliot Stern, Evaluation 16(1): 3
For us this is a special issue at a particular moment: the journal Evaluation is marking its 20th anniversary, an occasion that challenges us to stand back and take stock, and to do so in a more nuanced way than the Social Science Citation Index on its own allows. It offers an opportunity to reflect on the aspirations that launched Evaluation back in 1995, on where we are now, and on where we might be going. As the editors of this special anniversary issue have all been active protagonists since the journal's launch, we are well placed to give an account of ourselves, even though many of our editorial priorities, developed in dialogue with authors, referees and readers, have only become clear to us through a rear-view mirror.