Demand for monitoring and evaluation (M&E) arises when decision-makers want to use evidence from M&E systems to inform their choices. When the capacity to supply M&E information is high but the capacity to demand quality evidence is low, there is a mismatch between supply and demand. In this context, as Picciotto (2009) observed, ‘monitoring masquerades as evaluation’. This article applies this observation to six case studies of African M&E systems, asking: What evidence is there that African governments are developing stronger endogenous demand for evidence generated from M&E systems? The argument presented here is that demand for evidence is increasing, leading to further development of M&E systems, with monitoring being dominant. As part of this dominance there are attempts to align monitoring systems with emerging local demand, whilst donor demands remain important in several countries. There is also evidence of increasing demand through government-led evaluation systems in South Africa, Uganda and Benin. One of the main issues this article notes is that the M&E systems are not yet conceptualised within a reform effort to introduce a comprehensive results-based orientation to the public services of these countries. Results concepts are not yet consistently applied throughout the M&E systems in the case countries. In addition, the results-based notions that are applied appear to be generating perverse incentives that reinforce upward compliance and control to the detriment of more developmental uses of M&E evidence.
Background: Evaluation is not widespread in Africa, particularly evaluations instigated by governments rather than donors. However, since 2007 an important policy experiment has been emerging in South Africa, Benin and Uganda, which have all implemented national evaluation systems. These three countries, along with the Centre for Learning on Evaluation and Results (CLEAR) Anglophone Africa and the African Development Bank, are partners in a pioneering African partnership called Twende Mbele, funded by the United Kingdom’s Department for International Development (DFID) and the Hewlett Foundation, which aims to jointly strengthen monitoring and evaluation (M&E) systems and to work with other countries to develop M&E capacity and share experiences. Objectives: This article documents the experience of these three countries and summarises the progress made in deepening and widening their national evaluation systems, as well as some of the cross-cutting lessons emerging at an early stage of the Twende Mbele partnership. Method: The article draws on reports from each of the countries, as well as work undertaken for the evaluation of the South African national evaluation system. Results and conclusions: Initial lessons include the importance of a central unit to drive the evaluation system, developing a national evaluation policy, prioritising evaluations through an evaluation agenda or plan, and taking evaluation to subnational levels. The countries are exploring the role of non-state actors, and there are increasing moves to involve Parliament. Key challenges include the difficulty of establishing a learning approach in government, capacity constraints and ensuring follow-up. These lessons are being used to support other countries seeking to establish national evaluation systems, such as Ghana, Kenya and Niger.
This article describes the development of the national evaluation system in South Africa, which has been implemented since 2012, led by the Department of Planning, Monitoring and Evaluation (DPME, previously the Department of Performance Monitoring and Evaluation) in the Presidency. It suggests emerging results, although an evaluation of the evaluation system being carried out in 2015 will address this formally. Responding to dissatisfaction with government services, in 2009 the government placed a major emphasis on monitoring and evaluation (M&E). A ministry and department were created, initially focusing on monitoring but in 2011 developing a national evaluation policy framework, which has been rolled out since 2012. The system has focused on improving performance as well as accountability. Evaluations are proposed by national government departments and selected for a national evaluation plan. The relevant department implements the evaluations with the DPME, and findings go to Cabinet and are made public. So far 39 evaluations have been completed or are underway, covering around R50 billion (approximately $5 billion) of government expenditure over a three-year expenditure framework. There is evidence that the first evaluations to be completed are having significant influence on the programmes concerned. The big challenge facing South Africa is to increase the capacity of service providers and government staff so that more and better-quality evaluations can take place outside of, as well as through, the DPME.
Background: South Africa has pioneered national evaluation systems (NESs) along with Canada, Mexico, Colombia, Chile, Uganda and Benin. South Africa’s National Evaluation Policy Framework (NEPF) was approved by Cabinet in November 2011. An evaluation of the NES started in September 2016. Objectives: The purpose of the evaluation was to assess whether the NES had had an impact on the programmes and policies evaluated, the departments involved and other key stakeholders, and to determine how the system needs to be strengthened. Method: The evaluation used a theory-based approach, including international benchmarking, five national and four provincial case studies, 112 key informant interviews, a survey with 86 responses and a cost-benefit analysis of a sample of evaluations. Results: Since 2011, 67 national evaluations have been completed or are underway within the NES, covering over $10 billion of government expenditure. Seven of South Africa’s nine provinces have provincial evaluation plans, and 68 of 155 national and provincial departments have departmental evaluation plans. Hence, the system has spread widely, but there are issues of quality and of the time it takes to complete evaluations. Use was difficult to assess, but the case studies suggested that instrumental and process use were widespread. There appears to be a high return on evaluations of between R7 and R10 per rand invested. Conclusion: The NES evaluation’s recommendations for strengthening the system ranged from legislation to strengthen the mandate and greater resources for the NES, to strengthening capacity development, communication and the tracking of use.