Evaluations of public programs in many fields reveal that different types of programs, or different versions of the same program, vary in their effectiveness. Moreover, a program that is effective for one group of people might not be effective for other groups, and a program that is effective in one set of circumstances may not be effective in others. This paper presents a conceptual framework for research on such variation in program effects and its sources. The framework is intended to help researchers, both those who focus mainly on studying program implementation and those who focus mainly on estimating program effects, see how their respective pieces fit together to identify the factors that explain variation in program effects, and thereby to support more systematic data collection. The ultimate goal of the framework is to enable researchers to offer better guidance to policymakers and program operators on the conditions and practices associated with larger and more positive effects.
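Although the paper is conceptual, the variation it targets has a standard statistical formalization in the multisite-trial literature. The two-level model below is an illustrative sketch, not notation taken from the paper; the symbols (site-specific effect B_j, site characteristics W_j) are assumptions introduced here for exposition.

    Y_{ij} = \alpha_j + B_j T_{ij} + \varepsilon_{ij}            (outcome of individual i in site j; T_{ij} = 1 if treated)
    B_j = \gamma_0 + \gamma_1 W_j + u_j,   u_j ~ N(0, \tau^2)    (site-level model of the program effect)

In this formulation, an estimated \tau^2 greater than zero signals variation in program effects across sites beyond what the measured site characteristics W_j (for example, implementation features) explain; identifying good candidates for W_j is precisely the task a framework like this one is meant to support.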
The ideas, writing, editing, and review of this paper involved a great many people who vastly improved its quality. In particular, we would like to acknowledge the contributions of MDRC staff members Gordon Berlin, Ginger Knox, Shira Mattera, James Riccio, and Marie-Andrée Somers, and consultant Kay Sherwood. They provided guidance on the ideas addressed by the paper, suggested many of the examples used to illustrate these ideas, and in these and other ways added depth to the paper. Special thanks go to MDRC's Caitlin Platania, who provided significant support by developing figures, editing and formatting text, checking references, and handling a long list of other miscellaneous tasks; her contributions were critical to completing this paper. We also would like to thank the William T. Grant Foundation for its support. Not only did the Foundation fund this work, but its president, Robert C. Granger, and program officer, Kim DuMont, provided meticulous feedback and valuable advice on early versions of the paper. In addition, we thank Joseph Durlak,
Paying for Performance: The Education Impacts of a Community College Scholarship Program for Low-Income Adults

Abstract: We evaluate the effect of performance-based incentive programs on educational outcomes for community college students, using a random assignment experiment at three campuses. Incentive payments over two semesters were tied to meeting two conditions: enrolling at least half time and maintaining a "C" or better grade point average. Eligibility increased the likelihood of enrolling in the second semester after random assignment and the total number of credits earned. Over two years, program group students completed nearly 40 percent more credits. We find little evidence that program eligibility changed the types of courses taken, but some evidence of increased academic performance and effort.

We thank Jonathan Davis, Elizabeth Debraggio, Laurien Gilbert, Shani Schechter, and Zach Seeskin for expert research assistance, and Colleen Sommo and Jed Teres for extensive help in understanding the data. Joshua Angrist, Jonas Fisher, Luojia Hu, David Lee, Bruce Meyer, Derek Neal, Chris Taber, and seminar participants at the American Education Finance Association meetings, the Federal Reserve Bank of Chicago, the Harris School of Public Policy Studies, and MIT provided helpful conversations and comments. We also thank Louis Jacobson and Christine Mokher for graciously providing additional estimates of the value of college courses. The data used in this paper are derived from data files made available by MDRC. The authors remain solely responsible for how the data have been used or interpreted. Any views expressed in this paper do not necessarily reflect those of the Federal Reserve Bank of Chicago or the Federal Reserve System. Any errors are ours.
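The incentive rule in the abstract is a simple conjunction of two conditions, which the minimal sketch below encodes. It is an illustration only: the function name, the 6-credit half-time threshold, and the mapping of a "C" average to a 2.0 GPA are assumptions for exposition, not program parameters reported in the paper.

    # Minimal sketch of the two-condition incentive rule described in the
    # abstract. The specifics are illustrative assumptions: the paper does
    # not report a credit threshold, and "C or better" is mapped here to a
    # 2.0 GPA on the standard 4.0 scale.

    HALF_TIME_CREDITS = 6.0   # assumed half-time threshold (not from the paper)
    MIN_GPA = 2.0             # assumed "C" average on a 4.0 scale

    def eligible_for_payment(credits_enrolled: float, gpa: float) -> bool:
        """Return True if a student meets both incentive conditions:
        enrolled at least half time AND maintaining a C-or-better GPA."""
        return credits_enrolled >= HALF_TIME_CREDITS and gpa >= MIN_GPA

    # Example: 9 credits with a 2.7 GPA qualifies; 3 credits with a
    # 3.5 GPA does not, because the enrollment condition fails.
    assert eligible_for_payment(9, 2.7) is True
    assert eligible_for_payment(3, 3.5) is False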
This article assesses the state of theory development in the study of social policy implementation and offers a political economy framework as a synthesis of current theory and research. The article reviews and classifies the major theoretical and empirical studies of implementation. On the basis of this review, a political economy model of implementation is developed, consisting of the following components: policy-making, policy instruments, critical actors, driving forces, the service delivery system, and policy output. Policy output is measured by a correspondence index, defined as the correspondence between the eligible, processed, and served populations and between identified needs and the services delivered. It is argued that policy output is determined by organizational systems that develop as a result of technological specifications, economic considerations, and power relations.
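The correspondence index is described here only qualitatively; the sketch below shows one plausible way to operationalize it as simple coverage ratios. The function, its name, and the averaging of the two components are assumptions made for illustration, not the article's actual formula.

    # One hypothetical operationalization of the correspondence index:
    # how fully the served population covers those processed and eligible,
    # and how fully delivered services cover identified needs. The article
    # defines the index only qualitatively; this formula is an assumption.

    def correspondence_index(eligible: int, processed: int, served: int,
                             needs_identified: int, services_delivered: int) -> float:
        """Average of population coverage (eligible -> processed -> served)
        and service coverage (services delivered per identified need).
        Returns a value in [0, 1]; 1 means perfect correspondence."""
        population_coverage = (min(processed, eligible) / eligible) * \
                              (min(served, processed) / processed)
        service_coverage = min(services_delivered, needs_identified) / needs_identified
        return (population_coverage + service_coverage) / 2

    # Example: 1,000 eligible, 600 processed, 450 served; 500 needs
    # identified, 400 services delivered.
    print(round(correspondence_index(1000, 600, 450, 500, 400), 3))  # 0.625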