This chapter posits two principal streams of participatory evaluation, practical participatory evaluation and transformative participatory evaluation, and compares them on a set of dimensions relating to control, level, and range of participation. The authors then situate them among other forms of collaborative evaluation.

Framing Participatory Evaluation
J. Bradley Cousins, Elizabeth Whitmore

Forms and applications of collaborative research and inquiry are emerging at an astounding pace. For example, a bibliography of published works on participatory research in the health-promotion sector listed close to five hundred titles (Green and others, 1995), with some items dating back as early as the late 1940s. The vast majority, however, have surfaced since the mid-1970s. In the evaluation field, one label that is being used with increasing frequency as a descriptor of collaborative work is participatory evaluation (PE). The term, however, is used quite differently by different people. For some it implies a practical approach to broadening decision making and problem solving through systematic inquiry; for others, reallocating power in the production of knowledge and promoting social change are the root issues.

The purpose of this chapter is to explore the meanings of PE through the identification and explication of key conceptual dimensions. We are persuaded of the existence of two principal streams of participatory evaluation, streams that loosely correspond to pragmatic and emancipatory functions. After describing these streams, we present a framework for differentiating among forms of collaborative inquiry and apply it as a way to (1) compare the two streams of participatory evaluation and (2) situate them among other forms of collaborative evaluation and collaborative inquiry. We conclude with a set of questions confronted by those with an interest in participatory evaluation.
Previous research has represented teacher efficacy (TE) as a unitary disposition, despite theoretical arguments that TE is task specific. Experienced secondary teachers (N=52) responded to a survey probing their feelings of personal efficacy toward teaching different classes (up to four per respondent). Teachers' performance expectancies varied among teaching assignments: Within-teacher factors accounted for 21% of the variance in TE. The influence of within-teacher factors on TE was moderated by between-teacher variables (subject, experience, education, gender, preference for student-directed instruction and innovative assessment).
Participatory evaluation is presented as an extension of the stakeholder-based model with a focus on enhancing evaluation utilization through primary users' increased depth and range of participation in the applied research process. The approach is briefly described and then justified on theoretical and empirical grounds. The literature on organizational learning provides theoretical support for participatory evaluation, stemming primarily from the view that knowledge is socially constructed and that cognitive systems and memories are developed and shared by organization members. Twenty-six recent empirical studies were found to support an organizational learning justification of the model. Studies were classified into one of six emergent categories: conceptions of use; effects of participation on the use of research; effects of participation on the use of disseminated knowledge; effects of research training; school-university partnerships; and internal evaluation. Requirements of organizations and evaluators and an agenda for research are discussed.

Evaluation practice has improved considerably over the past decades, but as Alkin (1991) acknowledged, evaluation theory is not well developed. It has, however, evolved and will continue to do so. Perhaps the most powerful catalyst in this evolution has been research and theory about evaluation utilization. Several points made by Alkin reflecting this view include a distinction between "evaluation" and "research" on the basis of the presence of an intended user; the orientation toward responsive evaluation; the view toward the engagement of preconceived critical decision makers; and the "notion of an adapting, reacting evaluator, interacting with and sensitive to the changing nature of evaluation concerns" (p. 102).
Over the past two decades, considerable knowledge has accumulated concerning how and why evaluation data are used. The purpose of this article is to build upon existing knowledge about utilization and propose a "participatory" model of evaluation that we believe has particular value for evaluators in educational settings. Our orientation to this proposition is light on prescription and comparatively heavy on justification, partly because the form of participatory evaluation will depend significantly upon local context and partly because it is our belief that prescription without solid grounding in theory and data is little more than preference. First, we review briefly what is known about evaluation utilization and set the stage for the participatory model. Our description of the model is followed by theoretical justification from the perspective of organizational learning and a review of empirical research to support this theory. We conclude with thoughts about requirements of organizations and evaluators and an agenda for research.
This paper reviews empirical research conducted during the past 15 years on the use of evaluation results. Sixty-five studies in education, mental health, and social services are described in terms of their methodological characteristics, their orientation toward dependent and independent variables, and the relationships between such variables. A conceptual framework is developed that lists 12 factors that influence use; six of these factors are associated with characteristics of evaluation implementation and six with characteristics of decision or policy setting. The factors are discussed in terms of their influence on evaluation utilization, and their relative influence on various types of use is compared. The paper concludes with a statement about implications for research and practice.
Organizational evaluation capacity building has been a topic of increasing interest in recent years. However, the actual dimensions of evaluation capacity have not been clearly articulated through empirical research. This study sought to address this gap by identifying the key dimensions of evaluation capacity in Canadian federal government organizations. The methodology used, based on Leithwood and Montgomery’s Innovation Profile approach, featured semistructured interviews with evaluation experts and a validating exercise conducted in four government organizations. The framework developed as a result of the study identifies six main dimensions of evaluation capacity (human resources, organizational resources, evaluation planning and activities, evaluation literacy, organizational decision making, and learning benefits), each one broken down into further subdimensions. The evaluation capacity of organizations on each of these dimensions and subdimensions can be described using four levels: low, developing, intermediate, and exemplary. The study found that government organizations vary in terms of their capacity from one dimension to the next, and indeed, from one subdimension to the next.