This article contributes to research on evaluation by examining the capacity and contribution of developmental evaluation for innovating. This case study describes the preformative development of an educational program (from conceptualization to pilot implementation) and analyzes the processes of innovation within a developmental evaluation framework. Developmental evaluation enhanced innovation by (a) identifying and infusing data, primarily within an informing process, toward resolving the uncertainty associated with innovation and (b) facilitating program cocreation between the clients and the developmental evaluator. Analysis of the demands of innovation revealed the pervasiveness of uncertainty throughout development and showed how the rendering of evaluative data helped resolve uncertainty and propel development forward. Developmental evaluation enabled a nonlinear, coevolutionary program development process centered on six foci: definition, delineation, collaboration, prototyping, illumination, and reality testing. The article concludes by encouraging evaluators to understand the demands of innovation and the value of design thinking when innovating.
Recently, Shulha, Whitmore, Cousins, Gilbert, and al Hudib (2015) proposed a set of evidence-based principles to guide collaboration. Our research takes a case study approach to explore these principles in a developmental evaluation context. Data were collected at two points over an 18-month period during which an evaluation group collaborated with the program team of a national organization. This article explores the contributions of selected collaborative approaches to evaluation principles as they are applied in a developmental evaluation. The article concludes with a reflection on the implications for the theory and practice of collaboration in developmental contexts, along with practical insights for implementing the principles in evaluation practice.
The Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (TCPS) was first developed to establish a standard of practice in research ethics by the three federal agencies responsible for funding institutional research in Canada: the Canadian Institutes of Health Research (CIHR), the Natural Sciences and Engineering Research Council (NSERC), and the Social Sciences and Humanities Research Council (SSHRC). In 2010, a second edition of the policy, known as the TCPS 2, was released with updated information and expanded coverage of research ethics issues. According to the TCPS 2, the Agencies' mandate is "to promote research that is conducted according to the highest ethical standards," and the TCPS 2 serves as a benchmark for this, with respect for human dignity as its underlying value. Research institutions receiving Agency funding are to comply with this policy statement by forming Research Ethics Boards (REBs) to review all research involving human participants. The intention behind this review requirement is to provide a proportionate assessment of the benefit-to-risk ratio of the research and, in that process, to safeguard "respect for persons," express a "concern for welfare," and uphold "justice" (CIHR, SSHRC, NSERC 2010, p. 8). Research may not proceed until ethics approval is granted by an institution's REB. The current study evaluates REB members' perspectives on their knowledge of research ethics and juxtaposes these perceptions with those of researchers. Specifically, we are interested in the extent to which REB members with less experience read the TCPS 2, and whether those with less experience have less confidence in their ethics knowledge.
This issue of the Canadian Journal of Program Evaluation (CJPE) is one of our most comprehensive to date. Not only does it include five full articles, five practice notes, and two book reviews, but it also covers a wide range of evaluation-related topics, practices, and studies. I am pleased to note that our editorial team continues to receive high-quality submissions, and I encourage you to keep thinking of the CJPE as an outlet for your work. The articles and practice notes included in this issue focus on four recurring themes that reflect current topics in our field. First, evaluative thinking and capacity building in non-governmental organizations is the subject of articles by Rogers, Kelly, and McCoy, as well as by Lu, Elliot, and Perlman. Both articles provide insights into the facilitators of, and barriers to, evaluation capacity building, as well as the multiple roles played by evaluators in fostering evaluative thinking amongst organizational staff members. Second, process evaluation appears to be of interest to many evaluators and researchers: Leblanc, Gervais, Dubeau, and Delame focus on process evaluation for mental health initiatives, while Parrott and Carman provide an example of how process evaluation can contribute to program scaling-up efforts. Chechak, Dunlop, and Holosko also focus on process evaluation and its utility in evaluating youth drop-in programs. Teachers and students of evaluation may be interested in our third theme, which focuses on student contributions to evaluation, both through peer mentoring (as described in the practice note written by LaChenaye, Boyce, Van Draanen, and Everett) and through the CES Student Evaluation Case Competition (described in a practice note written by Sheppard, Baker, Lolic, Soni, and Courtney). And fourth, we continue to advance our methodological approaches to evaluation, as reflected in an article on evaluation in Indigenous contexts by Chandna, Vine, Snelling, Harris, Smylie, and Manson, as well as in an article on the use of an outcome monitoring tool for performance measurement in a clinical psychology setting by Rosval, Yamin, Jamshidi, and Aubry. Czechowski, Sylvestre, and Moreau also feature methods in their practice note on secure data handling for evaluators, a key competency that continues to evolve as our data collection and storage mechanisms adapt to new technology. In addition to these articles and practice notes, this issue also features two book reviews that are sure to interest our readers. First, Bhawra provides an account of