2017
DOI: 10.1177/1098214017699275

Toward the Development of Reporting Standards for Evaluations

Abstract: This article first makes a case for the need to establish evaluation reporting standards, support for which is rooted in the growing demand for professionalization, in the growing metaevaluation literature, and in growing efforts to develop reporting standards for inquiry efforts. Then, a case is made for a particular set of such standards introduced in this article—the CHecklist for Evaluation-Specific Standards (CHESS). In doing so, this article outlines the process used and presents the resulting checklist …

Cited by 18 publications (14 citation statements) · References 55 publications (55 reference statements)
“…This lack of recognition is problematic for a number of reasons. First and foremost, evaluation is a discipline debating the merits of formal professionalization, that is, the process a field goes through to become a profession (Castro et al., 2016; Conner & Dickman, 1979; Gauthier et al., 2010; House, 1993; Jacob, 2008; Jacob & Boisvert, 2010; Montrosse-Moorhead & Griffith, 2017; Morell & Flaherty, 1978; Picciotto, 2011), with many practitioners and academics arguing in favor of such professionalization (Bickman, 1997; Picciotto, 2011). Supporters of this view suggest professionalization of the field would guide training, provide special privileges (e.g., access, salaries), enhance prestige, and offer a degree of respectability that is not possible without formal recognition (Becker, 1970; Larson, 1977; Picciotto, 2011).…”
Section: The Ambiguity Around Evaluation
Citation type: mentioning (confidence: 99%)
“…Thus, a large part of success in professionalization requires evaluators to engage with the public in defining and describing their work, the very task evaluators appear, anecdotally at least, to struggle with. Given that evaluation professionalization has now become a real priority for the field (Montrosse-Moorhead & Griffith, 2017), continued progress in this domain will require that evaluators communicate effectively with the public, both through a process of distinguishing evaluation from other professions and by enhancing the public’s perception of value in evaluation work.…”
Section: The Ambiguity Around Evaluation
Citation type: mentioning (confidence: 99%)
“…There was no significant association between the highest quality scores and unintended consequences, even though one might have hypothesized that higher quality evaluations would have been more likely to consider unintended consequences on the grounds that doing so is good practice. On the other hand, reporting unintended consequences has not been established as an evaluation reporting standard, according to a recent review (Montrosse-Moorhead & Griffith, 2017). However, the middle group of “acceptable” quality evaluations had significantly increased odds of considering unintended consequences at the .05 level in one model specification, which might lend some support to the conclusion that the evaluators who spent the most time interviewing and observing, though not always in the most rigorous of ways, were the most likely to detect unintended consequences.…”
Section: Research Design, Data, and Methods
Citation type: mentioning (confidence: 99%)
“…To maximise utility of longitudinal evaluation findings, the Checklist for Evaluation-Specific Standards will be used to develop evaluation-specific elements for reporting [45], while the Standards for Reporting Implementation…”
Section: Data Management
Citation type: mentioning (confidence: 99%)