This article first makes a case for the need to establish evaluation reporting standards, support for which is rooted in the growing demand for professionalization, in the growing metaevaluation literature, and in growing efforts to develop reporting standards for inquiry efforts. Then, a case is made for a particular set of such standards introduced in this article, the CHecklist for Evaluation-Specific Standards (CHESS). In doing so, this article outlines the process used to develop the checklist and presents the resulting checklist and associated standards, which are defined as the minimum, evaluation-specific elements that must be reported to make judgments about the quality of an evaluation. This article also describes several of the checklist's categories to illustrate its content and help readers consider its usefulness. A discussion of CHESS, including challenges and next steps, is also included.
Background: Despite consensus within the evaluation community about what is distinctive about evaluation, confusion among stakeholders and other professions abounds. The evaluation literature describes how those in the social sciences continue to view evaluation as applied social science and part of what they already know how to do, with the implication that no additional training beyond the traditional social sciences is needed. Given the lack of broader understanding of the specialized role of evaluation, the field struggles with how best to communicate about evaluation to stakeholders and other professions.
Purpose: This paper addresses the need to clearly communicate what is distinctive about evaluation to stakeholders and other professions by offering a conceptual tool that can be used in dialogue with others. Specifically, we adapt a personnel evaluation framework to map out what is distinctive about what evaluators know and can do. We then compare this map with the knowledge and skill needed in a related profession (i.e., assessment) in order to reveal how the professions differ.
Setting: Not applicable.
Intervention: Not applicable.
Research Design: Not applicable.
Data Collection and Analysis: Not applicable.
Findings: We argue that using a conceptual tool such as the one presented in this paper, together with comparative case examples, would clarify the distinct work of evaluators for outsiders. Additionally, we explain how this conceptual tool is flexible and could be extended by evaluation practitioners in myriad ways.
Keywords: evaluation knowledge; evaluation skill; profession; professionalization
Despite the rising popularity of big data, there is speculation that evaluators have been slow to adopt the associated statistical approaches. Several possible reasons have been offered for why this is the case: ethical concerns, institutional capacity, and evaluator capacity and values. In this method note, we address one of these barriers, evaluator capacity, and aim to help evaluators integrate big data analytics into their studies. We focus on a specific topic modeling technique, latent Dirichlet allocation (LDA), because of the ubiquity of qualitative textual data in evaluation. Given current equity debates, both within evaluation and in the communities in which we practice, we analyze 1,796 tweets that use the hashtag #equity with the R packages topicmodels and ldatuning to illustrate the use of LDA. Furthermore, a freely available workbook for implementing LDA topic modeling is provided as Supplemental Material Online.
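To give a concrete sense of the workflow this method note describes, the R sketch below fits an LDA model using the topicmodels and ldatuning packages it names. It is a minimal illustration, not the authors' supplemental workbook: the data frame tweets, its text column, the cleaning steps, the candidate range of topics, and the final choice of k = 8 are all assumptions made for this example.

# Minimal sketch of LDA topic modeling with topicmodels and ldatuning.
# Assumes a data frame `tweets` with a character column `text` holding the
# raw tweet text (hypothetical names); cleaning choices are illustrative.
library(tm)           # corpus cleaning and document-term matrix
library(topicmodels)  # LDA estimation
library(ldatuning)    # metrics for choosing the number of topics

# Build and clean a document-term matrix from the tweet text
corpus <- VCorpus(VectorSource(tweets$text))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeNumbers)
corpus <- tm_map(corpus, removeWords, stopwords("en"))
dtm <- DocumentTermMatrix(corpus)
dtm <- dtm[rowSums(as.matrix(dtm)) > 0, ]  # drop tweets with no remaining terms

# Compare candidate numbers of topics (k) with ldatuning's fit metrics
k_search <- FindTopicsNumber(
  dtm,
  topics  = seq(2, 20, by = 2),
  metrics = c("CaoJuan2009", "Deveaud2014"),
  method  = "Gibbs",
  control = list(seed = 1234)
)
FindTopicsNumber_plot(k_search)  # inspect the plot before committing to a k

# Fit the final LDA model; k = 8 here is purely illustrative
lda_fit <- LDA(dtm, k = 8, method = "Gibbs", control = list(seed = 1234))
terms(lda_fit, 10)    # top 10 terms per topic
topics(lda_fit)[1:5]  # most likely topic for the first five tweets

Because LDA requires the analyst to fix the number of topics in advance, the FindTopicsNumber step is typically inspected visually across candidate values of k before a final model is estimated; the metrics, search range, and seed shown here are only one reasonable configuration.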
A budding area of research is devoted to studying evaluator curriculum, yet to date, it has focused exclusively on describing the content and emphasis of topics or competencies in university-based programs. This study aims to expand the foci of research efforts and investigates the extent to which evaluators agree on what competencies should guide the development and implementation of evaluator education. The study used the Delphi method with evaluators (n = 11) and included three rounds of online surveys and follow-up interviews between rounds. This article discusses the competencies on which evaluators were able to reach consensus. Where consensus was not found, possible reasons are offered. Where consensus was found, the necessity of each competency at both the master’s and doctoral levels is described. Findings are situated in ongoing debates about what is unique in what novice evaluators need to know and be able to do, and about the purpose of evaluator education.