This review of the international literature on evaluation systems, evaluation practices and metrics (mis-)uses was written as part of a larger review commissioned by the Higher Education Funding Council for England (HEFCE) to inform its independent assessment of the role of metrics in research evaluation (2014-2015). The literature on evaluation systems, practices and the effects of indicator uses is extremely heterogeneous: it comprises hundreds of sources published in different media, spread across disciplines, and with considerable variation in the nature of the evidence. A condensation of the state of the art in relevant research is therefore highly timely. Our review presents the main strands in the literature, with a focus on empirical materials about possible effects of evaluation exercises, 'gaming' of indicators, and strategic responses by scientific communities and others to the requirements of research assessments. In order to increase its visibility and availability, an adapted and updated version of the review is presented here as a stand-alone publication, with the authorisation of HEFCE.
The range and types of performance metrics have recently proliferated in academic settings, with bibliometric indicators being particularly visible examples. One field that has traditionally been hospitable towards such indicators is biomedicine. Here the relative merits of bibliometrics are widely discussed, with debates often portraying them as heroes or villains. Despite a plethora of controversies, one of the most widely used indicators in this field is said to be the Journal Impact Factor (JIF). In this article we argue that much of the current debate around researchers’ uses of the JIF in biomedicine can be classed as ‘folk theories’: explanatory accounts told within a community that seldom (if ever) get systematically checked. Such accounts rarely disclose how knowledge production itself becomes more or less consolidated around the JIF. Using ethnographic materials from different research sites in Dutch University Medical Centers, this article sheds new empirical and theoretical light on how performance metrics variously shape biomedical research on the ‘shop floor.’ Our detailed analysis underscores the need for further research into the constitutive effects of evaluative metrics.
This document presents the Bonn PRINTEGER Consensus Statement: Working with Research Integrity—Guidance for research performing organisations. The aim of the statement is to complement existing instruments by focusing specifically on institutional responsibilities for strengthening integrity. It takes into account the daily challenges and organisational contexts of most researchers. The statement intends to make research integrity challenges recognisable from the work-floor perspective, providing concrete advice on organisational measures to strengthen integrity. The statement, which was concluded on 7 February 2018, provides guidance on the following key issues:
- Providing information about research integrity
- Providing education, training and mentoring
- Strengthening a research integrity culture
- Facilitating open dialogue
- Wise incentive management
- Implementing quality assurance procedures
- Improving the work environment and work satisfaction
- Increasing transparency of misconduct cases
- Opening up research
- Implementing safe and effective whistle-blowing channels
- Protecting the alleged perpetrators
- Establishing a research integrity committee and appointing an ombudsperson
- Making explicit the applicable standards for research integrity
The rise of new modes of evaluating academic work has substantially changed the institutions and cultures of knowledge production. This has been reflected on and criticized in the literature in STS and beyond. For STS scholars, however, these debates (should) have an even more specific dimension. Many of us are experts on aspects of these changes. But at the same time, we too are part of the processes we are analyzing, and often criticizing. To put it slightly provocatively, we often cannot avoid playing the very game that we scrutinize. This creates tensions that many of us reflect on, and it has certainly generated many implicit and explicit normative stances on how to deal with them. Yet it seems that so far there has been little room in our field to reflect on and exchange this particular kind of experience-based knowledge. There are many different ways to engage with the dynamics of evaluation, measurement and competition in contemporary academia, or to play what we colloquially refer to here as the "indicator game." With this debate, we would like to give room to the expression and discussion of some of these ways. This text is the introduction and prompt to an experimental debate. We discuss the state of the academic discussion on the impact of indicator-based evaluation on academic organization, epistemic work and identities. We use insights from these debates to raise questions about how STS and STSers themselves deal with the indicator game. In conclusion, we summarize our contributors' arguments and propose the concept of "evaluative inquiry" as a new way of representing the quality of STS work in evaluative contexts.
How are "interesting" research problems identified and made durable by academic researchers, particularly in situations defined by multiple evaluation principles? Building on two case studies of research groups working on rare diseases in academic biomedicine, we explore how group leaders arrange their groups to encompass research problems that latch onto distinct evaluation principles by dividing and combining work into "basic-oriented" and "clinical-oriented" spheres of inquiry. Following recent developments in the sociology of (e)valuation comparing academics to capitalist entrepreneurs in pursuit of varying kinds of worth, we argue that the metaphor of the portfolio is helpful in analyzing how group leaders manage these different research lines as "alternative investment options" from which they variously hope to capitalize. We argue that portfolio development is a useful concept for exploring how group leaders fashion "entrepreneurial" practices to manage and exploit tensions between multiple matrices of (e)valuation, and we conclude with suggestions for how this vocabulary can further extend the analysis of epistemic capitalism within science and technology studies.