As the academic community has grown increasingly concerned about drifts in research evaluation, particularly the evaluation of researchers, caused by an overreliance on metrics, many expert groups have issued recommendations to improve how researchers are evaluated. In this study, we focus on the recommendation to use narrative curricula vitae (CVs). We review 28 opinion pieces and 7 experiments to better understand what a narrative CV can refer to and to explore whether the narrative function specific to this kind of CV is proving effective in addressing the concerns raised by current evaluation practices. A close reading of these documents reveals the conceptual basis of the narrative CV and the problems it is intended to solve; we identify five commonly reported features of the narrative CV: avoiding lists, contextualizing achievements, fighting metrics, enlarging the spectrum of contributions taken into consideration, and fostering diversity and inclusion. Yet the promoters of the narrative CV pay little attention to how the narrative feature itself can lead to any of these benefits. Even so, the feedback collected from both applicants and evaluators is quite positive. Whether justified or not, the enthusiasm aroused by the implementation of this new type of CV undeniably has the advantage of opening up the debate, raising awareness, and calling into question the bad practices and biases that exist in researcher assessment processes. The narrative nature of the CV is, in the end, just a pretext for raising interest and working towards the adoption of good practices.
We use several sources to collect and evaluate academic scientific publications on a country scale, and we apply this approach to the case of France for the years 2015–2020, with a more detailed analysis focused on the reference year 2019. These sources are diverse: databases available by subscription (Scopus, Web of Science) or open to the scientific community (Microsoft Academic Graph), the national open archive HAL, and databases serving thematic communities (ADS and PubMed). We show the contribution of each source to the final corpus. These results are then compared with those obtained by another approach, that of the French Open Science Barometer (Jeangirard, 2019), for monitoring open access at the national level. We show that both approaches provide convergent estimates of the open access rate. We also present and discuss the definitions of the concepts used and list the main difficulties encountered in processing the data. The results of this study contribute to a better understanding of the respective contributions of the main databases and their complementarity within the broad framework of a country-wide corpus. They also shed light on the calculation of open access rates and thus contribute to a better understanding of current developments in the field of open science.